File Download
There are no files associated with this item.
Links for fulltext (may require subscription)
- Publisher Website (DOI): 10.1109/TIE.2024.3468636
- Scopus: eid_2-s2.0-105002400512
Citations:
- Scopus: 0
Article: Long-Term Active Object Detection for Service Robots: Using Generative Adversarial Imitation Learning With Contextualized Memory Graph
| Field | Value |
|---|---|
| Title | Long-Term Active Object Detection for Service Robots: Using Generative Adversarial Imitation Learning With Contextualized Memory Graph |
| Authors | Yang, Ning; Lu, Fei; Tian, Guohui; Liu, Jun |
| Keywords | Active vision; Contextualized memory graph (CMG); Generative adversarial imitation learning (GAIL); Long-term active object detection (AOD); Service robot |
| Issue Date | 1-Jan-2025 |
| Publisher | IEEE Industrial Electronics Society |
| Citation | IEEE Transactions on Industrial Electronics, 2025, v. 72, n. 5, p. 5082-5092 |
| Abstract | Active object detection (AOD) is a crucial task in embodied artificial intelligence within robotics. Previous works mainly address this challenge through deep reinforcement learning (DRL), characterized by prolonged training cycles and model convergence difficulties. Moreover, they often emphasize whether a single AOD task can be completed, overlooking the reality that robots perform long-term AOD tasks. To this end, this article introduces a new AOD solution utilizing a graph based on generative adversarial imitation learning (GAIL). A new expert strategy is devised using the active vision dataset benchmark (AVDB), generating high-quality expert trajectories. Meanwhile, a new AOD model based on GAIL is proposed to predict the robot's execution actions. Moreover, a contextualized memory graph (CMG) is constructed, providing partial state information for the GAIL model and enabling the robot to directly make decisions based on the humanlike memory function. Experimental validation against existing methods in AVDB demonstrates superior results, achieving an 88.8% action prediction accuracy, reducing average path length (APL) to 12.182 steps, and shortening single-step action prediction time to 0.133 s. The proposed method is further evaluated in a real-world home scene, affirming its efficacy and generalization capabilities. |
| Persistent Identifier | http://hdl.handle.net/10722/362446 |
| ISSN | 0278-0046 |
| Journal Metrics | 2023 Impact Factor: 7.5; 2023 SCImago Journal Rankings: 3.395 |
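The abstract above describes a GAIL-based model that predicts the robot's next action, with a contextualized memory graph supplying part of the state. As a rough illustration of the adversarial imitation idea only, the sketch below trains a small discrete-action policy against a discriminator on placeholder data; every dimension, network shape, and tensor here is a hypothetical stand-in and none of it is taken from the paper's implementation.

```python
# Minimal, illustrative GAIL-style sketch (not the paper's model).
# STATE_DIM, N_ACTIONS, the networks, and the synthetic "expert" data
# are all placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 32   # assumed size of the state vector (e.g., CMG-derived features)
N_ACTIONS = 6    # assumed number of discrete robot actions

class Policy(nn.Module):
    """Generator: maps a state to a distribution over robot actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, s):
        return F.softmax(self.net(s), dim=-1)

class Discriminator(nn.Module):
    """Scores (state, action) pairs: ~1 for expert-like, ~0 for policy-generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a_onehot):
        return torch.sigmoid(self.net(torch.cat([s, a_onehot], dim=-1)))

policy, disc = Policy(), Discriminator()
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

# Placeholder "expert trajectories": random tensors standing in for the
# expert (state, action) pairs the paper derives from AVDB.
expert_s = torch.randn(256, STATE_DIM)
expert_a = F.one_hot(torch.randint(0, N_ACTIONS, (256,)), N_ACTIONS).float()

for step in range(200):
    # Roll out the current policy on placeholder states.
    s = torch.randn(64, STATE_DIM)
    dist = torch.distributions.Categorical(policy(s))
    a = dist.sample()
    a_onehot = F.one_hot(a, N_ACTIONS).float()

    # 1) Discriminator update: expert pairs -> 1, policy pairs -> 0.
    idx = torch.randint(0, expert_s.size(0), (64,))
    real = disc(expert_s[idx], expert_a[idx])
    fake = disc(s, a_onehot.detach())
    d_loss = (F.binary_cross_entropy(real, torch.ones_like(real))
              + F.binary_cross_entropy(fake, torch.zeros_like(fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Policy update: REINFORCE with the discriminator score as reward,
    #    pushing the policy toward expert-like actions.
    reward = torch.log(disc(s, a_onehot) + 1e-8).squeeze(-1).detach()
    p_loss = -(reward * dist.log_prob(a)).mean()
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
```

In practice a GAIL generator is usually updated with an on-policy RL step such as TRPO or PPO rather than the plain REINFORCE update used here for brevity; the sketch only conveys the adversarial expert-vs-policy training signal.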
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yang, Ning | - |
| dc.contributor.author | Lu, Fei | - |
| dc.contributor.author | Tian, Guohui | - |
| dc.contributor.author | Liu, Jun | - |
| dc.date.accessioned | 2025-09-24T00:51:37Z | - |
| dc.date.available | 2025-09-24T00:51:37Z | - |
| dc.date.issued | 2025-01-01 | - |
| dc.identifier.citation | IEEE Transactions on Industrial Electronics, 2025, v. 72, n. 5, p. 5082-5092 | - |
| dc.identifier.issn | 0278-0046 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362446 | - |
| dc.description.abstract | Active object detection (AOD) is a crucial task in embodied artificial intelligence within robotics. Previous works mainly address this challenge through deep reinforcement learning (DRL), characterized by prolonged training cycles and model convergence difficulties. Moreover, they often emphasize whether a single AOD task can be completed, overlooking the reality that robots perform long-term AOD tasks. To this end, this article introduces a new AOD solution utilizing a graph based on generative adversarial imitation learning (GAIL). A new expert strategy is devised using the active vision dataset benchmark (AVDB), generating high-quality expert trajectories. Meanwhile, a new AOD model based on GAIL is proposed to predict the robot's execution actions. Moreover, a contextualized memory graph (CMG) is constructed, providing partial state information for the GAIL model and enabling the robot to directly make decisions based on the humanlike memory function. Experimental validation against existing methods in AVDB demonstrates superior results, achieving an 88.8% action prediction accuracy, reducing average path length (APL) to 12.182 steps, and shortening single-step action prediction time to 0.133 s. The proposed method is further evaluated in a real-world home scene, affirming its efficacy and generalization capabilities. | - |
| dc.language | eng | - |
| dc.publisher | IEEE Industrial Electronics Society | - |
| dc.relation.ispartof | IEEE Transactions on Industrial Electronics | - |
| dc.subject | Active vision | - |
| dc.subject | Contextualized memory graph (CMG) | - |
| dc.subject | Generative adversarial imitation learning (GAIL) | - |
| dc.subject | Long-term active object detection (AOD) | - |
| dc.subject | Service robot | - |
| dc.title | Long-Term Active Object Detection for Service Robots: Using Generative Adversarial Imitation Learning With Contextualized Memory Graph | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TIE.2024.3468636 | - |
| dc.identifier.scopus | eid_2-s2.0-105002400512 | - |
| dc.identifier.volume | 72 | - |
| dc.identifier.issue | 5 | - |
| dc.identifier.spage | 5082 | - |
| dc.identifier.epage | 5092 | - |
| dc.identifier.issnl | 0278-0046 | - |
