Objects Relocation in Clutter with Robot Manipulators via Tree-based Q-Learning Algorithm: Analysis and Experiments

Golluccio G.; Di Lillo P.; Di Vito D.; Marino A.; Antonelli G.
2022-01-01

Abstract

This work addresses the problem of retrieving a target object from a cluttered environment using a robot manipulator. In detail, the proposed solution relies on a Task and Motion Planning approach based on a two-level architecture: the high level is a Task Planner aimed at finding the optimal sequence of objects to relocate, according to a metric based on the objects' weight; the low level is a Motion Planner in charge of planning the end-effector path for reaching the specific objects while taking into account the physical constraints of the robot. The high-level Task Planner is a Reinforcement Learning agent, trained using the information coming from the low-level Motion Planner. In this work we consider the Q-Tree algorithm, which is based on a dynamic tree structure inspired by the Q-learning technique. Three different RL policies with two kinds of tree exploration techniques (breadth and depth) are compared in simulation scenarios of different complexity. Moreover, the proposed learning methods are experimentally validated in a real scenario by adopting a KINOVA Jaco2 7-DoF robot manipulator.
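Since the abstract only outlines the Q-Tree approach at a high level, the following is a minimal, hypothetical Python sketch of a tree-based Q-learning planner for the relocation-ordering problem. The node structure, the state encoding as the set of already-relocated objects, the reachability predicate standing in for the low-level Motion Planner, and the epsilon-greedy policy are assumptions made for illustration only and do not reproduce the paper's implementation.

    # Hypothetical sketch of a tree-based Q-learning planner for object relocation.
    # Node layout, reward handling, and the `reachable` callback are assumptions;
    # they are not taken from the paper.
    import random

    class QTreeNode:
        """A node of the dynamic Q-tree: the state is the set of objects already
        relocated; each child corresponds to relocating one more object."""
        def __init__(self, relocated=frozenset(), parent=None):
            self.relocated = relocated      # objects removed so far
            self.parent = parent
            self.children = {}              # action (object id) -> child node
            self.q = {}                     # action (object id) -> Q-value

    def expand(node, obj, reachable):
        """Add a child for relocating `obj` if the (assumed) low-level motion
        planner, represented by the `reachable` predicate, can reach it."""
        if obj not in node.children and reachable(node.relocated, obj):
            child = QTreeNode(node.relocated | {obj}, parent=node)
            node.children[obj] = child
            node.q.setdefault(obj, 0.0)
        return node.children.get(obj)

    def q_update(node, obj, reward, alpha=0.1, gamma=0.9):
        """Standard Q-learning backup along the tree edge (node --obj--> child)."""
        child = node.children[obj]
        best_next = max(child.q.values(), default=0.0)
        node.q[obj] += alpha * (reward + gamma * best_next - node.q[obj])

    def epsilon_greedy(node, candidates, eps=0.2):
        """One possible action-selection policy (assumed here): explore with
        probability eps, otherwise pick the action with the highest Q-value."""
        if random.random() < eps or not node.q:
            return random.choice(candidates)
        return max(candidates, key=lambda a: node.q.get(a, 0.0))

In such a sketch, the reward passed to q_update could be derived from the weight-based metric mentioned in the abstract (e.g., penalizing the relocation of heavier objects), while breadth- or depth-oriented exploration would govern the order in which nodes of the tree are expanded.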
File attached to this record:
s10846-022-01719-9.pdf — Editorial Version (PDF), 4.55 MB, Adobe PDF; license: DRM not defined; access restricted to authorized users.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11580/93947
Citazioni
  • Scopus: 1