
BCI-controlled assistive manipulator: developed architecture and experimental results

Di Lillo, Paolo; Arrichiello, Filippo; Di Vito, Daniele; Antonelli, Gianluca
2021-01-01

Abstract

In this article, we present a control architecture for a robotic manipulator ultimately aimed at helping people with severe motion disabilities perform daily-life operations, such as manipulating objects or drinking. The proposed solution allows the user to focus only on the operational tasks, while all safety-related issues are automatically handled by the developed control architecture. The user commands the manipulator by sending high-level commands via a P300-based brain–computer interface. A perception module, relying on an RGB-D sensor, continuously detects and localizes the objects in the scene, tracks the position of the user, and monitors the environment to identify static and dynamic obstacles, e.g., a person entering the scene. A lightweight manipulator is controlled through a task-priority inverse kinematics algorithm that handles task hierarchies composed of equality-based and set-based tasks, including obstacle avoidance and joint mechanical limits. This article describes the overall architecture and the integration of the implemented software modules, which are based on common frameworks and software libraries such as the Robot Operating System (ROS), BCI2000, OpenCV, and PCL. Experimental results on a use-case scenario, in which a 7-DOF Kinova Jaco2 robot helps a user perform drinking and manipulation tasks, show the effectiveness of the developed control architecture.
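To illustrate the kind of prioritized control the abstract refers to, the following is a minimal sketch of the classical task-priority inverse kinematics recursion (null-space projection of lower-priority task velocities, in the style of Siciliano and Slotine). It is not the paper's exact algorithm — in particular, the set-based tasks (obstacle avoidance, joint limits) described in the article require an activation/deactivation mechanism that is omitted here — and the function name and toy Jacobians are illustrative assumptions.

```python
import numpy as np

def task_priority_qdot(jacobians, xdots):
    """Classical prioritized inverse kinematics: each task's desired
    velocity is tracked in the null space of all higher-priority tasks.
    `jacobians` and `xdots` are ordered from highest to lowest priority."""
    n = jacobians[0].shape[1]          # number of joints
    qdot = np.zeros(n)                 # accumulated joint velocity
    N = np.eye(n)                      # null-space projector of tasks so far
    for J, xdot in zip(jacobians, xdots):
        JN = J @ N                     # task Jacobian restricted to the null space
        JN_pinv = np.linalg.pinv(JN)
        # track the residual task velocity without disturbing higher tasks
        qdot = qdot + JN_pinv @ (xdot - J @ qdot)
        # shrink the null-space projector for the next priority level
        N = N - JN_pinv @ JN
    return qdot

# Toy example: 3-DOF arm, a 2-D primary task and a 1-D secondary task.
J1 = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])       # primary task Jacobian (illustrative)
J2 = np.array([[0.0, 0.0, 1.0]])       # secondary task Jacobian (illustrative)
qdot = task_priority_qdot([J1, J2],
                          [np.array([0.1, 0.0]), np.array([0.05])])
# The primary task velocity is reproduced exactly:
print(np.allclose(J1 @ qdot, [0.1, 0.0]))  # True
```

The key property is that the secondary task can only use the redundancy left over by the primary one, which is what lets safety tasks such as obstacle avoidance sit above operational tasks in the hierarchy.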
Files in this record:
File: BCI_TCDS_Final.pdf (not available)
Type: Pre-print
License: Public domain
Size: 2.13 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11580/76462
Citations
  • Scopus: 26