To meet these needs, this research line focuses on the analysis, design, and implementation of methods and tools that offer non-expert users an intuitive way to program and interact with robots. Furthermore, thanks to the Joiint Lab, established in 2021 within the Kilometro Rosso technology park, our laboratory works side by side with several companies that propose new challenges and provide valuable feedback on the developed technologies, establishing a synergy between research and industrial needs.
Human-Robot Interfaces (HRI)
Throughout history, humans have developed numerous tools and devices to interact and interface with different systems, facilitating work and increasing ergonomics, safety, and precision.
In general, interfaces enable communication between systems that do not share the same language. Today, user interfaces play a relevant role in the industrial field, bridging the gap between humans and machines. The research activities in this area cover interfaces for both remote and proximity interaction:
Interfaces for remote interaction: the main field of application is teleoperation. The approaches developed in our Lab are designed to interface with a generic robot composed of both manipulators and mobile systems. The strengths of the developed interfaces lie in their intuitiveness, functionality, and affordability.
The proposed teleoperation framework [1] leverages a wearable motion-capture system composed of lightweight devices placed on the user’s arms, forearms, and hands. Each device integrates inertial and electromyography sensors, through which the user’s movements and intentions are remapped onto the controlled robot. In addition, stereoscopic cameras integrated with a virtual reality headset provide appropriate and immersive visual feedback. An illustrative sketch of such a remapping follows.
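As a rough illustration of how such a remapping could work, the following minimal Python sketch maps the relative orientation of two arm-mounted inertial sensors to an end-effector orientation target and thresholds a normalized electromyography envelope into a grasp command. The function name, the simple relative-orientation mapping, and the threshold value are illustrative assumptions, not the actual pipeline of [1].

```python
from scipy.spatial.transform import Rotation as R

def remap_arm_to_robot(q_upper_arm, q_forearm, emg_envelope,
                       emg_threshold=0.3):
    """Hypothetical mapping from wearable-sensor readings to a robot
    command: IMU quaternions (x, y, z, w) from the upper arm and
    forearm, plus a normalized EMG activation in [0, 1]."""
    # Orientation of the forearm relative to the upper arm: a simple
    # proxy for the user's arm configuration.
    rel = R.from_quat(q_upper_arm).inv() * R.from_quat(q_forearm)

    # Use the relative orientation directly as the end-effector
    # orientation target (a real system would add calibration and
    # workspace scaling here).
    ee_orientation_target = rel.as_quat()

    # Threshold the EMG envelope to detect a grasp intention.
    grasp = emg_envelope > emg_threshold
    return ee_orientation_target, grasp

# Example: neutral upper arm, forearm rotated 45 deg about z,
# strong muscle activation -> close the gripper.
q_ua = [0.0, 0.0, 0.0, 1.0]
q_fa = R.from_euler("z", 45, degrees=True).as_quat()
print(remap_arm_to_robot(q_ua, q_fa, emg_envelope=0.8))
```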
This framework has been tested on different robots and in several real-world scenarios: Search & Rescue applications [2] (video), service robotics [1] (video), and pandemic response [3] (video).
References
- G Lentini, A Settimi, D Caporale, M Garabini, G Grioli, L Pallottino, MG Catalano, A Bicchi, “Alter-ego: a mobile robot with a functionally anthropomorphic upper body designed for physical interaction”, IEEE Robotics & Automation Magazine, 2019.
- F Negrello, A Settimi, D Caporale, G Lentini, M Poggiani, D Kanoulas, L Muratore, E Luberto, G Santaera, L Ciarleglio, L Ermini, L Pallottino, D Caldwell, N Tsagarakis, A Bicchi, M Garabini, M Catalano, “Walk-man humanoid robot: Field experiments in a post-earthquake scenario”, IEEE Robotics & Automation Magazine, 2018.
- MR Fossati, MG Catalano, M Carbone, G Lentini, D Caporale, G Grioli, M Poggiani, M Maimeri, M Barbarossa, C Petrocelli, P Vivani, C Calderini, L Carrozzi, M Ferrari, A Bicchi, “LHF Connect: a DIY telepresence robot against COVID-19”, Strategic Design Research Journal, 2020.
Interfaces for proximity interaction: although kinesthetic teaching is one of the most effective methods for programming industrial robots, it applies only to manipulators and often requires expensive extra hardware (e.g., force/torque sensors) to achieve effective gravity compensation, as the sketch below illustrates.
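To make concrete what gravity compensation involves, the sketch below computes the gravity torques G(q) for an idealized planar two-link arm with point masses at the link tips; during hand guiding, the controller must supply exactly these torques for the arm to feel weightless. The masses and lengths are arbitrary, and a real manipulator needs a full dynamic model, or the force/torque sensing mentioned above, rather than this toy model.

```python
import numpy as np

def gravity_torques(q, m=(2.0, 1.5), l=(0.4, 0.3), g=9.81):
    """Gravity torques G(q) for a planar two-link arm with point
    masses at the link tips (toy model, arbitrary parameters).
    Commanding tau = G(q) makes the arm feel weightless."""
    q1, q2 = q          # joint angles from the horizontal [rad]
    m1, m2 = m          # link-tip point masses [kg]
    l1, l2 = l          # link lengths [m]
    # Joint 2 carries only the weight of link 2.
    tau2 = m2 * g * l2 * np.cos(q1 + q2)
    # Joint 1 carries the weight of both links.
    tau1 = (m1 + m2) * g * l1 * np.cos(q1) + tau2
    return np.array([tau1, tau2])

# Arm stretched out horizontally: the worst-case gravity load.
print(gravity_torques(q=(0.0, 0.0)))
```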
The method developed in our Lab [1] provides a tool to jog and program mobile manipulators through a single, user-friendly interface. Inspired by hand-guiding approaches for jogging robotic arms, the proposed method enables users to move both manipulators and mobile bases in an intuitive and contactless way through a common smartphone (video); a rough sketch of such a mapping follows.
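As a rough sketch of a contactless, smartphone-based jogging interface, the snippet below turns phone attitude readings into a 6D twist command, with a mode switch between the manipulator and the mobile base. The gesture-to-motion mapping, gains, and deadband are invented for illustration and do not reproduce the method of [1].

```python
import numpy as np

def phone_to_twist(roll, pitch, yaw_rate, mode, gain=0.2, deadband=0.05):
    """Hypothetical mapping from smartphone attitude to a jog command.

    roll, pitch : device tilt in radians (from the phone's IMU).
    yaw_rate    : angular rate about the vertical axis [rad/s].
    mode        : "arm" jogs the end effector, "base" the platform.
    Returns a twist [vx, vy, vz, wx, wy, wz].
    """
    def shaped(x):
        # Ignore small tilts so the robot stays still when the
        # phone is held roughly level.
        return 0.0 if abs(x) < deadband else gain * x

    twist = np.zeros(6)
    if mode == "arm":
        # Tilt the phone to translate the end effector in the
        # horizontal plane; rotate it to yaw the tool.
        twist[0] = shaped(pitch)     # forward/backward
        twist[1] = shaped(-roll)     # left/right
        twist[5] = shaped(yaw_rate)  # tool yaw
    elif mode == "base":
        # The same gestures drive the mobile base instead.
        twist[0] = shaped(pitch)
        twist[5] = shaped(yaw_rate)
    return twist

# Example: phone pitched forward 0.3 rad while in base mode
# -> the base moves forward.
print(phone_to_twist(roll=0.0, pitch=0.3, yaw_rate=0.0, mode="base"))
```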
References
- G Lentini, P Falco, G Grioli, MG Catalano, A Bicchi, “Contactless Lead-Through Robot Interface”, I-RIM, 2020.
Robot Programming without Coding
Robot programming remains a crucial obstacle to the widespread adoption of robots. Despite the great strides made in the field of “Intuitive Robot Programming”, a significant part of the lifetime cost of a robotic cell still lies in the application software.
Learning from Demonstration (LfD) has established itself, over the years, as a promising method to transfer skills from humans to robots. In general, three phases of LfD are identified: teaching, learning, and autonomous execution. The scientific community has provided several tools to demonstrate a task to a robot, as well as methods to encode the learned actions and generalize them to new situations. Although the relation between perception and action has been demonstrated through several psychological and neurobiological studies, it remains essential to establish and select what is relevant during the execution of a given task. One classic encoding from this literature is sketched below.
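Among the encodings the LfD literature offers (Gaussian mixture models, hidden Markov models, dynamic movement primitives, and others), the sketch below implements a minimal one-dimensional dynamic movement primitive: it learns a forcing term from a single demonstration and replays the motion toward a new goal. It is a textbook technique shown purely for illustration, not necessarily the encoding adopted by the framework described next.

```python
import numpy as np

class MinimalDMP:
    """One-dimensional dynamic movement primitive: a standard LfD
    encoding of a demonstrated trajectory that can be replayed
    toward a new goal."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=4.0):
        self.n_basis, self.alpha, self.beta, self.alpha_x = n_basis, alpha, beta, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centers
        self.h = n_basis ** 1.5 / self.c                        # basis widths
        self.w = np.zeros(n_basis)

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return x * psi @ self.w / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        """Learn the forcing term from one demonstrated trajectory."""
        T = len(y_demo)
        self.y0, self.g, self.tau = y_demo[0], y_demo[-1], (T - 1) * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / self.tau)
        f_target = (self.tau ** 2 * ydd
                    - self.alpha * (self.beta * (self.g - y_demo) - self.tau * yd))
        # Locally weighted regression, one weight per basis function
        # (scaling by the goal offset is omitted for brevity).
        for i in range(self.n_basis):
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = (x * psi) @ f_target / ((x * psi) @ x + 1e-10)

    def rollout(self, goal, dt):
        """Replay the learned motion toward a (possibly new) goal."""
        y, yd, out = self.y0, 0.0, []
        for k in range(int(self.tau / dt) + 1):
            x = np.exp(-self.alpha_x * k * dt / self.tau)
            ydd = (self.alpha * (self.beta * (goal - y) - self.tau * yd)
                   + self._forcing(x)) / self.tau ** 2
            yd += ydd * dt
            y += yd * dt
            out.append(y)
        return np.array(out)

# Encode a minimum-jerk-like demonstration from 0 to 1, then replay
# the same motion toward a new goal.
t = np.linspace(0, 1, 100)
demo = 10 * t ** 3 - 15 * t ** 4 + 6 * t ** 5
dmp = MinimalDMP()
dmp.fit(demo, dt=0.01)
print(dmp.rollout(goal=2.0, dt=0.01)[-1])  # ends near the new goal 2.0
```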
This line of research presents a framework [1] that encapsulates the three phases involved in robot programming, grouping perceptions according to their nature and establishing rules for the selection of salient perceptions. In addition, the proposed framework is compatible with the different sub-methods offered by the literature in the fields of “task segmentation” and “action generalization”. Finally, all the learned tasks are represented as a network, which is able to evolve and reorganize automatically when new tasks are learned (video). A toy illustration of such a network follows.
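As a toy illustration of the task-network idea, the sketch below stores each taught task as a path through a shared directed graph, so tasks that contain the same actions reuse the same nodes and the network reorganizes as new tasks are added. The node names and the graph structure are invented for the example and are far simpler than the representation of [1].

```python
import networkx as nx

# A toy task network: nodes are primitive actions, edges encode the
# order in which a task chains them. Tasks that share actions reuse
# the same nodes, so the network grows and reorganizes as new tasks
# are taught rather than accumulating as a flat list.
net = nx.DiGraph()

def learn_task(network, name, action_sequence):
    """Insert a demonstrated task as a path through the network,
    reusing any actions that earlier tasks already introduced."""
    for a, b in zip(action_sequence, action_sequence[1:]):
        if network.has_edge(a, b):
            network[a][b]["tasks"].add(name)  # shared segment
        else:
            network.add_edge(a, b, tasks={name})

learn_task(net, "pick_and_place", ["reach", "grasp", "lift", "place"])
learn_task(net, "handover", ["reach", "grasp", "lift", "extend", "release"])

# The two tasks share the reach -> grasp -> lift prefix.
for a, b, data in net.edges(data=True):
    print(f"{a} -> {b}: {sorted(data['tasks'])}")
```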
References
- G Lentini, G Grioli, MG Catalano, A Bicchi, “Robot Programming without Coding”, IEEE International Conference on Robotics and Automation (ICRA), 2020.