2017 IEEE/RSJ IROS Workshop

Human in-the-loop robotic manipulation:
on the influence of the human role

September 24, Vancouver, Canada, Full-day

Objectives:

One of the key skills for a robot is to physically interact with the environment in order to achieve basic tasks such as pick-and-place, sorting, etc. For physical interaction, object grasping and manipulation capabilities, along with dexterity (e.g., to use objects/tools successfully) and high-level reasoning (e.g., to decide how to fulfill task requirements), are crucial. Typical robot applications have been welding, assembly, and pick-and-place in industrial settings. However, traditional industrial robots perform their assignments in cages and are heavily dependent on hard automation that requires pre-specified fixtures and time-consuming programming.

In recent years there have been several attempts to design robots that are inherently safe and can thus work together with humans in mixed assembly lines outside their cages, or even replace human workers without major redesigns of the workplace. Remarkable recent product examples are ABB's dual-arm robot YuMi and Rethink Robotics' Baxter. Despite these technological achievements, robots still lack the perception, control and cognitive abilities that would allow fluent interaction with humans, both cognitively and physically. One promising direction is to include the human in the loop, i.e., as an input agent that can influence the robot's decision-making process. The recent release of ISO/TS 15066 on collaborative robots demonstrates the will to have humans and robots working closely together in the near future. In this direction, a key aspect to consider is that different roles can be implicitly assigned to the human in such a collaboration. Two types of involvement are usually envisioned for the human: as a teacher and as a co-worker.

The former has been addressed in many ways, e.g., programming-by-demonstration approaches that derive robot controllers from observing humans, with the aim of adapting to novel cases with minimum expertise. A key issue is how to convey the information from the human to the robot, namely the interface used to provide demonstrations. One common way is to record human motions directly, but this requires addressing the not-so-trivial problem of human-to-robot motion mapping. The two other main approaches, namely kinesthetic teaching (guiding the robot physically) and teleoperation (a human operator using the robot's sensors and effectors, e.g., through a haptic device), bypass this mapping issue by demonstrating the motion directly within the robot's configuration space. Kinesthetic teaching not only allows teaching of motion trajectories but can also facilitate teaching of the contact forces required to perform a manipulation task, or more generally interaction tasks that involve robots, humans and objects.

The latter, the human as a co-worker, can be considered in scenarios where humans and robots share their workspace and actively collaborate through joint actions such as cooperative object manipulation and object exchange (handing over a tool or a manufactured piece). In both cases the robot should be able to predict the human's intention or motion and react accordingly in order to achieve the task at hand. The presence and involvement of the human in the task execution introduces a high amount of uncertainty and variation that is not typical of standard industrial environments and requires advanced multimodal interactive perception skills from the robot.

This workshop focuses on human-in-the-loop robotic manipulation, which can involve different human roles, e.g., supervisory, cooperative, active or passive. It aims to gather experts in human-in-the-loop robotic manipulation in order to identify synergies among the frameworks proposed for observing and modeling the human contribution to the task. We would also like to identify the critical challenges, across the different approaches pertaining to the workshop topic, that the community still needs to address in order to reach the envisioned close and fluent human-robot collaboration.

Topics of interest:

  • Physical human-robot interaction
  • Cooperative object manipulation
  • Human-robot synchronization and hand-overs
  • Learning from demonstration
  • Adaptive control
  • Multimodal interactive perception
  • Teleoperation and haptic interfaces
  • Human motion prediction
  • Safety through mechanical and control design
  • Mapping from human to robotic skills
  • Grasp and manipulation planning
  • Learning for grasping and manipulation
  • Human involvement in industrial robotic applications, e.g., shared assembly lines

Invited Speakers:

Siddhartha Srinivasa, University of Washington, USA
Joseph McIntyre, Tecnalia, Spain
Tamim Asfour, KIT, Germany
Erhan Oztop, Ozyegin University, Turkey
Guilherme Maeda, TU Darmstadt, Germany
Tony Prescott, University of Sheffield, UK
Dan Popa, University of Louisville, USA
Paolo Robuffo Giordano, CNRS, France
Sylvain Calinon, Idiap Research Institute, Switzerland
Brenna D. Argall, Northwestern University, USA
Christian Ott, DLR, Germany

Program:

09:00 – 09:15 Welcome and Opening
09:15 – 09:40 Invited Talk: Siddhartha Srinivasa
09:40 – 10:00 Poster Teasers (3-minute talks)
10:00 – 10:30 Coffee and Posters
10:30 – 10:55 Invited Talk: Tony Prescott
10:55 – 11:20 Invited Talk: Sylvain Calinon
11:20 – 11:45 Invited Talk: Joseph McIntyre
11:45 – 12:10 Invited Talk: Erhan Oztop
12:10 – 13:50 Lunch
13:50 – 14:15 Invited Talk: Tamim Asfour
14:15 – 14:40 Invited Talk: Guilherme Maeda
14:40 – 15:05 Invited Talk: Paolo Robuffo Giordano
15:05 – 15:30 Invited Talk: Christian Ott
15:30 – 15:55 Invited Talk: Dan Popa
16:00 – 16:30 Coffee and Posters
16:30 – 16:55 Invited Talk: Brenna D. Argall
16:55 – 18:00 Panel Discussion and Closing


Invited Talks:

Siddhartha Srinivasa, Mathematical Models for Human in the Loop Manipulation

Abstract: Much of my group’s work over the past 5 years has focused on building mathematical models of human-robot collaboration, formalizing notions of legibility, adaptivity, deception, and trust, in terms of Bayesian inference and stochastic optimal control. I’ll speak about our ongoing efforts and some new directions.

Tony Prescott, Towards multimodal perception and social cognition for co-botics [pdf]

Abstract: Human-robot collaboration in both industrial and assistive settings will benefit from advances in robot multi-sensory scene awareness and social cognition, improved communication with users, and variable autonomy (from full autonomy to full tele-operation). This talk will provide an overview of research related to co-botics and human-in-the-loop interaction at Sheffield Robotics. A particular focus will be on a biomimetic control architecture we are developing, with UK and European partners, that aims to improve robot social cognition and awareness.

Sylvain Calinon, Challenges in extending learning from demonstration to collaborative skills and shared autonomy [pdf]

Abstract: In human-centric robot applications, it is useful if the robots can learn new skills by interacting with the end-users. From a machine learning perspective, the challenge is to acquire skills from only a few interactions, with strong generalization demands. This requires: 1) the development of intuitive active learning interfaces to acquire meaningful demonstrations; 2) the development of models that can exploit the structure and geometry of the acquired data in an efficient way; 3) the development of adaptive control techniques that can exploit the learned task variations and coordination patterns. The developed models often need to serve several purposes (recognition, prediction, online synthesis) and be compatible with different learning strategies (imitation, emulation, exploration). Such a challenge is made easier if these different techniques share a common probabilistic representation of the tasks and objectives to achieve. In human-robot collaboration, such a representation can take various forms, and movements need to be enriched with force and impedance information to anticipate the users' behaviors and generate safe and natural gestures. I will present an approach combining model predictive control and statistical learning for robot skill acquisition, illustrated in various applications, with robots either close to us (a robot for dressing assistance), part of us (a prosthetic hand with EMG and tactile sensing), or far from us (teleoperation of a bimanual robot in deep water).
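As a rough, purely illustrative sketch of such a shared probabilistic representation (not the speaker's actual implementation), the following Python snippet fits a Gaussian mixture model to a few 1-D demonstrations and conditions it on time (Gaussian mixture regression) to produce a reference trajectory with an associated variance; all data, names and parameters are assumptions made for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

def demo(noise=0.02, n=100):
    """One noisy 1-D demonstration of a reaching-like motion."""
    t = np.linspace(0.0, 1.0, n)
    x = np.sin(np.pi * t / 2.0) + noise * np.random.randn(n)
    return np.column_stack([t, x])

# A handful of demonstrations, stacked into one (t, x) dataset.
data = np.vstack([demo() for _ in range(5)])

# Joint model p(t, x) learned from only a few interactions.
gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=0).fit(data)

def gmr(gmm, t_query):
    """Condition the joint GMM on time t to get E[x | t] and Var[x | t]."""
    K = gmm.n_components
    # Responsibility of each component for the query time.
    w = np.array([gmm.weights_[k]
                  * np.exp(-0.5 * (t_query - gmm.means_[k, 0]) ** 2 / gmm.covariances_[k, 0, 0])
                  / np.sqrt(2.0 * np.pi * gmm.covariances_[k, 0, 0])
                  for k in range(K)])
    w /= w.sum()
    mu, var = 0.0, 0.0
    for k in range(K):
        m_t, m_x = gmm.means_[k]
        c = gmm.covariances_[k]
        cond_mean = m_x + c[1, 0] / c[0, 0] * (t_query - m_t)   # conditional mean
        cond_var = c[1, 1] - c[1, 0] ** 2 / c[0, 0]             # conditional variance
        mu += w[k] * cond_mean
        var += w[k] * (cond_var + cond_mean ** 2)
    return mu, var - mu ** 2

# Reference (mean, variance) over time for a downstream tracking/MPC controller.
reference = [gmr(gmm, t) for t in np.linspace(0.0, 1.0, 50)]

The conditional variance is where adaptive control can hook in: tracking gains can be scaled inversely with it, making the robot stiff where the demonstrations agree and compliant where they vary.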

Joseph McIntyre, Understanding Human Motor Behavior: A Step Toward Achieving Teaching by Demonstration

Abstract: Successfully teaching a robot to perform an assembly task through demonstration is a challenging proposition. Teaching a robot to mimic a trajectory is hard enough when one considers questions of non-homologous joint configurations and performance characteristics between the robot and the human. But when contact forces are involved in the functional task, the challenges go up a step, because the robot cannot "see" the forces that are applied. Still, one can imagine measuring the movements and forces applied by the human to perform a given task, using, for instance, a combination of machine vision or motion capture, instrumented objects to measure forces, and kinaesthetic teaching or teleoperation to guide the robot in its own intrinsic workspace. One might then parameterize the kinematics and dynamics of the movements to provide a template that the robot can follow. Such an approach may work when the robot is already endowed with a repertoire of "skills", wherein the demonstration serves primarily to prime the preprogrammed skill with tuning parameters extracted from the demonstrated motion. Even then, the conundrum of mapping observations to pre-conceived control parameters remains unsolved. A much more daunting task, however, is to actually teach control from scratch simply by demonstrating the task to the robot. How might one extract from the human-demonstrated motion the control policy used by the human to achieve the task, in terms of impedances or admittances, controller gains, state transition criteria, etc.?

One key to achieving robot programming through demonstration is to have an understanding of how humans perform these tasks. But physiologists who aim to understand human behaviour are faced with the same challenges as the learning robot described above, i.e., how can we as experimentalists deduce, from observations of humans performing assembly tasks, the control policies employed by the nervous system to achieve these tasks? In this talk I will discuss a number of studies undertaken to better understand the neurophysiology of human motor control and suggest ways in which the methodologies and conclusions drawn from these studies might be applied to the challenges of teaching robots through demonstration.

Erhan Oztop, Human sensorimotor learning in shared control systems [pdf]

Abstract: In human-human collaboration, both parties learn and adapt their control policies based on each other's behavior. The human-robot version is no different; the human still learns, and if we wish we may program the robot to learn and change its behavior over time. If managed properly, this co-adaptation mechanism may lead to higher task performance. However, there is no established general rule to ensure this. Furthermore, the human effort required to reach a high level of task performance must be considered. Eventually a high-performing collaborative system may be obtained, but the learning/adaptation time needed by the human operator can be prohibitively long. Another dimension to consider is how much the agents are allowed to communicate. In some tasks, the human can be in charge and control when and how the robot collaboration is invoked; in other cases, the robot can indicate its plan or current state using a sensory modality not in use for the task at hand. In the extreme case, neither agent may be given any knowledge about the other.

One natural way of inducing effective human-robot collaboration is to adopt a human-in-the-loop setup, where the control signals of the agents are combined to generate the net motor output driving the plant. In such shared control systems, the goal is usually to combine the strengths of each partner to achieve a task performance higher than is possible by either partner alone.

Although shared control is a promising direction for effective human-robot collaboration, the robot and its control policy create a novel environment for the human operator, for which significant human sensorimotor learning is often needed. Therefore, the human side of the shared control framework needs to be studied in detail to transform the framework into a widely adopted technology. In this talk, I will present our work in this direction, which investigates human sensorimotor learning in shared vs. direct control of a robot arm, with no explicit communication, for balancing a sphere on a tray attached to the arm.
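As a minimal sketch of the shared control setup described in this abstract (illustrative only, not the experimental system used in the talk), the net motor output can be a blend of the human's command and an autonomous controller's command; the proportional policy and blending weight alpha below are hypothetical.

import numpy as np

def autonomous_policy(state, target, kp=2.0):
    """A simple proportional controller standing in for the robot's policy."""
    return kp * (np.asarray(target) - np.asarray(state))

def shared_control(u_human, state, target, alpha=0.5):
    """Blend human and robot commands; alpha = 1 recovers direct (human-only) control."""
    u_robot = autonomous_policy(state, target)
    return alpha * np.asarray(u_human) + (1.0 - alpha) * u_robot

# One control step of a 2-DOF plant under shared control.
u_net = shared_control(u_human=[0.3, 0.0], state=[0.1, -0.2], target=[0.0, 0.0], alpha=0.6)

How the blend is chosen, and whether it adapts over time, is exactly the kind of question the abstract raises: the human co-adapts to whatever policy the robot exposes.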


Tamim Asfour, Affordances-Supported Shared Autonomy for Humanoid Manipulation [pdf]

Abstract: Understanding and exploiting the interaction possibilities of robots with the environment provides a powerful mechanism for increasing the versatility of task execution. The talk presents our recent work on extracting affordance hypotheses from visual perception, their validation through force-based interaction, and their use to realize loco-manipulation tasks of a humanoid robot. We formalize affordances as belief functions over the space of end-effector poses and organize them in a hierarchy of whole-body loco-manipulation affordances. This formalization allows the consistent integration of affordance-related evidence from multiple perceptual modalities and sensorimotor experiences with different degrees of certainty. The resulting affordance-based scene representation allows the implementation of shared autonomy strategies, by which a human pilot controls a humanoid robot by selecting affordances according to their degrees of belief and by parameterizing the associated execution strategies by means of Object-Action Complexes.

Guilherme Maeda, Human-Robot Collaborative Skills: representation, learning, and assessment [pptx]

Abstract: Imitation learning has remarkably changed the way robots acquire new skills by making use of human demonstrations. In the context of human-robot collaboration, however, many problems in imitation learning arise. In this talk, I will discuss some of the challenges and our proposed solutions. I will talk about the representation of human-robot skills with Interaction Movement Primitives, how to use human observations and ergonomics to create personalized robot skills, and finally, how to incorporate skill assessment to enable active requests of human demonstrations.

Paolo Robuffo Giordano, Recent Results on Shared Control for Human-assisted Telemanipulation [pdf]

Abstract: Current and future robotics applications are expected to address increasingly complex tasks in increasingly unstructured environments, in co-existence or co-operation with humans. Achieving full autonomy is clearly a "holy grail" for the robotics community; however, one could easily argue that real full autonomy is, in practice, out of reach for many years to come. The gap between the cognitive skills (e.g., perception, decision making, general "scene understanding") of humans and those of today's most advanced robots is still huge. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will probably remain so for the next decades.

These considerations motivate research efforts in the (large) topic of shared control for complex robotic systems: on the one hand, empowering robots with a large degree of autonomy to allow them to operate effectively in non-trivial environments; on the other hand, including human users in the loop so that they retain (partial) control of some aspects of the overall robot behavior.

In this talk I will review several recent results on novel shared control architectures meant to blend together diverse fields of robot autonomy (sensing, planning, control) in order to provide a human operator with an easy "interface" for commanding the robot at a high level. Applications to the control of a dual-arm manipulator system for remote telemanipulation will be illustrated.

Christian Ott, Passivity-based control for physical human-robot interaction [pdf]

Abstract: In this talk I will highlight some new control-oriented developments towards safe and reliable physical human-robot interaction. Passivity represents a basic requirement for interaction with uncertain environments and humans. While passivity of the controller itself is in some cases too conservative, "passivity-based control" aims at passivity of the interaction port in the closed-loop system. In this talk I will discuss the passivity properties of several multi-task frameworks as a baseline for human-robot interaction. Moreover, I will present some recently proposed nullspace control actions that enhance safety without limiting the overall performance of the task execution. The presented methods are evaluated on several torque-controlled robots available at DLR.

Dan Popa, Adaptive interfaces for collaborative robots of the future [pdf]

Abstract: In this talk we describe recent research endeavors at our lab to develop new human-machine interfaces (HMIs) for collaborative robots (coRobots). Adaptation to human preferences and safety, enabled by distributed microfabricated sensors and adaptive physical human-robot interaction, are essential technologies for coRobots of the future. We use examples from our recent research to highlight surprising findings from encapsulating neuroadaptive controllers, traded controllers, and adaptive teleoperation controllers into new HMIs of coRobots designed for home and hospital environments. CoRobots in our lab use distributed sensors in polymer "skins" and exhibit a higher degree of interactivity, usability and personalization. Applications include nursing assistance, imitation for treatment of autism, and better interfaces for prosthetics.

Brenna D. Argall, Bridging Gaps in Lost Function with Human-in-the-Loop Robotic Manipulation

Abstract: It is an irony that often the more severe a person's motor impairment, the more challenging it is for them to operate the very assistive machines which might enhance their quality of life. Assistive manipulators pose a particular challenge because of their complexity: the dimensionality of the manipulator's control space in general far exceeds the dimensionality of the control signal the human operator is able to produce (because of motor impairment, or interface limitations). Introducing robotic autonomy and intelligence offers a solution that offloads some of the control burden from the operator. This human-in-the-loop system, however, differs in critical ways from traditional human-robot manipulation teams where the human serves as co-worker or teacher. This talk will overview these critical differences, and discuss algorithmic and experimental work in the argallab which aims to address them.
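To make the dimensionality mismatch concrete, here is a small hypothetical sketch (not the argallab's actual interface) of a 2-D interface signal driving a 7-DOF manipulator through switched control modes, the kind of burden that added autonomy aims to reduce; the mode layout and gain are assumptions.

import numpy as np

N_JOINTS = 7                                # command dimension far exceeds the 2-D input
MODES = [(0, 1), (2, 3), (4, 5), (6, 0)]    # each mode exposes two command dimensions

def teleop_command(joystick_xy, mode_index, gain=0.1):
    """Map a 2-D interface signal to a 7-D joint-velocity command in the active mode."""
    u = np.zeros(N_JOINTS)
    i, j = MODES[mode_index]
    u[i] = gain * joystick_xy[0]
    u[j] = gain * joystick_xy[1]
    return u

# The user must cycle through modes to reach all seven dimensions; shared autonomy
# instead tries to infer the intended motion and fill in the remaining dimensions.
u = teleop_command(joystick_xy=(0.8, -0.2), mode_index=1)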


Extended Abstracts/Posters:

Call for Papers:

We welcome the submission of two-page extended abstracts describing new or ongoing work. Final instructions for poster presentations and talks will be available on the workshop website after decision notifications have been sent. All abstracts will be accessible on the workshop website. Submissions should be in .pdf format. Please send submissions to yiannis[at]chalmers[dot]se with the subject line “IROS 2017 Workshop Submission”. For any questions or clarifications, please contact the organizers.

Important Dates

Abstract submission deadline: August 15, 2017
Acceptance notification: September 1, 2017
Final materials due: September 17, 2017
Workshop date: September 24, 2017

Organizers:

Yiannis Karayiannidis, Chalmers University of Technology and KTH, Sweden
Yasemin Bekiroglu, ABB Corporate Research, Sweden
Anthony Remazeilles, Tecnalia Research and Innovation, Health division, Spain
Justus Piater, University of Innsbruck, Austria
