March 22, Edinburgh, 14:00-15:30
Teaching by Demonstration for Industrial Applications
Autonomy in robotic grasping and manipulation remains an elusive goal, largely due to the uncertainties that arise before and during interaction with real objects. Over the last 30 years, robots have brought remarkable efficiency gains to industrial manufacturers, mainly in the automotive industry. Traditional industrial robots depend heavily on hard automation, which requires pre-specified fixtures and time-consuming programming and reprogramming by experienced software engineers. Assembly is a promising industrial application for robots, but in practice it has proven challenging to automate due to, e.g., complex materials, precise grasping requirements, part variations, operations requiring high precision (snap fits), operations requiring special motions (twist insertions), and wear and tear of the assembly equipment. While robotic assembly does exist, it has been applied in only a fraction of the potential cases. As a result, even expensive products produced in fairly large volumes are still assembled manually in low-wage countries under harsh conditions. A potential path towards a smooth transition to higher levels of autonomy is to include human teachers who provide feedback through demonstration. Open questions in this domain include how to design solutions that reduce the experience and skill level required of the robot programmer, make industrial robots accessible to new users, guide users through the teaching process, and learn from experience. The aim of the workshop is to connect researchers from different backgrounds, such as neuroscience (perception and motor control) and robotics (perception, planning, control, learning and design), in order to set the basis for and define the core open problems in this area. Furthermore, we want to discuss the advantages, limitations, challenges and progress of different approaches pertaining to the workshop topic.
- Naresh Marturi, KUKA, UK
- Daniel Braun, KUKA, Germany
- Paolo Rocco, Politecnico di Milano, Italy
- Aude Billard, EPFL, Switzerland
- Carl Henrik Ek, Bristol University, UK
- Yasemin Bekiroglu, ABB AB Corporate Research, Sweden
- Dimitrios Tzovaras, Information Technologies Institute, Greece
- Zoe Doulgeri, Aristotle University, Greece
- Jacek Malec and Elin Anna Topp, Lund University, Sweden
- Joseph McIntyre, Tecnalia, Spain
- Guilherme Maeda, TU Darmstadt, Germany
14:00 – 14:02 Introduction by Yasemin Bekiroglu
14:02 – 14:46 Statements from the Speakers
- Yasemin Bekiroglu, SARAFun: Smart Assembly Robot with Advanced Functionalities
- Jose Medina and Aude Billard, Towards teaching from demonstration in industrial collaborative environments
- Guilherme Maeda and Jan Peters, Learning Interaction Primitives from Demonstration for Future Industrial Applications
- Carl Henrik Ek, Data driven learning in Robotics
- Jacek Malec and Elin Anna Topp, You can only learn what you already know
- Naresh Marturi, Vision-guided State Estimation for Industrial Robots: Towards Industrial Random Bin-picking
- Dimitrios Tzovaras, Developing systems with advanced perception, cognition, and interaction capabilities for learning a robotic assembly in one day
- Daniel Braun, RobDREAM – Optimizing Robot Performance while DREAMing
- Paolo Rocco, Accurate sensorless lead-through programming for lightweight robots in structured environments
- Zoe Doulgeri, Teaching Assembly Forces: The case of successful snap assembly detection
- Joseph McIntyre, Understanding Human Skills
14:46 – 15:30 Discussion and Conclusions, moderated by Joseph McIntyre
Daniel Braun, RobDREAM – achieving decent performance for new robot programming paradigms
Abstract: The RobDREAM action targets the challenges that arise when setting up and operating flexible mobile manipulation applications with robots. The robotic systems necessary for automating such tasks are highly complex, and setting them up involves tuning an ever-growing number of parameters of the included methods and algorithms. The first goal of RobDREAM is to simplify the setup phase, and the efforts of the consortium could also benefit from teaching-by-demonstration methods. The second objective of RobDREAM is the automatic performance optimization of an application once it is set up. This is achieved through offline optimization based on data collected during the operation of the robotic application – the similarity to processes in human sleep led to naming this step DREAMing. We deem this step essential for achieving decent performance in any complex autonomous system (Slides).
Biography: Dr. Daniel Braun received his „Diplom-Ingenieur“ in Electrical Engineering and Information Technology in 2005 from the University of Karlsruhe (TH). In 2012 he received his doctorate in Informatics from the Karlsruhe Institute of Technology (KIT), Germany. From 2005 to 2012 he was a member of the Institute for Process Control and Robotics at KIT, where he was active in the fields of process systems modelling and robotics as well as sensor systems for robot safety applications and robot manipulation. In 2013 he joined KUKA as a project manager for cooperative research projects. Under his lead, KUKA has participated in and coordinated several national and EC-wide projects targeting complex robotic mobile manipulation scenarios and the challenges that come with applications in this domain. Contact: Daniel Braun, phone: +49 821 797-4863, Email: Daniel.Braun@kuka.com
Paolo Rocco, Accurate sensorless lead-through programming for lightweight robots in structured environments
Abstract: Lead-through programming techniques (where the operator manually guides the robot to teach new positions) reduce the complexity of robot programming. However, they may suffer from a lack of accuracy, the need to ensure the safety of the human operator, and the need for additional force/torque sensors, which are expensive, fragile and difficult to integrate into the robot controller. This talk will discuss an approach to lead-through robot programming that does not rely on dedicated hardware. A voting system identifies the largest Cartesian component of the force/torque applied to the manipulator, in order to obtain accurate lead-through programming via admittance control and constraint-based optimization with obstacle avoidance. Experimental validation of the approach will be shown (Slides).
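The core idea of restricting an admittance law to a single dominant Cartesian direction can be sketched in a few lines. This is an illustrative toy example, not the authors' implementation; the mass, damping and voting-threshold values are assumptions chosen for clarity.

```python
import numpy as np

def dominant_axis(wrench, threshold=2.0):
    """Voting step (illustrative): pick the single Cartesian component
    of the applied wrench [Fx, Fy, Fz, Tx, Ty, Tz] that dominates the
    others, or None if no component clearly stands out."""
    mags = np.abs(wrench)
    i = int(np.argmax(mags))
    others = np.delete(mags, i)
    if others.max() > 0 and mags[i] / others.max() < threshold:
        return None  # ambiguous: no clear dominant direction
    return i

def admittance_step(wrench, velocity, dt, mass=5.0, damping=20.0):
    """One step of a simple admittance law M*a + D*v = F, with the
    force projected onto the dominant axis so the taught motion stays
    accurate along one direction at a time."""
    axis = dominant_axis(wrench)
    f = np.zeros(6)
    if axis is not None:
        f[axis] = wrench[axis]
    accel = (f - damping * velocity) / mass
    return velocity + accel * dt

v = np.zeros(6)
w = np.array([8.0, 0.5, 0.3, 0.0, 0.1, 0.0])  # operator pushes mainly along x
v = admittance_step(w, v, dt=0.01)             # motion only along x results
```

In a real controller the wrench would come from joint-torque estimates rather than a wrist sensor, and the commanded velocity would additionally be filtered through the constraint-based optimization mentioned above.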
Biography: Prof. Paolo Rocco is a full professor in automatic control and robotics at Politecnico di Milano, Italy, where he serves as Chair of the BSc and MSc Programs on Automation and Control Engineering. He is also a co-founder of Smart Robots, a spin-off company of Politecnico di Milano. A Senior Member of IEEE, he has served in various positions in the Editorial Boards of journals and conferences. At present, he serves as a Senior Editor for the IEEE Robotics and Automation Letters and as an Associate Editor for the IFAC journal Mechatronics. He has been in charge of several research projects with industrial partners and public bodies. Currently his research interests concern a few aspects related to industrial robotics, with particular focus on safe and productive human-robot interaction. He is the author of about 150 papers in the areas of robotics, motion control, and mechatronics. Web: http://home.deib.polimi.it/rocco
Jose Ramon Medina and Aude Billard, Towards teaching from demonstration in industrial collaborative environments
Abstract: Teaching by demonstration has proven to be a successful method for programming simple robot tasks in industrial environments. However, when robots share their workspace with humans, the requisites of fast reaction, continuous adaptation and high compliance must be taken into account, not only at the control stage but also at the learning stage. In this talk we first briefly present the methodology applied in our lab and first results for teaching position-based and force-based tasks, with emphasis on fast reaction and adaptation. We then outline the most interesting challenges in the field concerning these specific scenarios involving collaboration with humans (Slides). Web: http://lasa.epfl.ch
Carl Henrik Ek, Data driven learning in robotics
Abstract: Machine learning has in the last couple of years conquered new domains and pushed the boundaries of what we believed was possible to solve with data-driven learning. However, in many ways robotics has not had its share of the cake. The field of robotics takes a special pride in experiments performed in the real world. This poses a whole new set of challenges, as the distribution of environments rapidly grows so complex that we cannot parametrise it from data alone. The increased performance in applications of machine learning comes not from methodological developments but rather from the availability of data. Where does this leave robotics? In this talk I want to highlight the demands robotics places on machine learning and where the current shortcomings and developments are focused (Slides).
Biography: Dr. Carl Henrik Ek is a lecturer at the University of Bristol. He is interested in developing techniques for data-efficient learning and interpretable learning with a focus on Bayesian non-parametrics. Web: http://carlhenrik.com
Yasemin Bekiroglu, SARAFun: Smart Assembly Robot with Advanced Functionalities
Abstract: The SARAFun project focuses on enabling a non-expert user to integrate a new bi-manual assembly task on a YuMi robot in less than a day. The overall conceptual approach is that the robot should be capable of learning and executing assembly tasks in a human-like manner. To achieve this, we study how human assembly workers learn and perform assembly tasks, in order to model and transfer assembly skills. The robot will learn assembly tasks, such as insertion or folding, by observing the task being performed by a human instructor. The robot will then analyze the task, generate an assembly program including exception handling, and design 3D-printable fingers tailored for gripping the parts at hand. Aided by the human instructor, the robot will finally learn to perform the actual assembly task, relying on sensory feedback from vision, force and tactile sensing as well as physical human-robot interaction. During this phase the robot will gradually improve its understanding of the assembly at hand until it is capable of performing the assembly in a fast and robust manner (Slides).
Biography: Dr. Yasemin Bekiroglu holds a Ph.D. in Computer Science from the Royal Institute of Technology (KTH), Sweden. She conducted postdoctoral research at KTH and the University of Birmingham (2012-2016). She was involved in several European projects, such as CogX, eSMCs, ROBOHOW and RoMaNs, before starting as a Scientist at ABB Corporate Research (2016). Yasemin’s research interests include robotics, computer vision and machine learning. More specifically, she is interested in learning-based approaches for robot grasping and manipulation using multisensory data. She is the coordinator of the project SARAFun (Smart Assembly Robots with Advanced Functionalities, No. 644938, http://sarafun.eu). Web: http://www.yaseminbekiroglu.com
Dimitrios Tzovaras, Developing systems with advanced perception, cognition, and interaction capabilities for learning a robotic assembly in one day
Abstract: Innovative technologies on robotic perception and cognition developed at CERTH/ITI are presented. The integration of these technologies into a robotic system will enable non-expert users to teach the system a new assembly task in less than a day, without the use of conventional programming methods. A basic component of the integrated system is a web-based Human-Robot Interaction (HRI) interface that helps the user demonstrate the task and control the teaching procedure in an intuitive and straightforward manner. The motivation behind this work is to help automate assembly execution, since even expensive products produced in large volumes are still assembled manually in low-wage countries under harsh conditions (Slides).
Biography: Dr. Dimitrios Tzovaras is a Senior Researcher and Director at CERTH/ITI. He received the Diploma in Electrical Engineering and the Ph.D. in 2D and 3D Image Compression from the Aristotle University of Thessaloniki, Greece, in 1992 and 1997, respectively. Prior to his current position, he was a Senior Researcher in the Information Processing Laboratory at the Electrical and Computer Engineering Department of the Aristotle University of Thessaloniki. His main research interests include machine learning, artificial intelligence, network and visual analytics for network security, computer security, data fusion, biometric security and virtual reality. He is the author or co-author of over 110 articles in refereed journals and over 290 papers in international conferences. Dr. Tzovaras has been an Associate Editor of the Journal of Applied Signal Processing (JASP), the Journal on Advances in Multimedia of EURASIP and the IEEE Transactions on Image Processing, and a Senior Associate Editor of the IEEE Signal Processing Letters. Since 1992, Dr. Tzovaras has been involved in more than 85 European projects, funded by the EC and the Greek Ministry of Research and Technology. He has a very large management record (project coordinator and scientific and technical manager) in 19 projects (7 H2020, 8 FP7, 3 FP6 and 1 nationally funded project).
Zoe Doulgeri, Teaching Assembly forces: The case of successful snap assembly detection
Abstract: Teaching assembly forces is a challenging issue within the context of fast assembly-robot deployment, as it does not involve gross motions that can be captured by visual perception systems from human demonstrations. The feasibility of controlling an assembly task by teaching the correct assembly forces to a robotic system is an open issue. Collaborative human-robot assembly and machine learning could be used in this direction. This presentation aims to discuss the most promising directions by seeking answers to a number of questions, some of which are listed below, and to describe a first attempt, developed within the SARAFun project, at the subproblem of successful snap assembly detection. Can a robot learn a pattern of forces through human demonstration? Which approaches are more promising for such training? Is it possible to enable a robot to operate as a smart sensor and learn from human-robot collaborative experimentation? Can these forces be mapped directly to a kinematic control strategy? Is such an approach financially viable for SMEs as well as large-scale industries (Slides)?
Biography: Prof. Zoe Doulgeri received her diploma from the Electrical and Computer Engineering (ECE) department of the Aristotle University of Thessaloniki (AUTH), Greece, and her MSc in Control Systems and her PhD in production scheduling of flexible manufacturing systems from Imperial College London, UK. She is currently a Professor of Robotics and Control of Manufacturing Systems in the Electrical and Computer Engineering Dept. of AUTH and a collaborating professor with the research institute CERTH-ITI. She is the author of more than 100 scientific papers in international journals and refereed conference proceedings. She served as an Associate Editor of the Journal of Intelligent and Robotic Systems (2012-2015) and is currently an Associate Editor of Frontiers in Robotics and AI and of the IEEE Robotics and Automation Letters. She has participated in 12 EU-funded or co-funded projects targeting the robotic handling of flexible materials, was the principal investigator of the ARISTEIA I national project PIROS (http://piros.web.auth.gr) targeting physically interactive robot services, and is currently participating in the H2020 projects SARAFun, RAMCIP and SMARTsurg as a CERTH/ITI associate. She has served as a research proposal evaluator for the EU (FP5, FP7) and at the national level. Her research interests include physical human-robot interaction, object grasping and manipulation, redundant and flexible joint manipulators, and model-free control of robotic systems with prescribed performance guarantees. She is an IEEE member (RAS and CS), the representative of Greece in the European Control Association (EUCA) and a member of the Technical Chamber of Greece. Web: http://ee.auth.gr/en/school/faculty-staff/electronics-computers-department/doulgeri-zoe/
Jacek Malec and Elin A. Topp, You can only learn what you already know
Abstract: In an industrial context, a 98% chance of success is a failure. Current data-driven machine learning cannot provide guarantees, even less so in the case of demonstration-based learning. In order to learn something useful for industry, the knowledge framework needs to be there from the beginning, and the teaching process needs to take all possible cues from the demonstrating human, possibly in a multi-modal, mixed-initiative way. The challenge consists of preparing the framework for successful learning (Slides-videos will be made available soon).
Biography: Dr. Elin A. Topp has held a Senior Lecturer position since 2012 in the group for Robotics and Semantic Systems, Department of Computer Science, at Lund University in Lund, Sweden. She obtained her Ph.D. and Licentiate degrees in 2009 and 2006, respectively, from the Royal Institute of Technology (KTH) in Stockholm, Sweden, where she worked on different aspects of Human Augmented Mapping in the Computational Vision and Active Perception group at the Centre for Autonomous Systems (CVAP/CAS). Originally from Germany, Dr. Topp received her MSc degree from Karlsruhe University, now the Karlsruhe Institute of Technology, in 2004. She has served as a reviewer for various journals and conferences in the fields of robotics, specifically human-robot interaction, and intelligent systems, such as IROS, ROMAN, HRI, SORO, and TIIS. Web: http://cs.lth.se/elin_anna_topp/
Dr. Jacek Malec received his M.Sc. degree in electrical engineering and his Ph.D. degree in artificial intelligence from Wrocław University of Technology, Wrocław, Poland, in 1981 and 1987, respectively. Since then he has worked as a researcher and lecturer at Wrocław University of Technology, Poland, Linköping University, Sweden, and Mälardalen University, Sweden, before settling in Lund in 1999, where he is currently a full professor at the Department of Computer Science. His research interests are in artificial intelligence (in particular knowledge representation and reasoning), robotics (in particular intelligent industrial and service robotics), and distributed systems (in particular multi-agent, cloud-based systems). He is a Senior Member of IEEE, a member of SAIS (Swedish AI Society), and a member of AAAI (Association for the Advancement of AI). Web: http://cs.lth.se/jacek_malec/
Joseph McIntyre, Understanding human motor skills as a key to teaching robots through demonstration
Abstract: One key to achieving robot programming through demonstration is to understand how humans perform these tasks. It is not sufficient to simply mimic the movements of the hands and arms of the human. Assembly requires the application of forces and torques, for instance to snap-fit two adjoining pieces. A robot cannot “see” these forces, nor can it understand what control policy the human is using to decide whether to apply more force in a given situation. But if one knows in advance what control policies or skills the human possesses, the robot is more likely to be able to recognise which skill the human is performing and tune the pre-acquired skill to the current circumstances. Furthermore, humans are remarkably adaptable when it comes to manipulation and assembly tasks. If one can successfully identify and transfer these skills from human to robot, one will not only achieve better, more efficient robot programming, but our understanding of human motor behavior will also have advanced significantly (Slides).
Biography: Dr. Joseph McIntyre, Ph.D. is an IKERBASQUE Research Professor. His research focuses (1) on questions of how the nervous system regulates the forces and torques applied by the hand when interacting with the physical environment and (2) on questions of how the brain integrates information from multiple sensory organs (eyes, proprioceptors, inner ear) to provide optimal estimation of the body’s posture in space and robust predictive control of movement. In collaboration with neuroscientists, engineers, therapists and clinicians he contributes theoretical underpinnings and basic knowledge to future clinical and technological developments in the field of rehabilitation engineering. Dr. McIntyre was trained as an engineer and as a biologist as an undergraduate at Caltech and carried out his Ph.D. work at M.I.T. in the field of Computational Neuroscience, under the direction of N. Hogan, E. Bizzi, F.A. Mussa-Ivaldi and C.G. Atkeson. Since that time he has carried out studies in Europe on human motor behaviour and psychophysics at the Collège de France in Paris, at the Santa Lucia Scientific Foundation in Rome, and at the CNRS – Université Paris Descartes in Paris. In January 2014 Dr. McIntyre accepted an Ikerbasque Research Professorship to work with the Tecnalia Health Division on the development of robotic technologies for health and rehabilitation. Website: http://www.tecnalia.com/en/robotics/home/home.htm
Guilherme Maeda and Jan Peters, Learning Interaction Primitives from Demonstration for Future Industrial Applications
Abstract: Robots that can be programmed by a non-expert user to execute tasks in collaboration have the potential to revolutionize the European industrial scenario. However, the sense-plan-act paradigm established by industrial robotics does not account for the interaction with humans, and methods to program collaborative robots are still unclear. In this talk, I will introduce interaction primitives, a data-driven approach based on the use of imitation learning, for learning movement primitives for human-robot collaboration. The core idea is to learn a representation of joint trajectories of a robot and a human from demonstrations. The correlation between the learned trajectories is used to infer the appropriate robot task and motion based on human observations. Applications of the method and future directions in the industrial context will be discussed (Slides).
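The conditioning step behind this idea can be sketched in a few lines. The following is an illustrative toy example under simplifying assumptions (a plain joint Gaussian over low-dimensional trajectory parameters, fitted to synthetic "demonstrations"), not the interaction-primitives implementation itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": each row is [human params | robot params].
# The robot parameters are a linear function of the human's plus noise,
# standing in for the correlation learned from real paired trajectories.
n_demos, dh, dr = 50, 3, 3
H = rng.normal(size=(n_demos, dh))
A = rng.normal(size=(dh, dr))
R = H @ A + 0.05 * rng.normal(size=(n_demos, dr))
X = np.hstack([H, R])

# Fit a joint Gaussian over the concatenated parameters.
mu = X.mean(axis=0)
Sigma = np.cov(X, rowvar=False)

def infer_robot(human_obs):
    """Condition the joint Gaussian on the observed human parameters:
    mu_r + S_rh S_hh^{-1} (h - mu_h)."""
    S_hh = Sigma[:dh, :dh]
    S_rh = Sigma[dh:, :dh]
    return mu[dh:] + S_rh @ np.linalg.solve(S_hh, human_obs - mu[:dh])

pred = infer_robot(H[0])  # should land near the demonstrated robot params R[0]
```

In the actual approach the parameters would be weights of movement primitives fitted to time-aligned trajectories, and conditioning would use a partial, noisy human observation; the Gaussian algebra stays the same.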
Biography: Dr. Guilherme Maeda is a research scientist at the Intelligent Autonomous Systems (IAS) group at TU Darmstadt. He is the IAS team leader for the EU-funded FP7 3rd Hand Robot Project. His goal is to enable robots to learn challenging, collaborative tasks by interacting with humans and with the environment. To this end, his research bridges the areas of control, learning, and human-robot collaboration. Guilherme received his PhD from the Australian Centre for Field Robotics (ACFR), under the supervision of Hugh Durrant-Whyte, Surya Singh, David Rye, and Ian Manchester. Motivated by the mining industry, his work investigated the combined use of data-driven iterative learning methods and state estimation applied to autonomous excavation. From 2005 to 2007 he completed his master's degree at the Tokyo Institute of Technology (TITECH) in the field of precision positioning control. Guilherme also worked from 2007 to 2009 at IHI Corporation, researching novel mechanical designs and control of heavy industrial equipment such as industrial end-effectors and large-scale roller printers. Web: http://www.jan-peters.net
Naresh Marturi, Vision-guided state estimation of industrial robots
Abstract: Random bin-picking is one of the major applications the present automotive industry is interested in. Great progress has already been made in picking structured, semi-structured and even random parts out of a bin. Nevertheless, the majority of demonstrated applications handle objects with simple (parametric) shapes, while bin-picking of complex-shaped objects has always been a vital challenge to tackle. The major difficulties are detecting randomly oriented objects that are mostly occluded and reflective, and planning collision-free grasps. In this talk, I will discuss various issues related to accomplishing random bin-picking of complex objects and introduce a novel deep-learning-based part-and-pose detection scheme. I will then discuss various problems that we are currently facing in grasp planning (Slides).
Biography: Dr. Naresh Marturi has been a KTP Robot Vision Scientist with KUKA Robotics UK Ltd. and the University of Birmingham since 2015. He obtained his Ph.D. degree in automatic control from the Université de Franche-Comté, Besançon, France, in 2013 and his M.S. degree in robotics and intelligent systems from Örebro University, Örebro, Sweden, in 2010. After his Ph.D. he spent one year at the FEMTO-ST Institute, France, as a postdoctoral researcher. His primary research interests are in the fields of machine vision, industrial robotics, human-robot interaction, vision-based robotic control and deep learning. At KUKA UK, he is primarily responsible for developing industrially robust vision-guided applications. He also possesses solid knowledge and research experience in developing vision-guided techniques at the micro and nano scales. So far, he has authored over two dozen scientific articles in major robotics and vision conferences and journals. He has also mentored and co-supervised many Ph.D. students and master's interns. Beyond research, Dr. Marturi has over 10 years of cross-platform programming experience in various programming languages. Prior to his master's, he worked as a software developer and team lead for a multinational company in India.
Yasemin Bekiroglu, ABB AB Corporate Research, Sweden
Zoe Doulgeri, Aristotle University of Thessaloniki, Greece
Jacek Malec, Lund University, Sweden
Elin A. Topp, Lund University, Sweden
Joseph McIntyre, Tecnalia, Spain
Photos from the Workshop