ICS Research Seminar: From a Multi-modal Intelligent Cell to a Self-organizing Robotic Skin – Realizing Self and Enriching Robot Tactile Interaction
Philipp Mittendorfer: "From a Multi-modal Intelligent Cell to a Self-organizing Robotic Skin – Realizing Self and Enriching Robot Tactile Interaction"
Human skin provides numerous inspirations for robots, supplying the whole body surface with multi-modal tactile sensitivity. Unlike a robot relying purely on joint information or vision, a robot equipped with artificial skin has access to a much richer information set. The challenges of efficiently deploying, organizing, and utilizing a large number of distributed multi-modal sensors have so far prevented an effective use of artificial skin technology in robotics.

In this thesis, we introduce a novel approach to creating multi-modal artificial skin and a novel approach to self-organizing the body representation of a robot. Our modular artificial skin is built by placing similar skin cells side by side in a flexible carrier material. Every skin cell is a self-contained system with a variety of sensors as well as signal conversion, processing, and communication capabilities. The advantages of our modular approach are its robustness, scalability, and transferability to various robotic systems.

We developed various self-organizing features to automatically handle a potentially large number of skin cells on a large surface area. Automatic networking algorithms explore the available skin cells and connections, distribute unique identifiers, and provide robust, adaptive real-time communication. Mounted on a robot, our framework systematically explores and models the robot's body schema, inferring the robot's own kinematic and volumetric model from an egocentric perspective. To speed up the process, and to avoid potentially harmful contacts, we only utilize low-range, open-loop motions of the robot and the accelerometers embedded in our skin cells. A first algorithm explores the kinematic dependencies of body parts and joints, allocating actuators to joints and skin cells to body parts. A 3D reconstruction algorithm then computes the volumetric surface model of each body part, utilizing relative rotation estimates based on gravity and a topographic map inferred from the cell-to-cell connections.
By turning skin patches into active visual markers, these distributed surface models can be visually combined into one homogeneous body representation, additionally joining the tactile and visual spaces. A kinematic calibration algorithm finally estimates the parameters of the self-assembled kinematic model. To conclude, we show exemplary applications of the prototype skin on industrial robot arms and on the upper body of a humanoid robot. These examples demonstrate the benefits of artificial skin for human-robot interaction, multi-modal contact control, safety, and object manipulation.
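The networking step described above, exploring available skin cells and connections and distributing unique identifiers, can be pictured in simplified form as a breadth-first traversal of the cell-to-cell connection graph; the hop counts it produces are the kind of topographic information the reconstruction step relies on. This is a minimal sketch under assumed names and a dictionary-based graph, not the skin's actual discovery protocol:

```python
from collections import deque

def assign_ids_and_hops(neighbors, root):
    """Breadth-first exploration of the cell-to-cell connection graph.

    `neighbors` maps each cell's hardware address to the addresses reachable
    over its local ports (illustrative; the real skin discovers these links
    at run time).  Returns unique sequential IDs for every reachable cell and
    its hop distance from the interface cell `root` -- the hop distances form
    a simple topographic map of the skin patch.
    """
    ids, hops = {root: 0}, {root: 0}
    queue = deque([root])
    next_id = 1
    while queue:
        cell = queue.popleft()
        for n in neighbors[cell]:
            if n not in ids:          # first time this cell is discovered
                ids[n] = next_id
                next_id += 1
                hops[n] = hops[cell] + 1
                queue.append(n)
    return ids, hops
```

Because the traversal is breadth-first, each cell is labeled with its minimal hop count, so neighboring cells always differ by at most one hop, a property a surface-reconstruction step can exploit.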
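The gravity-based relative rotation estimate can likewise be illustrated with a small sketch: assuming each accelerometer measures the gravity direction while the robot is static, the rotation aligning one cell's reading with another's is recoverable up to the unresolved yaw about the gravity axis (which additional structure, such as the topographic map, must fix). The function below is a generic Rodrigues-formula construction, labeled names included, and not the thesis implementation:

```python
import numpy as np

def rotation_from_gravity(g_a, g_b):
    """Rotation R such that R @ (normalized g_a) == normalized g_b.

    g_a, g_b: gravity vectors measured by two accelerometers in their own
    frames.  Gravity constrains only two rotational degrees of freedom, so
    the rotation about the gravity axis (yaw) remains ambiguous.
    """
    a = np.asarray(g_a, dtype=float)
    a = a / np.linalg.norm(a)
    b = np.asarray(g_b, dtype=float)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)           # rotation axis (unnormalized)
    c = float(np.dot(a, b))      # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-12:
        if c > 0.0:              # vectors already aligned
            return np.eye(3)
        # antiparallel: rotate 180 degrees about any axis perpendicular to a
        u = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(u) < 1e-12:
            u = np.cross(a, [0.0, 1.0, 0.0])
        u = u / np.linalg.norm(u)
        return 2.0 * np.outer(u, u) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues formula for the minimal rotation taking a onto b
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))
```

This returns the minimal (smallest-angle) rotation between the two gravity readings; any additional yaw must come from other cues, which is exactly why the reconstruction combines gravity with the connection topology.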
June 10, 2015 at 10:45 am, Karlstr. 45, 2nd floor