4th Workshop on Representing and Manipulating Deformable Objects @ ICRA2024
PACIFICO Yokohama, North Area, Room G304
Deformable objects (DOs) are ubiquitous in human environments. From food, clothes, and cables to body tissues, DOs are present in personal households, industrial environments, agricultural settings, and hospital rooms, to mention a few. Despite the ease with which humans reliably manipulate them, DOs still pose a major challenge for robotics. Specifically, we identify the following open questions within the research community: i) How can we feasibly represent the state of a deformable object? ii) How do we accurately model and simulate its complex and non-linear dynamics? iii) Which hardware tools and platforms are most suitable for grasping and manipulating them? In continuation of the workshops held at ICRA in 2021, 2022, and 2023, we aim to once again gather the community in pursuit of answers to these and further questions on DOs. Our goal is to facilitate connections among scientists across diverse subfields of robotics, such as perception, simulation, control, and mechanics, spanning various stages of their careers and operating within different professional environments, including academia, industry, and research centers. Additionally, we aim to focus on the tangible advancements made in the field since the first workshop edition in 2021. We believe that this analysis will help identify promising directions and ultimately pave the way for practical, real-world solutions.
The workshop aims to explore the different aspects that will allow robots to autonomously manipulate deformable objects with greater ability and generalization. Enabling such manipulation is crucial for a variety of domains and tasks, including domestic, industrial, and surgical contexts, all of which involve many forms of deformable objects. However, the complexity of representing these objects and modeling their dynamics means there is currently no unified solution that can be adapted to a wide range of objects.
In the past few years, there has also been increasing interest in applying foundation models to robotic manipulation, including the use of large pre-trained vision models, large language models (LLMs), and vision-language models (VLMs) for more sample-efficient learning and for solving language-conditioned tasks. Additionally, recent advances in imitation learning, reinforcement learning, and 3D representation models have showcased robots learning to perform more complex, dexterous, and long-horizon tasks. The release of new simulators, datasets, and low-cost robotic hardware is lowering the barrier for reproducible research, benchmarking, and reuse of data. In this workshop, we will encourage discussions on how these recent advances can improve deformable object manipulation.
The workshop will focus on, but is not limited to, the following topics for deformable object manipulation:
Representation and state estimation
Simulation and modeling
Transfer from simulation to reality
Learning to manipulate using data-driven methods such as reinforcement learning and learning from demonstrations
Perception: state tracking, parameter identification, property detection (e.g., landmarks for garments) and classification, etc.
Control, visual servoing and planning
Use of foundation models, such as large vision and language models, and associated large datasets
David Held [Remote] - Spatially-aware Robot Learning for Deformable Object Manipulation [Video]
09:30 - 10:15
Spotlight talks #1
Alessio Caporali, Piotr Kicki, Kevin Galassi, Riccardo Zanella, Krzysztof Walas and Gianluca Palli - Deformable Linear Objects Manipulation with Online Model Parameters Estimation [PDF][Video]
Alessio Caporali, Kevin Galassi, Matteo Pantano, Gianluca Palli - Sparse to Dense: Robotic Perception of Deformable Objects via Foundation Models [PDF][Video]
Mingrui Yu, Kangchen Lv, Changhao Wang, Yongpeng Jiang, Masayoshi Tomizuka and Xiang Li - Generalizable Whole-Body Global Manipulation of Deformable Linear Objects by Dual-Arm Robot in 3-D Constrained Environments [PDF][Video]
Yuhong Deng, David Hsu - Generalizable Clothes Manipulation with Large Language Model [PDF][Video]
Kejia Chen*, Zhenshan Bing*, Fan Wu*, Yansong Wu, Liding Zhang, Sami Haddadin, Alois Knoll - Real-time Contact State Estimation in Shape Control of Deformable Linear Objects under Small Environmental Constraints [PDF][Video]
Kaifeng Zhang*, Baoyu Li*, Kris Hauser, Yunzhu Li - AdaptiGraph: Material-Adaptive Graph-Based Neural Dynamics for Robotic Manipulation [PDF][Video]
Mingrui Yu, Boyuan Liang, Xiang Zhang, Xinghao Zhu, Xiang Li, and Masayoshi Tomizuka - In-Hand Following of Deformable Linear Objects Using Dexterous Fingers with Tactile Sensing [PDF][Video]
Haoran Lu*, Yitong Li*, Ruihai Wu*, Chuanruo Ning, Yan Shen, Hao Dong - UniGarment: A Unified Simulation and Benchmark for Garment Manipulation [PDF][Video]
Simeon Adebola*, Tara Sadjadpour*, Karim El-Refai*, Will Panitch, Zehan Ma, Roy Lin, Tianshuang Qiu, Shreya Ganti, Charlotte Le, Jaimyn Drake, and Ken Goldberg - Automating Deformable Gasket Assembly [PDF][Video]
Alberta Longhini, Michael C. Welle, Zackory Erickson, and Danica Kragic - AdaFold: Adapting Folding Trajectories of Cloths via Feedback-loop Manipulation [PDF][Video]
10:15 - 10:45
Coffee break + Poster presentation
10:45 - 11:15
Chelsea Finn [Remote] - Learning Long-Horizon Bi-Manual Tasks involving Deformable Object Manipulation [Video]
11:15 – 11:45
Michael Yip - Deformable Manipulation for Autonomous Surgical Robots [Video]
11:45 – 12:15
Gonzalo Lopez-Nicolas - Multi-scale analysis for shape control of texture-less objects
12:15 - 14:00
Lunch + extra Poster time
14:00 – 14:30
Jeffrey Ichnowski - Deformable Manipulator for Deformable Manipulation [Video]
14:30 – 15:00
David Hsu - Differentiable Particles for General-Purpose Deformable Object Manipulation [Video]
15:00 – 15:45
Spotlight talks #2
Zeqing Zhang, Guanqi Chen, Wentao Chen, Ruixing Jia, Liangjun Zhang and Jia Pan - GmClass: Granular Material Classification through Force Feedback of Robotic Manipulation [PDF][Video]
Shaoxiong Yao, Yifan Zhu, and Kris Hauser - Structured Bayesian Meta-Learning for Data-Efficient Visual-Tactile Model Estimation [PDF][Video]
Raquel Marcos-Saavedra, Miguel Aranda, and Gonzalo López-Nicolás - Multirobot transport of deformable objects using deformation modes [PDF][Video]
Martin Filliung, Juliette Drupt, Charly Peraud, Claire Dune, Nicolas Boizot, Andrew Comport, Cedric Anthierens, Vincent Hugel - An Augmented Catenary Model for Underwater Tethered Robots [PDF][Video]
Luca Beber, Edoardo Lamon, Davide Nardi, Daniele Fontanelli, Matteo Saveriano, Luigi Palopoli - A Passive Variable Impedance Control Strategy with Viscoelastic Parameters Estimation of Soft Tissues for Safe Ultrasonography [PDF][Video]
Rui Liu, Amisha Bhaskar, Pratap Tokekar - Adaptive Visual Imitation Learning for Robotic Assisted Feeding Across Varied Bowl Configurations and Food Types [PDF][Video]
Jingyi Xiang, Holly Dinkel, Harry Zhao, Naixiang Gao, Brian Coltin, Trey Smith, Timothy Bretl - TrackDLO: Tracking Deformable Linear Objects Under Occlusion with Motion Coherence [PDF][Video]
Chikaha Tsuji, Enrique Coronado, Pablo Osorio and Gentiane Venture - Adaptive contact-rich manipulation through few-shot imitation learning with tactile feedback and pre-trained object representations [PDF][Video]
Paul Maria Scheikl, Nicolas Schreiber, Christoph Haas, Niklas Freymuth, Gerhard Neumann, Rudolf Lioutikov, and Franziska Mathis-Ullrich - Movement Primitive Diffusion: Learning Gentle Robotic Manipulation of Deformable Objects [PDF][Video]
Ruihai Wu*, Haoran Lu*, Yiyan Wang, Yubo Wang, Hao Dong - UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence [PDF][Video]
Hiroto Arasaki, Sena Takahashi, and Akio Namiki - Realtime Paper Shape Estimation for Origami Robot System [PDF][Video]
We invite participants to submit extended abstracts of 3+n pages in the IEEE conference style, where the n additional pages are reserved for the bibliography (no page limit on references).
Submissions will be reviewed by experts in their respective fields. Accepted abstracts will be made available on the workshop website but will not appear in the official IEEE conference proceedings.
Participants are encouraged to submit their recent work on the topics of interest mentioned above.
Contributions are encouraged, but are not required, to be original.
The review process will be single-blind, meaning the submitted paper does not need to be anonymized.
We are happy to announce the WDO Best Abstract Award, sponsored by the IEEE RAS Technical Committee on Computer & Robot Vision.
The selected contribution will receive a prize of $400.
Any extended abstract submitted to the workshop will be automatically considered for the award.
Dmitry Berenson
Associate Professor
University of Michigan, USA
Personal website
Talk title: Two routes toward long-horizon deformable object manipulation
Bio: Dmitry Berenson received a B.S. in Electrical and Computer Engineering from Cornell University in 2005, where he started his robotics work in Hod Lipson's lab. He went on to graduate from the Ph.D. program at the Robotics Institute at Carnegie Mellon University (CMU) in 2011, where his advisors were Siddhartha Srinivasa and James Kuffner. While at CMU, Dmitry Berenson worked in the Personal Robotics Lab and completed internships at the Digital Human Research Center in Japan, Intel Labs in Pittsburgh, and LAAS-CNRS in France. In 2012 he completed a post-doc at UC Berkeley working with Ken Goldberg and Pieter Abbeel. Dmitry Berenson was an Assistant Professor at WPI from 2012 to 2016, and started as faculty at the University of Michigan in 2016. His current research focuses on learning and motion planning for manipulation. Dmitry Berenson has received the IEEE RAS Early Career Award and the NSF CAREER Award.
Chelsea Finn
Assistant Professor
Stanford University, USA
Talk title: Learning Long-Horizon Bi-Manual Tasks involving Deformable Object Manipulation
Bio: Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, her work has pioneered end-to-end deep learning methods for vision-based robotic manipulation, meta-learning algorithms for few-shot learning, and approaches for scaling robot learning to broad datasets. Her research has been recognized by awards such as the Sloan Fellowship, the IEEE RAS Early Academic Career Award, and the ACM doctoral dissertation award, and has been covered by various media outlets including the New York Times, Wired, and Bloomberg. Prior to Stanford, she received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley.
David Held
Associate Professor
Carnegie Mellon University (CMU), USA
Personal website
Talk title: Spatially-aware Robot Learning for Deformable Object Manipulation
Bio: David Held is an Associate Professor at Carnegie Mellon University in the Robotics Institute and is the director of the RPAD lab: Robots Perceiving And Doing. His research focuses on perceptual robot learning, i.e., developing new methods at the intersection of robot perception and planning for robots to learn to interact with novel, perceptually challenging, and deformable objects. David has applied these ideas to robot manipulation and autonomous driving. Prior to coming to CMU, David was a post-doctoral researcher at UC Berkeley, and he completed his Ph.D. in Computer Science at Stanford University. David also has a B.S. and M.S. in Mechanical Engineering from MIT. David is a recipient of the Google Faculty Research Award in 2017 and the NSF CAREER Award in 2021.
David Hsu
Provost's Chair Professor
National University of Singapore, Singapore
Personal website
Talk title: Differentiable Particles for General-Purpose Deformable Object Manipulation
Bio: David Hsu is a professor of computer science and the Director of the Smart Systems Institute at the National University of Singapore (NUS). He is an IEEE Fellow.
His research lies at the intersection of robotics and AI. In recent years, he has been working on robot planning and learning under uncertainty for human-centered robots. His work has won multiple international awards, including, most recently, the Test of Time Award at Robotics: Science & Systems (RSS) in 2021 and the IJCAI-JAIR Best Paper Prize in 2022. He has chaired or co-chaired several international robotics conferences, including WAFR 2010, RSS 2015, ICRA 2016, and CoRL 2021. He has served on the editorial boards of the Journal of Artificial Intelligence Research and the International Journal of Robotics Research, and is currently an Editor of IEEE Transactions on Robotics.
Jeff Ichnowski
Assistant Professor
Carnegie Mellon University (CMU), USA
Personal website
Talk title: Deformable Manipulator for Deformable Manipulation
Bio: Jeff Ichnowski is an assistant professor at Carnegie Mellon University's Robotics Institute. He was a postdoc at UC Berkeley's Sky Computing/RISE lab, AUTOLAB, and BAIR. Before returning to academia, he was the principal architect at SuccessFactors, Inc., one of the world's largest cloud-based software-as-a-service companies. His research explores robot algorithms and systems for high-speed motion, task, and manipulation planning, using cloud-based high-performance computing, optimization, and deep learning.
Gonzalo Lopez-Nicolas
Professor
Universidad de Zaragoza, Spain
Talk title: Multi-scale analysis for shape control of texture-less objects
Bio: Gonzalo Lopez-Nicolas is currently a Professor at the Universidad de Zaragoza and the Aragon Institute for Engineering Research (I3A). His current research interests include shape control, visual control, multi-robot systems, and the application of computer vision to robotics.
Michael Yip
Associate Professor
University of California, San Diego, USA
Personal website
Talk title: Deformable Manipulation for Autonomous Surgical Robots
Bio: Michael Yip, Ph.D., is an Associate Professor at the University of California San Diego and the Director of the Advanced Robotics and Controls Lab (ARClab) at UCSD. His research group works at the intersection of medical robotics, machine learning, and computer vision, with applications to robotic surgery, physical human-robot interaction, autonomous driving, and search and rescue. His research group has won numerous awards at robotics and AI venues. Dr. Yip was previously a Research Associate with Disney Research, a Visiting Professor at Stanford University, and a Visiting Professor with Amazon Robotics.
Organizers
Michael C. Welle, KTH Royal Institute of Technology, Sweden
Martina Lippi, Roma Tre University, Italy
Fangyi Zhang, Queensland University of Technology (QUT), Australia
Lawrence Yunliang Chen, University of California, Berkeley, USA
Co-Organizers
Alberta Longhini, KTH Royal Institute of Technology, Sweden
Danica Kragic, KTH Royal Institute of Technology, Sweden
Daniel Seita, University of Southern California, USA
David Held, Carnegie Mellon University, USA
Peter Corke, Queensland University of Technology (QUT), Australia
Contact
If you have any questions, please contact Michael Welle at: mwelle AT kth DOT se