The Next Generation of Robotics from UC San Diego in the Spotlight at IROS 2023

From helping wounded soldiers in the field to helping robots avoid moving obstacles, researchers from the UC San Diego Contextual Robotics Institute made a strong showing at IROS 2023.

The conference took place from Oct. 1 to 5 in Detroit. This year’s theme was “The Next Generation of Robotics,” and the event aimed to highlight the contributions of younger researchers and to help accelerate the future contributions of those just entering the field.

“The work our faculty are presenting at IROS is very much showcasing the exceptional research from many of our most recent hires,” said Henrik Christensen, director of the UC San Diego Contextual Robotics Institute and a professor in the UC San Diego Department of Computer Science and Engineering. “These are great examples of our research portfolio and our growth.”

Multi-modal planning on rearrangement for stable manipulation
Jiaming Hu, Zang Tang, Henrik Christensen
Abstract: A number of grasping algorithms have been proposed that can predict candidate grasp poses, even for unseen objects. This enables a robotic manipulator to pick and place such objects. However, some of the predicted grasp poses that would stably lift a target object may not be directly approachable due to workspace limitations. In such cases, the robot needs to rearrange the environment to enable successful grasping of the desired object. This involves planning a sequence of continuous actions such as sliding, re-grasping, and transferring. To address this multi-modal problem, we propose a Markov Decision Process-based multi-modal planner that can rearrange the object into a position suitable for stable manipulation. We demonstrate improved performance in both simulation and the real world for pick-and-place tasks.
https://arxiv.org/pdf/2309.15283.pdf
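To make the idea concrete, here is a minimal, illustrative Python sketch of planning over discrete rearrangement actions with a Markov Decision Process. The pose grid, the slide/regrasp/transfer transitions, and the is_graspable reachability check are hypothetical stand-ins, not the authors' implementation.

# Toy MDP over rearrangement actions: find a sequence of actions that moves
# an object into a pose where a stable grasp is reachable.
from itertools import product

POSES = list(product(range(5), range(5)))      # coarse 5x5 grid of object poses
ACTIONS = ["slide", "regrasp", "transfer"]

def transition(pose, action):
    """Toy deterministic transition: each action nudges the object pose."""
    x, y = pose
    if action == "slide":
        return (min(x + 1, 4), y)
    if action == "regrasp":
        return (x, min(y + 1, 4))
    return (max(x - 1, 0), max(y - 1, 0))      # transfer

def is_graspable(pose):
    """Stand-in for a grasp-reachability check inside the workspace."""
    return pose == (4, 4)

def value_iteration(gamma=0.95, iters=50):
    V = {p: 0.0 for p in POSES}
    for _ in range(iters):
        for p in POSES:
            reward = 1.0 if is_graspable(p) else -0.01
            V[p] = max(reward + gamma * V[transition(p, a)] for a in ACTIONS)
    return V

def plan(start):
    """Greedy rollout of the value function: a sequence of rearrangement actions."""
    V, pose, seq = value_iteration(), start, []
    while not is_graspable(pose) and len(seq) < 20:
        a = max(ACTIONS, key=lambda a: V[transition(pose, a)])
        seq.append(a)
        pose = transition(pose, a)
    return seq

print(plan((0, 0)))

Each rearrangement primitive (slide, re-grasp, transfer) is treated as just another action in the MDP, which is what lets a single planner reason across manipulation modes.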

Sequential Neural Barriers for Scalable Dynamic Obstacle Avoidance
Hongzhan Yu, Chiaki Hirayama, Chenning Yu, Sylvia Herbert, Sicun Gao
Abstract: There are two major challenges for scaling up robot navigation around dynamic obstacles: the complex interaction dynamics of the obstacles can be hard to model analytically, and the complexity of planning and control grows exponentially in the number of obstacles. Data-driven and learning-based methods are thus particularly valuable in this context. However, data-driven methods are sensitive to distribution drift, making it hard to train and generalize learned models across different obstacle densities. We propose a novel method for compositional learning of Sequential Neural Control Barrier Functions (SNCBFs) to achieve scalability. Our approach exploits an important observation: the spatial interaction patterns of multiple dynamic obstacles can be decomposed and predicted through temporal sequences of states for each obstacle. Through decomposition, we can generalize control policies trained only with a small number of obstacles to environments where the obstacle density can be 100x higher. We demonstrate the benefits of the proposed methods in improving dynamic collision avoidance in comparison with existing methods, including potential fields, end-to-end reinforcement learning, and model-predictive control. We also perform hardware experiments and show the practical effectiveness of the approach in the supplementary video.
https://hoy021.github.io/projects/
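The key scalability idea, scoring safety per obstacle and composing the results, can be sketched as below. This toy PyTorch snippet assumes a small recurrent network that maps each obstacle's recent state sequence to a barrier value, with the composed barrier taken as the minimum over obstacles; the module, state dimensions, and min-composition are illustrative assumptions, not the paper's exact SNCBF construction.

# Per-obstacle neural barrier over a temporal state sequence, composed by a min.
import torch
import torch.nn as nn

class ObstacleBarrier(nn.Module):
    """Maps a temporal sequence of relative states to a barrier value h."""
    def __init__(self, state_dim=4, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):                   # seq: (batch, time, state_dim)
        _, h = self.rnn(seq)
        return self.head(h[-1]).squeeze(-1)   # h > 0 interpreted as "safe" w.r.t. this obstacle

barrier = ObstacleBarrier()

def composed_barrier(obstacle_seqs):
    """One barrier per obstacle; safety requires the minimum to stay positive.
    The per-obstacle decomposition is what lets the same model scale to many obstacles."""
    values = torch.stack([barrier(s) for s in obstacle_seqs])   # (num_obstacles, batch)
    return values.min(dim=0).values

# Example: 10 obstacles, batch of 2 trajectories, 8 timesteps of 4-D relative state.
seqs = [torch.randn(2, 8, 4) for _ in range(10)]
print(composed_barrier(seqs))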

Close the Optical Sensing Domain Gap by Physics-Grounded Active Stereo Sensor Simulation
Xiaoshuai Zhang, Rui Chen, Ang Li, Fanbo Xiang, Yuzhe Qin, Jiayuan Gu, Zhan Ling, Minghua Liu, Peiyu Zeng, Songfang Han, Zhiao Huang, Tongzhou Mu, Jing Xu, Hao Su
Abstract: In this paper, we focus on the simulation of active stereovision depth sensors, which are popular in both academic and industry communities. Inspired by the underlying mechanism of the sensors, we designed a fully physics-grounded simulation pipeline that includes material acquisition, ray-tracing-based infrared (IR) image rendering, IR noise simulation, and depth estimation. The pipeline is able to generate depth maps with material-dependent error patterns similar to a real depth sensor in real time. We conduct real experiments to show that perception algorithms and reinforcement learning policies trained in our simulation platform can transfer well to real-world test cases without any fine-tuning. Furthermore, due to the high degree of realism of this simulation, our depth sensor simulator can be used as a convenient testbed to evaluate algorithm performance in the real world, which will largely reduce the human effort in developing robotic algorithms. The entire pipeline has been integrated into the SAPIEN simulator and is open-sourced to promote research in the vision and robotics communities.
https://angli66.github.io/active-sensor-sim
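As a rough illustration of the stages the abstract lists, the Python sketch below renders a synthetic IR stereo pair, injects simple sensor noise, runs naive block matching, and converts disparity to depth. The dot-pattern "renderer", the Gaussian noise model, and the camera parameters are placeholders; the real pipeline uses ray-traced IR images and a material-dependent noise model inside SAPIEN.

# Toy active-stereo pipeline: render IR pair -> add noise -> match -> depth.
import numpy as np

def render_ir_pair(h=64, w=64, true_disp=8):
    """Stand-in for ray-traced IR rendering: a random dot pattern shifted by a known disparity."""
    rng = np.random.default_rng(0)
    right = rng.random((h, w)).astype(np.float32)
    left = np.roll(right, true_disp, axis=1)
    return left, right

def add_ir_noise(img, sigma=0.02):
    """Simple Gaussian sensor noise; real IR noise is material dependent."""
    return img + np.random.default_rng(1).normal(0, sigma, img.shape)

def block_match(left, right, max_disp=16, win=5):
    """Naive SAD block matching producing a per-pixel disparity map."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    pad = win // 2
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            costs = [np.abs(patch - right[y - pad:y + pad + 1,
                                          x - d - pad:x - d + pad + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def disparity_to_depth(disp, focal_px=320.0, baseline_m=0.055):
    """Pinhole stereo: depth = focal * baseline / disparity, guarding zero disparity."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1e-6), 0.0)

left, right = render_ir_pair()
disp = block_match(add_ir_noise(left), add_ir_noise(right))
depth = disparity_to_depth(disp)
print("median disparity:", np.median(disp[disp > 0]),
      "median depth (m):", np.median(depth[depth > 0]))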

Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations
Jianglong Ye, Jiashun Wang, Binghao Huang, Yuzhe Qin, Xiaolong Wang
Abstract: We propose to learn to generate grasping motion for manipulation with a dexterous hand using implicit functions. With continuous time inputs, the model can generate a continuous and smooth grasping plan. We name the proposed model Continuous Grasping Function (CGF). CGF is learned via generative modeling with a Conditional Variational Autoencoder using 3D human demonstrations. We first convert the large-scale human-object interaction trajectories to robot demonstrations via motion retargeting, and then use these demonstrations to train CGF. During inference, we perform sampling with CGF to generate different grasping plans in the simulator and select the successful ones to transfer to the real robot. By training on diverse human data, our CGF allows generalization to manipulate multiple objects. Compared to previous planning algorithms, CGF is more efficient and achieves a significant improvement in success rate when transferred to grasping with the real Allegro Hand.
https://arxiv.org/pdf/2207.05053.pdf
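A minimal picture of what a time-conditioned, implicit grasp generator looks like is sketched below: a decoder (as in the decoder of a conditional VAE) maps a latent code, an object embedding, and a continuous time t to hand joint angles, so querying many values of t yields a smooth plan. The network sizes and conditioning are illustrative assumptions, not the paper's architecture.

# Toy time-conditioned grasp decoder: query it densely in t for a smooth plan.
import torch
import torch.nn as nn

class GraspDecoder(nn.Module):
    def __init__(self, latent=16, obj_dim=32, n_joints=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent + obj_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_joints),
        )

    def forward(self, z, obj, t):
        # t is a continuous time input in [0, 1]
        return self.net(torch.cat([z, obj, t], dim=-1))

decoder = GraspDecoder()
z = torch.randn(1, 16)       # latent code, as if sampled from a CVAE prior
obj = torch.randn(1, 32)     # object-conditioning feature (e.g. a shape embedding)

# Query the function at many time steps to get a continuous grasping motion.
times = torch.linspace(0, 1, steps=50).unsqueeze(-1)
trajectory = torch.stack([decoder(z, obj, t.unsqueeze(0)) for t in times])
print(trajectory.shape)      # (50, 1, 16) joint-angle waypoints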

Visual Reinforcement Learning with Self-Supervised 3D Representations
Yanjie Ze, Nicklas Hansen, Yinbo Chen, Mohit Jain, Xiaolong Wang
Abstract: A prominent approach to visual Reinforcement Learning (RL) is to learn an internal state representation using self-supervised methods, which has the potential benefit of improved sample-efficiency and generalization through additional learning signal and inductive biases. However, while the real world is inherently 3D, prior efforts have largely been focused on leveraging 2D computer vision techniques as auxiliary self-supervision.
In this work, we present a unified framework for self-supervised learning of 3D representations for motor control. Our proposed framework consists of two phases: a pretraining phase, where a deep voxel-based 3D autoencoder is pretrained on a large object-centric dataset, and a finetuning phase, where the representation is jointly fine-tuned with RL on in-domain data. We empirically show that our method enjoys improved sample efficiency in simulated manipulation tasks compared to 2D representation learning methods. Additionally, our learned policies transfer zero-shot to a real robot setup with only approximate geometric correspondence and uncalibrated cameras, and successfully solve motor control tasks that involve grasping and lifting from a single RGB camera.
https://yanjieze.com/3d4rl/
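The two-phase recipe can be illustrated with a small PyTorch sketch: pretrain a voxel autoencoder by reconstruction, then reuse its encoder as the state representation for a policy head that would be fine-tuned with RL. The layer sizes and the 32-cubed voxel grid are placeholder choices, not the paper's model.

# Phase 1: self-supervised voxel reconstruction. Phase 2: encoder feeds a policy head.
import torch
import torch.nn as nn

class VoxelAutoencoder(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(), nn.Linear(32 * 8 * 8 * 8, feat),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat, 32 * 8 * 8 * 8), nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, voxels):
        return self.decoder(self.encoder(voxels))

# Pretraining: reconstruct object-centric voxel grids.
ae = VoxelAutoencoder()
voxels = torch.rand(4, 1, 32, 32, 32)
loss = nn.functional.mse_loss(ae(voxels), voxels)

# Finetuning: the pretrained encoder provides features for a small policy head.
policy_head = nn.Linear(64, 7)          # e.g. a 7-DoF arm action
action = policy_head(ae.encoder(voxels))
print(loss.item(), action.shape)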

Finding biomechanically safe trajectories for robot manipulation of the human body in a search and rescue scenario
Lizzie Peiros, Zih-Yun Chiu, Yuheng Zhi, Nikhil Shinde, Michael C. Yip
Abstract: There has been increasing awareness of the difficulties in reaching and extracting people from mass casualty scenarios, such as those arising from natural disasters. While platforms have been designed to consider reaching casualties and even carrying them out of harm’s way, the challenge of repositioning a casualty from its found configuration to one suitable for extraction has not been explicitly explored. Furthermore, this planning problem needs to incorporate biomechanical safety considerations for the casualty. Thus, we present a first solution to biomechanically safe trajectory generation for repositioning limbs of unconscious human casualties. We describe biomechanical safety as mathematical constraints, mechanical descriptions of the dynamics for the robot-human coupled system, and the planning and trajectory optimization process that considers this coupled and constrained system. We finally evaluate our approach over several variations of the problem and demonstrate it on a real robot and human subject. This work provides a crucial part of search and rescue.
https://arxiv.org/pdf/2309.15265v1.pdf
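To illustrate how biomechanical safety can enter a trajectory optimization as hard constraints, the toy Python example below moves a single human elbow joint from its found angle to an extraction-ready angle while respecting a safe joint range and a per-step velocity limit. All limits, costs, and the single-joint model are made-up illustrations, not the paper's formulation.

# Toy constrained trajectory optimization for repositioning one human joint.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

T = 20                                  # number of waypoints
q_start, q_goal = 0.2, 1.4              # radians: found pose -> extraction-ready pose
q_min, q_max = 0.0, 2.0                 # biomechanically safe elbow range (illustrative)
v_max = 0.15                            # max allowed change per step (rad)

def cost(q):
    """Smoothness plus terminal error: prefer gentle limb motion that reaches the goal."""
    smooth = np.sum(np.diff(q) ** 2)
    return smooth + 10.0 * (q[-1] - q_goal) ** 2

# Joint-limit constraints: q_min <= q_t <= q_max for every waypoint,
# with the first waypoint pinned to the found configuration.
bounds = [(q_min, q_max)] * T
bounds[0] = (q_start, q_start)

# Velocity constraints: |q_{t+1} - q_t| <= v_max, expressed as a linear constraint.
D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]
vel_con = LinearConstraint(D, -v_max, v_max)

q0 = np.linspace(q_start, q_goal, T)    # straight-line initial guess
res = minimize(cost, q0, bounds=bounds, constraints=[vel_con])
print("safe trajectory:", np.round(res.x, 2))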
