Learning Robot Manipulation in 3D
eScholarship
Open Access Publications from the University of California

UC San Diego

UC San Diego Electronic Theses and Dissertations


Abstract

Building robots that can automate tedious, hazardous, and repetitive jobs has long been a driving force behind advances in the machine learning, computer vision, and robotics communities. Recent breakthroughs in deep learning, a data-centric approach to problem solving, have achieved tremendous success in visual recognition, natural language understanding, and video game playing. However, these approaches usually require large and diverse datasets to generalize to unseen situations. Collecting diverse data is the core challenge for all existing learning-based methods in robotic manipulation. Many approaches learn manipulation policies directly from real-world robot-object interaction data; however, collecting real data is orders of magnitude more costly than for visual recognition and natural language tasks, and for some tasks it is impossible with existing infrastructure. Other approaches first learn policies in simulation and then deploy them in the real world, but these methods face another set of challenges, including difficulty transferring due to the physics gap between simulation and the real world. When using reinforcement learning (RL) to learn policies in simulation, challenges also arise at the algorithmic level: engineered dense rewards are hard to specify in a way that lets the policy autonomously collect data closer to the globally optimal solution, while sparse rewards are hard for algorithms to optimize. In this thesis, we introduce two projects that lay the foundation for two promising directions toward building real-world Embodied AI: 1. large-scale sparse-reward policy learning in simulation, and 2. continuously improving simulation with real data. These projects serve as the foundation for building future RL algorithms and learning-based simulations.
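The dense-versus-sparse reward trade-off mentioned in the abstract can be made concrete with a minimal sketch. The hypothetical reaching task and the specific reward functions below are illustrative assumptions, not methods from the thesis: a dense reward supplies a shaping signal at every step but must be hand-engineered, while a sparse reward is trivial to specify yet gives the learner almost no gradient to follow.

```python
import numpy as np

def dense_reward(ee_pos: np.ndarray, goal_pos: np.ndarray) -> float:
    # Hand-engineered dense reward: negative Euclidean distance from the
    # end-effector to the goal. Provides signal at every timestep, but the
    # shaping must be designed per task and may bias the policy away from
    # the globally optimal solution.
    return -float(np.linalg.norm(ee_pos - goal_pos))

def sparse_reward(ee_pos: np.ndarray, goal_pos: np.ndarray,
                  threshold: float = 0.05) -> float:
    # Sparse reward: 1.0 only when the end-effector is within a small
    # success threshold of the goal, 0.0 everywhere else. Easy to specify,
    # but gives RL algorithms no signal until success is first reached.
    return 1.0 if np.linalg.norm(ee_pos - goal_pos) < threshold else 0.0
```

In this toy setting, an agent far from the goal receives informative feedback from `dense_reward` (distances shrink as it improves) but a flat zero from `sparse_reward` until it stumbles into the success region, which is why sparse-reward learning at scale requires the specialized algorithms the thesis studies.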
