|
50 | 50 | </head> |
51 | 51 | <body> |
52 | 52 | <header> |
53 | | - <h1>Towards Learning-based Control for Robust Real-world Robotic Grasping in Dynamic Environments</h1> |
| 53 | + <h1>Towards Learning-based Control for Versatile Robotic Grasping in the Real World</h1> |
54 | 54 | <p><font size="+2"> <em>Authors:</em> Nicolas Bach, Christian Jestel, Julian Eßer, Oliver Urbann and Peter Detzner </font><br> |
55 | 55 | <font size="+1"> Department of AI and Autonomous Systems, Fraunhofer Institute for Material Flow and Logistics (IML) </font></p> |
56 | 56 | </header> |
57 | 57 |
|
58 | 58 | <main> |
59 | 59 | <section> |
60 | 60 | <h2>Abstract</h2> |
61 | | - <p> Robotic manipulation in dynamic environments |
62 | | -presents significant challenges, especially compared to the |
63 | | -adaptability and flexibility of humans. While traditional robotic |
64 | | -systems excel in controlled settings, their performance falters |
65 | | -in unpredictable scenarios. Learning-based control has shown |
66 | | -promise in addressing these challenges by developing adaptable |
67 | | -behaviors for robotic platforms. However, its application to real- |
68 | | -world manipulation tasks remains limited. This paper presents |
69 | | -a two-stage training process to improve the robustness of |
70 | | -robotic grasping tasks in dynamic environments. In particular, |
71 | | -we introduce new rewards and observations of net contact |
72 | | -measurements for more effective teacher training. Moreover, we |
73 | | -utilize privileged information to inform point cloud sampling, |
74 | | -enhancing student training and sim-to-real transfer reliability. |
75 | | -Our framework is validated through ablation studies and real- |
76 | | -world experiments, demonstrating robust grasping of various |
77 | | -objects under a variety of changing environmental conditions. |
78 | | -These advancements contribute to bridging the sim-to-real gap, |
79 | | -paving the way for generalizable and deployable manipulation |
80 | | -policies that function independently of specific settings.</p> |
| 61 | + <p>Robotic manipulation in unstructured environments presents significant challenges, especially compared to the adaptability and flexibility of humans. While traditional robotic systems excel in controlled settings, their performance falters in unpredictable scenarios. Learning-based control has shown promise in addressing these challenges by developing adaptable behaviors for robotic platforms. However, its application to real-world manipulation tasks remains limited. In this paper, we present a two-stage training process that generates versatile and robust policies for robotic grasping tasks in the real world. In particular, we introduce new rewards and observations of net contact measurements for more effective teacher training. Moreover, we utilize privileged information to inform point cloud sampling, enhancing student training and sim-to-real transfer reliability. Our training process is validated through ablation studies and real-world experiments, demonstrating robust grasping of various objects under a variety of changing environmental conditions. These advancements contribute to bridging the sim-to-real gap, paving the way for generalizable and deployable manipulation policies that function independently of specific settings.</p>
81 | 62 | </section> |
82 | 63 | <div class="video-grid"> |
83 | 64 | <div class="video-item"> |
@@ -125,7 +106,7 @@ <h3> Uniform Sampling (left) vs. Object Tracking with Auxiliary Head and Informe |
125 | 106 | </section> |
126 | 107 |
|
127 | 108 | <section> |
128 | | - <h2> Experiments </h2> |
| 109 | + <h2>Quantitative Experiments</h2> |
129 | 110 | To evaluate the proposed methods, we perform an ablation study in simulation to assess our extensions to the baseline. We further evaluate the efficiency of the proposed point cloud sampling method. In real-world experiments, we investigate the trained policies in terms of grasping success and robustness to deviations such as different scenes, perturbations, and camera positions.
130 | 111 | <h3>Grasping Experiments</h3> |
131 | 112 |
|
@@ -239,6 +220,11 @@ <h3>Grasping Experiments</h3> |
239 | 220 | </div> |
240 | 221 | </div> |
241 | 222 | <br> |
| 223 | + |
| 224 | + </section> |
| 225 | + <section> |
| 226 | + <h2>Qualitative Experiments</h2>
| 227 | + |
242 | 228 | <h3>Invariance to Changes in the Scene</h3> |
243 | 229 |
|
244 | 230 | In this experiment, we perform two grasps of a mug, then change the camera position and move the surface from which we grasp the object.
|