Robotic assembly is a long-horizon problem that has attracted significant attention over the past two decades. The sparsity of the reward and the large state space for planning add to the complexity of the problem. This paper proposes a Hierarchical Learning (HL) based approach that leverages a multi-level structure to seamlessly integrate task identification and sequencing with motion planning. The higher-level agent focuses on comprehending tasks and learning task plans: it generates the sequence and locations of sub-tasks, while the lower-level agent concentrates on executing the current sub-task. The higher-level agent employs a goal-driven reinforcement learning (RL) approach to master the sequencing task, allowing it to adapt to unseen assemblies. Meanwhile, the lower-level agent adopts a Learning from Demonstration (LfD) approach for motion planning, learning primitive skills from demonstration and intelligently combining them for complicated tasks. The critical contribution of this work lies in the development of a novel method capable of comprehending and executing long-horizon, goal-driven assembly tasks without relying on expert demonstrations of the specific tasks. The proposed approach is validated in a block assembly environment through simulation and real-robot implementation.
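To make the division of labor between the two levels concrete, the following is a minimal sketch of such a two-level control loop, assuming a goal-conditioned high-level sequencer and a library of demonstrated low-level skills. The class names, method signatures, and environment calls (`env.reset`, `env.assembly_complete`) are illustrative assumptions for exposition, not the paper's actual implementation.

```python
class HighLevelPolicy:
    """Goal-driven RL agent: proposes the next sub-task (which block, where to place it)."""
    def next_subtask(self, goal_assembly, current_state):
        # e.g. select the sub-task maximizing a learned goal-conditioned value estimate
        raise NotImplementedError

class LowLevelSkillLibrary:
    """Primitive motion skills learned from demonstration (reach, grasp, place, ...)."""
    def execute(self, subtask, env):
        # sequence and blend primitive skills until the sub-task is completed,
        # returning the resulting environment state
        raise NotImplementedError

def run_assembly(goal_assembly, env, high_level, skills, max_subtasks=50):
    """Hierarchical loop: high level picks sub-tasks, low level executes them."""
    state = env.reset()
    for _ in range(max_subtasks):
        subtask = high_level.next_subtask(goal_assembly, state)  # task sequencing
        state = skills.execute(subtask, env)                      # motion execution
        if env.assembly_complete(goal_assembly, state):
            return True
    return False
```

In this sketch, only the high-level policy reasons about the goal assembly, which is what allows generalization to unseen assemblies, while the low-level skills remain task-agnostic motion primitives.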