Title
Deep Reinforcement Learning based on Dynamics Prediction Network for Robotic Grasp in Cluttered Environments
Authors
김병완(Byung Wan Kim) ; 박영빈(Youngbin Park) ; 서일홍(Il Hong Suh)
DOI
https://doi.org/10.5573/ieie.2021.58.11.66
Keywords
Deep reinforcement learning; Robotic grasp; Dynamics prediction network
Abstract
In this paper, we propose a reinforcement learning method that learns the dynamic relationship between the workspace and robot actions using a Dynamics Prediction Network (DPN). With this deep Q-network model, we perform grasping tasks in cluttered environments, where grasp points are difficult to find because two or more objects overlap or adjoin one another. To resolve situations in which no grasp point can be found in such clutter, we use non-prehensile actions; DPN learning is effective for learning both prehensile and non-prehensile actions. Assuming the learning environment is a deterministic world rather than a stochastic one, the state transition probability is fixed at 1, and the DPN learns a one-to-one correspondence between the current state, the current action, and the next state. Because adding the DPN model increases the computation required during the training phase, parameter sharing was applied to the training model. The proposed method, which combines parameter sharing and the DPN appropriately, reduces the training burden, halves the weight memory of the operating model, and improves the grasping success rate in the block test environments by 8%. In experiments on new test environments constructed by 3D-scanning real-world objects, the grasping success rate improved by 23%, showing that the learned physical interaction generalizes well.
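The abstract does not include an implementation, so the following is a minimal PyTorch-style sketch, under stated assumptions, of how a Q-network and a dynamics prediction head might share a backbone as described: the Q head scores discrete (prehensile and non-prehensile) actions, while the DPN head regresses the next state from the shared features and the chosen action, exploiting the deterministic (state, action) → next-state assumption. All module names, layer sizes, the fully connected encoder, and the loss weighting are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class SharedDQNWithDPN(nn.Module):
    """Sketch: a Q-network whose backbone is shared with a dynamics
    prediction head (DPN). Layer sizes and structure are assumptions."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        # Shared feature extractor (the parameter sharing between heads).
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Q head: one value per discrete action.
        self.q_head = nn.Linear(hidden, num_actions)
        # DPN head: predicts the next state from shared features plus the
        # one-hot action, using the deterministic (s, a) -> s' assumption.
        self.dpn_head = nn.Sequential(
            nn.Linear(hidden + num_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action_onehot: torch.Tensor):
        feat = self.backbone(state)
        q_values = self.q_head(feat)
        next_state_pred = self.dpn_head(
            torch.cat([feat, action_onehot], dim=-1)
        )
        return q_values, next_state_pred


def joint_loss(q_values, action_idx, td_target, next_state_pred, next_state):
    """Joint objective: standard TD error plus a dynamics prediction term.
    With deterministic transitions, the DPN target is simply the observed
    next state; the 0.5 weight is an illustrative assumption."""
    q_sa = q_values.gather(1, action_idx.unsqueeze(1)).squeeze(1)
    td_loss = nn.functional.smooth_l1_loss(q_sa, td_target)
    dpn_loss = nn.functional.mse_loss(next_state_pred, next_state)
    return td_loss + 0.5 * dpn_loss
```

In a sketch of this shape, only the backbone and Q head are needed at deployment time, while the DPN head is used solely during training, which is consistent with the abstract's claim that the operating model's weight memory is halved.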