References

1. M. Q. Mohammed, K. L. Chung, S. Chyi, 2020, Review of Deep Reinforcement Learning-Based Object Grasping: Techniques, Open Challenges, and Recommendations, IEEE Access, Vol. 8, pp. 178450-178481.
2. K. Arulkumaran, M. P. Deisenroth, M. Brundage, A. A. Bharath, 2017, Deep Reinforcement Learning: A Brief Survey, IEEE Signal Processing Magazine, Vol. 34, No. 6, pp. 26-38.
3. Y. Cho, J. Lee, K. Lee, 2020, CNN based Reinforcement Learning for Driving Behavior of Simulated Self-Driving Car, The Transactions of the Korean Institute of Electrical Engineers, Vol. 69, No. 11, pp. 1740-1749.
4. A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, T. Funkhouser, 2018, Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4238-4245.
5. Z. Sun, K. Yuan, W. Hu, C. Yang, Z. Li, 2020, Learning Pregrasp Manipulation of Objects from Ungraspable Poses, 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 9917-9923.
6. Y. Song, Y. Fei, C. Cheng, X. Li, C. Yu, 2019, UG-Net for Robotic Grasping using Only Depth Image, 2019 IEEE International Conference on Real-time Computing and Robotics (RCAR), pp. 913-918.
7. C. Chen, H.-Y. Li, X. Zhang, X. Liu, U.-X. Tan, 2019, Towards Robotic Picking of Targets with Background Distractors using Deep Reinforcement Learning, in Proc. WRC Symp. Adv. Robot. Autom. (WRC SARA), Beijing, China.
8. W. Yuan, J. A. Stork, D. Kragic, M. Y. Wang, K. Hang, 2018, Rearrangement with Nonprehensile Manipulation Using Deep Reinforcement Learning, 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 270-277.
9. H. Song, J. A. Haustein, W. Yuan, K. Hang, 2019, Multi-Object Rearrangement with Monte Carlo Tree Search: A Case Study on Planar Nonprehensile Sorting, arXiv preprint arXiv:1912.07024.
10. P. Ni, W. Zhang, H. Zhang, Q. Cao, 2020, Learning Efficient Push and Grasp Policy in a Totebox from Simulation, Advanced Robotics, Vol. 34, No. 13, pp. 873-887.
11. K. He, X. Zhang, S. Ren, J. Sun, 2016, Deep Residual Learning for Image Recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778.
12. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, 2015, Going Deeper with Convolutions, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9.
13. G. Huang, Z. Liu, L. van der Maaten, K. Q. Weinberger, 2017, Densely Connected Convolutional Networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708.
14. Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, N. de Freitas, 2016, Dueling Network Architectures for Deep Reinforcement Learning, in International Conference on Machine Learning, pp. 1995-2003.
15. H. van Hasselt, A. Guez, D. Silver, 2016, Deep Reinforcement Learning with Double Q-learning, in Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 30, No. 1.
16. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, 2009, ImageNet: A Large-Scale Hierarchical Image Database, 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255.
17. E. Rohmer, S. P. N. Singh, M. Freese, 2013, V-REP: A Versatile and Scalable Robot Simulation Framework, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1321-1326.
18. M. Igl, et al., 2019, Generalization in Reinforcement Learning with Selective Noise Injection and Information Bottleneck, arXiv preprint arXiv:1910.12911.