The Transactions of the Korean Institute of Electrical Engineers

References

1 
Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998. DOI: 10.1109/5.726791
2 
A. Krizhevsky, I. Sutskever and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017. DOI: 10.1145/3065386
3 
K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Proceedings of the 3rd International Conference on Learning Representations (ICLR), May 7-9, 2015.
4 
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, “Going deeper with convolutions,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015. DOI: 10.1109/CVPR.2015.7298594
5 
M. Tan and Q. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” Proceedings of the 36th International Conference on Machine Learning (ICML), vol. 97, pp. 6105-6114, 2019.
6 
A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” CoRR, abs/1704.04861, 2017.
7 
K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016. DOI: 10.1109/CVPR.2016.90
8 
Y. M. Park, S. Y. Ahn, E. J. Lim, Y. S. Choi, Y. C. Woo and W. Choi, “Deep Learning Model Parallelism,” Electronics and Telecommunications Trends, vol. 33, no. 4, pp. 1-13, 2018. DOI: 10.22648/ETRI.2018.J.330401
9 
Y. J. Lee, Y. H. Moon, J. Y. Park and O. G. Min, “Recent R&D Trends for Lightweight Deep Learning,” Electronics and Telecommunications Trends, vol. 34, no. 2, pp. 40-50, 2019. DOI: 10.22648/ETRI.2019.J.340205
10 
J. D. Choi, K. W. Min, J. H. Kim, B. S. Seo, D. H. Kim, D. S. Yoo and J. I. Cho, “Zero Accident, Connected Autonomous Driving Vehicle,” Electronics and Telecommunications Trends, vol. 36, no. 2, pp. 21-31, 2021. DOI: 10.22648/ETRI.2021.J.360103
11 
L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike and R. Lowe, “Training language models to follow instructions with human feedback,” Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems (NeurIPS), vol. 35, pp. 27730-27744, 2022.
12 
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei and I. Sutskever, “Language Models are Unsupervised Multitask Learners,” OpenAI Blog, vol. 1, no. 8, p. 9, 2019.
13 
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever and D. Amodei, “Language Models are Few-Shot Learners,” Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems (NeurIPS), vol. 33, pp. 1877-1901, 2020.
14 
M. Y. Lee, J. Chung, J. H. Lee, J. H. Han and Y. S. Kwon, “Trends in AI Processor Technology,” Electronics and Telecommunications Trends, vol. 35, no. 3, pp. 65-75, 2020. DOI: 10.22648/ETRI.2020.J.350307