
References

1. Y.-H. Ryu, D.-Y. Kim, D. Oualid and D.-J. Lee, “Deep learning-based lane recognition and safe motion planning using longitudinal and lateral nonlinear model predictive control (written in Korean),” Proceedings of the 2022 Information and Control Symposium, pp. 61-63, 2022. https://www.dbpia.co.kr/Journal/articleDetail?nodeId=NODE11065764
2. N. Fatima, “Enhancing Performance of a Deep Neural Network: A Comparative Analysis of Optimization Algorithms,” Advances in Distributed Computing and Artificial Intelligence Journal, vol. 9, no. 2, pp. 79-90, 2020. DOI: 10.14201/ADCAIJ2020927990
3. S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv preprint arXiv:1609.04747, 2016. DOI: 10.48550/arXiv.1609.04747
4. A. Zhang, Z. C. Lipton, M. Li and A. J. Smola, Dive into Deep Learning, Cambridge University Press, pp. 473-555, 2022. DOI: 10.1017/9781009389426
5. N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural Networks, vol. 12, no. 1, pp. 145-151, 1999. DOI: 10.1016/S0893-6080(98)00116-6
6. J. Duchi, E. Hazan and Y. Singer, “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization,” Journal of Machine Learning Research, vol. 12, no. 7, pp. 2121-2159, 2011. https://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf
7. M. D. Zeiler, “ADADELTA: An Adaptive Learning Rate Method,” arXiv:1212.5701v1 [cs.LG], 2012. DOI: 10.48550/arXiv.1212.5701
8. D. P. Kingma and J. L. Ba, “Adam: A Method for Stochastic Optimization,” Proceedings of the 3rd International Conference on Learning Representations (ICLR), pp. 1-15, 2015. DOI: 10.48550/arXiv.1412.6980
9. D.-S. Shim and J. Shim, “A Modified Stochastic Gradient Descent Optimization Algorithm with Random Learning Rate for Machine Learning and Deep Learning,” International Journal of Control, Automation, and Systems, vol. 21, no. 11, pp. 3825-3831, 2023. DOI: 10.1007/s12555-022-0947-1
10. X. Yang, “Kalman Optimization for Consistent Gradient Descent,” Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3900-3904, 2021. DOI: 10.1109/ICASSP39728.2021.9414588
11. I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, The MIT Press, pp. 286-302, 2016. DOI: 10.5555/3086952
12. G. Saito, Deep Learning from Scratch (written in Korean), Hanbit Media, Inc., pp. 189-202, 2017. https://sdr1982.tistory.com/201
13. F. Schneider, L. Balles and P. Hennig, “DeepOBS: A Deep Learning Optimizer Benchmark Suite,” Proceedings of the 7th International Conference on Learning Representations (ICLR), pp. 1-14, 2019. DOI: 10.48550/arXiv.1903.05499
14. D. Choi, C. J. Shallue, Z. Nado, J.-H. Lee, C. J. Maddison and G. E. Dahl, “On Empirical Comparisons of Optimizers for Deep Learning,” arXiv:1910.05446v3 [cs.LG], 2020. DOI: 10.48550/arXiv.1910.05446
15. A. Zohrevand and Z. Imani, “An Empirical Study of the Performance of Different Optimizers in the Deep Neural Network,” Proceedings of the 12th Iranian/Second International Conference on Machine Vision and Image Processing (MVIP), 2022. DOI: 10.1109/MVIP53647.2022.9738743