References

1. R. Detchon and R. Van Leeuwen, “Policy: Bring sustainable energy to the developing world,” Nature, vol. 508, no. 7496, pp. 309–311, 2014. DOI: 10.1038/508309a
2. H. Hu, N. Xie, D. Fang and X. Zhang, “The role of renewable energy consumption and commercial services trade in carbon dioxide reduction: Evidence from 25 developing countries,” Applied Energy, vol. 211, pp. 1229–1244, 2018. DOI: 10.1016/j.apenergy.2017.12.019
3. J. Wu, J. Yan, H. Jia, N. Hatziargyriou, N. Djilali and H. Sun, “Integrated energy systems,” Applied Energy, vol. 167, pp. 155–157, 2016. DOI: 10.1016/j.apenergy.2016.02.075
4. B. Kroposki, “Integrating high levels of variable renewable energy into electric power systems,” Journal of Modern Power Systems and Clean Energy, vol. 5, no. 6, pp. 831–837, 2017. DOI: 10.1007/s40565-017-0339-3
5. M. L. Tuballa and M. L. Abundo, “A review of the development of smart grid technologies,” Renewable and Sustainable Energy Reviews, vol. 59, pp. 710–725, 2016. DOI: 10.1016/j.rser.2016.01.011
6. J. Keirstead, M. Jennings and A. Sivakumar, “A review of urban energy system models: Approaches, challenges and opportunities,” Renewable and Sustainable Energy Reviews, vol. 16, no. 6, pp. 3847–3866, 2012. DOI: 10.1016/j.rser.2012.02.047
7. M. F. Zia, E. Elbouchikhi and M. Benbouzid, “Microgrids energy management systems: A critical review on methods, solutions, and prospects,” Applied Energy, vol. 222, pp. 1033–1055, 2018. DOI: 10.1016/j.apenergy.2018.04.103
8. S. Impram, S. V. Nese and B. Oral, “Challenges of renewable energy penetration on power system flexibility: A survey,” Energy Strategy Reviews, vol. 31, no. 100539, pp. 1–12, 2020. DOI: 10.1016/j.esr.2020.100539
9. D. Liu, Q. Yang, Y. Chen, X. Chen and J. Wen, “Optimal parameters and placement of hybrid energy storage systems for frequency stability improvement,” Protection and Control of Modern Power Systems, vol. 10, no. 2, pp. 40–53, 2025. DOI: 10.23919/PCMP.2023.000259
10. K. Liu, Z. Chen, X. Li and Y. Gao, “Analysis and control parameters optimization of wind turbines participating in power system primary frequency regulation with the consideration of secondary frequency drop,” Energies, vol. 18, no. 6, pp. 1–19, 2025. DOI: 10.3390/en18061317
11. M. Dahane, A. Benali, H. Tedjini, A. Benhammou, M. A. Hartani and H. Rezk, “Optimized double-stage fractional order controllers for DFIG-based wind energy systems: A comparative study,” Results in Engineering, vol. 25, no. 104584, pp. 1–17, 2025. DOI: 10.1016/j.rineng.2025.104584
12. L. Cheng and T. Yu, “A new generation of AI: A review and perspective on machine learning technologies applied to smart energy and electric power systems,” International Journal of Energy Research, vol. 43, no. 6, pp. 1928–1973, 2019. DOI: 10.1002/er.4333
13. M. M. Gajjala and A. Ahmad, “A survey on recent advances in transmission congestion management,” International Review of Applied Sciences and Engineering, vol. 13, no. 1, pp. 29–41, 2021. DOI: 10.1556/1848.2021.00286
14. H. Zhang, X. Sun, M. H. Lee and J. Moon, “Deep reinforcement learning based active network management and emergency load-shedding control for power systems,” IEEE Transactions on Smart Grid, vol. 15, no. 2, pp. 1423–1437, 2023. DOI: 10.1109/TSG.2023.3302846
15. S. M. Mohseni-Bonab, I. Kamwa, A. Rabiee and C. Chung, “Stochastic optimal transmission switching: A novel approach to enhance power grid security margins through vulnerability mitigation under renewables uncertainties,” Applied Energy, vol. 305, no. 117851, pp. 1–14, 2022. DOI: 10.1016/j.apenergy.2021.117851
16. D. Michaelson, H. Mahmood and J. Jiang, “A predictive energy management system using pre-emptive load shedding for islanded photovoltaic microgrids,” IEEE Transactions on Industrial Electronics, vol. 64, no. 7, pp. 5440–5448, 2017. DOI: 10.1109/TIE.2017.2677317
17. R. S. Sutton and A. G. Barto, “Reinforcement learning: An introduction,” 2nd ed., A Bradford Book, 2018. DOI: 10.1017/S0263574799271172
18. D. Cao, W. Hu, J. Zhao, G. Zhang, B. Zhang, Z. Liu, Z. Chen and F. Blaabjerg, “Reinforcement learning and its applications in modern power and energy systems: A review,” Journal of Modern Power Systems and Clean Energy, vol. 8, no. 6, pp. 1029–1042, 2020. DOI: 10.35833/MPCE.2020.000552
19. E. Mocanu, D. C. Mocanu, P. H. Nguyen, A. Liotta, M. E. Webber, M. Gibescu and J. G. Slootweg, “On-line building energy optimization using deep reinforcement learning,” IEEE Transactions on Smart Grid, vol. 10, no. 4, pp. 3698–3708, 2018. DOI: 10.1109/TSG.2018.2834219
20. Y. Zhang, X. Wang, J. Wang and Y. Zhang, “Deep reinforcement learning based volt-VAR optimization in smart distribution systems,” IEEE Transactions on Smart Grid, vol. 12, no. 1, pp. 361–371, 2020. DOI: 10.1109/TSG.2020.3010130
21. Z. Yan and Y. Xu, “Data-driven load frequency control for stochastic power systems: A deep reinforcement learning method with continuous action search,” IEEE Transactions on Power Systems, vol. 34, no. 2, pp. 1653–1656, 2018. DOI: 10.1109/TPWRS.2018.2881359
22. Q. Huang, R. Huang, W. Hao, J. Tan, R. Fan and Z. Huang, “Adaptive power system emergency control using deep reinforcement learning,” IEEE Transactions on Smart Grid, vol. 11, no. 2, pp. 1171–1182, 2019. DOI: 10.1109/TSG.2019.2933191
23. Z. Zhang, D. Zhang and R. C. Qiu, “Deep reinforcement learning for power system applications: An overview,” CSEE Journal of Power and Energy Systems, vol. 6, no. 1, pp. 213–225, 2019. DOI: 10.17775/CSEEJPES.2019.00920
24. Q. Li, T. Lin, Q. Yu, H. Du, J. Li and X. Fu, “Review of deep reinforcement learning and its application in modern renewable power system control,” Energies, vol. 16, no. 10, pp. 1–23, 2023. DOI: 10.3390/en16104143
25. J. N. Tsitsiklis, “Asynchronous stochastic approximation and Q-learning,” Machine Learning, vol. 16, pp. 185–202, 1994. DOI: 10.1007/BF00993306
26. A. Agarwal, S. M. Kakade, J. D. Lee and G. Mahajan, “Optimality and approximation with policy gradient methods in Markov decision processes,” in Conference on Learning Theory, PMLR, vol. 125, pp. 64–66, 2020. https://proceedings.mlr.press/v125/agarwal20a.html
27. H. Wang and B. Raj, “On the origin of deep learning,” arXiv preprint arXiv:1702.07800, 2017. DOI: 10.48550/arXiv.1702.07800
28. Y. LeCun, Y. Bengio and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015. DOI: 10.1038/nature14539
29. R. Sun, “Optimization for deep learning: theory and algorithms,” arXiv preprint arXiv:1912.08957, 2019. DOI: 10.48550/arXiv.1912.08957
30. J. Tsitsiklis and B. Van Roy, “Analysis of temporal-difference learning with function approximation,” Advances in Neural Information Processing Systems, vol. 9, pp. 1–7, 1996. https://proceedings.neurips.cc/paper_files/paper/1996/file/e00406144c1e7e35240afed70f34166a-Paper.pdf
31. C. J. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, pp. 279–292, 1992. DOI: 10.1007/BF00992698
32. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015. DOI: 10.1038/nature14236
33. J. Fan, Z. Wang, Y. Xie and Z. Yang, “A theoretical analysis of deep Q-learning,” in Learning for Dynamics and Control, PMLR, vol. 120, pp. 486–489, 2020. https://proceedings.mlr.press/v120/yang20a.html
34. H. van Hasselt, A. Guez and D. Silver, “Deep reinforcement learning with double Q-learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, pp. 2094–2100, 2016. DOI: 10.1609/aaai.v30i1.10295
35. Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot and N. de Freitas, “Dueling network architectures for deep reinforcement learning,” in International Conference on Machine Learning, PMLR, vol. 48, pp. 1995–2003, 2016. https://proceedings.mlr.press/v48/wangf16.html
36. D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra and M. Riedmiller, “Deterministic policy gradient algorithms,” in International Conference on Machine Learning, PMLR, vol. 32, no. 1, pp. 387–395, 2014. https://proceedings.mlr.press/v32/silver14.html
37. R. S. Sutton, D. McAllester, S. Singh and Y. Mansour, “Policy gradient methods for reinforcement learning with function approximation,” Advances in Neural Information Processing Systems, vol. 12, pp. 1–7, 1999. https://proceedings.neurips.cc/paper_files/paper/1999/file/464d828b85b0bed98e80ade0a5c43b0f-Paper.pdf
38. T. Degris, M. White and R. S. Sutton, “Off-policy actor-critic,” arXiv preprint arXiv:1205.4839, 2012. DOI: 10.48550/arXiv.1205.4839
39. S. Li, S. Bing and S. Yang, “Distributional advantage actor-critic,” arXiv preprint arXiv:1806.06914, 2018. DOI: 10.48550/arXiv.1806.06914
40. V. Mnih et al., “Asynchronous methods for deep reinforcement learning,” arXiv preprint arXiv:1602.01783, 2016. DOI: 10.48550/arXiv.1602.01783
41. T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel et al., “Soft actor-critic algorithms and applications,” arXiv preprint arXiv:1812.05905, 2018. DOI: 10.48550/arXiv.1812.05905
42. J. Schulman, F. Wolski, P. Dhariwal, A. Radford and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017. DOI: 10.48550/arXiv.1707.06347
43. S. Pateria, B. Subagdja, A.-H. Tan and C. Quek, “Hierarchical reinforcement learning: A comprehensive survey,” ACM Computing Surveys (CSUR), vol. 54, no. 5, pp. 1–35, 2021. DOI: 10.1145/3453160
44. S. Gu, L. Yang, Y. Du, G. Chen, F. Walter, J. Wang and A. Knoll, “A review of safe reinforcement learning: Methods, theories and applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 12, pp. 11216–11235, 2024. DOI: 10.1109/TPAMI.2024.3457538
45. F. Meng, Y. Bai and J. Jin, “An advanced real-time dispatching strategy for a distributed energy system based on the reinforcement learning algorithm,” Renewable Energy, vol. 178, pp. 13–24, 2021. DOI: 10.1016/j.renene.2021.06.032
46. T. Yang, L. Zhao, W. Li and A. Y. Zomaya, “Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning,” Energy, vol. 235, no. 121377, pp. 1–15, 2021. DOI: 10.1016/j.energy.2021.121377
47. A. S. Ebrie and Y. J. Kim, “Reinforcement learning-based optimization for power scheduling in a renewable energy connected grid,” Renewable Energy, vol. 230, no. 120886, pp. 1–27, 2024. DOI: 10.1016/j.renene.2024.120886
48. X. Han, C. Mu, J. Yan and Z. Niu, “An autonomous control technology based on deep reinforcement learning for optimal active power dispatch,” International Journal of Electrical Power & Energy Systems, vol. 145, no. 108686, pp. 1–10, 2023. DOI: 10.1016/j.ijepes.2022.108686
49. X. Zhou, J. Wang, X. Wang and S. Chen, “Optimal dispatch of integrated energy system based on deep reinforcement learning,” Energy Reports, vol. 9, pp. 373–378, 2023. DOI: 10.1016/j.egyr.2023.09.157
50. I. Damjanović, I. Pavić, M. Puljiz and M. Brcic, “Deep reinforcement learning-based approach for autonomous power flow control using only topology changes,” Energies, vol. 15, no. 19, pp. 1–16, 2022. DOI: 10.3390/en15196920
51. M. Subramanian, J. Viebahn, S. H. Tindemans, B. Donnot and A. Marot, “Exploring grid topology reconfiguration using a simple deep reinforcement learning approach,” in 2021 IEEE Madrid PowerTech, pp. 1–6, 2021. DOI: 10.1109/PowerTech46648.2021.9494879
52. Z. Yang, Z. Qiu, Y. Wang, C. Yan, X. Yang and G. Deconinck, “Power grid topology regulation method based on hierarchical reinforcement learning,” in 2024 Second International Conference on Cyber-Energy Systems and Intelligent Energy (ICCSIE), pp. 1–6, 2024. DOI: 10.1109/ICCSIE61360.2024.10698617
53. Z. Qiu, Y. Zhao, W. Shi, F. Su and Z. Zhu, “Distribution network topology control using attention mechanism-based deep reinforcement learning,” in 2022 4th International Conference on Electrical Engineering and Control Technologies (CEECT), pp. 55–60, 2022. DOI: 10.1109/CEECT55960.2022.10030642
54. X. Han, Y. Hao, Z. Chong, S. Ma and C. Mu, “Deep reinforcement learning based autonomous control approach for power system topology optimization,” in 2022 41st Chinese Control Conference (CCC), pp. 6041–6046, 2022. DOI: 10.23919/CCC55666.2022.9902073
55. R. Huang, Y. Chen, T. Yin, X. Li, A. Li, J. Tan, W. Yu, Y. Liu and Q. Huang, “Accelerated deep reinforcement learning based load shedding for emergency voltage control,” arXiv preprint arXiv:2006.12667, 2020. DOI: 10.48550/arXiv.2006.12667
56. Y. Pei, J. Yang, J. Wang, P. Xu, T. Zhou and F. Wu, “An emergency control strategy for undervoltage load shedding of power system: A graph deep reinforcement learning method,” IET Generation, Transmission & Distribution, vol. 17, no. 9, pp. 2130–2141, 2023. DOI: 10.1049/gtd2.12795
57. J. Li, S. Chen, X. Wang and T. Pu, “Load shedding control strategy in power grid emergency state based on deep reinforcement learning,” CSEE Journal of Power and Energy Systems, vol. 8, no. 4, pp. 1175–1182, 2021. DOI: 10.17775/CSEEJPES.2020.06120
58. J. Zhang, Y. Luo, B. Wang, C. Lu, J. Si and J. Song, “Deep reinforcement learning for load shedding against short-term voltage instability in large power systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 8, pp. 4249–4260, 2021. DOI: 10.1109/TNNLS.2021.3121757
59. H. Chen, J. Zhuang, G. Zhou, Y. Wang, Z. Sun and Y. Levron, “Emergency load shedding strategy for high renewable energy penetrated power systems based on deep reinforcement learning,” Energy Reports, vol. 9, pp. 434–443, 2023. DOI: 10.1016/j.egyr.2023.03.027
60. Z. Hu, Z. Shi, L. Zeng, W. Yao, Y. Tang and J. Wen, “Knowledge-enhanced deep reinforcement learning for intelligent event-based load shedding,” International Journal of Electrical Power & Energy Systems, vol. 148, no. 108978, pp. 1–11, 2023. DOI: 10.1016/j.ijepes.2023.108978
61. Y. Zhang, M. Yue and J. Wang, “Adaptive load shedding for grid emergency control via deep reinforcement learning,” in 2021 IEEE Power & Energy Society General Meeting (PESGM), pp. 1–5, 2021. DOI: 10.1109/PESGM46819.2021.9638058