
References

1. Z. Zhang, D. Zhang and R. C. Qiu, “Deep reinforcement learning for power system applications: An overview,” CSEE Journal of Power and Energy Systems, vol. 6, no. 1, pp. 213–225, 2019. DOI: 10.17775/CSEEJPES.2019.00920

2. A. Marot, B. Donnot, G. Dulac-Arnold, A. Kelly, A. O’Sullivan, J. Viebahn, M. Awad, I. Guyon, P. Panciatici and C. Romero, “Learning to run a power network challenge: a retrospective analysis,” in NeurIPS 2020 Competition and Demonstration Track, PMLR, pp. 112–132, 2021. https://proceedings.mlr.press/v133/marot21a.html

3. D. Ernst, M. Glavic and L. Wehenkel, “Power systems stability control: reinforcement learning framework,” IEEE Transactions on Power Systems, vol. 19, no. 1, pp. 427–435, 2004. DOI: 10.1109/TPWRS.2003.821457

4. W. Cai, H. N. Esfahani, A. B. Kordabad and S. Gros, “Optimal management of the peak power penalty for smart grids using MPC-based reinforcement learning,” in 2021 60th IEEE Conference on Decision and Control (CDC), pp. 6365–6370, 2021. DOI: 10.1109/CDC45484.2021.9683333

5. M. Kamel, R. Dai, Y. Wang, F. Li and G. Liu, “Data-driven and model-based hybrid reinforcement learning to reduce stress on power systems branches,” CSEE Journal of Power and Energy Systems, vol. 7, no. 3, pp. 433–442, 2021. DOI: 10.17775/CSEEJPES.2020.04570

6. J. Li, S. Chen, X. Wang and T. Pu, “Load shedding control strategy in power grid emergency state based on deep reinforcement learning,” CSEE Journal of Power and Energy Systems, vol. 8, no. 4, pp. 1175–1182, 2021. DOI: 10.17775/CSEEJPES.2020.06120

7. H. Yousuf, A. Y. Zainal, M. Alshurideh and S. A. Salloum, “Artificial intelligence models in power system analysis,” in Artificial Intelligence for Sustainable Development: Theory, Practice and Future Applications, Springer, vol. 912, pp. 231–242, 2020. DOI: 10.1007/978-3-030-51920-9_12

8. C. Zhao, U. Topcu, N. Li and S. Low, “Design and stability of load-side primary frequency control in power systems,” IEEE Transactions on Automatic Control, vol. 59, no. 5, pp. 1177–1189, 2014. DOI: 10.1109/TAC.2014.2298140

9. Y. Zhang, X. Shi, H. Zhang, Y. Cao and V. Terzija, “Review on deep learning applications in frequency analysis and control of modern power system,” International Journal of Electrical Power & Energy Systems, vol. 136, art. no. 107744, pp. 1–18, 2022. DOI: 10.1016/j.ijepes.2021.107744

10. A. K. Ozcanli, F. Yaprakdal and M. Baysal, “Deep learning methods and applications for electrical power systems: A comprehensive review,” International Journal of Energy Research, vol. 44, no. 9, pp. 7136–7157, 2020. DOI: 10.1002/er.5331

11. D. Yoon, S. Hong, B. J. Lee and K. E. Kim, “Winning the L2RPN challenge: Power grid management via semi-Markov afterstate actor-critic,” in 9th International Conference on Learning Representations (ICLR 2021), pp. 1–12, 2021. https://openreview.net/forum?id=LmUJqB1Cz8

12. M. Subramanian, J. Viebahn, S. H. Tindemans, B. Donnot and A. Marot, “Exploring grid topology reconfiguration using a simple deep reinforcement learning approach,” in 2021 IEEE Madrid PowerTech, pp. 1–6, 2021. DOI: 10.1109/PowerTech46648.2021.9494879

13. I. Damjanović, I. Pavić, M. Brčić and R. Jerčić, “High performance computing reinforcement learning framework for power system control,” in 2023 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), IEEE, pp. 1–5, 2023. DOI: 10.1109/ISGT51731.2023.10066416

14. J. Schulman, F. Wolski, P. Dhariwal, A. Radford and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017. DOI: 10.48550/arXiv.1707.06347

15. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015. DOI: 10.1038/nature14236

16. T. Haarnoja, A. Zhou, P. Abbeel and S. Levine, “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,” in International Conference on Machine Learning, PMLR, pp. 1861–1870, 2018.

17. Y. Liu, D. Zhang and H. B. Gooi, “Optimization strategy based on deep reinforcement learning for home energy management,” CSEE Journal of Power and Energy Systems, vol. 6, no. 3, pp. 572–582, 2020. DOI: 10.17775/CSEEJPES.2019.02890

18. Y. Zhou, B. Zhang, C. Xu, T. Lan, R. Diao, D. Shi, Z. Wang and W.-J. Lee, “A data-driven method for fast AC optimal power flow solutions via deep reinforcement learning,” Journal of Modern Power Systems and Clean Energy, vol. 8, no. 6, pp. 1128–1139, 2020. DOI: 10.35833/MPCE.2020.000522

19. B. Zhang, W. Hu, D. Cao, T. Li, Z. Zhang, Z. Chen and F. Blaabjerg, “Soft actor-critic-based multi-objective optimized energy conversion and management strategy for integrated energy systems with renewable energy,” Energy Conversion and Management, vol. 243, art. no. 114381, pp. 1–15, 2021. DOI: 10.1016/j.enconman.2021.114381

20. G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang and W. Zaremba, “OpenAI Gym,” arXiv preprint arXiv:1606.01540, 2016. DOI: 10.48550/arXiv.1606.01540

21. M. Lehna, J. Viebahn, A. Marot, S. Tomforde and C. Scholz, “Managing power grids through topology actions: A comparative study between advanced rule-based and reinforcement learning agents,” Energy and AI, vol. 14, art. no. 100276, pp. 1–11, 2023. DOI: 10.1016/j.egyai.2023.100276

22. I. Damjanović, I. Pavić, M. Puljiz and M. Brcic, “Deep reinforcement learning-based approach for autonomous power flow control using only topology changes,” Energies, vol. 15, no. 19, pp. 1–16, 2022. DOI: 10.3390/en15196920

23. X. Han, Y. Hao, Z. Chong, S. Ma and C. Mu, “Deep reinforcement learning based autonomous control approach for power system topology optimization,” in 2022 41st Chinese Control Conference (CCC), pp. 6041–6046, 2022. DOI: 10.23919/CCC55666.2022.9902073

24. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra and M. Riedmiller, “Playing Atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.

25. K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink and J. Schmidhuber, “LSTM: A search space odyssey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2222–2232, 2016. DOI: 10.1109/TNNLS.2016.2582924

26. Z. C. Lipton, J. Berkowitz and C. Elkan, “A critical review of recurrent neural networks for sequence learning,” arXiv preprint arXiv:1506.00019, 2015. DOI: 10.48550/arXiv.1506.00019

27. P.-L. Bacon, J. Harb and D. Precup, “The option-critic architecture,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, pp. 1726–1734, 2017. DOI: 10.1609/aaai.v31i1.10916

28. B. Donnot, “Grid2op - A testbed platform to model sequential decision making in power systems,” 2020. https://github.com/rte-france/grid2op