A Deep Reinforcement Learning-Based Partitioning Method for Power System Parallel Restoration
Changcheng Li, Weimeng Chang, Dahai Zhang, Jinghan He,
Energy Engineering,
Volume 123, Issue 1,
2025,
ISSN 0199-8595,
https://doi.org/10.32604/ee.2025.069389.
(https://www.sciencedirect.com/science/article/pii/S0199859525002180)
Abstract: Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) whose objective is to maximize modularity, and the key constraints that parallel restoration imposes on partitioning are incorporated. Second, based on this objective and these constraints, the reward function of the partitioning MDP is designed using a relative-deviation normalization scheme that reduces mutual interference between its reward and penalty terms, and a soft bonus-scaling mechanism is introduced to mitigate value overestimation caused by abrupt reward jumps. Then, a deep Q-network (DQN) is applied to solve the partitioning MDP and generate partitioning schemes, with two experience replay buffers employed to accelerate training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method generates a high-modularity partitioning result that satisfies all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results show that an appropriate discount factor is crucial for both the convergence speed and the stability of training.
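The modularity objective mentioned in the abstract can be illustrated with a short sketch. The code below computes the standard Newman modularity Q = Σ_c (e_c/m − (d_c/2m)²) for a given node-to-partition assignment; the graph and labels are a hypothetical toy example, not data from the paper, and the paper's own constraint handling is omitted.

```python
# Hedged sketch: modularity of a graph partition, i.e. the quantity the
# partitioning MDP in the abstract seeks to maximize. For each community c:
#   e_c = number of intra-community edges, d_c = total degree in c,
#   m   = total number of edges;  Q = sum_c (e_c/m - (d_c/(2m))**2).
from collections import defaultdict

def modularity(edges, labels):
    """edges: list of (u, v) pairs; labels: dict mapping node -> community id."""
    m = len(edges)
    intra = defaultdict(int)   # intra-community edge counts e_c
    degree = defaultdict(int)  # summed node degrees d_c per community
    for u, v in edges:
        degree[labels[u]] += 1
        degree[labels[v]] += 1
        if labels[u] == labels[v]:
            intra[labels[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# Toy example (hypothetical): two triangles joined by a single bridge edge,
# partitioned so that each triangle forms one island.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(modularity(edges, labels))  # ≈ 0.357
```

A good restoration partition groups tightly connected buses into the same island, which is exactly what a high Q rewards.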
Keywords: Partitioning method; parallel restoration; deep reinforcement learning; experience replay buffer; partitioning modularity