Published in

Wiley Open Access, IET Energy Systems Integration, 1(6), pp. 62-72, 2023

DOI: 10.1049/esi2.12118

Data‐driven power system dynamic security assessment under adversarial attacks: Risk warning based interpretation analysis and mitigation

Journal article published in 2023 by Zhebin Chen, Chao Ren, Yan Xu, Zhao Yang Dong, Qiaoqiao Li
This paper is made freely available by the publisher.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

Power system dynamic security assessment (DSA) has long been essential for protecting the system against the risk of cascading failures and widespread blackouts. Machine learning (ML) based data-driven strategies are promising owing to their real-time computation speed and knowledge-discovery capacity. However, ML algorithms are vulnerable to well-designed malicious input samples that can lead to wrong outputs. Adversarial attacks are implemented to measure the vulnerability of the trained ML models: the attack targets are identified by interpretation analysis, so that data features with large SHAP values are assigned perturbations. The advantage of the proposed method is that an instance-based DSA method is established with interpretation of the ML models, in which effective adversarial attacks and their mitigation countermeasures are developed by perturbing the features with high importance. The generated adversarial examples are then employed for adversarial training and mitigation. The simulation results show that model accuracy and robustness vary with the quantity of adversarial examples used, and that there is not necessarily a trade-off between these two indicators. Furthermore, the rate of successful attacks increases when a larger perturbation bound is permitted. With this method, the correlation between model accuracy and robustness can be clearly characterised, which provides considerable assistance in decision making.
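
As a rough illustration of the workflow the abstract describes, the minimal sketch below ranks features by mean absolute SHAP value, applies bounded perturbations to the high-importance features, and then retrains on the adversarial examples. Everything in it is an assumption for illustration only: the synthetic dataset, the random-forest classifier, the random bounded noise standing in for whatever attack the paper actually crafts, and all parameter values are not taken from the paper.

# Hypothetical sketch of a SHAP-guided adversarial attack plus adversarial
# training. Dataset, model, and parameters are illustrative stand-ins,
# not the authors' setup.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for a DSA dataset: features = operating-point measurements,
# label = secure (1) / insecure (0).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# Interpretation analysis: rank features by mean |SHAP value|.
sv = shap.TreeExplainer(model).shap_values(X_tr)
if isinstance(sv, list):   # older shap versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:         # newer shap versions: (samples, features, classes)
    sv = sv[:, :, 1]
importance = np.abs(sv).mean(axis=0)
targets = np.argsort(importance)[::-1][:5]   # high-SHAP attack targets

def perturb(X, cols, eps, rng):
    """Bounded random perturbation of the targeted feature columns
    (a stand-in for the actual attack used to craft adversarial examples)."""
    X_adv = X.copy()
    X_adv[:, cols] += rng.uniform(-eps, eps, size=(len(X), len(cols)))
    return X_adv

rng = np.random.default_rng(0)
eps = 0.5   # perturbation bound: a larger bound raises the attack success rate
X_te_adv = perturb(X_te, targets, eps, rng)
print(f"clean accuracy:       {model.score(X_te, y_te):.3f}")
print(f"adversarial accuracy: {model.score(X_te_adv, y_te):.3f}")

# Mitigation: adversarial training, i.e. augment the training set with
# adversarial examples and refit.
X_aug = np.vstack([X_tr, perturb(X_tr, targets, eps, rng)])
y_aug = np.concatenate([y_tr, y_tr])
robust = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
print(f"robust adv. accuracy: {robust.score(X_te_adv, y_te):.3f}")

In practice one would sweep eps and the number of adversarial examples added to the training set, which is how the accuracy-robustness relationship the abstract reports would be traced out; the fixed seeds above only keep the sketch reproducible.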