Explainable Artificial Intelligence Techniques for Power Monitoring Systems
Modern power systems are characterized by a high degree of complexity and uncertainty that may jeopardize their stability, owing to the continuous integration of nonlinear distributed generators and loads. To face this challenge, a recent advancement in smart grid technologies is the use of monitoring systems based mainly on smart meters. Such systems can help grid operators and consumers manage their energy by estimating the power consumption of the various components in the system. Open access to this information can encourage energy-saving behavior, improve the detection of faults and disturbances, enable better demand forecasting, and support energy incentive programs. Nevertheless, with the rapid growth in the number of these measurement units, there is an urgent need for efficient, near real-time algorithms to analyze and make better use of all the available data. Accordingly, with the evolution of deep learning, better classifiers and algorithms are being developed for power monitoring applications. However, despite the evident success of such algorithms, an inherent difficulty is that machine learning models are often very complex, so it may not be clear how or why they make certain decisions, or how they treat real-world data. Therefore, experts in the energy field may find it hard to trust the decisions and recommendations made by such algorithms, which limits their practical use.
In this light, the main objective of this seminar is to present Explainable Artificial Intelligence (XAI) techniques for power monitoring systems and to highlight the potential of XAI in this context. First, a new method is presented that explains the results of load disaggregation and Power Quality Disturbance (PQD) classifiers using recently developed XAI techniques, providing trustworthy and simple feedback to consumers and power experts. Then, an evaluation process is suggested that allows power experts to measure the explainability of such classifiers. Finally, a new XAI method is developed that explains the decisions of PQD classifiers using a latent space representation. The method generates visual explanations for any classifier, without requiring architectural changes or re-training. It is evaluated on PQD localization, and is optimized to be transparent and easy to understand compared to other XAI techniques.
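To give a flavor of model-agnostic, post-hoc explanation for PQD-style signal classifiers, the sketch below uses simple occlusion saliency: mask one window of the input at a time and record how much the classifier's confidence drops. This is an illustrative standard technique, not the speaker's method; the `toy_transient_score` classifier and all parameter values are hypothetical stand-ins for a trained model.

```python
import numpy as np

def occlusion_saliency(classify, signal, window=20, baseline=0.0):
    """Model-agnostic saliency for a 1-D signal classifier: occlude each
    window with a baseline value and measure the drop in confidence.
    Works on any black-box `classify` function; no re-training needed."""
    base_score = classify(signal)
    saliency = np.zeros_like(signal, dtype=float)
    for start in range(0, len(signal), window):
        occluded = signal.copy()
        occluded[start:start + window] = baseline
        # Positive drop: this window contained evidence for the decision.
        saliency[start:start + window] = base_score - classify(occluded)
    return saliency

# Toy "transient detector" (hypothetical stand-in for a trained PQD
# classifier): confidence grows with the peak magnitude above nominal.
def toy_transient_score(signal):
    return float(np.clip(np.max(np.abs(signal)) - 1.0, 0.0, 1.0))

t = np.arange(400)
sig = np.sin(2 * np.pi * t / 40)   # nominal waveform, amplitude 1
sig[250] += 0.8                    # inject an impulsive transient
sal = occlusion_saliency(toy_transient_score, sig, window=20)
# The saliency map peaks on the window containing the transient.
```

Because the explanation only queries the classifier's output, the same loop applies unchanged to any disturbance classifier, which is the appeal of architecture-independent XAI in this setting.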
* Ph.D. seminar, under the supervision of Prof. Yoash Levron.