Abstract
Load forecasting has long been a key task in reliable power system planning and operation. In recent years, advanced metering infrastructure has proliferated across the industry, giving rise to many load forecasting methods based on frequent measurements of power states obtained by smart meters. Meanwhile, real-world constraints arising in this new setting present both challenges and opportunities for achieving high load forecastability. One such constraint is the bandwidth limit often imposed on transmission between data concentrators and utilities, which restricts the amount of data that can be sampled from customers. What is missing is a sampling-rate control policy that adapts to users' load behaviors through online data interaction with the smart grid environment. In this paper, we formulate the bandwidth-constrained sampling-rate control problem as a Markov decision process (MDP) and provide a reinforcement learning (RL)-based algorithm that solves the MDP for an optimal sampling-rate control policy. The resulting policy can be updated in real time to accommodate the volatile load behaviors observed in the smart grid. Numerical experiments show that the proposed RL-based algorithm outperforms competing algorithms and delivers superior predictive performance.
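The abstract does not specify the paper's MDP formulation or RL algorithm. As a minimal illustrative sketch only, the toy example below applies tabular Q-learning to a hypothetical MDP whose states are discretized load-volatility levels and whose actions are sampling rates filtered by a bandwidth cap. All state/action definitions, reward dynamics, and parameter values here are assumptions for illustration, not the authors' model.

```python
import random

# Hypothetical toy MDP (illustration only; not the paper's model):
# states  = discretized load-volatility levels observed from smart meters
# actions = candidate sampling rates (samples per interval)
STATES = [0, 1, 2]            # low / medium / high volatility
ACTIONS = [1, 2, 4, 8]        # candidate sampling rates
BANDWIDTH_CAP = 4             # rates above this violate the bandwidth constraint

def step(state, rate, rng):
    """Simulated environment: reward trades a forecast-accuracy gain
    (higher rates help more under high volatility, with diminishing
    returns) against a bandwidth cost. Purely illustrative dynamics."""
    accuracy_gain = min(rate, 1 + state)
    cost = 0.3 * rate
    next_state = rng.choice(STATES)   # volatility drifts at random
    return next_state, accuracy_gain - cost

def q_learning(steps=20000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    # The bandwidth constraint is enforced by restricting the action set.
    feasible = [a for a in ACTIONS if a <= BANDWIDTH_CAP]
    q = {(s, a): 0.0 for s in STATES for a in feasible}
    s = rng.choice(STATES)
    for _ in range(steps):
        # eps-greedy action selection over feasible sampling rates
        if rng.random() < eps:
            a = rng.choice(feasible)
        else:
            a = max(feasible, key=lambda act: q[(s, act)])
        s2, r = step(s, a, rng)
        target = r + gamma * max(q[(s2, a2)] for a2 in feasible)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2
    # Greedy policy: one sampling rate per volatility level
    return {s: max(feasible, key=lambda a: q[(s, a)]) for s in STATES}

policy = q_learning()
print(policy)   # higher volatility -> higher learned sampling rate
```

In this sketch the learned policy assigns higher sampling rates to higher-volatility states while never selecting a rate above the cap, which mirrors the bandwidth-constrained control idea described in the abstract at a cartoon level.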
Original language | English (US)
---|---
Pages (from-to) | 924-934
Number of pages | 11
Journal | European Journal of Operational Research
Volume | 295
Issue number | 3
State | Published - Dec 16 2021
Keywords
- Forecasting
- Machine learning
- Markov decision processes
- Reinforcement learning
- Sampling-rate control
ASJC Scopus subject areas
- Computer Science (all)
- Modeling and Simulation
- Management Science and Operations Research
- Information Systems and Management