Reinforcement learning for microgrid energy management

published in

  • Energy

publication date

  • September 2013

start page

  • 133

end page

  • 146

issue

  • 2013

volume

  • 59

International Standard Serial Number (ISSN)

  • 0360-5442

Electronic International Standard Serial Number (EISSN)

  • 1873-6785

abstract

  • We consider a microgrid for energy distribution, with a local consumer, a renewable generator (wind turbine) and a storage facility (battery), connected to the external grid via a transformer. We propose a two steps-ahead reinforcement learning algorithm to plan the battery scheduling, which plays a key role in achieving the consumer's goals. The underlying framework is one of multi-criteria decision-making by an individual consumer whose goals are to increase the utilization rate of the battery during periods of high electricity demand (so as to decrease electricity purchases from the external grid) and to increase the utilization rate of the wind turbine for local use (so as to increase the consumer's independence from the external grid). Predictions of available wind power feed the reinforcement learning algorithm, which selects the optimal battery scheduling actions. The embedded learning mechanism enhances the consumer's knowledge of the optimal battery scheduling actions under different time-dependent environmental conditions. The developed framework gives intelligent consumers the capability to learn the stochastic environment and to use that experience to select optimal energy management actions.
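  • For illustration only, the sketch below shows tabular Q-learning applied to a toy battery-scheduling problem of the kind described in the abstract. It is not the authors' two steps-ahead algorithm; the state layout, action set, reward weighting, demand value and hyperparameters are all assumptions made for this example.

```python
# Minimal sketch (not the paper's implementation): tabular Q-learning for
# battery scheduling in a toy microgrid. State = (hour, wind level, battery
# level); actions = charge / idle / discharge. The reward is a hypothetical
# weighted sum of the two consumer goals named in the abstract: reducing
# purchases from the external grid and using local wind power.
import random
from collections import defaultdict

ACTIONS = ("charge", "idle", "discharge")
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # illustrative hyperparameters
DEMAND = 2.0                             # fixed consumer demand per step (toy value)

Q = defaultdict(float)                   # Q[(state, action)] -> estimated value


def step(state, action):
    """Toy environment transition: returns (next_state, reward)."""
    hour, wind, battery = state
    if action == "charge" and battery < 3 and wind > 0:
        battery += 1                         # one unit of wind diverted to storage
        local = min(wind - 1, DEMAND)
    elif action == "discharge" and battery > 0:
        battery -= 1                         # one stored unit supplements the wind
        local = min(wind + 1, DEMAND)
    else:
        local = min(wind, DEMAND)
    purchase = max(DEMAND - local, 0.0)      # energy bought from the external grid
    reward = -purchase + 0.5 * local         # hypothetical weighting of the two goals
    next_wind = random.choice([0, 1, 2, 3])  # stand-in for a stochastic wind forecast
    return ((hour + 1) % 24, next_wind, battery), reward


def choose(state):
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


state = (0, 1, 1)
for _ in range(50_000):                      # learning loop
    action = choose(state)
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```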

keywords

  • smart grids; microgrids; Markov chain model; reinforcement learning; sensitivity analysis