Reinforcement learning for occupant-centric operation of residential energy system
Evaluating the adaptation potential to the unusual occupants' behavior during the COVID-19 pandemic
Keywords: Reinforcement Learning, Space heating, Hot water, Occupant behavior, Solar energy, Machine Learning
Occupant behavior is a highly stochastic phenomenon and a key challenge for the optimal control of residential energy systems. With the increasing share of renewable energy in the building sector, the volatile nature of renewable generation poses another key challenge for optimal control. Developing a rule-based or model-based control algorithm that properly accounts for these stochastic parameters and ensures optimal operation is difficult and time-consuming. Instead, the controller can be given the ability to learn these parameters in each specific house, without the need for any model. This research aims to develop a model-free control framework, based on Reinforcement Learning, that takes into account stochastic occupant behavior and PV power production and minimizes energy use while ensuring occupants' comfort and water hygiene. This research, for the first time, integrates a model of Legionella growth to ensure that energy saving does not come at the cost of occupants' health. Hot water use data from three different residential houses were measured to evaluate the performance of the proposed framework on realistic occupant behavior. The measurement campaign took place during the COVID-19 pandemic, which further highlights the adaptability of the Reinforcement Learning framework to an unusual situation in which predicting occupant behavior is even more challenging. Results indicate that the proposed framework can successfully learn and predict occupant behavior and PV power production, and significantly reduce energy use without violating comfort and hygiene requirements.
This work is licensed under a Creative Commons Attribution 4.0 International License.