From Data To Control
Learning an HVAC Control Policy
DOI:
https://doi.org/10.34641/clima.2022.362

Keywords:
Deep Learning, Reinforcement Learning, HVAC control, system identification

Abstract
This study introduces a framework for smart HVAC controllers that can be deployed at scale. The proposed controllers derive their control policy solely from data. First, a simulator of the process is learned, which we call the Neural Twin. The results show that the Neural Twin framework is able to simulate several distinct processes with an average absolute error close to 0.2 °C, even when predicting several hours ahead. The Neural Twin is then used to develop two control algorithms. The first learns a control policy for a process using a neural network, trained with Proximal Policy Optimization by gathering experience from the simulated environment provided by the Neural Twin. The second performs Model Predictive Control, using the Neural Twin in real time during control: given a set of candidate control sequences, it selects the optimal sequence based on the predictions over a horizon, typically a few hours ahead. Both control algorithms were evaluated in several environments, and for one of those environments the best controller was tested on a physical room. The results show that the control algorithms can handle a wide variety of processes without manual tuning. Compared to the conventional, manually tuned control algorithms, the proposed controllers achieved improved performance, mainly in terms of energy usage. It is estimated that they can reduce effective energy usage by 5% - 40%, while retaining thermal comfort and stability. The controller trained with reinforcement learning showed the best performance. From the results it is concluded that the proposed control methods are an attractive alternative to conventional controllers.
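To make the second algorithm concrete, the sketch below illustrates one way a receding-horizon search over candidate control sequences could be scored with a learned simulator. This is a minimal illustration, not the authors' implementation: the function name neural_twin_rollout, the discrete action set, the horizon length, and the cost weights are assumptions introduced here for clarity.

import itertools
import numpy as np

def select_control_sequence(neural_twin_rollout, state, setpoint,
                            horizon_steps=6, actions=(0.0, 0.5, 1.0),
                            comfort_weight=1.0, energy_weight=0.1):
    """Enumerate candidate control sequences over the horizon and return
    the one with the lowest predicted cost (comfort error plus a proxy
    for energy use). neural_twin_rollout(state, seq) is assumed to return
    the predicted temperature trajectory for the given action sequence."""
    best_cost, best_seq = np.inf, None
    for seq in itertools.product(actions, repeat=horizon_steps):
        # Predict the temperature trajectory for this candidate sequence.
        temps = np.asarray(neural_twin_rollout(state, seq))
        comfort_cost = np.sum((temps - setpoint) ** 2)
        energy_cost = np.sum(seq)  # crude proxy: total control effort
        cost = comfort_weight * comfort_cost + energy_weight * energy_cost
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    # In receding-horizon MPC only the first action is applied; the
    # optimization is then repeated at the next control step.
    return best_seq[0], best_seq

Exhaustive enumeration is only feasible for small action sets and short horizons; a practical controller could instead sample or otherwise restrict the set of candidate sequences, as the abstract's phrase "given a set of possible control sequences" suggests.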