A model-free reinforcement learning approach for the energetic control of a building with non-stationary user behaviour
Conference: Papers in the proceedings of an international conference
We consider a system providing a service to a user (smart building, vehicle…) and allowing the user to interact with it. The service provided to the user regulates physical signals of the user’s surroundings (lighting, heating, radio sound, …). The user can interfere with the system to adjust those signal values to his comfort. However, the user might induce the system to use more energy than needed for his comfort. On the other hand, it is usually difficult to obtain an accurate model of human behaviour that the system could use to control signal values effectively. The problem we address in this paper is therefore for the system to learn from its own history to regulate the signal so as to minimise energy usage while satisfying the user for the service considered. We propose a stateless reinforcement learning algorithm that regulates signal values to minimise energy use and satisfy the user.
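The abstract does not detail the algorithm, but a stateless (state-free) reinforcement learner can be read as a bandit-style controller: one value estimate per candidate setpoint, updated from a reward that trades off energy use against user overrides. The following is a minimal sketch under that assumption; the class and parameter names (`StatelessSetpointController`, `energy_weight`, `override_penalty`) are hypothetical and not taken from the paper.

```python
import random


class StatelessSetpointController:
    """Illustrative stateless (bandit-style) controller: one action-value
    estimate per candidate setpoint, with no notion of environment state."""

    def __init__(self, setpoints, epsilon=0.1, learning_rate=0.1):
        self.setpoints = list(setpoints)           # candidate signal values (e.g. temperatures)
        self.epsilon = epsilon                     # exploration probability
        self.learning_rate = learning_rate         # step size for value updates
        self.values = {s: 0.0 for s in self.setpoints}  # estimated reward per setpoint

    def select_setpoint(self):
        """Epsilon-greedy choice: mostly exploit the best-known setpoint, sometimes explore."""
        if random.random() < self.epsilon:
            return random.choice(self.setpoints)
        return max(self.setpoints, key=lambda s: self.values[s])

    def update(self, setpoint, energy_used, user_overrode,
               energy_weight=0.01, override_penalty=1.0):
        """Reward trades off energy use against user dissatisfaction
        (a manual override is taken as a sign of discomfort)."""
        reward = -energy_weight * energy_used - (override_penalty if user_overrode else 0.0)
        self.values[setpoint] += self.learning_rate * (reward - self.values[setpoint])


# Hypothetical usage: heating setpoints in degrees Celsius
controller = StatelessSetpointController(setpoints=[18, 19, 20, 21, 22])
chosen = controller.select_setpoint()
# ... apply `chosen` for one control period, observe energy use and any user override ...
controller.update(chosen, energy_used=3.2, user_overrode=False)
```

Because the learner is model-free and keeps no state, it adapts to non-stationary user behaviour simply by continuing to update its value estimates from ongoing interaction, rather than relying on a fixed model of the occupant.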