Ebell N, Gütlein M, Pruckner M (2019)
Publication Language: English
Publication Type: Conference Contribution
Publication year: 2019
Publisher: IEEE
Pages Range: 1-5
ISBN: 978-1-5386-8218-0
URI: https://ieeexplore.ieee.org/document/8905520
DOI: 10.1109/ISGTEurope.2019.8905520
Due to the increasing complexity and uncertainty of the future sustainable energy system, new control algorithms for decentrally acting energy entities are needed. We present a distributed reinforcement learning approach in a multi-agent setup to find a control strategy for two cooperative agents within an energy cell. To practice energy sharing and decrease the energy cell's overall dependence on the electrical grid, we train two independently learning agents, an energy storage system and an electric power generator, using Q-learning. We compare the strategies learned by the agents under partial and full observability of the environment and evaluate the dependence of the energy cell on the electrical grid. Our results show that distributed Q-learning with independently learning agents works in the setup of an energy cell without requiring information exchange between the agents. Under partial observability, the algorithm reaches performance comparable to that under full observability with less need for communication, but at the cost of a five times longer training time.
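The independent-learner setup described in the abstract, where each agent keeps its own Q-table and never communicates with the other, can be sketched roughly as follows. This is a minimal illustration of independent tabular Q-learning, not the authors' implementation; all class and action names (e.g. "charge", "discharge") are hypothetical.

```python
import random
from collections import defaultdict

class IndependentQLearner:
    """Tabular Q-learning agent that observes only its own (possibly
    partial) view of the environment and shares no information with
    other agents. Hypothetical sketch, not the paper's code."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)  # maps (state, action) -> Q-value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update toward the bootstrapped TD target.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Two independently learning agents, analogous to the paper's
# storage and generator (illustrative action sets):
storage = IndependentQLearner(actions=["charge", "idle", "discharge"])
generator = IndependentQLearner(actions=["off", "on"])
```

In each environment step, both agents would pick actions from their own observations and update their own tables from the resulting reward, with no message passing between them, which is the property the paper evaluates under partial versus full observability.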
APA:
Ebell, N., Gütlein, M., & Pruckner, M. (2019). Sharing of Energy Among Cooperative Households Using Distributed Multi-Agent Reinforcement Learning. In Proceedings of the 2019 IEEE Innovative Smart Grid Technologies Europe (pp. 1-5). Bucharest, RO: IEEE.
MLA:
Ebell, Niklas, Moritz Gütlein, and Marco Pruckner. "Sharing of Energy Among Cooperative Households Using Distributed Multi-Agent Reinforcement Learning." Proceedings of the 2019 IEEE Innovative Smart Grid Technologies Europe, Bucharest: IEEE, 2019. 1-5.