system configuration and sizing of generation as well as storage technologies are essential steps for cost-efficient investments. In current research, flexibility options for electricity from growing renewable energy sources are attracting attention. One of the options under consideration is power-to-gas technology in combination with fuel cells. Linear Programs for optimal system operation exist, for example, for distributed energy systems. In this study we propose a Mixed Integer Linear Program of a power-to-gas unit consisting of an electrolyzer, a fuel cell and a hydrogen storage unit. For the fuel cell, a minimum load and a non-linear efficiency curve are taken into account. The non-linear efficiency curve is approximated by piecewise linearization. Bilinear products in the modeling of the efficiency curve are substituted to maintain full power plant sizing and operation functionality. Different fuels to be converted in the fuel cell, such as natural gas and hydrogen, are implemented as well. As a result, we show that a detailed model of the non-linear efficiency curve of a fuel cell leads to more accurate results concerning system operation. The configuration of system components in the observed energy system changes; in particular, the battery system experiences a change in sizing and operation. However, the solving time of the model increases dramatically. Our results demonstrate a valuable approach for comparing the results of a Linear Program to those of a Mixed Integer Linear Program. 
This makes it possible to evaluate the necessity of detailed over simplified models regarding the calculation of cost-effectiveness}, author = {Ebell, Niklas and Bott, Andre and Beck, Tobias and Bürner, Johannes and Praß, Julian and Franke, Jörg}, doi = {10.4028/www.scientific.net/AMM.871.11}, faupublication = {yes}, journal = {Applied Mechanics and Materials}, keywords = {power-to-gas; Mixed Integer Linear Program; optimal operation; system configuration; fuel cell; load-dependent efficiency}, pages = {11--19}, peerreviewed = {Yes}, title = {{Model} of a {Power}-to-{Gas} {System} with {Fuel} {Cell} in a {Mixed} {Integer} {Linear} {Program} for the {Energy} {Supply} of {Residential} and {Commercial} {Buildings}}, url = {https://www.scientific.net/Paper/Preview/525041}, volume = {871}, year = {2017} } @inproceedings{faucris.203860270, abstract = {Rooftop-installed photovoltaic systems with battery energy storage systems for residential buildings are increasing in number. Controlling the power flows of volatile and unpredictable renewable energy sources in such a system is challenging. Therefore, in this paper we present an algorithm based on Reinforcement Learning to control the power flows of a residential household with a battery energy storage system and a photovoltaic system, using neural networks for function approximation. In a nondeterministic environment, the optimal choice of a series of actions to be taken is complex. By training a Reinforcement Learning algorithm, these complex patterns can be learned. The task of the energy storage is to reduce the energy feed-in to the electric grid as well as to improve power system stability by providing frequency containment reserve power to the transmission system operator. Our model includes the profiles of the grid's frequency, photovoltaic power generation and the electric load of two different households for one year. 
The first household is used to train the algorithm and to adjust the weights of the neural network that estimates the state-action values. The second household is used to test the functionality of the algorithm on unseen data. To evaluate the behavior of the Reinforcement Learning algorithm, the results are compared to a simulation of rule-based control. As a result, after 300 episodes of training, the algorithm is able to reduce the energy consumption from the grid by up to 7.8% compared to the rule-based control system managing the system's power flows.

algorithms for decentrally acting energy entities are needed. We present an approach of distributed Reinforcement Learning in a multi-agent setup to find a control strategy for two cooperative agents within an energy cell. In order to practice energy sharing to decrease the energy cell's overall dependence on the electrical grid, we train two independently learning agents, an energy storage unit and an electric power generator, using Q-learning. We compare the learned strategies of the agents under partial and full observability of the environment and evaluate the dependence of the energy cell on the electrical grid. Our results show that distributed Q-learning with independently learning agents works in the setup of an energy cell without the need for information exchange between agents. Under partial observability of the environment, the algorithm reaches performance comparable to that under full observability, with less need for communication but at the cost of a five-times-longer training time.