tions of distributionally robust optimization (DRO) problems. The considered ambiguity sets can exploit information on moments as well as confidence sets. Typically, reformulation approaches using duality theory need to make strong assumptions on the structure of the underlying constraints, such as convexity in the decisions or concavity in the uncertainty. In contrast, here we present a duality-based reformulation approach for DRO problems, where the objective of the adversary is allowed to depend on univariate indicator functions. This renders the problem nonlinear and nonconvex. In order to be able to reformulate the semi-infinite constraints nevertheless, an exact reformulation is presented that is approximated by a discretized counterpart. The approximation is realized as a mixed-integer linear problem that yields sufficient conditions for distributional robustness of the original problem. Furthermore, it is proven that with increasingly fine discretizations, the discretized reformulation converges to the original distributionally robust problem. The approach is made concrete for a challenging, fundamental task in particle separation that appears in material design. Computational results for realistic settings show that the safe approximation yields robust solutions of high quality and can be computed
within a short time}, author = {Dienstbier, Jana and Liers, Frauke and Rolfes, Jan}, faupublication = {yes}, keywords = {Distributionally Robust Optimization, Robust Optimization, Stochastic Optimization, Mixed-Integer Optimization, Discrete Optimization}, note = {https://cris.fau.de/converis/publicweb/Publication/288229871}, peerreviewed = {automatic}, title = {{A} {Safe} {Approximation} {Based} on {Mixed}-{Integer} {Optimization} for {Non}-{Convex} {Distributional} {Robustness} {Governed} by {Univariate} {Indicator} {Functions}}, url = {https://arxiv.org/abs/2301.11185}, year = {2024} } @article{faucris.307873240, abstract = {We provide a unifying, black-box tool for establishing existence of approximate equilibria in weighted congestion games and, at the same time, bounding their Price of Stability. Our framework can handle resources with general costs—including, in particular, decreasing ones—and is formulated in terms of a set of parameters which are determined via elementary analytic properties of the cost functions. We demonstrate the power of our tool by applying it to recover the recent result of Caragiannis and Fanelli [ICALP’19] for polynomial congestion games; improve upon the bounds for fair cost sharing games by Chen and Roughgarden [Theory Comput. Syst., 2009]; and derive new bounds for nondecreasing concave costs. An interesting feature of our framework is that it can be readily applied to mixtures of different families of cost functions; for example, we provide bounds for games whose resources are conical combinations of polynomial and concave costs. 
At the core of our analysis lies the use of a unifying approximate potential function which is simple and general enough to be applicable to arbitrary congestion games, but at the same time powerful enough to produce state-of-the-art bounds across a range of different cost functions.}, author = {Giannakopoulos, Yiannis and Poças, Diogo}, doi = {10.1007/s00224-023-10133-z}, faupublication = {yes}, journal = {Theory of Computing Systems}, keywords = {Approximate equilibria; Atomic congestion games; Potential games; Price of stability}, note = {CRIS-Team Scopus Importer:2023-07-21}, peerreviewed = {Yes}, title = {{A} {Unifying} {Approximate} {Potential} for {Weighted} {Congestion} {Games}}, year = {2023} } @article{faucris.296104822, abstract = {We consider the capacitated facility location problem with (partial) single-sourcing (CFLP-SS). A natural mixed integer formulation for the problem involves 0–1 variables xj indicating whether facility j is used or not and yij variables indicating the fraction of the demand of client i that is satisfied from facility j. When the x variables are fixed, the remaining problem is a transportation problem with single-sourcing. This structure suggests the use of a Benders’ type decomposition algorithm. Here we present three possible variants. Applied to CFLP-SS they are compared computationally with a commercial solver (CPLEX) on the original formulation, a CPLEX version of Benders and a recent cut-and-solve approach developed specifically for CFLP-SS. 
Our results show that for CFLP-SS, when the percentage of clients requiring single-sourcing is less than or equal to 25%, the Benders’ variants achieve speedups of between 1.2 and 3.7.}, author = {Weninger, Dieter and Wolsey, Laurence A.}, doi = {10.1016/j.ejor.2023.02.042}, faupublication = {yes}, journal = {European Journal of Operational Research}, keywords = {Benders’ algorithm; Branch-and-cut; Facilities planning and design; Integer programming; Mixed integer subproblems}, note = {CRIS-Team Scopus Importer:2023-04-14}, peerreviewed = {Yes}, title = {{Benders}-type branch-and-cut algorithms for capacitated facility location with single-sourcing}, year = {2023} } @article{faucris.260238531, author = {Giannakopoulos, Yiannis}, doi = {10.1016/j.tcs.2015.03.010}, faupublication = {no}, journal = {Theoretical Computer Science}, pages = {83--96}, peerreviewed = {Yes}, title = {{Bounding} the {Optimal} {Revenue} of {Selling} {Multiple} {Goods}}, volume = {581}, year = {2015} } @unpublished{faucris.316075428, abstract = {We consider locally recoverable codes (LRCs) and aim to determine the smallest possible length n=n{\_}q (k, d, r) of a linear [n, k, d]{\_}q -code with locality r. For k ≤ 7 we exactly determine all values of n{\_}2(k, d, 2) and for k ≤ 6 we exactly determine all values of n{\_}2(k, d, 1). For the ternary field we also state a few numerical results. 
As a general result we prove that n{\_}q(k, d, r) equals the Griesmer bound if the minimum Hamming distance d is sufficiently large and all other parameters are fixed}, author = {Kurz, Sascha}, faupublication = {yes}, keywords = {linear codes; locally recoverable codes; data storage; bounds for parameters}, note = {https://cris.fau.de/converis/publicweb/Publication/316075428}, peerreviewed = {automatic}, title = {{Bounds} on the minimum distance of locally recoverable codes}, year = {2024} } @article{faucris.266620219, abstract = {We present an algorithm to solve capacity extension problems that frequently occur in energy system optimization models. Such models describe a system where certain components can be installed to reduce future costs and achieve carbon reduction goals; however, the choice of these components requires the solution of a computationally expensive combinatorial problem. In our proposed algorithm, we solve a sequence of linear programs that serve to tighten a budget—the maximum amount we are willing to spend towards reducing overall costs. Our proposal finds application in the general setting where optional investment decisions provide an enhanced portfolio over the original setting that maintains feasibility. We present computational results on two model classes, and demonstrate computational savings up to 96% on certain instances.}, author = {Singh, Bismark and Rehberg, Oliver and Groß, Theresa and Hoffmann, Maximilian and Kotzur, Leander and Stolten, Detlef}, doi = {10.1007/s11590-021-01826-w}, faupublication = {yes}, journal = {Optimization Letters}, peerreviewed = {Yes}, title = {{Budget}-cut: introduction to a budget based cutting-plane algorithm for capacity expansion models}, year = {2021} } @unpublished{faucris.320188815, abstract = {Linear codes play a central role in coding theory and have applications in several branches of mathematics. For error correction purposes the minimum Hamming distance should be as large as possible. 
Linear codes related to applications in Galois Geometry often require a certain divisibility of the occurring weights. In this paper we present an algorithmic framework for the classification of linear codes over finite fields with restricted sets of weights. The underlying algorithms are based on lattice point enumeration and integer linear programming. We present new enumeration and non-existence results for projective two-weight codes, divisible codes, and additive GF(4)-codes}, author = {Kurz, Sascha}, faupublication = {yes}, keywords = {linear codes; classification; enumeration; lattice point enumeration; integer linear programming; two-weight codes}, note = {https://cris.fau.de/converis/publicweb/Publication/320188815}, peerreviewed = {automatic}, title = {{Computer} classification of linear codes based on lattice point enumeration and integer linear programming}, year = {2024} } @inproceedings{faucris.327066209, abstract = {Linear codes related to applications in Galois Geometry often require a certain divisibility of the occurring weights. In this paper we present an algorithmic framework for the classification of linear codes over finite fields with restricted sets of weights. The underlying algorithms are based on lattice point enumeration and integer linear programming. 
We present new enumeration and non-existence results for projective two-weight codes, divisible codes, and additive F4-codes.}, author = {Kurz, Sascha}, booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, date = {2024-07-22/2024-07-25}, doi = {10.1007/978-3-031-64529-7{\_}11}, editor = {Kevin Buzzard and Alicia Dickenstein and Bettina Eick and Anton Leykin and Yue Ren}, faupublication = {yes}, isbn = {9783031645280}, keywords = {classification; enumeration; integer linear programming; lattice point enumeration; linear codes; two-weight codes}, note = {CRIS-Team Scopus Importer:2024-08-16}, pages = {97-105}, peerreviewed = {unknown}, publisher = {Springer Science and Business Media Deutschland GmbH}, title = {{Computer} {Classification} of {Linear} {Codes} {Based} on {Lattice} {Point} {Enumeration}}, venue = {Durham, GBR}, volume = {14749 LNCS}, year = {2024} } @inproceedings{faucris.260251370, author = {Giannakopoulos, Yiannis and Noarov, Georgy and Schulz, Andreas S.}, booktitle = {Proceedings of the 13th Symposium on Algorithmic Game Theory (SAGT)}, doi = {10.1007/978-3-030-57980-7}, faupublication = {no}, pages = {339}, peerreviewed = {Yes}, title = {{Computing} {Approximate} {Equilibria} in {Weighted} {Congestion} {Games} via {Best}-{Responses}}, year = {2020} } @article{faucris.267633646, abstract = {We present a deterministic polynomial-time algorithm for computing d^{d+o(d)}-approximate (pure) Nash equilibria in (proportional sharing) weighted congestion games with polynomial cost functions of degree at most d. This is an exponential improvement of the approximation factor with respect to the previously best deterministic algorithm. An appealing additional feature of the algorithm is that it only uses best-improvement steps in the actual game, as opposed to the previously best algorithms, that first had to transform the game itself. 
Our algorithm is an adaptation of the seminal algorithm by Caragiannis et al. [Caragiannis I, Fanelli A, Gravin N, Skopalik A (2011) Efficient computation of approximate pure Nash equilibria in congestion games. Ostrovsky R, ed. Proc. 52nd Annual Symp. Foundations Comput. Sci. (FOCS) (IEEE Computer Society, Los Alamitos, CA), 532-541; Caragiannis I, Fanelli A, Gravin N, Skopalik A (2015) Approximate pure Nash equilibria in weighted congestion games: Existence, efficient computation, and structure. ACM Trans. Econom. Comput. 3(1):2:1-2:32.], but we utilize an approximate potential function directly on the original game instead of an exact one on a modified game. A critical component of our analysis, which is of independent interest, is the derivation of a novel bound of [d/W(d/rho)]^{d+1} for the price of anarchy (PoA) of rho-approximate equilibria in weighted congestion games, where W is the Lambert-W function. More specifically, we show that this PoA is exactly equal to Phi(d,rho)^{d+1}, where Phi(d,rho) is the unique positive solution of the equation rho(x+1)^d = x^{d+1}. Our upper bound is derived via a smoothness-like argument, and thus holds even for mixed Nash and correlated equilibria, whereas our lower bound is simple enough to apply even to singleton congestion games.}, author = {Giannakopoulos, Yiannis and Noarov, Georgy and Schulz, Andreas S.}, doi = {10.1287/moor.2021.1144}, faupublication = {yes}, journal = {Mathematics of Operations Research}, note = {CRIS-Team WoS Importer:2021-12-31}, pages = {1-22}, peerreviewed = {Yes}, title = {{Computing} {Approximate} {Equilibria} in {Weighted} {Congestion} {Games} via {Best}-{Responses}}, year = {2021} } @misc{faucris.266208597, abstract = {Every optimization problem has a corresponding verification problem which verifies whether a given optimal solution is in fact optimal. In the literature there are many such ways to verify optimality for a given solution, e.g., the branch-and-bound tree. 
To simplify this task, Baes et al. introduced optimality certificates for convex mixed-integer nonlinear programs and proved that their size is bounded in the number of integer variables. We introduce an algorithm to compute the certificates and conduct computational experiments. Through the experiments we show that the optimality certificates can be surprisingly small.

Mathematical optimization can be used to model and solve these planning problems. However, in order to convince decision-makers of an alternative solution structure, mathematical solutions must be comprehensible and tangible. In this work, we describe optimized decision-support systems for ambulatory care using the example of four different planning problems that cover a variety of aspects in terms of planning scope and decision support tools. The planning problems that we present are the problem of positioning centers for vaccination against Covid-19 (strategical) and emergency doctors (strategical/tactical), the out-of-hours pharmacy planning problem (tactical), and the route planning of patient transport services (operational). For each problem, we describe the planning question, give an overview of the mathematical model and present the implemented decision support application.

npresented in Bärmann et al. (2016). In Bärmann et al. (2016), we developed a construction for the inner approximation of L

9]. We generalize their approach and theoretical results to robust optimization problems in Euclidean spaces with affine uncertainty. Additionally, we demonstrate the value of this approach in an exemplary manner in the area of robust semidefinite programming (SDP). In particular, we prove that computing a Pareto robustly optimal solution for a robust SDP is tractable and illustrate the benefit of such solutions using the example of the maximal eigenvalue problem. Furthermore, we modify the famous algorithm of Goemans and Williamson [Assoc Comput Mach 42(6):1115–1145, 8] in order to compute cuts for the robust max-cut problem that yield an improved approximation guarantee in non-worst-case scenarios.

It is applied to optimize the discrete curtailment of solar feed-in in an electrical distribution network and guarantees network stability under fluctuating feed-in. This is modeled by a two-stage mixed-integer stochastic optimization problem proposed by Aigner et al. (European Journal of Operational Research, 2021). The solution approach is based on the approximation of chance constraints via robust constraints using suitable uncertainty sets. The resulting robust optimization problem has a known equivalent tractable reformulation, which can be solved with standard software. To compute uncertainty sets that lead to an inner approximation of the stochastic problem, an R-vine copula model is fitted to the distribution of the multi-dimensional power forecast error, i.e., the difference between the forecasted solar power and the measured feed-in at several network nodes. The uncertainty sets are determined by encompassing a sufficient number of samples drawn from the R-vine copula model. Furthermore, an enhanced algorithm is proposed to fit R-vine copulas which can be used to draw conditional samples for given solar radiation forecasts. The experimental results obtained for real-world weather and network data demonstrate the effectiveness of the combination of stochastic programming and model-based prediction of uncertainty via copulas. We improve the outcomes of previous work by showing that the resulting uncertainty sets are much smaller and lead to less conservative solutions while maintaining the same probabilistic guarantees.

= 1 across the frontier, between the first price (SP1) and second price (SP infinity) mechanisms. En route to these results, we also provide a definitive answer to an important question related to the scheduling problem, namely whether nontruthful mechanisms can provide better makespan guarantees in the equilibrium compared with truthful ones. We answer this question in the negative by proving that the price of anarchy of all scheduling mechanisms is at least n, where n is the number of machines.}, author = {Filos-Ratsikas, Aris and Giannakopoulos, Yiannis and Lazos, Philip}, doi = {10.1287/moor.2021.1154}, faupublication = {yes}, journal = {Mathematics of Operations Research}, note = {CRIS-Team WoS Importer:2021-12-31}, peerreviewed = {Yes}, title = {{The} {Pareto} {Frontier} of {Inefficiency} in {Mechanism} {Design}}, year = {2021} } @misc{faucris.267284027, abstract = {The SCIP Optimization Suite provides a collection of software packages for mathematical optimization centered around the constraint integer programming framework SCIP. This paper discusses enhancements and extensions contained in version 8.0 of the SCIP Optimization Suite. Major updates in SCIP include improvements in symmetry handling and decomposition algorithms, new cutting planes, a new plugin type for cut selection, and a complete rework of the way nonlinear constraints are handled. Additionally, SCIP 8.0 now supports interfaces for Julia as well as Matlab. 
Further, UG now includes a unified framework to parallelize all solvers, a utility to analyze computational experiments has been added to GCG, dual solutions can be postsolved by PaPILO, new heuristics and presolving methods were added to SCIP-SDP, and additional problem classes and major performance improvements are available in SCIP-Jack.}, author = {Bestuzheva, Ksenia and Besançon, Mathieu and Chen, Wei-Kun and Chmiela, Antonia and Donkiewicz, Tim and van Doornmalen, Jasper and Eifler, Leon and Gaul, Oliver and Gamrath, Gerald and Gleixner, Ambros and Gottwald, Leona and Graczyk, Christoph and Halbig, Katrin and Hoen, Alexander and Hojny, Christopher and van der Hulst, Rolf and Koch, Thorsten and Lübbecke, Marco and Maher, Stephen J. and Matter, Frederic and Mühmer, Erik and Müller, Benjamin and Pfetsch, Marc E. and Rehfeldt, Daniel and Schlein, Steffan and Schlösser, Franziska and Serrano, Felipe and Shinano, Yuji and Sofranac, Boro and Turner, Mark and Vigerske, Stefan and Wegscheider, Fabian and Wellner, Philipp and Weninger, Dieter and Witzig, Jakob}, faupublication = {yes}, peerreviewed = {automatic}, title = {{The} {SCIP} {Optimization} {Suite} 8.0}, year = {2021} } @misc{faucris.321843198, abstract = {The SCIP Optimization Suite provides a collection of software packages for mathematical optimization, centered around the constraint integer programming (CIP) framework SCIP. This report discusses the enhancements and extensions included in SCIP Optimization Suite 9.0. The updates in SCIP 9.0 include improved symmetry handling, additions and improvements of nonlinear handlers and primal heuristics, a new cut generator and two new cut selection schemes, a new branching rule, a new LP interface, and several bugfixes. SCIP Optimization Suite 9.0 also features new Rust and C++ interfaces for SCIP, new Python interface for SoPlex, along with enhancements to existing interfaces. 
SCIP Optimization Suite 9.0 also includes new and improved features in the LP solver SoPlex, the presolving library PaPILO, the parallel framework UG, the decomposition framework GCG, and the SCIP extension SCIP-SDP. These additions and enhancements have resulted in an overall performance improvement of SCIP in terms of solving time, number of nodes in the branch-and-bound tree, as well as the reliability of the solver}, author = {Bolusani, Suresh and Besançon, Mathieu and Bestuzheva, Ksenia and Chmiela, Antonia and Dionísio, João and Donkiewicz, Tim and van Doornmalen, Jasper and Eifler, Leon and Ghannam, Mohammed and Gleixner, Ambros and Graczyk, Christoph and Halbig, Katrin and Hedtke, Ivo and Hoen, Alexander and Hojny, Christopher and van der Hulst, Rolf and Kamp, Dominik and Koch, Thorsten and Kofler, Kevin and Lentz, Jurgen and Manns, Julian and Mexi, Gioni and Mühmer, Erik and Pfetsch, Marc E. and Schlösser, Franziska and Serrano, Felipe and Shinano, Yuji and Turner, Mark and Vigerske, Stefan and Weninger, Dieter and Xu, Liding}, faupublication = {yes}, peerreviewed = {automatic}, title = {{The} {SCIP} {Optimization} {Suite} 9.0}, year = {2024} } @article{faucris.268203201, abstract = {The operation of gas pipeline flow with high pressure and small Mach numbers allows one to model the flow by a semilinear hyperbolic system of partial differential equations. In this paper we present a number of transient and stationary analytical solutions of this model. They are used to discuss and clarify why a PDE model is necessary to handle certain dynamic situations in the operation of gas transportation networks. We show that adequate numerical discretizations can capture the dynamical behavior sufficiently accurately. We also present examples that show that in certain cases an optimization approach that is based on multi-period optimization of steady states does not lead to approximations that converge to the optimal state.