## Abstracts

## Benders' decomposition with strengthened lift-and-project cuts for stochastic programming problems

Achim Koberstein (European University Viadrina Frankfurt (Oder), Germany), Pavlo Glushko (European University Viadrina Frankfurt (Oder), Germany), Csaba Fabian (John von Neumann University, Hungary)

Keywords: stochastic programming, Benders' decomposition, lift-and-project cuts
Lift-and-project (L&P) cuts are well-known general 0-1 programming cuts, typically deployed in branch-and-bound-type methods to solve MILP problems. In this talk, we discuss ways to use these cuts within the framework of Benders-type decomposition algorithms for solving mixed-integer two-stage stochastic programming problems with binary first-stage variables and continuous recourse. In particular, we show how L&P cuts derived for the master problem can be strengthened by utilizing second-stage information. We present an adapted L-shaped algorithm and analyse its convergence for different problem instances and parameter values. We show that the strengthened L&P cuts have the potential to significantly improve the convergence of the lower bound and reduce the number of major iterations.

## Resilience of Processing Systems: P-Graph Approach

Ákos Orosz (Department of Computer Science and Systems Technology, University of Pannonia, Hungary), Ferenc Friedler (Széchenyi István University, Győr, Hungary)

Keywords: resilience, P-graph, process network synthesis, structural enumeration
Resilience was first introduced for ecological systems and was later adapted to other disciplines, such as process systems engineering. Resilience generally indicates the capability of a system to absorb expected and unexpected disturbances and to return to its original state. In terms of systems engineering, the resilience of a process network describes its ability to stay operational after disturbing events (e.g., operating unit failures), as well as its ability to revert to the original operating mode. In process systems engineering, an operating unit denotes an operation or transformation that can be part of a complex system, e.g., the transformation of chemicals into other chemicals, or combining a product and some packaging into the packaged product. A processing network is a network of operating units designed to perform a complex processing task. Compared to mathematical networks, processing networks include two types of additional information about the operating units: each operating unit has an underlying mathematical model that describes the details of the transformation it performs, and two operating units can only be directly connected if the output of one can be used by the other. For example, the P-graph framework handles this information by employing a bipartite graph that also includes the inputs and outputs of the operating units (the so-called materials). The aim of Process Network Synthesis (PNS) is to build the optimal network to accomplish a certain task from the available set of operating units. The current work shows that resilience, when applied to processing systems, is a structural indicator. This means that the structure of the process is one of the most important factors of its resilience; it is greatly affected by the quantity and placement of the redundancies in the process structure.
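As a minimal sketch of this structural view (not from the talk; the toy bipartite network, unit names, and resilience measure below are illustrative assumptions), one can enumerate operating-unit failure scenarios and count in how many of them the desired product remains producible:

```python
from itertools import combinations

# Toy P-graph: each operating unit maps a set of input materials
# to a set of output materials (bipartite structure).
UNITS = {
    "reactor_A": ({"raw1"}, {"intermediate"}),
    "reactor_B": ({"raw2"}, {"intermediate"}),   # redundant route
    "packer":    ({"intermediate"}, {"product"}),
}
RAW = {"raw1", "raw2"}

def producible(active_units):
    """Forward closure: materials reachable from the raws via active units."""
    have = set(RAW)
    changed = True
    while changed:
        changed = False
        for u in active_units:
            inputs, outputs = UNITS[u]
            if inputs <= have and not outputs <= have:
                have |= outputs
                changed = True
    return have

def resilience(target="product", max_failures=1):
    """Fraction of failure scenarios (up to max_failures failed units)
    in which the target product can still be produced."""
    names = list(UNITS)
    scenarios = ok = 0
    for k in range(max_failures + 1):
        for failed in combinations(names, k):
            scenarios += 1
            active = [u for u in names if u not in failed]
            ok += target in producible(active)
    return ok / scenarios

print(resilience())  # 0.75: only the loss of "packer" is fatal
```

In this toy network the redundant second reactor is exactly what keeps three of the four single-failure scenarios operational, illustrating how the placement of redundancies drives the structural resilience indicator.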
Determining the resilience of a system requires enumeration-based algorithms that can handle the complex structural connections in the network of operating units. Thus, the P-graph framework is an appropriate tool to perform the resilience analysis of a processing system. Moreover, the presented resilience formula can also be applied during process synthesis.

## Dynamic programming approaches for line balancing problems

András Éles (Department of Computer Science and Systems Technology, University of Pannonia, Hungary), István Heckl (Department of Computer Science and Systems Technology, University of Pannonia, Hungary)

Keywords: dynamic programming; line balancing
In the present work, solution methods are presented for different classes of assembly line balancing problems. Line balancing consists of sequencing production tasks on one or more assembly lines and assigning them to working personnel, usually to maximize productivity or minimize costs. This is a complex combinatorial problem. However, in certain circumstances, dynamic programming can be used to provide globally optimal solutions in polynomial time, for example, when the task order is fixed. Two scenarios are investigated in this work: when the given production lines and workforce must be distributed among different products, and when workstations consisting of multiple workers can execute tasks in parallel. The effectiveness of the proposed dynamic programming based algorithms is demonstrated on a few examples.

## A numerical comparison of Ai-Zhang type interior point algorithms for linear optimization

Anita Varga (Budapest University of Technology and Economics, Hungary), Marianna E.-Nagy (Corvinus University of Budapest, Hungary)

Keywords: mathematical programming, linear optimization, interior point algorithms, algebraic equivalent transformation technique
The topic of this talk is an Ai-Zhang type long-step algorithmic framework for linear optimization. We introduced and investigated a function class for which the convergence and best known iteration complexity of the general algorithm can be proved when applying the algebraic equivalent transformation technique of Darvay. The analysis and theoretical results regarding this function class will be discussed in the talk of Marianna E.-Nagy. We implemented the general algorithmic framework in Matlab. In this talk, we discuss the details of the implementation and the numerical results for different new functions from the proposed class.

## Allocation of Emissions in Supply Chain Games

Anna Ráhel Radványi (Corvinus University of Budapest, Department of Mathematics, Hungary), Miklós Pintér (Corvinus University of Budapest, Corvinus Centre for Operations Research, Hungary)

Keywords: supply chain, allocation, emission, Shapley value
We consider supply chains consisting of several firms that cooperate in the joint production of a final product. During production, the supply chain generates emissions, which are given by a carbon-footprint vector. Our goal is to allocate the total responsibility for the footprint of the supply chain among the firms. Therefore, supply chain games (GREEN games in Gopalakrishnan et al., 2020) are considered. We characterize this class of TU-games: we show that the class of supply chain games is the convex cone generated by the duals of the unanimity games. Therefore, every supply chain game is totally alternating, hence concave. Then we consider core compatible (CC) and strongly monotone (SM) values on the class of supply chain games. We show that the Shapley value is characterized by the properties of equal treatment of equals (ETP), CC and SM on the class of supply chain games. We illustrate the results and the notions with examples. Reference: S. Gopalakrishnan, D. Granot, F. Granot, G. Sosic, H. Cui (2020): Incentives and Emission Responsibility Allocation in Supply Chains, Management Science, https://doi.org/10.1287/mnsc.2020.3724

## The Uplift Diffusion Network Model and intervention optimization

Balázs Dávid (InnoRenew CoE, Slovenia), László Hajdu (InnoRenew CoE, Slovenia), Miklós Krész (InnoRenew CoE, Slovenia)

Keywords: diffusion models, influence maximization
Optimisation of processes on social networks has become an intensively researched topic in recent years. One of the key areas is influence maximization, defined by Kempe et al. in 2003 as a combinatorial optimisation problem. In the classical approach, with a fixed integer k, the objective is to find the k most influential nodes with respect to a progressive iterative diffusion model defined by the activation function of the neighboring nodes and the adjacent edges.
Recently it turned out that these models can be used in the prediction of "negative processes" such as epidemics (Gardner et al., PLoS Neglected Tropical Diseases, Vol 12, Issue 1) and bankruptcies (Bóta et al., CEJOR, Vol 23, Issue 2). In this talk we present a modelling approach and solution methodologies for the above "negative" infection processes, where each object has an "a priori" and an "uplift" probability, expressing the probability of the negative effect without and with intervention, respectively. Uplift models without the network diffusion effect have been successfully applied in business marketing campaigns and related risk management, where the campaign is the "intervention", while the "negative process", reflected by the appropriate probabilities, can be, for example, a churn or a bankruptcy.
In this talk, we will give a detailed theoretical analysis of the so-called triggering models (Kempe et al., Theory of Computing, Vol 11, Issue 4). Furthermore, we will prove the NP-hardness of the optimization problem and present efficient approximation solution methods.
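The classical greedy scheme of Kempe et al. that underlies such approximation methods can be sketched as follows; this is a minimal illustration under the independent cascade model, with a hypothetical five-node graph, made-up activation probabilities, and Monte Carlo estimation of the expected spread:

```python
import random

# Toy directed graph: edge -> activation probability (illustrative values)
EDGES = {
    ("a", "b"): 0.9, ("a", "c"): 0.9,
    ("b", "d"): 0.5, ("c", "d"): 0.5, ("d", "e"): 0.5,
}
NODES = {"a", "b", "c", "d", "e"}

def cascade(seeds, rng):
    """One independent-cascade run: each newly activated node gets a
    single chance to activate each of its inactive out-neighbours."""
    active, frontier = set(seeds), set(seeds)
    while frontier:
        new = set()
        for (u, v), p in EDGES.items():
            if u in frontier and v not in active and rng.random() < p:
                new.add(v)
        active |= new
        frontier = new
    return len(active)

def spread(seeds, runs=2000, seed=0):
    """Monte Carlo estimate of the expected number of activated nodes."""
    rng = random.Random(seed)
    return sum(cascade(seeds, rng) for _ in range(runs)) / runs

def greedy(k):
    """Kempe-style greedy: repeatedly add the node with the largest
    marginal gain in estimated spread."""
    chosen = set()
    for _ in range(k):
        best = max(NODES - chosen, key=lambda n: spread(chosen | {n}))
        chosen.add(best)
    return chosen

print(greedy(1))  # {'a'}: the source node dominates in this toy graph
```

Because the spread function of the triggering models is monotone and submodular, this greedy rule is guaranteed to reach at least a (1 - 1/e) fraction of the optimal expected spread.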
## Effective Implementation of a P-Graph Based High Level Process Modeling and Optimization Software

Botond Bertok (University of Pannonia, Hungary)

Keywords: MILP, combinatorial optimization, P-graph
Process Network Synthesis (PNS) and the P-graph framework, introduced by Friedler, Fan, and coauthors in 1991, constitute a high-level formulation and solution approach to engineering and business process design problems. Recently, it has been widely applied to the design, optimization, and scheduling of chemical engineering, manufacturing, transportation, supply chain, and business processes. Even though most of these problems can be formulated as MILPs, both model generation and solution are more effective with P-graphs due to the available software and libraries supporting P-graph optimization. The presentation gives an overview of how an application-specific combinatorial optimization software is built up, from machine-code implementations of set operations, through the selection of effective data structures and combinatorial accelerations, to search strategies and load distribution among processor cores. Each component is quantitatively compared to available standard and open-source software libraries, including MILP solvers.

## On the Non-Emptiness of the Core of a Cooperative Fuzzy Game: a Generalization of the Bondareva-Shapley Theorem

David Bartl (Silesian University in Opava, School of Business Administration in Karvina, Department of Informatics and Mathematics, Czechia)

Keywords: cooperative fuzzy TU-game; core; balancedness; Bondareva-Shapley Theorem
We propose a new model of a cooperative fuzzy game; the model partly resembles the partition function form games of Lucas and Thrall (1963). We then introduce the concept of the core of the cooperative fuzzy TU-game with respect to a general fuzzy coalition structure. Finally, we define the concept of balancedness and formulate a generalization of the Bondareva-Shapley Theorem.

## A primal-dual solver for nonsymmetric conic optimization

Dávid Papp (North Carolina State University, USA), Sercan Yildiz (Qontigo, Inc., USA)

Keywords: conic programming, interior-point method, semidefinite optimization
We present a primal-dual solver for optimization problems over nonsymmetric convex cones and its open-source implementation called Alfonso. The algorithm can solve conic optimization problems over any closed convex cone as long as a suitable barrier is available for either the cone or its dual; this includes hyperbolicity cones and their duals, sum-of-squares polynomials, all semidefinite and second-order cone representable cones, power cones, and the exponential cone. Besides enabling the solution of problems that cannot be cast as symmetric cone programs, it also offers performance advantages for problems whose symmetric cone programming representation requires many auxiliary variables or has a special structure that can be exploited. The worst-case iteration complexity of the method is the best known for nonsymmetric cone optimization: $O(\sqrt{\nu}\log(1/\epsilon))$ iterations to reach an ε-optimal solution, where ν is the barrier parameter of the barrier function of the cone.
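For reference, the conic problems in question can be written in the standard primal-dual form (the notation below is the usual one and is not taken verbatim from the talk):

```latex
\min_{x} \; c^{T}x \quad \text{s.t.} \quad Ax = b,\; x \in K,
\qquad\qquad
\max_{y} \; b^{T}y \quad \text{s.t.} \quad c - A^{T}y \in K^{*},
```

where $K$ is a closed convex cone and $K^{*}$ its dual cone; the symmetric case restricts $K$ to self-dual homogeneous cones, whereas the solver above does not.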
The first part of the talk will focus on the algorithmic details and challenges. In the second half of the talk, we demonstrate the efficiency of the solver using semidefinite representable problems in which tailored barrier computation greatly decreases the solution time compared to using state-of-the-art optimization software.
## Bibliometric indices as a measure of long-term competitive balance in knockout tournaments

László Csató (Institute for Computer Science and Control (SZTAKI) and Corvinus University of Budapest (BCE), Hungary), Dóra Gréta Petróczy (Corvinus University of Budapest (BCE), Hungary)

Keywords: competitive balance; ranking; scientometrics; UEFA Champions League; uncertainty of outcome
We argue for the application of bibliometric indices to quantify long-term uncertainty of outcome in sports. The Euclidean index is proposed to reward quality over quantity, while the rectangle index can be an appropriate measure of core performance. Their differences are highlighted through an axiomatic analysis and several examples. Our approach also requires a weighting scheme to compare different achievements. The methodology is illustrated by studying the knockout stage of the UEFA Champions League in the 16 seasons played between 2003 and 2019: club and country performances as well as three types of competitive balance are considered. Measuring competition at the level of national associations is a novelty. All results are remarkably robust with respect to the choice of bibliometric index and the assigned weights. Inequality has not increased among the elite clubs or between the national associations; however, it has changed within some countries. Since the performances of national associations are more stable than the results of individual clubs, it would be better to build the seeding in the UEFA Champions League group stage upon association coefficients adjusted for league finishing positions rather than club coefficients.

## Binary decision-making based on fusion of correlated sensors

Edit Csizmás (Department of Information Technology, GAMF, Faculty of Engineering and Computer Science, John von Neumann University, Hungary), Edith Alice Kovács (Department of Differential Equations, Budapest University of Technology and Economics, Hungary)

Keywords: binary decision, multivariate dependence, likelihood ratio test
Decision making based on data acquired from multiple sensors has a wide range of applications in medical diagnosis, finance, and security problems. Often the observations are not independent; in these cases, the dependence between them, i.e., their joint probability distribution, should also be taken into account. Each local sensor registers its local observation of a phenomenon of interest and transmits this observation to a fusion center, which makes a binary decision based on these observations. We explore the idea of using the likelihood ratio for decision making. The case of independent sensors has been studied extensively. The case of two or three dependent sensors has also been studied by introducing copulas for modeling the dependence. However, the problem in higher dimensions has not been solved because of its computational complexity. In the talk we will present a new method, which can be used even for the fusion of a larger number of correlated sensors. The presented algorithm can also be used for the fusion of the results of several machine-learning models trained for a given classification task.

## Kernel-Based Interior-Point Methods for Linear Complementarity Problems and Generalizations

Goran Lesaja (Georgia Southern University, United States)

Linear Complementarity Problems (LCPs) are an important class of problems closely related to many optimization problems. Thus, efficient algorithms for solving LCPs are of interest for theoretical and practical purposes. Interior-Point Methods (IPMs) have revolutionized optimization theory and practice and have proved to be an efficient tool for solving many difficult optimization problems, including LCPs. There are many types of IPMs; however, at the core of most of them is a sophisticated use of Newton-type methods.
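For concreteness, the standard LCP can be stated as follows (the usual formulation, not quoted from the talk): given a matrix $M \in \mathbb{R}^{n \times n}$ and a vector $q \in \mathbb{R}^{n}$, find vectors $z, w \in \mathbb{R}^{n}$ such that

```latex
w = Mz + q, \qquad z \ge 0, \qquad w \ge 0, \qquad z^{T}w = 0.
```

The complementarity condition $z^{T}w = 0$ forces, for each index $i$, at least one of $z_{i}$ and $w_{i}$ to vanish, which is what makes the problem combinatorial in nature.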
In the first part of the talk, we present kernel-based IPMs, where a kernel function is used to determine the search direction, proximity measure, step size, and other important elements of the method, which are then used to calculate the iteration bound of the method. The idea of using kernel functions in the design and analysis of IPMs is quite powerful and rich. It was developed in an effort to resolve the "irony of IPMs", a term coined by Renegar, which describes the fact that the iteration bound of short-step IPMs is an order of magnitude better than the iteration bound of long-step IPMs, while in practice long-step methods perform much better than short-step methods. We present a feasible IPM based on the class of eligible kernel functions, although there are other classes of kernel functions, such as self-regular functions. The class of eligible kernel functions is fairly general and includes the classical logarithmic function, the prototype self-regular function, and non-self-regular kernel functions as special cases. We show that the IPMs based on eligible kernel functions are globally convergent and that the iteration bounds to obtain an epsilon-approximate solution of the LCP match the best-known iteration bounds for these types of methods. In particular, one of the main achievements of kernel-based IPMs is the improved complexity of long-step methods.

In the second part of the talk, the generalizations of these IPMs to LCPs over symmetric cones are considered. A remarkable and surprising result has been shown recently: the algebraic structure of Euclidean Jordan Algebras (EJAs) and the associated symmetric cones are connected to important optimization problems and can serve as a unifying frame to analyze IPMs for semidefinite optimization problems, second-order cone optimization problems, and classical optimization problems over the nonnegative orthant.
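As a small illustration of the kernel-function idea mentioned above (a sketch, not code from the talk), the classical logarithmic kernel ψ(t) = (t² − 1)/2 − ln t is minimal at t = 1 with value 0, so the induced proximity measure Ψ(v) = Σᵢ ψ(vᵢ) of a scaled iterate v > 0 vanishes exactly on the central path and grows as the iterate drifts away from it:

```python
import math

def psi(t):
    """Classical logarithmic kernel: zero at t = 1, growing both as
    t -> 0+ (the barrier term) and as t -> infinity (the growth term)."""
    return (t * t - 1) / 2 - math.log(t)

def proximity(v):
    """Kernel-based proximity measure of a scaled iterate v > 0."""
    return sum(psi(t) for t in v)

# On the central path the scaled iterate is the all-ones vector.
print(proximity([1.0, 1.0, 1.0]))   # 0.0
print(proximity([0.5, 2.0]))        # positive: iterate is off the path
```

Eligible kernel functions replace ψ by variants with different barrier and growth terms; the choice drives the step size and, ultimately, the iteration bound of the resulting method.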
Using careful tools of EJA and symmetric cones, it is shown that the generalizations of the kernel-based IPMs for LCPs over symmetric cones still possess the best-known complexity achieved for LCPs over the nonnegative orthant.

## A full-Newton step feasible interior-point algorithm for $P_{*}(\kappa)$