VOCAL Optimization Conference: Advanced Algorithms
May 25-27, 2022, Budapest, Hungary

Abstracts


Benders' decomposition with strengthened lift-and-project cuts for stochastic programming problems

Achim Koberstein

European University Viadrina Frankfurt (Oder), Germany

Pavlo Glushko

European University Viadrina Frankfurt (Oder), Germany

Csaba Fabian

John von Neumann University, Hungary

Keywords: stochastic programming, Benders' decomposition, lift-and-project cuts

Lift-and-project cuts are well-known general 0-1 programming cuts which are typically deployed in branch-and-bound-type methods to solve MILP problems. In this talk, we discuss ways to use these cuts within the framework of Benders'-type decomposition algorithms for solving mixed-integer two-stage stochastic programming problems with binary first-stage variables and continuous recourse. In particular, we show how L&P cuts derived for the master problem can be strengthened by utilizing second-stage information. We present an adapted L-shaped algorithm and analyse its convergence for different problem instances and parameter values. We show that the strengthened L&P cuts have the potential to significantly improve the convergence of the lower bound and reduce the number of major iterations.
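
For orientation, the problem class can be written in a generic two-stage form (standard notation, not necessarily the authors' own):

\[
\min_{x \in \{0,1\}^n} \; c^\top x + \mathbb{E}_{\xi}\bigl[Q(x,\xi)\bigr]
\qquad\text{where}\qquad
Q(x,\xi) = \min_{y \ge 0} \bigl\{\, q(\xi)^\top y \;:\; W y = h(\xi) - T(\xi)\,x \,\bigr\}.
\]

The L-shaped method approximates the expected recourse by Benders cuts in a master problem over x; the L&P cuts discussed in the talk tighten this binary master and are strengthened using the second-stage data.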

Resilience of Processing Systems: P-Graph Approach

Ákos Orosz

Department of Computer Science and Systems Technology, University of Pannonia, Hungary

Ferenc Friedler

Széchenyi István University, Győr, Hungary

Keywords: Resilience, P-graph, Process network synthesis, Structural enumeration

Resilience was first introduced for ecological systems and later adapted to other disciplines, such as process systems engineering. Resilience generally indicates the capability of a system to absorb expected and unexpected disturbances and to return to its original state. The resilience of a process network, in terms of systems engineering, describes its ability to stay operational after disturbing events (e.g., operating unit failures), and also its ability to revert to the original operating mode. In process systems engineering, an operating unit denotes an operation or transformation that can be part of a complex system, e.g., a transformation of chemicals into other chemicals, or combining a product and some packaging into the packaged product. A processing network is a network of operating units designed to perform a complex processing task. Compared to mathematical networks, processing networks include two types of additional information on the operating units: each operating unit has an underlying mathematical model that describes the details of the transformation it performs, and two operating units can only be directly connected if the output of one can be used by the other. For example, the P-graph framework handles this information by employing a bipartite graph that also includes the inputs and outputs of the operating units (the so-called materials). The aim of Process Network Synthesis (PNS) is to build the optimal network to accomplish a certain task from the available set of operating units. The current work shows that resilience, when applied to processing systems, is a structural indicator. This means that the structure of the process is one of the most important factors of its resilience; resilience is greatly affected by the quantity and placement of the redundancies in the process structure. Determining the resilience of a system requires enumeration-based algorithms that can handle the complex structural connections in the network of operating units. Thus, the P-graph framework is an appropriate tool to perform the resilience analysis of a processing system. Moreover, the presented resilience formula can also be applied during process synthesis.

Dynamic programming approaches for line balancing problems

András Éles

Department of Computer Science and Systems Technology, University of Pannonia, Hungary

István Heckl

Department of Computer Science and Systems Technology, University of Pannonia, Hungary

Keywords: dynamic programming; line balancing

In the present work, solution methods are presented for different classes of assembly line balancing problems. Line balancing consists of sequencing production tasks on one or more assembly lines and assigning them to working personnel, usually to maximize productivity or minimize costs. This is a complex combinatorial problem. However, in certain circumstances, dynamic programming can be used to provide globally optimal solutions in polynomial time, for example, when the task order is fixed. Two investigated scenarios are presented in this work: when the given production lines and workforce must be distributed among different products, and when workstations consisting of multiple workers can execute tasks in parallel. The effectiveness of the proposed dynamic programming based algorithms is demonstrated on a few examples.
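
As an illustration of the fixed-order case, the following sketch partitions a fixed task sequence into m stations so that the largest station load (the cycle time) is minimized, in O(n^2 m) time. It is a generic textbook dynamic program, not the authors' specific algorithm; the function name and the instance are made up.

def balance_fixed_order(times, m):
    # Minimal cycle time when splitting tasks (kept in fixed order) into m stations.
    n = len(times)
    prefix = [0] * (n + 1)  # prefix[j] = total time of tasks 1..j
    for i, t in enumerate(times):
        prefix[i + 1] = prefix[i] + t
    INF = float("inf")
    # best[k][j]: minimal cycle time for the first j tasks on k stations
    best = [[INF] * (n + 1) for _ in range(m + 1)]
    best[0][0] = 0
    for k in range(1, m + 1):
        for j in range(1, n + 1):
            for i in range(k - 1, j):  # last station covers tasks i+1..j
                load = prefix[j] - prefix[i]
                best[k][j] = min(best[k][j], max(best[k - 1][i], load))
    return best[m][n]

print(balance_fixed_order([4, 3, 5, 2, 6, 1], 3))  # -> 7 (stations: 4+3 | 5+2 | 6+1)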

A numerical comparison of Ai-Zhang type interior point algorithms for linear optimization

Anita Varga

Budapest University of Technology and Economics, Hungary

Marianna E.-Nagy

Corvinus University of Budapest, Hungary

Keywords: Mathematical programming, Linear optimization, Interior point algorithms, Algebraic equivalent transformation technique

The topic of this talk is an Ai-Zhang type long-step algorithmic framework for linear optimization. We introduced and investigated a function class for which the convergence and best known iteration complexity of the general algorithm can be proved when applying the algebraic equivalent transformation technique of Darvay. The analysis and theoretical results regarding this function class will be discussed in the talk of Marianna E.-Nagy. We implemented the general algorithmic framework using Matlab. In this talk, we discuss the details of the implementation and the numerical results for different new functions from the proposed class.
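
For readers unfamiliar with the technique: in Darvay's algebraic equivalent transformation, the central path condition is rewritten with a suitably smooth, invertible function φ applied coordinatewise (a standard description, not this talk's exact notation):

\[
x s = \mu e
\quad\Longleftrightarrow\quad
\varphi\!\left(\frac{x s}{\mu}\right) = \varphi(e),
\]

and Newton's method applied to the transformed system yields a different search direction for each admissible φ; the choice φ(t) = t recovers the classical direction, while φ(t) = √t gives Darvay's original one.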

Allocation of Emissions in Supply Chain Games

Anna Ráhel Radványi

Corvinus University of Budapest, Department of Mathematics, Hungary

Miklós Pintér

Corvinus University of Budapest, Corvinus Centre for Operations Research, Hungary

Keywords: Supply chain, allocation, emission, Shapley value

We consider supply chains consisting of several firms that cooperate in the joint production of a final product. During production, the supply chain generates emissions, which will be given by a carbon-footprint vector. Our goal is to allocate the total responsibility for the footprint of the supply chain among the firms. Therefore, supply chain games (GREEN games in Gopalakrishnan et al., 2020) are considered. We characterize this class of TU-games: we show that the class of supply chain games is the convex cone generated by the duals of the unanimity games. Therefore, every supply chain game is totally alternating, hence it is concave. Then we consider core compatible (CC) and strongly monotone (SM) values on the class of supply chain games. We show that the Shapley value is characterized by the properties of equal treatment of equals (ETP), CC and SM on the class of supply chain games. We illustrate the results and the notions by examples. Reference: S. Gopalakrishnan, D. Granot, F. Granot, G. Sosic, H. Cui (2020): Incentives and Emission Responsibility Allocation in Supply Chains, Management Science, https://doi.org/10.1287/mnsc.2020.3724
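
For reference, the Shapley value of player i in a TU-game (N, v), the solution concept characterized above, is given by the standard formula:

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).
\]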

The Uplift Diffusion Network Model and intervention optimization

Balázs Dávid

InnoRenew CoE, Slovenia

László Hajdu

InnoRenew CoE, Slovenia

Miklós Krész

InnoRenew CoE, Slovenia

Keywords: diffusion models, influence maximization

Optimisation in the mechanism of processes on social networks has become an intensively researched topic in recent years. One of the key areas is influence maximization, defined by Kempe et al. in 2003 as a combinatorial optimisation problem. In the classical approach, with a fixed integer k, the objective is to find the k most influential nodes with respect to a progressive iterative diffusion model defined by the activation function of the neighboring nodes and the adjacent edges. Recently it turned out that these models can be used in the prediction of "negative processes" such as epidemics (Gardner et al., PLoS Neglected Tropical Diseases, Vol 12, Issue 1) and bankruptcies (Bóta et al., CEJOR, Vol 23, Issue 2). In this talk we will present a modelling approach and solution methodologies for the above "negative" infection processes when each object has an "a priori" and an "uplift" probability expressing the probabilities of the negative effect without and with intervention. Uplift models without the network diffusion effect have been successfully applied in business marketing campaigns and related risk management, where the campaign is the "intervention", while the "negative process" reflected by the appropriate probabilities can be, for example, a churn or a bankruptcy. In this talk, we will give a detailed theoretical analysis of the so-called triggering models (Kempe et al., Theory of Computing, Vol 11, Issue 4). Furthermore, we will prove the NP-hardness of the optimization problem and present efficient approximation solution methods.

Effective Implementation of a P-Graph Based High Level Process Modeling and Optimization Software

Botond Bertók

University of Pannonia, Hungary

Keywords: MILP, combinatorial optimization, P-graph

Process Network Synthesis (PNS) and the P-graph framework, introduced by Friedler, Fan, and coauthors in 1991, constitute a high-level formulation and solution approach to engineering and business process design problems. Recently it has been widely applied to the design, optimization, and scheduling of chemical engineering, manufacturing, transportation, supply chain, and business processes. Even though most problems can be formulated as MILPs, both model generation and solution are more effective with P-graphs due to the available software and libraries supporting P-graph optimization. The presentation gives an overview of how an application-specific combinatorial optimization software is built up, from the machine code implementation of set operations, through the selection of effective data structures and combinatorial accelerations, to search strategies and the load distribution among processor cores. Each component is quantitatively compared to available standard and open-source software libraries, including MILP solvers.

On the Non-Emptiness of the Core of a Cooperative Fuzzy Game: a Generalization of the Bondareva-Shapley Theorem

David Bartl

Silesian University in Opava / School of Business Administration in Karvina / Department of Informatics and Mathematics, Czechia

Keywords: Cooperative fuzzy TU-game; Core; Balancedness; Bondareva-Shapley Theorem

We propose a new model of a cooperative fuzzy game; the model partly resembles the partition function form games of Lucas and Thrall (1963). We then introduce the concept of the core of the cooperative fuzzy TU-game with respect to a general fuzzy coalition structure. Finally, we define the concept of balancedness and formulate a generalization of the Bondareva-Shapley Theorem.

A primal-dual solver for nonsymmetric conic optimization

Dávid Papp

North Carolina State University, USA

Sercan Yildiz

Qontigo, Inc., USA

Keywords: conic programming, interior-point method, semidefinite optimization

We present a primal-dual solver for solving optimization problems over nonsymmetric convex cones and its open-source implementation called Alfonso. The algorithm can solve conic optimization problems over any closed convex cone as long as a suitable barrier is available for either the cone or its dual; this includes hyperbolicity cones and their duals, sum-of-squares polynomials, all semidefinite and second-order cone representable cones, power cones, and the exponential cone. Besides enabling the solution of problems that cannot be cast as symmetric cone programs, it also offers performance advantages for problems whose symmetric cone programming representation requires many auxiliary variables or has a special structure that can be exploited. The worst-case iteration complexity of the method is the best known for nonsymmetric cone optimization: O(√ν log(1/ε)) iterations to reach an ε-optimal solution, where ν is the barrier parameter of the barrier function of the cone. The first part of the talk will focus on the algorithmic details and challenges. In the second half of the talk, we demonstrate the efficiency of the solver using semidefinite representable problems in which tailored barrier computation greatly decreases the solution time compared to using state-of-the-art optimization software.

Bibliometric indices as a measure of long-term competitive balance in knockout tournaments

László Csató

Institute for Computer Science and Control (SZTAKI) and Corvinus University of Budapest (BCE), Hungary

Dóra Gréta Petróczy

Corvinus University of Budapest (BCE), Hungary

Keywords: competitive balance; ranking; scientometrics; UEFA Champions League; uncertainty of outcome

We argue for the application of bibliometric indices to quantify long-term uncertainty of outcome in sports. The Euclidean index is proposed to reward quality over quantity, while the rectangle index can be an appropriate measure of core performance. Their differences are highlighted through an axiomatic analysis and several examples. Our approach also requires a weighting scheme to compare different achievements. The methodology is illustrated by studying the knockout stage of the UEFA Champions League in the 16 seasons played between 2003 and 2019: club and country performances as well as three types of competitive balance are considered. Measuring competition at the level of national associations is a novelty. All results are remarkably robust with respect to the bibliometric index and the assigned weights. Inequality has not increased among the elite clubs and between the national associations; however, it has changed within some countries. Since the performances of national associations are more stable than the results of individual clubs, it would be better to build the seeding in the UEFA Champions League group stage upon association coefficients adjusted for league finishing positions rather than club coefficients.
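
In its original bibliometric form (Perry and Reny, 2016), the Euclidean index of a record of achievements c = (c_1, ..., c_n) is simply its Euclidean norm; the adaptation to sports replaces citation counts with weighted tournament achievements:

\[
E(c) \;=\; \Bigl(\sum_{i=1}^{n} c_i^{2}\Bigr)^{1/2}.
\]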

Binary decision-making based on fusion of correlated sensors

Edit Csizmás

Department of Information Technology, GAMF, Faculty of Engineering and Computer Science, John von Neumann University, Hungary

Edith Alice Kovács

Department of Differential Equations, Budapest University of Technology and Economics, Hungary

Keywords: Binary decision, multivariate dependence, likelihood ratio test

Decision making based on data acquired from multiple sensors has a wide range of applications in medical diagnosis, finance, and security problems. Often the observations are not independent; in these cases the dependence between them, i.e., their joint probability distribution, should also be taken into account. Each local sensor registers its local observation of a phenomenon of interest and transmits this observation to a fusion center, which makes a binary decision based on these observations. We explore the idea of using the likelihood ratio for decision making. The case of independent sensors has been studied extensively. The case of two or three dependent sensors has also been studied by introducing copulas for modeling the dependence. However, the problem in higher dimensions has not been solved because of its computational complexity. In the talk we will present a new method which can be used even for the fusion of a larger number of correlated sensors. The presented algorithm can also be used for fusing the results of several machine-learning models trained for a given classification task.
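
The fusion rule referred to here is the classical likelihood ratio test (standard form; the threshold τ is set by the required error probabilities):

\[
\Lambda(x_1,\dots,x_n) \;=\; \frac{f(x_1,\dots,x_n \mid H_1)}{f(x_1,\dots,x_n \mid H_0)}
\;\underset{H_0}{\overset{H_1}{\gtrless}}\; \tau,
\]

where the joint densities carry the inter-sensor dependence; only under independence do they factor into products of the marginal densities.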

Kernel-Based Interior-Point Methods for Linear Complementarity Problems and Generalizations

Goran Lesaja

Georgia Southern University, United States

Linear Complementarity Problems (LCPs) are an important class of problems closely related to many optimization problems. Thus, efficient algorithms for solving LCPs are of interest for theoretical and practical purposes.

Interior-Point Methods (IPMs) have revolutionized optimization theory and practice and have proved to be an efficient tool for solving many difficult optimization problems, including LCPs. There are many types of IPMs; however, at the core of most of them is a sophisticated use of Newton-type methods. In the first part of the talk, we present kernel-based IPMs, where a kernel function is used to determine the search direction, proximity measure, step size, and other important elements of the method, which are then used to calculate the iteration bound of the method. The idea of using kernel functions in the design and analysis of IPMs is quite powerful and rich. It was developed as an effort to resolve the "irony of IPMs", the term coined by Renegar, which describes the fact that the iteration bound of short-step IPMs is an order of magnitude better than the iteration bound of long-step IPMs, while in practice long-step methods perform much better than short-step methods. We present a feasible IPM based on the class of eligible kernel functions, although there are other classes of kernel functions, such as self-regular functions. The class of eligible kernel functions is fairly general and includes the classical logarithmic function, the prototype self-regular function, and non-self-regular kernel functions as special cases. We show that the IPMs based on eligible kernel functions are globally convergent and the iteration bounds to obtain an epsilon-approximate solution of the LCP match the best-known iteration bounds for these types of methods. In particular, one of the main achievements of kernel-based IPMs is the improved complexity of long-step methods.
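
As a concrete anchor (the standard setup of kernel-based IPMs, not specific to this talk): the barrier is built coordinatewise from a univariate kernel ψ, and the classical logarithmic kernel is one member of the eligible class:

\[
\Psi(v) \;=\; \sum_{i=1}^{n} \psi(v_i), \qquad
\psi_{\log}(t) \;=\; \frac{t^{2}-1}{2} - \ln t,
\]

where v is the scaled iterate; Ψ serves simultaneously as a proximity measure to the central path and as the source of the search direction.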

In the second part of the talk, generalizations of these IPMs to LCPs over symmetric cones are considered. A remarkable and surprising result has been shown recently: the algebraic structure of Euclidean Jordan Algebras (EJAs) and the associated symmetric cones are connected to important optimization problems and can serve as a unifying frame to analyze IPMs for semidefinite optimization problems, second-order cone optimization problems, and classical optimization problems over the nonnegative orthant. Using tools of EJAs and symmetric cones, it is shown that generalizations of the kernel-based IPMs for LCPs over symmetric cones still possess the best-known complexity achieved for LCPs over the nonnegative orthant.


A full-Newton step feasible interior-point algorithm for the P*(κ) weighted linear complementarity problem

Goran Lesaja

Georgia Southern University, United States

Chiaoni Xi

Guilin University of Electronic Technology, China

Guoqiang Wang

Shanghai University of Engineering Science, China

Keywords: P*(κ) weighted linear complementarity problem, interior-point algorithm, full-Newton step, polynomial complexity

In this talk we present a full-Newton step feasible interior-point algorithm for the P*(κ) weighted linear complementarity problem (WLCP). We use a specific kernel function to define the central path and to determine the search direction as well as the proximity measure. We take full Newton steps, hence avoiding the line search at each iteration, which is computationally advantageous. We prove that the proposed algorithm has a polynomial iteration bound that matches the best known iteration bound for these types of methods. Some numerical results are provided to indicate the computational performance of the algorithm.

Analysis of the Big Match game

Imre Balog

Corvinus University of Budapest, Hungary

Miklós Pintér

Corvinus University of Budapest, Hungary

Keywords: game theory, stochastic games

We look into the famous Big Match game. We analyse the reasons why the fixed point theorem approach cannot be applied to this game. We investigate separately the case when the mixed strategies are σ-additive (measures) and the case when the mixed strategies are charges. We compare the two cases to each other and list the advantages and disadvantages of each.

Dependence of supplier efficiency on the choice of input-output criteria in Data Envelopment Analysis (DEA)

Imre Dobos

Budapest University of Technology and Economics, Hungary

Gyöngyi Vörösmarty

Corvinus University of Budapest, Hungary

Keywords: Supplier Selection, Data Envelopment Analysis (DEA), DEA Without Explicit Input (WEI), DEA Without Explicit Output (WEO), Common Weight Analysis (CWA)

When selecting suppliers, the splitting of criteria into input and output variables is often incidental, because the decision variables are difficult to interpret from a management point of view. This means that, with a proper interpretation, all criteria can be considered as input but also as output variables. In this presentation, we analyze three model versions of Data Envelopment Analysis (DEA). In the first model we define the input and output variables together to be maximized; in the other two models we consider each variable as a criterion to be minimized or maximized. Accordingly, we examine a classical DEA model, a DEA model without explicit input (WEI), and a DEA model without explicit output (WEO), respectively. The efficiency results of the models are presented in a supplier selection numerical example. A further feature of the models is that we present the criteria with normalized data.

BiqBin and MADAM: New High-performance Solvers for Binary Quadratic Problems

Janez Povh

Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia

Nicolo Gusmeroli

University of Ljubljana, Faculty of Mechanical Engineering, Slovenia

Timotej Hrga

University of Ljubljana, Faculty of Mechanical Engineering, Slovenia

Borut Lužar

Faculty of Information Studies in Novo mesto, Novo mesto, Slovenia

Angelika Wiegele

University of Ljubljana, Faculty of Mechanical Engineering, Slovenia

Keywords: combinatorial optimization; integer programming; alignment; SGA

The area of combinatorial optimization studies optimization problems where the feasible set is usually a large discrete set related to combinatorial objects, and the problems usually fall into the class of NP-hard problems. Integer programming offers several techniques to solve them (exactly or approximately), including the branch-and-bound (B&B) algorithm as a classical approach. In our talk we will present how to solve optimization problems where the objective function is non-convex quadratic and the constraints are linear, but the variables are binary (we call them binary quadratic problems). This family of problems includes several well-known optimization problems, such as the stable set problem, the max-cut problem, and the clustering problem. Our approach is based on the B&B algorithm, where all the main ingredients (branching strategy and bounding procedures) are carefully developed using new semidefinite programming relaxations, improved first-order methods like ADMM to solve the relaxed problems, new heuristics for finding feasible solutions, and new branching strategies to explore the B&B tree. These parts are also carefully parallelised using a load coordinator-worker scheme: MPI for multi-node parallelization of the branching part and OpenMP for the bounding part, coupled with the Armadillo library for linear algebra operations. The final results, the solvers BiqBin and MADAM, are written in C. BiqBin is available as a free web service running on the HPC at the University of Ljubljana (http://www.biqbin.eu). Performance optimization for this solver was done at the Czech national supercomputer in Ostrava via PRACE preparatory access. We provide several numerical results. We benchmarked the solvers against BiqCrunch, GUROBI, and SCIP on four families of (linearly constrained) binary quadratic problems. The results demonstrate that both solvers scale very well and outperform the other solvers on the majority of the benchmark instances.

On double Roman domination

Janez Žerovnik

University of Ljubljana, Slovenia

Darja Rupnik Poklukar

University of Ljubljana, Slovenia

Keywords: graph theory, domination

Double Roman domination of graphs is a discrete optimization problem motivated by a number of applications of Roman domination, both in the present time and in history. In the 4th century, Emperor Constantine was faced with the difficult problem of how to defend the Roman Empire with limited resources. His decision was to allocate two types of armies to the provinces in such a way that all the provinces in the empire would be safe. While the problem is still of interest in military operations research, it also has applications in cases where a time-critical service is to be provided with some backup. For example, a fire station should never send all emergency vehicles to answer a call. Similar reasoning applies to any emergency service. Hence positioning the fire stations, first aid stations, etc. at optimal positions improves the public services without increasing the cost. A natural generalization, in particular in the case of emergency services, is k-Roman domination, where in the district not one but k emergency teams are expected to be quickly available in case of multiple emergency calls. The special case k=2 is called double Roman domination. It is well known that the decision version of the double Roman domination problem (MIN-DOUBLE-RDF) is NP-complete, even when restricted to planar graphs, chordal graphs, bipartite graphs, undirected path graphs, chordal bipartite graphs, and circle graphs. In the talk, some recent studies of double Roman domination on some graph families will be outlined.
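
For completeness, the standard definition: a double Roman dominating function on a graph G = (V, E) is a labelling f : V → {0, 1, 2, 3} such that every vertex with label 0 has a neighbour with label 3 or two neighbours with label 2, and every vertex with label 1 has a neighbour with label at least 2. The optimization problem is

\[
\min \; \sum_{v \in V} f(v)
\]

over all double Roman dominating functions f.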

Optimization of a complex industrial system: experiences with NLP solvers

Jean Pimentel

Department of Chemical and Environmental Process Engineering, Budapest University of Technology and Economics, Hungary

Ákos Orosz

Department of Computer Science and Systems Technology, University of Pannonia, Veszprém, Hungary

Mihály Markot

Faculty of Mathematics, University of Vienna, Vienna, Austria

Ferenc Friedler

Széchenyi István University, Győr, Hungary

Keywords: Process network synthesis, P-graph framework, Non-linear optimization, NLP solver

Optimal design of production systems has been receiving increasing attention because it has a major influence on the efficiency and sustainability of the system; therefore, it receives significant research effort worldwide. However, conventional optimization approaches are still insufficient to provide adequate solutions for industry-scale design problems. This is especially valid for the synthesis of industrial systems, or simply process synthesis, where the network of the process is constructed based on a set of available operating units. Because of the phenomena underlying the behavior of some such units, usually complex nonlinear models are employed to describe their performance in the network. Besides, the mathematical formulation of process synthesis problems requires binary variables that represent the decision of including or excluding the units from the network. Consequently, process synthesis is customarily formulated in terms of Mixed-Integer Nonlinear Programming (MINLP) problems. Although the solution of MINLP problems has been a matter of study for decades, and various MINLP solvers have been developed, the problem is still open, as the globally optimal solution cannot be guaranteed with contemporary optimization tools. Hence, heuristics and approximations are normally used, which may yield a local optimum of the problem; especially since every plausible structure may constitute a structure-related local optimum. In the current work, we demonstrate some of the potential failures that can arise when employing common optimization techniques for process synthesis, and present various challenges concerning the performance of nonlinear programming (NLP) solvers. For this, we perform a structural analysis of an industrial wastewater treatment network synthesis problem. We resort to the combinatorial axioms and algorithms of the P-graph framework, which exploit the properties of the structure to identify the set of "logical" process structures, termed combinatorially feasible structures. Since no structure can be feasible without being combinatorially feasible, this results in the evaluation of all plausible structures, including the globally optimal structure. In this approach, no structure-related local optimum is overlooked, and in combination with a proper NLP solver the global optimum can be generated.

Model Generation Algorithm for Synthesizing Alternative Process Networks Including Synchronous and Asynchronous Activities

Károly Kalauz

Széchenyi István University, Hungary

László Szili

University of Pannonia, Hungary

Keywords: P-graph, Process Network Synthesis, Scheduling

In the last few decades, a time-sensitive layer of society has formed and is growing, for whom time, as an irrecoverable resource, is becoming more and more valuable, and for whom, as consumers, it is most important to obtain the necessary things in the shortest possible time. Access time is an even more important issue for other actors at higher levels of the supply chain, as it determines their own internal operations, but also has a fundamental impact on their external success through serving their customers. The time and location value of the products increases, with the result that the competition of individual suppliers is replaced by the competition of supply chains and supply networks. The strength of chains is determined by their weakest link, so it is important for all actors that both internal time management and the management of external relations ensure fast and flexible operations. Most of the time required for the supply process is spent on moving and waiting, and the real production takes only a negligible amount of time compared to the whole supply period. Thus, it is natural to look for opportunities to shorten the lead time in logistic and supply networks as well. The P-graph framework was originally developed to solve process network synthesis (PNS) problems. Recently, many extensions of PNS have emerged to utilize the power of the P-graph framework in providing structural process alternatives in a practically short computational time. The PNS problem has been extended to handle time constraints; this is called Time Constrained PNS, or TCPNS. In TCPNS problems each final target has to be achieved in time, while the availability of resources is limited in time, and the duration of each activity is a function of its volume. A method has been developed for modeling both synchronous and asynchronous potential activities by TCPNS. When synthesizing process networks, the aim is to find a given number of optimal and alternative suboptimal processes from a set of known building blocks to achieve each required and some of the potential final targets. Process models traditionally assume continuous activities in steady state. The toolkit developed for process synthesis is suitable for examining batch or one-time processes besides continuous ones. The method proposed herein guarantees the fulfillment of the time constraints for each of the resulting alternative process networks without loss of generality, partly by extending the process models with both synchronization steps and bypasses. The proposed model generation algorithm will be illustrated by its application to production and logistics case studies.

To rank or not to rank the students in stable project allocation?

Kolos Csaba Ágoston

Corvinus University of Budapest, Hungary

Péter Biró

KRTK and Corvinus University of Budapest, Hungary

Richárd Szántó

Corvinus University of Budapest, Hungary

Keywords: stable matching, integer programming, project allocation

Following up on our earlier work [Ágoston, K. C., Biró, P., Szántó, R. (2018). Stable project allocation under distributional constraints. Operations Research Perspectives, 5, 59-68.], we investigate the student allocation problem under preferences and distributional constraints. We conduct simulations on real data of CEMS project allocation at Corvinus University of Budapest and SGH Warsaw School of Economics to study the effect of the companies providing rankings, using integer programming to solve the underlying problem. The rankings create additional constraints when requiring fairness of the matching, so their relaxation can increase the expected welfare of the students. We measure these welfare gains when increasing the proportion of companies without rankings.

Large scale performance analysis of international kidney exchange programmes by the ENCKEP simulator

Kristóf Druzsin

KRTK and Óbuda University, Hungary

Péter Biró

KRTK and Corvinus University of Budapest, Hungary

Rita Fleiner

Óbuda University, Hungary

Xenia Klimentova

INESC TEC, Porto, Portugal

Keywords: kidney exchanges, computer simulation, hierarchical optimisation

We conduct a performance analysis for national and international kidney exchange programmes (KEPs) with the simulator developed in the ENCKEP COST Action. Most of the large European countries operate kidney exchange programmes in which kidney patients can exchange their willing but immunologically incompatible donors with each other. The ENCKEP simulator can mimic the operation of the European national and international KEPs by generating realistic datasets for a given time period and then conducting regular matching runs using the hierarchical optimisation criteria of the countries considered. In this new study we conduct a large number of simulations to obtain robust findings on the performance of specific national programmes and on the possible benefits of international collaborations.

The effects of draw restrictions on knockout tournaments

László Csató

SZTAKI and Corvinus University of Budapest, Hungary

Keywords: draw procedure; knockout tournament; OR in sports; simulation; UEFA Europa League

The paper analyses how draw constraints influence the outcome of a knockout tournament. The research question is inspired by European club football competitions, where the organiser generally imposes an association constraint in the first round of the knockout phase: teams from the same country cannot be drawn against each other. Its effects are explored in both theoretical and simulation models. An association constraint in the first round(s) is found to increase the likelihood of same-nation matchups to approximately the same extent in each subsequent round. If the favourite teams are concentrated in some associations, they will have a higher probability of winning the tournament under this policy, but the increase is less than linear if the constraint is used in more rounds. Our results can explain the recent introduction of the association constraint for both the knockout round play-offs with 16 teams and the Round of 16 in the UEFA Europa League and UEFA Europa Conference League.

Input design - from convex to non-convex problems

László Gerencsér

ELKH Institute for Computer Science and Control, Hungary

György Michaletzky

Eötvös Loránd University, Hungary

Zsuzsanna Vágó

Pázmány Péter Catholic University, Hungary

Keywords: experiment design, active learning, generalized moment problem, sparse estimation

We consider a discrete-time, single-input single-output linear stochastic control system with input u, external noise e, and output y. The identification of the dynamics under frequency-weighted energy constraints on the input is of fundamental importance in most industrial control systems. More recently, the problem of input design has become central in machine learning. In a preliminary analysis, the optimal input design for a particular performance measure is a convex optimization problem with a unique solution. However, the full solution of the problem, i.e. finding an actual optimal spectrum for the input, leads to a non-convex problem. We discuss various approaches to convexification, none of which is fully satisfactory, and point out issues for further research.

Hypergraph-based optimisation of manufacturing systems

László Nagy

University of Pannonia, Hungary

Tamás Ruppert

University of Pannonia, Hungary

János Abonyi

University of Pannonia, Hungary; Anton Trstenjak Institute of Gerontology and Intergenerational Relations, Slovenia

Keywords: Hypergraphs, Manufacturing, Optimisation, Resource allocation

Hypergraphs can model the structure of manufacturing systems. The proposed hypergraph representation defines the sets of events, workflows, assets, resource allocations, logical dependencies, and all the attributes of the elements of manufacturing systems. The edge and node centralities, the s-walks, and the modularity of these hypergraphs can provide helpful information for forming manufacturing cells and allocating resources. In this work, we present the building elements of hypergraph models of manufacturing systems and study the relationships between the optimisation tasks related to resource allocation, line balancing, cell formation, and scheduling, and the properties of the network. Based on this analysis, heuristic optimization algorithms will be developed that rely on problem-oriented hypergraph representations.

Learning preferences through consumption

Marek Kapera

Institute of Economics, Polish Academy of Sciences, Poland

Keywords: Taste uncertainty, Preference discovery, Learning through consumption, Conditional preferences, Experimental preferences

This paper provides theoretical foundations for preference discovery theory. We propose to relax the assumption that the consumer has perfect knowledge of their own preferences, so that the consumer knows only the subjective probability of the alternatives being in any given relation, conditional on the information available to the consumer. To achieve this, we construct probabilistic measures on the space of all permissible preference relations and consider the consumer to be equipped with one such measure instead of a preference relation. These measures are intrinsically linked by construction to the information structure available to the consumer and allow for indirect learning. To visualize how these measures correspond to the choices of the consumer, we consider three distinct decision procedures. These procedures formalize how, under different assumptions regarding the underlying probability measure, the consumer guesses their own tastes. Finally, we use these measures to define the value of the information provided by the consumption of a chosen alternative and study the properties of the preference ranking induced by it.

A family of Ai-Zhang type interior point algorithms for linear optimization

Marianna E.-Nagy

Corvinus University of Budapest, Hungary

Anita Varga

Budapest University of Technology and Economics, Hungary

Keywords: interior point algorithm, linear programming problem, algebraically equivalent transformation

We provide a unified analysis of a family of long-step interior point algorithms for linear programming problems. We apply the algebraically equivalent transformation technique introduced by Darvay to consider different search directions for an Ai-Zhang type interior point algorithm. In 2005, Ai and Zhang proposed the first long-step interior point algorithm with exactly the same complexity as the best short-step algorithms. Our aim is to give sufficient conditions on the function defining the algebraically equivalent transformation to ensure this best complexity of the method. In our talk, we will analyse this family of functions. Anita Varga will present the related numerical results.

Solving a dynamic route planning problem in case of 1 vehicle

Martin Tóth

Széchenyi István University, Hungary

Adrián Horváth

Széchenyi István University, Hungary

Tamás Hajba

Széchenyi István University, Hungary

Keywords: Dial a Ride, sequential planning, last mile delivery, on-demand, passenger transport

Consumer behavior has undergone a major transformation in recent years. Fast service has become one of the most important expectations. In passenger transport and in some areas of freight transport, there is very little time available to process, schedule, and meet incoming requests. For service providers to meet this expectation through the efficient use of available resources, they need to take a new planning approach that preserves both their efficiency and a high standard of service. In this article, we present a suitable mathematical model that can solve the transport organization task quickly and efficiently for a single vehicle. The usability of the model is illustrated through an example. At the end of the article, we summarize the conclusions and further research opportunities.

Computing the nucleolus: misconceptions, efficiency and applications

Márton Benedek

KRTK and Corvinus University of Budapest, Hungary

Keywords: cooperative game theory; nucleolus; lexicographic optimisation

Computing the nucleolus requires finding the unique point, in a polytope of linear dimension (in the number of players), that lexicographically minimises the non-increasingly ordered vector of excesses of all groups of players, a vector of exponential size (in the number of players). The mainstream approaches, traditionally and nowadays alike, break down the lexicographical minimisation problem into solving a series of linear programs (LPs); the series is linear in size, but each LP is still of exponential size. We introduce the state-of-the-art lexicographical descent method for the computation of the nucleolus as the first algorithm guaranteeing strict lexicographical improvement of the excess vector in every (pivot) step. The efficiency of the method is demonstrated through computational tests as well as through new areas of application of the nucleolus.
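
In symbols (standard notation): the excess of coalition S at allocation x is

\[
e(S,x) \;=\; v(S) - \sum_{i \in S} x_i,
\]

and the nucleolus is the unique imputation that lexicographically minimises the vector of all 2^{|N|} - 2 proper-coalition excesses, sorted in non-increasing order.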

Scheduling CNC Manufacturing by P-graphs

Márton Frits

University of Pannonia, Hungary

Botond Bertók

University of Pannonia, Hungary

Keywords: P-graph, Process Network Synthesis, Scheduling

In the presentation, an automated scheduling method and its implementation as a software service are introduced, based on the process network synthesis (PNS) approach extended with time constraints (TCPNS); P-graphs are utilized for model prototyping. Scheduling by TCPNS is illustrated by real case studies from CNC manufacturing. A major challenge is to provide the optimal schedule when not only the manufactured items have to be ordered and assigned to appropriate CNC machines, but the tool kits applied by the machines have to be revised, adjusted, and replaced in parallel to the machining tasks. Scheduling of manufacturing systems is an active research area, resulting in methods typically providing effective solutions to specific classes of problems. In practice, however, only a few of them are flexible enough to handle the constraints and objectives of an already functioning manufacturing process. IT implementation is even more difficult in an environment where an enterprise resource planning (ERP) system is not available, especially at small manufacturing plants. The conditions set by the company require a flexible modeling and effective solution method capable of considering the required parallel availability of machining tool sets, complex calculation of changeover times, deadlines, and order priorities as well.

Statistical analysis of Ising-based quantum annealer samples

Mátyás Koniorczyk

Wigner Research Centre for Physics, H-1121 Budapest, Konkoly-Thege M. út 29-33, Hungary

Krzysztof Domino

Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Baltycka 5, 44-100 Gliwice, Poland

Zbigniew Puchała

Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Baltycka 5, 44-100 Gliwice, and Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-348 Kraków, Poland

Keywords: quadratic unconstrained binary optimization (QUBO), quantum annealing, output evaluation

Ising-based quantum annealers, such as the D-Wave machines, can be considered heuristic solvers of quadratic unconstrained binary optimization problems (QUBOs). The idea behind them is to translate the QUBO into an equivalent problem of the energy of a two-state quantum Ising spin glass system, and to use the physical implementation of the result to obtain the optimum as its minimal energy. Such a solver is probabilistic, providing a so-called sample of possible solutions and objective values; the smallest objective value is expected to be the minimum sought after, but that is not always the case. In the present contribution we apply a statistical analysis to the sample resulting from an Ising-based quantum annealer. Our aim is to assess whether the optimal or a close-to-optimal solution has been sampled. Assuming a plausible model of the univariate energy distribution of the sample, we express the objective as a function of cumulants up to the third order. Using the annealer samples, we evaluate this multiple times using bootstrap resampling, resulting in an estimated histogram of the objective, and deduce a parameter describing the likelihood of the ground state being sampled. The approach provides an easily implementable method for the primary validation of Ising-based annealers' output. We demonstrate its behavior through experiments made using actual samples originating from quantum annealer devices.
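
A minimal sketch of the bootstrap step described above (plain NumPy; illustrative only, not the authors' estimator, and the final objective built from the cumulants is omitted; all names are made up):

import numpy as np

def cumulants_up_to_3(e):
    # First three cumulants of an energy sample: mean, variance,
    # and third central moment (which equals the third cumulant).
    m = e.mean()
    return m, ((e - m) ** 2).mean(), ((e - m) ** 3).mean()

def bootstrap_cumulants(energies, n_boot=1000, seed=None):
    rng = np.random.default_rng(seed)
    energies = np.asarray(energies, dtype=float)
    out = np.empty((n_boot, 3))
    for b in range(n_boot):
        resample = rng.choice(energies, size=energies.size, replace=True)
        out[b] = cumulants_up_to_3(resample)
    return out  # histogram these rows to estimate the spread of the objective

# Example on a synthetic 'annealer sample' of 500 energies:
rows = bootstrap_cumulants(np.random.default_rng(0).normal(-10.0, 2.0, 500))
print(rows.mean(axis=0))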

Random generation of capacities

Michel Grabisch

Institut Universitaire de France (IUF), France

Capacities, introduced by Choquet in 1953, are monotone set functions vanishing on the empty set. They are widely used in decision making under risk and uncertainty, multicriteria decision making, and game theory (monotone TU-games), and are related to many combinatorial problems in Operations Research. In the phase of determining or learning a model based on capacities, it is often important to generate random capacities in a uniform way. The problem appears to be surprisingly difficult, due to the monotonicity constraints, and naive methods give poor results, with extremely biased distributions for some coefficients of the capacity. The theoretical exact solution to this problem is however known: since the set of capacities on a finite set N is an order polytope, associated with the Boolean lattice (2^N, ⊆), generating uniformly random elements in an order polytope amounts to generating all linear extensions of the underlying poset, that is, in our case, the Boolean lattice. Unfortunately, the number of linear extensions of (2^N, ⊆) grows tremendously with the size of N, and is not even known beyond |N| = 7. This obliges us to resort to approximate methods. The literature has produced many such methods, among which the Markov chain approach seems to give the best results.

In this talk, after reviewing the most representative methods, we propose a new approximate method, counting a limited number of linear extensions and eliminating linear extensions with very low probability. The basic idea is to concentrate on the two top and two bottom layers of the Boolean lattice, and to eliminate one by one the maximal elements in the two top layers and the minimal elements in the two bottom layers. For this reason, the method is called the 2-layer approximation method. In the second part, we explain how to measure the performance of a given random capacity generator. Our experiments show that our 2-layer approximation method performs as well as the Markov chain method, while taking much less computation time.


TU-Games with Utility: The Core

Miklós Pintér

Corvinus University of Budapest, Hungary

Zsófia Dornai

BME, Hungary

Keywords: Game Theory

A generalization of the core, the u-core, can be defined by using a so-called utility function on the excesses and the coalitions. A special case of this is the per-capita core, where the utility function divides the excess by the cardinality of the coalition. We provide a Bondareva-Shapley theorem for the u-core; that is, we introduce the notion of u-balancedness and show that the u-core of a game is not empty if and only if the game is u-balanced.
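
As background, the standard core and the per-capita excess read as follows (textbook definitions; the talk's utility function u generalizes the map applied to the excesses):

\[
C(v) \;=\; \Bigl\{\, x \in \mathbb{R}^N : \sum_{i \in N} x_i = v(N),\; \sum_{i \in S} x_i \ge v(S)\ \ \forall\, S \subseteq N \,\Bigr\},
\qquad
e_{\mathrm{pc}}(S,x) \;=\; \frac{v(S) - \sum_{i \in S} x_i}{|S|}.
\]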

Recent developments in conic optimization

Mirjam Dür

Augsburg University, Germany

Keywords: Conic optimization

A conic optimization problem is a problem involving a constraint that the optimization variable be in some closed convex cone. Linear optimization is a prominent example, where the nonnegativity constraint can be interpreted as requiring that the variable be in the cone of nonnegative vectors. Other examples are second-order cone problems (SOCP), where the variable is constrained to be in the second-order cone, and semidefinite programming (SDP), where the matrix variable is required to be in the cone of positive semidefinite matrices. More general cones, like the cones of copositive or completely positive matrices, appear especially when modelling quadratic or combinatorial optimization problems. In this talk, we will review recent progress made in this field. We will discuss both theoretical progress and algorithmic issues.
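
In standard form, a conic optimization problem reads

\[
\min_{x} \; c^\top x \qquad \text{s.t.} \qquad Ax = b, \;\; x \in K,
\]

where K is a closed convex cone: the nonnegative orthant gives linear programming, the second-order (Lorentz) cone gives SOCP, and the cone of positive semidefinite matrices gives SDP.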

LSOS: Line-Search Second-Order Stochastic Optimization Methods for Nonconvex Finite Sums

Daniela di Serafino

University of Naples Federico II, Italy

Nataša Krejić

University of Novi Sad, Serbia

Nataša Krklec Jerinkić

University of Novi Sad, Serbia

Marco Viola

University of Campania “L. Vanvitelli”, Italy

Keywords: Stochastic optimization

We develop a line-search second-order algorithmic framework for minimizing finite sums. We do not make any convexity assumptions, but require the terms of the sum to be continuously differentiable and to have Lipschitz continuous gradients. The methods fitting into this framework combine line searches and suitably decaying step lengths. A key issue is a two-step sampling at each iteration, which allows us to control the error present in the line-search procedure. Stationarity of limit points is proved in the almost-sure sense, while almost-sure convergence of the sequence of approximations to the solution holds under the additional hypothesis that the functions are strongly convex. Numerical experiments, including comparisons with state-of-the-art stochastic optimization methods, show the efficiency of our approach.

Testing re-optimisation strategies in international kidney exchange programmes by the ENCKEP simulator

Péter Biró

KRTK and Corvinus University of Budapest, Hungary

Lilla Matyasi

Budapest University of Technology and Economics, Hungary

Keywords: kidney exchange programmes, integer programming, computer simulation

We test re-optimisation strategies for international kidney exchange programmes using the simulator developed by the ENCKEP COST Action. Kidney exchange programmes (KEPs) operate in most European countries to facilitate the exchange of kidney donors for recipients with incompatible living donors. The optimal solutions for national and international KEPs are typically selected every three months, based on the compatibilities estimated from the individual immunological data. However, these estimations are not always accurate, and if a positive crossmatch is found in the laboratory crossmatch tests, then the corresponding exchange cycle must be cancelled. Depending on the matching process, the coordinators may use different re-optimisation strategies to repair the failed solutions. We examine the effects of using multiple rounds of re-optimisation with different optimisation strategies, such as fixing good cycles in the intermediate solutions or prioritising edges with negative crossmatch tests in previous rounds. In the context of international KEPs we also consider the possibility of testing and prioritising internal edges in the solutions. We measure the performance of these policies with regard to the number of transplants and the number of compatibility tests conducted in the given time period.

Uniqueness of Clearing Payment Matrices in Financial Networks

Péter Csóka

Corvinus University of Budapest and KRTK, Hungary

P. Jean-Jacques Herings

Tilburg University, The Netherlands

Keywords: Financial networks, systemic risk, bankruptcy rules, fixed points

We study bankruptcy problems in financial networks in the presence of general bankruptcy laws. The set of clearing payment matrices is shown to be a lattice, which guarantees the existence of a greatest and a least clearing payment. Multiplicity of clearing payment matrices is both a theoretical and a practical concern. We present a new condition for uniqueness that generalizes all the existing conditions proposed in the literature. Our condition depends on the decomposition of the financial network into strongly connected components. A strongly connected component which contains more than one agent is called a cycle and the involved agents are called cyclical agents. If there is a cycle without successors, then one of the agents in such a cycle should have a positive endowment. The division rule used by a cyclical agent with a positive endowment should be positive monotonic and the rule used by a cyclical agent with a zero endowment should be strictly monotonic. Since division rules involving priorities are not positive monotonic, uniqueness of the clearing payment matrix is a much bigger concern for such division rules than for proportional ones. We also show how uniqueness of clearing payment matrices is related to continuity of bankruptcy rules.

Connectivity graphs of quadratic unconstrained binary optimization problems and the feasibility polyhedra of their standard linearization

Péter Naszvadi

Wigner Research Centre for Physics, H-1121 Budapest, Konkoly-Thege M. út 29-33, Hungary

Mátyás Koniorczyk

Wigner Research Centre for Physics, H-1121 Budapest, Konkoly-Thege M. út 29-33, Hungary

Keywords: quadratic unconstrained binary optimization (QUBO), QUBO standard linearization, QUBO connectivity graph, feasibility polytope

Quadratic Unconstrained Binary Optimization (QUBO) problems have gained attention recently because quantum annealers, such as the D-Wave machines, can solve problems of this kind. The problem is defined as the minimization of the quadratic expression x^T Q x, where x is a binary vector and Q can be assumed to be upper triangular without loss of generality. Each QUBO problem defines a connectivity graph whose nodes are the variables, with an edge between the nodes i and j for which Q_{i,j} is nonzero. The structure of this graph is very important, as its independent subgraphs obviously induce a decomposition into independent smaller QUBOs. When using actual quantum hardware, the graph has to be embedded as an induced subgraph into a graph that is defined by the physical solver architecture. QUBOs can also be converted into a linear MIP problem; this conversion is termed the standard linearization (cf. Fortet, R., Revue Française de Recherche Opérationnelle 4, pp. 17-26 (1960)). The auxiliary continuous variables in the standard linearization define a mixed integer feasibility polyhedron. We prove that this polyhedron is completely determined by the derived connectivity graph of the problem (up to possible graph isomorphisms induced by the appropriate relabeling of the variables). This introduces a correspondence between finite connected graphs and QUBOs that are, in a sense, significantly different.
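
For reference, the standard linearization replaces each product x_i x_j (for nonzero Q_{i,j}, i < j) with an auxiliary variable y_{ij} constrained by the usual inequalities (a standard construction, consistent with Fortet's):

\[
y_{ij} \le x_i, \qquad y_{ij} \le x_j, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \ge 0,
\]

which force y_{ij} = x_i x_j at every binary x; the feasibility polyhedron studied in the abstract is the set of (x, y) satisfying these constraints.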

Interior-point algorithms for symmetric cone horizontal linear complementarity problems using a new class of search directions

Petra Renáta Rigó

Corvinus University of Budapest, Hungary

Zsolt Darvay

Babes-Bolyai University, Faculty of Mathematics and Computer Science, Cluj-Napoca, Romania

Tibor Illés

Corvinus University of Budapest, Hungary

Roland Török

Corvinus University of Budapest, Hungary

Keywords: horizontal linear complementarity problems, symmetric cones, interior-point algorithms

In this talk, we present the extension of the interior-point algorithms proposed recently by Illés et al. to horizontal linear complementarity problems on a Cartesian product of symmetric cones. The optimality conditions of different optimization problems can be given in the form of this general class of problems. We show that the introduced algorithms have the same complexity bound as the best known interior-point algorithms for solving these types of problems.

A probabilistic formulation of a demand-side management problem, and its solution with a randomized scheme

Rajmund Drenyovszki

Department of Informatics, GAMF: Faculty of Engineering and Computer Science, John von Neumann University, Hungary

Edit Csizmás

Department of Informatics, GAMF: Faculty of Engineering and Computer Science, John von Neumann University, Hungary

Csaba Fabian

Department of Informatics, GAMF: Faculty of Engineering and Computer Science, John von Neumann University, Hungary

Lorant Kovacs

Department of Informatics, GAMF: Faculty of Engineering and Computer Science, John von Neumann University, Hungary

Tamas Szantai

Department of Differential Equations, Institute of Mathematics, Budapest University of Technology and Economics, Hungary

Keywords: demand side management, stochastic optimization

The number of electric cars is growing, and charging them is putting an increasing strain on the existing electricity grid. Without optimising their charging, the grid would very quickly become overloaded during peak periods. When using wind and solar power, fluctuating energy is injected into the grid. Another source of uncertainty is the level of energy use, which depends on weather conditions and individual habits. In this article, we formulate a demand side management problem. In our model, we decide on the number of electric cars that can be charged over several time periods, taking random consumption into account as well. We keep demand within a given cost bound, while maximizing the probability of keeping loads within a given upper limit. We solve the probability maximization problem with our own solver, which is based on an inner approximation scheme and, besides being easy to implement, tolerates noise in the gradient computation well.
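
Schematically (our notation, not necessarily the authors'), with x_t the number of cars admitted for charging in period t, \xi_t the random base load, L_t the load limit and B the cost bound, such a model reads

\[
\max_{x}\; \mathbb{P}\big( \ell_t(x) + \xi_t \le L_t,\ t = 1,\dots,T \big)
\quad\text{s.t.}\quad c(x) \le B,
\]

where \ell_t(x) is the charging load induced by the decision and c(x) its cost.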

New interior-point algorithm based on AET function having inflection point

Roland Török

Corvinus University of Budapest, Hungary

Tibor Illés

Corvinus University of Budapest, Hungary

Petra Renáta Rigó

Corvinus University of Budapest, Hungary

Keywords: Interior-point algorithms, Linear complementarity problems, Mathematical programming, Algebraic equivalent transformation

The algebraic equivalent transformation (AET) technique is a method for determining search directions in interior-point algorithms. We propose a new AET function, which has an inflection point. The kernel function corresponding to it is neither an eligible nor a self-regular function. Using this new AET function, we introduce a primal-dual interior-point algorithm. We present numerical results obtained with this new method and compare the algorithm to other interior-point algorithms using different AET functions.
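
A minimal sketch of the AET idea (in the linear complementarity setting, with componentwise operations): the centrality condition xs = \mu e is rewritten, for a suitable invertible function \varphi, as \varphi(xs/\mu) = \varphi(e), and applying Newton's method to the transformed system yields the modified centering equation

\[
s\,\Delta x + x\,\Delta s \;=\; \mu\,\frac{\varphi(e) - \varphi(xs/\mu)}{\varphi'(xs/\mu)},
\]

so different choices of \varphi produce different search directions.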

Mono-unstable convex polyhedra with point masses: tight lower bounds on the number of faces and vertices

Sándor Bozóki

Institute for Computer Science and Control (SZTAKI); Corvinus University of Budapest, Hungary

Dávid Papp

North Carolina State University, USA

Krisztina Regős

Budapest University of Technology and Economics, Hungary

Gábor Domokos

Budapest University of Technology and Economics, Hungary

Keywords: mono-unstable polyhedron, polyhedral 0-skeleton, polynomial inequalities, semidefinite optimization

A convex polyhedron is called mono-(un)stable if it has exactly one (un)stable static equilibrium point. We show that the minimal numbers of vertices and faces of mono-unstable polyhedral 0-skeletons (homogeneous point sets with unit masses at each point) are 11 and 8, respectively. Such polyhedral 0-skeletons are also presented. The proof of minimality is based on (i) an algebraic transformation of the geometric problem, leading to the unsolvability of systems of multivariate polynomial inequalities; and (ii) semidefinite optimization, used to find positive coefficients of the inequalities' polynomials such that their weighted sum is a positive polynomial.
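
Schematically, to certify that a system of polynomial inequalities p_1(x) \le 0, \dots, p_m(x) \le 0 has no solution, it suffices to exhibit multipliers \lambda_i \ge 0 such that

\[
\sum_{i=1}^{m} \lambda_i\, p_i(x) > 0 \quad \text{for all } x,
\]

since any solution would make the left-hand side nonpositive; global positivity of the weighted sum can in turn be certified by a sum-of-squares decomposition, found via semidefinite programming. (This is a generic sketch of the certificate type described above, not the authors' exact formulation.)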

Parallelizing zero-one linear programs using graph theoretical considerations

Sandor Szabo

University of Pecs, Institute of Mathematics and Informatics, Hungary

Bogdan Zavalnij

Renyi Institute of Mathematics, Hungary

Keywords: 0-1 LP, weighted maximum clique

In this short note we present a method to parallelize certain zero-one linear programs. In this context, parallelization simply means that we divide the original instance into a number of smaller instances that can be solved independently of each other. Finally, from the solutions of the small problems we are able to piece together the solution of the original program. Graph theoretical arguments play the most important role in this parallelization technique. The ready availability of microprocessors with a dozen or a few dozen cores is the main motivation of the proposed method. In other words, we are aiming at small to moderate scale parallelization. For supercomputers with thousands of cores one should turn to different parallelization ideas. In order to illustrate the suggested method, we work out a small toy-size numerical example in detail. In addition, the proposed technique also lets us aid the solution with preconditioning methods known for various graph problems.

A new strategy for solving the Same-Day-Delivery Problem

Tamás Kis

Institute for Computer Science and Control, Hungary

Sunil Morapitiye

Budapest University of Technology and Economics, Hungary

Keywords: vehicle routing and scheduling, online requests

The Same-Day-Delivery Problem is the online version of a variant of the well-studied Vehicle Routing Problem. The problem data consists of a set of locations with known travel times between them, a set of identical vehicles, and a set of requests. Each request has an arrival time, upon which its location and time window become known - this last property makes the problem online. The goal is to assign a set of consecutive routes to every vehicle, such that the arrival times of the requests served by a vehicle in a route precede the departure time of the vehicle on that route. At any moment, only those requests are considered that have not yet been served and whose arrival time is not larger than the current planning time, but the arrival rate of the requests at every location is known in advance. Not all requests can necessarily be served in this way, so the objective is to minimize the number of omitted requests. The most widespread method for solving such problems is based on scenario sampling: at every decision point several possible scenarios (sets of possible future requests) are generated, these extended problems are solved, and then a solution is calculated for the original problem. One disadvantage of this method is that the quality of the solution strongly depends on the number of considered possible futures and on the way the final decision is made after solving the scenarios. Our purpose is to omit the scenario generation, so that decisions are made based only on what is actually known at the time of route planning. At each decision point, a tabu search heuristic is applied to the set of known, unfulfilled requests to find routes for the vehicles. We have devised an objective function, which the tabu search minimizes, that imposes a certain structure on the solution to facilitate serving the yet unknown jobs of the future while serving as many of the known requests as possible. We also devised a strategy for determining the departure times of the vehicles on their routes. We tested our algorithm on benchmark instances from the literature, and the quality of our solutions surpasses the known results in most cases.

What can you achieve by sprucing up your house?

Ildikó Schlotter

Budapest University of Technology and Economics, Budapest, Hungary

Péter Biró

Centre for Economic and Regional Studies, Budapest, Hungary

Tamás Fleiner

Budapest University of Technology and Economics, Hungary

Keywords: Shapley-Scarf housing markets, top trading cycles algorithm

In our work, we study the core of housing markets from various aspects. We determine the computational complexity of related problems, such as deciding whether a certain house can be obtained by a particular agent in a core solution. In the talk we focus on one particular result about how the core changes as a result of a change in the preferences. Namely, with the help of a polynomial-time algorithm, we prove that if the house of some agent a improves on the other agents' preference lists, then the best house that agent a can receive in a core solution cannot become worse after the change.
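
For background, a minimal sketch of the classical top trading cycles (TTC) algorithm of Shapley and Scarf, which computes the unique core allocation of a housing market under strict preferences (our illustrative code, not part of the talk):

    def top_trading_cycles(prefs):
        # prefs[a] lists house owners in a's decreasing preference order;
        # agent a initially owns house a.
        active = set(prefs)
        assignment = {}
        while active:
            # Every active agent points to the owner of its best remaining house.
            point = {a: next(h for h in prefs[a] if h in active) for a in active}
            # Follow the pointers from an arbitrary agent until a cycle closes.
            path, on_path = [], set()
            a = next(iter(active))
            while a not in on_path:
                path.append(a)
                on_path.add(a)
                a = point[a]
            cycle = path[path.index(a):]
            for b in cycle:           # everyone in the cycle receives the house pointed to
                assignment[b] = point[b]
            active -= set(cycle)
        return assignment

    # Example: agents 1 and 2 swap houses, agent 3 keeps its own.
    print(top_trading_cycles({1: [2, 1, 3], 2: [1, 2, 3], 3: [1, 2, 3]}))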

Incorporation of heuristic search into Q-learning

Tamás Kegyes

MTA-PE "Lendület" Complex Systems Monitoring Research Group, University of Pannonia, Veszprém, Hungary

Zoltán Süle

University of Pannonia, Veszprém, Hungary

János Abonyi

MTA-PE "Lendület" Complex Systems Monitoring Research Group, University of Pannonia, Veszprém, Hungary

Keywords: Reinforcement Learning, Q-learning, integrating heuristic, disassembly line optimization

In recent years, Reinforcement Learning (RL) methods have proven their efficiency on multiple complex problems. For discrete problems, Q-learning methods deliver an easily implementable solution. In this context, a major challenge is to find a proper ratio between exploiting the current knowledge and exploring unknown patterns. For exploration, usually a purely random action-taking mechanism is applied, which leads to slow convergence to an optimal solution. In our presentation we give an overview of how different heuristics have been built into RL methods to improve their learning efficiency, and we present a categorization of the different approaches. Then we demonstrate the effect of incorporating a selected heuristic into a standard Q-learning based solution on a disassembly line optimisation problem.
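
A minimal sketch of the pattern discussed above (the env and heuristic interfaces are hypothetical placeholders, not the authors' implementation): in the exploration branch of epsilon-greedy Q-learning, the uniformly random action is replaced by a heuristic suggestion.

    import random

    def q_learning(env, heuristic, episodes=500, alpha=0.1, gamma=0.95, eps=0.2):
        Q = {}  # Q-table: (state, action) -> estimated value
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                if random.random() < eps:
                    a = heuristic(s)  # heuristic-guided exploration instead of a uniform draw
                else:
                    a = max(env.actions(s), key=lambda b: Q.get((s, b), 0.0))
                s2, r, done = env.step(s, a)
                target = r + gamma * max((Q.get((s2, b), 0.0) for b in env.actions(s2)), default=0.0)
                Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
                s = s2
        return Q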

Matching markets with middlemen under transferable utility

Tamás Solymosi

Corvinus University of Budapest, Hungary

Ata Atay

University of Barcelona, Spain

Eric Bahel

Virginia Polytechnic Institute and State University, USA

Keywords: Matching market, middlemen, assignment game, core

We study matching markets in the presence of middlemen. In our framework, a buyer-seller pair may either trade directly or use the services of a middleman, and a middleman may serve multiple buyer-seller pairs. Direct trade between a buyer and a seller is costlier than a trade mediated by a middleman. For each such market, we examine an associated TU game. First, we show that an optimal matching for a matching market with middlemen can be obtained by considering the two-sided assignment market where each buyer-seller pair is allowed to use the mediation service of the middlemen free of charge. Second, we prove that matching markets with middlemen are balanced. Third, we show the existence of buyer-optimal and seller-optimal core allocations. In general, the core does not contain a middleman-optimal allocation.

Towards interval based verification of artificial neural networks - integer programming

Tibor Csendes

University of Szeged, Hungary

Balázs Bánhelyi

University of Szeged, Hungary

Dániel Zombori

University of Szeged, Hungary

István Megyeri

University of Szeged, Hungary

Keywords: artificial neural network, adversarial example, verification, mixed integer optimization

Recent machine learning models are highly sensitive to adversarial input perturbations. That is, an attacker may easily mislead a well-performing image classification system by altering some pixels. However, proving that a network will give the correct output when some regions of the images are changed is quite challenging - mostly due to the high dimensionality and/or the nonlinearity of the problem. Because of this, only a few works have targeted this problem, and some of the existing verification tools are not reliable. Although there is an increasing number of studies in this field, reliable robustness evaluation is still an open issue. We present interval arithmetic and rational arithmetic based algorithms that provide adversarial-example-free image patches for trained artificial neural networks. The obtained results are illustrated on some of the studied images from the MNIST dataset. The calculated number of pixels that can be changed arbitrarily was between 88 and 190 (compare this with the 28 * 28 = 784 pixels in the images). We report preliminary results of ongoing research that enables us to verify realistic-size networks. The talk will detail the role of integer programming within the method. Dániel Zombori, Balázs Bánhelyi, Tibor Csendes, István Megyeri, Márk Jelasity: Fooling a Complete Neural Network Verifier. Int. Conf. on Learning Representations (ICLR 2021), https://openreview.net/forum?id=4IwieFS44l.
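
For intuition, a minimal sketch of propagating an input box through one affine+ReLU layer with interval arithmetic (illustrative only; the verifier discussed in the talk is considerably more involved and also uses rational arithmetic and integer programming):

    import numpy as np

    def interval_affine(lo, hi, W, b):
        # Image of the box [lo, hi] under x -> W @ x + b: positive weights
        # take the same bound, negative weights the opposite one.
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

    def interval_relu(lo, hi):
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # If the output interval of the true class stays above those of all other
    # classes over the whole box, no adversarial example exists inside the box.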

Interior-point algorithms for solving sufficient linear complementarity problems based on a new class of AET functions

Tibor Illés

Corvinus University of Budapest, Hungary

Petra R. Rigó

Corvinus University of Budapest, Hungary

Roland Török

Corvinus University of Budapest, Hungary

Keywords: interior-point algorithms, sufficient linear complementarity problems, algebraically equivalent transformation

We propose new interior-point algorithms for solving sufficient linear complementarity problems. To define the search directions, we apply the algebraic equivalent transformation (AET) technique. We introduce a new class of AET functions, which differs from the classes used in the literature to determine search directions. We provide an example AET function from our new class which does not belong to the class of concave functions proposed by Haddou et al. Furthermore, the kernel function belonging to this AET function is neither an eligible nor a self-regular kernel function. We prove that the interior-point algorithms using any member of this new class of AET functions have polynomial iteration complexity in the size of the problem, the bit length of the data, and a special parameter called the handicap.

Offline and online algorithms for the combined joint replenishment and single machine scheduling problem

Tímea Tamási

Institute for Computer Science and Control, 1111, Kende Str. 13-17., Budapest, Hungary; Department of Operations Research, Institute of Mathematics, ELTE Eötvös Loránd University, Budapest, Hungary

Péter Györgyi

Institute for Computer Science and Control, 1111, Kende Str. 13-17., Budapest, Hungary

Tamás Kis

Institute for Computer Science and Control, 1111, Kende Str. 13-17., Budapest, Hungary

József Békési

Department of Computer Algorithms and Artificial Intelligence, Faculty of Science and Informatics, University of Szeged, Hungary

Keywords: joint replenishment, single machine scheduling, polynomial time algorithms, online algorithms

We consider the combination of the well-known joint replenishment problem with single machine scheduling. In the former problem, there are demands (jobs) arriving online, each requiring some items. A demand can be fulfilled by ordering the required items no sooner than the arrival time of the demand. The orders can be combined, and the ordering cost depends only on the types of items ordered simultaneously. We extend this problem in the following way: each demand (job) has a processing time, and the jobs have to be scheduled on a single machine, which incurs an additional scheduling cost. Possible examples of the scheduling-related cost are the total completion time, the total flow time, and the maximum flow time. In the talk we present recent results and ongoing work in this fresh area of research. We consider offline complexity results and a number of online variants. We propose polynomial time algorithms for some special cases of the offline problem, along with NP-hardness results for some more general variants. For the online problem, we provide a competitive analysis for all of the above scheduling cost functions.

Solving SDP relaxations of Max-Cut problem with large number of hypermetric inequalities by L-BFGS-B

Timotej Hrga

Faculty of Mechanical Engineering, University of Ljubljana, Slovenia

Janez Povh

Faculty of Mechanical Engineering, University of Ljubljana, Slovenia

Keywords: semidefinite programming, cutting-plane method, augmented Lagrangian method, max-cut problem

We present a computational study of SDP-based bounds for Max-Cut that use a subset of hypermetric inequalities as cutting planes to strengthen the basic relaxation. Solving these relaxations is a computational challenge because of the potentially large number of violated constraints. To overcome these difficulties, we describe a heuristic separation algorithm for hypermetric inequalities and propose to use the augmented Lagrangian method as the bounding routine. Computational experiments show that the resulting relaxations yield very tight bounds for Max-Cut.
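
For reference, the basic SDP relaxation being strengthened is, with L the Laplacian of the weighted graph,

\[
\max\ \tfrac{1}{4}\langle L, X\rangle
\quad\text{s.t.}\quad \operatorname{diag}(X) = e,\; X \succeq 0,
\]

and a hypermetric inequality requires \langle bb^T, X\rangle \ge 1 for an integer vector b with \sum_i b_i odd; for b with two entries equal to 1 and one entry equal to -1, these reduce to the well-known triangle inequalities.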

The Hungarian bank market structure – An empirical analysis

Veronika Szádoczkiné Varga

Corvinus University of Budapest, Hungary

Zoltán Madari

Corvinus University of Budapest, Hungary

Keywords: Panzar and Rosse model, dynamic panel model, monopolistic competition

This paper analyzes the market structure of the Hungarian bank market. In the years of socialism, state-owned companies dominated the market; they were financed by the Hungarian National Bank. After the regime change, a two-tier banking system developed and foreign companies appeared. An interesting question is how close the market is to a state of perfect competition. The Panzar and Rosse method offers theorems about the value of the sum of elasticities of gross revenue with respect to input prices for competitive and monopolistic markets, which make it possible to determine the market structure. In the case of a neoclassical monopolist or a collusive oligopolist, this sum is nonpositive; it is equal to unity for competitive price-taking firms in long-run competitive equilibrium. With a static and a dynamic panel model we estimate the elasticity of total revenues with respect to changes in input prices, so that we can determine the market structure based on the Panzar and Rosse methodology. We test the input price elasticity on the balance sheet data of the top 13 companies between 2003 and 2020. Our models show that the Hungarian bank market was in monopolistic competition in long-run equilibrium. Knowing the market structure of a sector is important for modelling phenomena and designing new regulations effectively, which is relevant for banking and competition supervision in the protection of customers.
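
The Panzar-Rosse H-statistic underlying this test is the sum of the elasticities of gross revenue R with respect to the input prices w_k,

\[
H \;=\; \sum_{k} \frac{\partial \ln R}{\partial \ln w_k},
\]

with H \le 0 indicating monopoly or collusive oligopoly, 0 < H < 1 monopolistic competition, and H = 1 perfect competition in long-run equilibrium.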

Illustration of the Effect of Learning on the optimal number of stations in Simple Assembly Line Balancing Problems

Zakaria Zine el abidine

Department of Management and Business Economics, Budapest University of Technology and Economics, H-1521 Budapest, P.O.B. 91, Hungary

Imre Dimény

Department of Management and Business Economics, Budapest University of Technology and Economics, H-1521 Budapest, P.O.B. 91, Hungary

Tamás Koltai

Department of Management and Business Economics, Budapest University of Technology and Economics, H-1521 Budapest, P.O.B. 91, Hungary

Keywords: Assembly line balancing, simple assembly line, learning effect, learning curve, mathematical programming.

The objective of the simple assembly line balancing problem (SALB) is the optimal allocation of tasks to workstations. In the case of the type-1 SALB problem, the number of workstations is minimized. In assembly lines where manual tasks are executed repeatedly, the learning of workers can have a significant effect on the operation. In this case, when the minimal number of workstations is determined, the change of task times as a consequence of learning must be considered. This paper investigates the effects of learning when the minimal number of workstations is determined. A modified simple assembly line balancing model is defined, in which task times are updated as parts proceed along the learning curve. Benchmark models with different complexity characteristics are used to illustrate the application of the model. Some general conclusions related to the type of the learning curve function applied, to the change of the learning rate, and to the characteristics of the models are also presented.
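
One common choice of learning curve (Wright's power-law model, shown here for orientation; the paper may use other functional forms) updates the time of the n-th repetition of a task as

\[
t_n \;=\; t_1\, n^{\log_2 r},
\]

where t_1 is the time of the first execution and r \in (0, 1] is the learning rate, i.e. each doubling of the cumulative output multiplies the task time by r.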

Measuring Digital Development: Ranking Using Data Envelopment Analysis (DEA) and Network Readiness Index (NRI)

Zoltán Bánhidi

Department of Economics, Faculty of Economic and Social Sciences, Budapest University of Technology and Economics, Hungary

Imre Dobos

Department of Economics, Faculty of Economic and Social Sciences, Budapest University of Technology and Economics, Hungary

Keywords: Digital transformation measurement, Network Readiness Index, Data Envelopment Analysis (DEA), DEA Without Explicit Input (WEI), Common Weight Analysis (CWA)

The Network Readiness Index (NRI) is a composite index comprising three levels (pillars, sub-pillars and individual indicators) that benchmarks the digital development of countries based on various aspects of “network readiness”. In contrast to the International Digital Economy and Society Index (I-DESI) of the European Union (EU), it aims to provide an inclusive, truly global view of digital transformation: the NRI's 2021 edition covers 134 countries, compared to the 45 countries in I-DESI, which measures only the most developed countries. The aim of this presentation is to test the robustness of the NRI's scoring model using the Data Envelopment Analysis (DEA) Without Explicit Input (WEI) and Common Weight Analysis (CWA) methods. After establishing the rankings based on the NRI's scoring model and our DEA models, we compare the digital development of the countries in our data set and evaluate the impact of the different models on the positions of countries.
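
A common formulation of DEA without explicit input (shown for orientation; the exact models of the presentation may differ) scores unit o on its outputs y by the linear program

\[
\max_{u \ge 0}\ \sum_r u_r\, y_{ro}
\quad\text{s.t.}\quad \sum_r u_r\, y_{rj} \le 1 \ \text{ for every unit } j,
\]

i.e. each country is evaluated with the weights most favourable to it, all countries sharing a constant dummy input. A common-weight analysis instead fixes a single weight vector for all units, which makes the resulting scores directly comparable as a ranking.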

Characterization of convex functions by level sets

Zoltán Kánnai

Department of Mathematics, Corvinus University of Budapest, Hungary

Keywords: convex functions, minimal sets, additive perturbations

Examining convex functions through their level sets is intriguing in itself, but it also has relevance in the theory of concave games. In 1986, I. Joó, in proving a minimax theorem, used the following lemma: a continuous function f defined on a compact interval is concave if and only if the set of maximum points of f + c·id forms a closed interval for every constant c. However, the author left it unproven. Recently, F. Forgó and I used this lemma and also proved it. The idea of that proof, however, was typically one-dimensional, while the relevance of convexity/concavity appears principally for multivariate functions. Here we prove the convex analogue of Joó's lemma for multivariate lower semicontinuous functions. Namely, we show that a lsc function is convex if and only if the set of minimum points of every affine perturbation of the function is convex. This fact also yields that the convexity of a function is equivalent to the quasiconvexity of all its affine perturbations.
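
In symbols (our transcription of the statement above): for a lower semicontinuous f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\},

\[
f \text{ is convex} \iff \operatorname*{argmin}_{x} \big( f(x) + \langle a, x \rangle \big) \text{ is convex for every } a \in \mathbb{R}^n.
\]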

TU-Games with Utility: The Prenucleolus

Zsófia Dornai

BME, Hungary

Miklós Pintér

CUB, Hungary

Keywords: Game Theory

A generalization of the prenucleolus – the u-prenucleolus – can be defined by applying a so-called utility function to the excesses. A special case is the per-capita prenucleolus, where the utility function divides the excess by the cardinality of the coalition. We focus on which coalitions are to be considered essential in order to calculate the u-prenucleolus. The goal of this approach is to show the connection between the different results on the subject of essentiality.
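
In symbols (our transcription, under the stated definitions): for a TU game (N, v), an allocation x and a utility function u, the u-excess of a coalition S is

\[
e_u(S, x) \;=\; u_S\big( v(S) - x(S) \big),
\]

and the u-prenucleolus lexicographically minimizes the non-increasingly ordered vector of these values; the per-capita case corresponds to u_S(t) = t / |S|.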

New interior-point algorithm working in a wide neighborhood of the central path

Zsolt Darvay

Babes-Bolyai University, Faculty of Mathematics and Computer Science, Cluj-Napoca, Romania

Tibor Illés

Corvinus University of Budapest, Hungary

Petra Renáta Rigó

Corvinus University of Budapest, Hungary

Roland Török

Corvinus University of Budapest, Hungary

Keywords: interior-point algorithm, linear complementarity problem, wide neighborhood

We propose a predictor-corrector interior-point algorithm for solving sufficient linear complementarity problems. The presented algorithm works in a new wide neighborhood of the central path. For the determination of the search directions, we use the algebraically equivalent transformation technique. We provide numerical results and compare them to results obtained with different search directions.

Multicommodity network flow model of a human resource allocation problem considering time periods

Zoltán Kovács

Optin Kft., Hungary

Zsolt Ercsey

University of Pécs, Faculty of Engineering and Information technology, Department of Systems and Software Technology, Hungary

Keywords: human resource allocation, multicommodity network flow, time period, work force demand

Human resources are the set of people who make up the workforce of an organization, in manufacturing as well as in services. Human resource allocation is the scheduling and assignment of the available human resources to the various activities of the organisation within a considered time frame. Obviously, the resulting schedule should meet the requirements of the activities within the time frame on the one hand, while on the other hand the labour constraints should also be satisfied. Moreover, there are often comfort expectations of the people which should also be considered. In the current paper a human resource allocation problem is defined and solved with the help of a multicommodity network flow model. When defining the problem, the first step is to define the time horizon within which the human resources are to be managed. This can be given as the number of days to be considered, i.e. NoD. The demands may vary widely within various periods of the working hours; for example, in a supermarket fewer cashiers are needed during off-peak hours, while more are needed during busy hours. With respect to this fact, let us define the smallest time section that needs to be considered separately, i.e. the time period, together with its length in minutes, i.e. MoP. As a consequence, the length of a work day within the period, PoD, as well as the number of periods, NoP, will also be known. Further, let the minimum working hours within the period, NWH, together with the maximum working hours within the period, XWH, also be given. Let the minimum rest time, RWH, also be given; i.e., within this minimum time after the completion of a work shift, the worker may not be reassigned to his work place. Finally, let the number of available employees also be given. Based on previous experience, let the human resource demand be given within each time period. Moreover, besides the exact demand, let a “small” plus-minus tolerance also be given. This is necessary because the time period may often be too short compared to the working hours, which may result in an unsolvable situation. Obviously, these deviations are also to be minimised during the optimisation, where the main problem is to determine the minimum workforce. The solution presented in the current paper is based on a multicommodity network flow model, where the commodity of each arc of the graph identifies a workforce and the direction of the flow represents the time between the start and the completion of work. Based on the network flow, a MILP model is given. An example illustrates the efficacy of the method.
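
As a toy illustration of the kind of MILP that arises (a minimal set-covering style sketch with hypothetical data, not the multicommodity flow model of the paper):

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum

    # Hypothetical data: 8 time periods, a demand per period, candidate shift
    # patterns (consecutive periods, respecting min/max working time), and the
    # allowed plus-minus deviation from the demand.
    demand = [2, 3, 5, 5, 4, 5, 3, 2]
    shifts = [range(0, 4), range(2, 6), range(4, 8)]
    tol = 1

    x = [LpVariable(f"workers_shift_{k}", lowBound=0, cat="Integer")
         for k in range(len(shifts))]
    model = LpProblem("hr_allocation", LpMinimize)
    model += lpSum(x)  # minimise the total number of workers used
    for p, d in enumerate(demand):
        cover = lpSum(x[k] for k, s in enumerate(shifts) if p in s)
        model += cover >= d - tol  # demand met up to the tolerance
        model += cover <= d + tol
    model.solve()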

Optimal filling in sequences for incomplete pairwise comparison matrices

Zsombor Szádoczki

Research Laboratory on Engineering & Management Intelligence, Institute for Computer Science and Control (SZTAKI), Eötvös Loránd Research Network (ELKH), Budapest, Hungary; Department of Operations Research and Actuarial Sciences, Corvinus University of Budapest, Hungary

Sándor Bozóki

Research Laboratory on Engineering & Management Intelligence, Institute for Computer Science and Control (SZTAKI), Eötvös Loránd Research Network (ELKH), Budapest, Hungary; Department of Operations Research and Actuarial Sciences, Corvinus University of Budapest, Hungary

Keywords: pairwise comparison, incomplete pairwise comparison matrix, graph of comparisons, filling in sequence, GRAPH of graphs

Pairwise comparisons form the basis of preference modelling and decision making. It is important from both theoretical and practical points of view to determine the number of questions to ask the decision maker and their arrangement. We focus on incomplete pairwise comparison matrices, and provide the optimal filling-in patterns, which result in the weight vectors closest (in the LLSM sense) on average to the complete case, for at most six alternatives and for all possible numbers of comparisons, when the underlying representing graph is connected. These results are obtained by extensive numerical simulations with large sample sizes. Many of the optimal filling structures result in optimal filling-in sequences, in which each optimal case can be reached from the previous one by adding a single comparison; these are presented on the GRAPH of graphs for the different numbers of alternatives. The star graph is revealed to be optimal among the cases with the same cardinality (spanning trees), while the optimal graphs are always close to bipartite ones. Regular graphs also correspond to optimal cases for the examined parameters. Regularity is important for all optimal graphs, as the degrees of the different vertices are always as close to each other as possible. Besides applying the optimal filling structures in given decision making problems, practitioners can utilize the optimal filling sequences in cases when the decision maker may abandon the process at any point. The effect of an additional comparison is also examined for the optimal cases, which can serve as a guide to determine the minimal sufficient number of comparisons in a given problem.
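
For reference, the logarithmic least squares (LLSM) weights of an incomplete pairwise comparison matrix A, with known entries on the edge set E of the representing graph, solve

\[
\min_{w > 0} \sum_{(i,j) \in E} \big( \ln a_{ij} - \ln w_i + \ln w_j \big)^2,
\]

which has a unique solution (up to scaling) exactly when the representing graph is connected.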

Conference Registration