
I have moved to the Department of Computer Science (DIKU) at the University of Copenhagen and also have a part-time affiliation with the Department of Computer Science at Lund University. For now, my webpages still reside at KTH, and are being updated here, but this should change during 2021.

MIAO Video Seminars

The MIAO video seminars are what we call the mostly informal series of seminars arranged by the Mathematical Insights into Algorithms for Optimization (MIAO) research group at the University of Copenhagen and Lund University.

These are typically fairly low-key, informal events, but all public seminars are announced beforehand on our seminar mailing lists (there are two lists, one for more theoretically oriented and one for more applied seminars). If you would like to receive the announcements, just let us know by sending an e-mail message to jakob.nordstrom@cs.lth.se and we will add you to the relevant list(s).

In response to multiple requests, and as an experiment, we have since the spring of 2021 been posting most seminars on the MIAO Research YouTube channel.

Upcoming video seminars

During the spring semester of 2021, online seminars run every weekday at 17:30 CET from early February to mid-May as part of the Satisfiability: Theory, Practice, and Beyond and Theoretical Foundations of Computer Systems programs at the Simons Institute for the Theory of Computing, which we will be enjoying (and partly organizing, in the form of the Beyond Satisfiability workshop/seminar series on Tuesdays). To attend these seminars, go to the Simons program webpages above, scroll down to the links for the "workshops", click on the workshops of interest, and then click on the "Register" button for each workshop. In addition, we also have some seminars of our own, as listed below.

  • Monday May 10 at 14:00
    On the complexity of branch and cut
    (Noah Fleming, University of Toronto)

    The Stabbing Planes proof system was introduced to model practical branch-and-cut integer programming solvers. As a proof system, Stabbing Planes can be viewed as a simple generalization of DPLL to reason about linear inequalities. It is powerful enough to simulate Cutting Planes and to produce short refutations of the Tseitin formulas (certain unsatisfiable systems of linear equations mod 2), which are canonical hard examples for many algebraic proof systems. In a surprising recent result, Dadush and Tiwari showed that these short Stabbing Planes refutations of the Tseitin formulas could be translated into Cutting Planes proofs. This raises the question of whether all Stabbing Planes proofs can be efficiently translated into Cutting Planes. In recent work, we give a partial answer to this question.

    In this talk I will introduce and motivate the Stabbing Planes proof system. I will then show how Stabbing Planes proofs with quasi-polynomially bounded coefficients can be quasi-polynomially translated into Cutting Planes. As a consequence of this translation, we can show that Cutting Planes has quasi-polynomial size refutations of any unsatisfiable system of linear equations over a finite field. A remarkable property of our translation for systems of linear equations over finite fields, and the translation of Dadush and Tiwari for the Tseitin formulas, is that the resulting proofs are also quasi-polynomially deep. A natural question is thus whether these depth bounds can be improved. In the last part of the talk, I will discuss progress towards answering this question in the form of a new depth lower bound technique for Cutting Planes, which works also for the stronger semantic Cutting Planes system, and which allows us to establish the first linear lower bounds on the depth of semantic Cutting Planes refutations of the Tseitin formulas.
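
    To make the objects in the abstract concrete, here is a small illustrative example (added for context, not taken from the abstract): the Tseitin formula of the triangle graph with charge 1 on every vertex has one variable per edge, x_12, x_13, x_23, and one parity constraint per vertex,

        x_12 + x_13 ≡ 1 (mod 2),   x_12 + x_23 ≡ 1 (mod 2),   x_13 + x_23 ≡ 1 (mod 2).

    Summing all three constraints gives 0 ≡ 1 (mod 2), so the formula is unsatisfiable; more generally, the Tseitin formula of a connected graph is unsatisfiable exactly when the sum of the vertex charges is odd.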

  • Monday May 17 at 14:00
    Average-case perfect matching lower bounds from hardness of Tseitin formulas
    (Kilian Risse, KTH Royal Institute of Technology)

    We study the complexity of proving that a sparse random regular graph on an odd number of vertices does not have a perfect matching, and related problems involving each vertex being matched some pre-specified number of times. We show that this requires proofs of degree Ω(n / log n) in the Polynomial Calculus (over fields of characteristic ≠ 2) and Sum-of-Squares proof systems, and exponential size in the bounded-depth Frege proof system. This resolves a question by Razborov asking whether the Lovász-Schrijver proof system requires n^δ rounds to refute these formulas for some δ > 0. The results are obtained by a worst-case to average-case reduction of these formulas relying on a topological embedding theorem which may be of independent interest.

    Joint work with Per Austrin.

  • Friday May 28 at 14:00
    TBA
    (Tuomas Hakoniemi, Universitat Politècnica de Catalunya)

  • Friday Jun 4 at 14:00
    TBA
    (Jeremias Berg, University of Helsinki)

  • Monday Jun 7 at 14:00
    Proof complexity meets finite model theory
    (Joanna Ochremiak, LaBRI, Université de Bordeaux and CNRS)

    Finite model theory studies the power of logics on the class of finite structures. One of its goals is to characterize symmetric computation, that is, computation that abstracts away details which are not essential for the given task, by respecting the symmetries of the input.

    In this talk I will discuss connections between proof complexity and finite model theory, focussing on lower bounds. For certain proof systems, the existence of a succinct refutation can be decided in a symmetry-preserving way. This allows us to transfer lower bounds from finite model theory to proof complexity. I will introduce this approach and explain its key technical ideas such as the method of folding for dealing with symmetries in linear programs.

Video seminars spring 2021

  • Monday May 3 at 14:00
    SAT-encodings and scalability (video)
    (André Schidler, Technische Universität Wien)

    Boolean satisfiability (SAT) solvers have reached stunning performance over the last decade. This performance can be harnessed for other problems by means of SAT encodings: translating the problem instance into a SAT instance. SAT encodings have been used successfully for many combinatorial problems and are established in industry. In this talk, I discuss the general idea behind SAT encodings and two specific application areas: graph decompositions and machine learning.

    Graph decompositions allow for specialized algorithms that can solve hard problems efficiently. SAT encodings not only provide the means to compute these decompositions optimally, but can also easily be extended with extra constraints specific to the hard problem to be solved.

    For machine learning, SAT encodings gained increased attention in recent years, as they allow the induction of very small AI models. Learning small models has become more important in the context of explainable AI: smaller models are usually easier to understand for humans. We will focus specifically on decision tree induction.

    Scalability is one of the main issues when using SAT encodings. We can overcome this problem by embedding SAT-based solving into a heuristic framework. This allows us to trade a slightly worse result for much broader applicability. In the last part of the talk, we discuss our SAT-based local improvement framework that implements this idea.
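
    To give a flavour of what such an encoding looks like in practice, here is a minimal, self-contained sketch (an illustration added for context, not code from the talk) that encodes graph k-colouring as CNF clauses in the usual DIMACS convention:

        # Illustrative SAT encoding: graph k-colouring as CNF.
        # Variable var(v, c) is true iff vertex v receives colour c.
        from itertools import combinations

        def encode_coloring(n_vertices, edges, k):
            """Return CNF clauses (lists of signed integers) for k-colouring."""
            var = lambda v, c: v * k + c + 1                      # map (vertex, colour) to a DIMACS variable
            clauses = []
            for v in range(n_vertices):
                clauses.append([var(v, c) for c in range(k)])     # each vertex gets at least one colour
                for c1, c2 in combinations(range(k), 2):
                    clauses.append([-var(v, c1), -var(v, c2)])    # ... and at most one colour
            for u, v in edges:
                for c in range(k):
                    clauses.append([-var(u, c), -var(v, c)])      # adjacent vertices get different colours
            return clauses

        # Example: a triangle is not 2-colourable, so this CNF instance is unsatisfiable.
        clauses = encode_coloring(3, [(0, 1), (1, 2), (0, 2)], 2)

    Any off-the-shelf SAT solver can then be run on the resulting clauses, and extra problem-specific constraints can be added simply by appending further clauses.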

  • Wednesday Apr 28 at 15:00
    Slicing the hypercube is not easy (video)
    (Amir Yehudayoff, Technion – Israel Institute of Technology)

    How many hyperplanes are needed to slice all edges of the hypercube? This question has been studied in machine learning, geometry and computational complexity since the 1970s. We shall describe (most of) an argument showing that more than n^0.57 hyperplanes are needed, for large n. We shall also see a couple of applications. This is joint work with Gal Yehuda.

  • Monday Apr 26 at 14:00
    Recent lower bounds in algebraic complexity theory (video)
    (Ben Lee Volk, University of Texas at Austin)

    Algebraic complexity theory studies the complexity of solving algebraic computational tasks using algebraic models of computation. One major problem in this area is to prove lower bounds on the number of arithmetic operations required for computing explicit polynomials. This natural mathematical problem is the algebraic analog of the famous P vs. NP problem. It also has tight connections to other classical mathematical areas and to fundamental questions in complexity theory.

    In this talk I will provide background and then present some recent progress on proving lower bounds for models of algebraic computation, such as the algebraic analogs of NC^1 and NL.

    Based on joint works with Prerona Chatterjee, Mrinal Kumar, and Adrian She.

  • Friday Apr 23 at 14:00
    On the complexity of branching proofs (video)
    (Daniel Dadush, CWI)

    We consider the task of proving integer infeasibility of a bounded convex set K in R^n using a general branching proof system. In a general branching proof, one constructs a branching tree by adding an integer disjunction ax ⩽ b or ax ⩾ b+1, for an integer vector a and an integer b, at each node, such that the leaves of the tree correspond to empty sets (i.e., K together with the inequalities picked up from the root to the leaf is empty). Recently, Beame et al. (ITCS 2018) asked whether the bit size of the coefficients in a branching proof, which they named stabbing planes (SP) refutations, for the case of polytopes derived from SAT formulas, can be assumed to be polynomial in n. We resolve this question by showing that any branching proof can be recompiled so that the integer disjunctions have coefficients of size at most (nR)^{O(n^2)}, where R is the radius of an ℓ_1 ball containing K, while increasing the number of nodes in the branching tree by at most a factor O(n). As our second contribution, we show that Tseitin formulas, an important class of infeasible SAT instances, have quasi-polynomial sized cutting plane (CP) refutations, disproving the conjecture that Tseitin formulas are (exponentially) hard for CP. As our final contribution, we give a simple family of polytopes in [0,1]^n requiring branching proofs of length 2^n/n.

    Joint work with Samarth Tiwari.

  • Monday Apr 19 at 14:00
    Kidney exchange programmes — saving lives with optimisation algorithms (video)
    (William Pettersson, University of Glasgow)

    Kidney Exchange Programmes (KEPs) increase the rate of living donor kidney transplantation, in turn saving lives. In this talk, I will explain exactly what KEPs are, how KEPs achieve this goal, and the role of optimisation algorithms within KEPs. This will include brief explanations of some of the models used for kidney exchange programmes and some of the more generic optimisation techniques that we use, as well as recent advances and specific research directions for the field going forward. This talk will very much be an overview of kidney exchange, without going into complex details of optimisation algorithms, making it suitable for a wider audience.

  • Thursday Apr 8 at 14:00
    (Semi)Algebraic proofs over {±1} variables (video)
    (Dmitry Sokolov, St. Petersburg State University)

    One of the major open problems in proof complexity is to prove lower bounds on AC^0[p]-Frege proof systems. As a step toward this goal, Impagliazzo, Mouli and Pitassi in a recent paper suggested proving lower bounds on the size of Polynomial Calculus refutations over the {±1} basis. In this talk we show a technique for proving such lower bounds, and moreover we also give lower bounds on the size for Sum-of-Squares over the {±1} basis.

    We also discuss the difference between the {0, 1} and {±1} cases and the problems arising in further generalizations of the lower bounds.
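
    For readers wondering how the two encodings relate, an illustrative remark (added for context, not part of the abstract): a {0, 1}-valued variable x and a {±1}-valued variable y can be interchanged via the affine substitution y = 1 - 2x, i.e., x = (1 - y)/2, so the same Boolean function can be written as a polynomial over either basis; the difference discussed in the talk concerns how proof systems behave over the two bases despite this simple translation.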

  • Courtesy of Zuse Institute Berlin:
    Friday Feb 12 at 14:00
    Introduction to IP presolving techniques in PaPILO
    (Alexander Hoen, Zuse Institute Berlin)

    Presolving is an essential component contributing to the performance of modern MIP solvers. PaPILO, a new C++ library, provides presolving routines for MIP and LP problems. This talk will give an introduction to basic IP presolving techniques and provide insights into important design choices enabling PaPILO's capabilities, in particular regarding its use of parallel hardware. While presolving itself is designed to be fast, this can come at the cost of failing to find important reductions due to working limits and heuristic filtering. Yet, even most commercial solvers do not use multi-threading for the preprocessing step as of today. PaPILO's design facilitates the use of parallel hardware to allow for more aggressive presolving and for presolving of huge problems. The architecture of PaPILO generally allows presolvers to run in parallel without requiring expensive copies of the problem and without special synchronization in the presolvers themselves. Additionally, the use of Intel's TBB library helps PaPILO to efficiently exploit recursive parallelism within expensive presolving routines, such as probing, dominated columns, or constraint sparsification. Despite PaPILO's use of parallelization, its results are guaranteed to be deterministic, independent of the number of threads available.
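
    To make the flavour of such reductions concrete, a textbook-style example (added for illustration, not taken from the talk): in a constraint such as 3x_1 + 2x_2 + x_3 ⩾ 5 over binary variables, setting x_1 = 0 leaves at most 2 + 1 = 3 < 5 on the left-hand side, so a presolver can immediately fix x_1 = 1 and simplify the constraint to 2x_2 + x_3 ⩾ 2, which in turn forces x_2 = 1. Chains of cheap deductions like this, applied across millions of constraints, are what presolving libraries such as PaPILO automate.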

Video seminars autumn 2020

  • Friday Oct 9 at 13:15
    Model counting with probabilistic component caching
    (Shubham Sharma and Kuldeep S. Meel, National University of Singapore)

    Given a Boolean formula F, the problem of model counting, also referred to as #SAT, seeks to compute the number of solutions of F. Model counting is a fundamental problem with a wide variety of applications ranging from planning and quantified information flow to probabilistic reasoning and the like. Modern #SAT solvers tend to be based on either static decomposition, dynamic decomposition, or a hybrid of the two. Despite dynamic decomposition-based #SAT solvers sharing much of their architecture with SAT solvers, their core design and heuristics have remained essentially unchanged for over a decade. In this paper, we revisit the architecture of the state-of-the-art dynamic decomposition-based #SAT tool, sharpSAT, and demonstrate that introducing a new notion of probabilistic component caching, using universal hashing for exact model counting, and developing several new heuristics can lead to significant performance improvements over state-of-the-art model counters. In particular, we develop GANAK, a new scalable probabilistic exact model counter that outperforms state-of-the-art exact and approximate model counters for a wide variety of instances.
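
    The following naive sketch (an illustration added for context, not the tool's code) shows the dynamic-decomposition idea: split the formula into clause groups that share no variables, count each group independently, and multiply. Real counters additionally cache the count of every component they solve; roughly speaking, GANAK's probabilistic component caching stores hashes of components instead of the components themselves, saving memory at the cost of a small, controllable error probability.

        # A CNF formula is a list of clauses; a clause is a list of signed integers.
        def assign(clauses, lit):
            """Set literal lit to true: drop satisfied clauses, shrink the rest."""
            return [[l for l in c if l != -lit] for c in clauses if lit not in c]

        def split_components(clauses):
            """Group clauses into components that share no variables."""
            comps = []
            for c in clauses:
                merged_vars, merged_clauses = {abs(l) for l in c}, [c]
                remaining = []
                for g_vars, g_clauses in comps:
                    if g_vars & merged_vars:
                        merged_vars |= g_vars
                        merged_clauses += g_clauses
                    else:
                        remaining.append((g_vars, g_clauses))
                comps = remaining + [(merged_vars, merged_clauses)]
            return comps

        def count(clauses, free_vars):
            """Number of assignments to free_vars that satisfy all clauses."""
            if any(len(c) == 0 for c in clauses):
                return 0
            mentioned = {abs(l) for c in clauses for l in c}
            total = 2 ** len(free_vars - mentioned)        # unconstrained variables are free
            for comp_vars, comp_clauses in split_components(clauses):
                v = next(iter(comp_vars))                  # branch on some variable of the component
                total *= (count(assign(comp_clauses, v), comp_vars - {v}) +
                          count(assign(comp_clauses, -v), comp_vars - {v}))
            return total

        # Example: (x1 or x2) and (x3 or x4) splits into two independent components,
        # each with 3 satisfying assignments, so the model count is 3 * 3 = 9.
        print(count([[1, 2], [3, 4]], {1, 2, 3, 4}))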

  • Friday Sep 25 at 13:15
    Manthan: A data-driven approach for Boolean function synthesis
    (Priyanka Golia and Kuldeep S. Meel, National University of Singapore)

    Boolean functional synthesis is a fundamental problem in computer science with wide-ranging applications and has witnessed a surge of interest resulting in progressively improved techniques over the past decade. Despite intense algorithmic development, a large number of problems remain beyond the reach of the current state-of-the-art techniques. Motivated by the progress in machine learning, we propose Manthan, a novel data-driven approach to Boolean functional synthesis. Manthan views functional synthesis as a classification problem, relying on advances in constrained sampling for data generation, and on advances in automated reasoning for a novel proof-guided refinement and provable verification. In an extensive and rigorous evaluation over 609 benchmarks, we demonstrate that Manthan significantly improves upon the current state of the art, solving 356 benchmarks compared to 280, which is the most solved by any state-of-the-art technique; this is an increase of 76 benchmarks over the current state of the art. The significant performance improvements, along with our detailed analysis, highlight several interesting avenues for future work at the intersection of machine learning, constrained sampling, and automated reasoning.

  • Friday Aug 28 at 13:15
    Tuning Sat4j PB solvers for decision problems
    (Romain Wallon, Université d'Artois)

    During the last decades, many improvements in CDCL SAT solvers have made it possible to efficiently solve large problems containing millions of variables and clauses. Despite this practical efficiency, some instances remain out of reach for such solvers. This is particularly true when the input formula requires an exponential-size refutation in the resolution proof system (as is the case for pigeonhole formulae). This observation motivated the development of another kind of solver, known as pseudo-Boolean (PB) solvers. These solvers take as input a conjunction of PB constraints (integer linear inequalities over Boolean variables) and benefit from the cutting planes proof system, which is (in theory) stronger than the resolution proof system. To implement this proof system, current PB solvers follow the direction of modern SAT solvers by implementing a conflict analysis procedure relying on the application of cutting planes rules. However, adapting the CDCL architecture to take into account PB constraints is not as straightforward as it may look. In particular, many CDCL invariants and properties no longer hold once PB constraints are considered. While some of them may be safely ignored (e.g., when they affect the decision heuristic, the deletion strategy or the restart policy), others must be fixed to ensure the soundness of the solver (e.g., by ensuring that the conflict is preserved during its analysis).

    In this talk, we will give an overview of the main differences between PB solving and classical SAT solving, and present different approaches that have been designed to take these differences into account to extend the CDCL architecture to PB problems. We will in particular discuss the pros and cons of these approaches, and we will explain how they can be enhanced to improve the practical performance of PB solvers.
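
    As a small worked example of the cutting planes rules mentioned above (added for illustration, not taken from the abstract): the clauses x ∨ y and ¬x ∨ y become the PB constraints x + y ⩾ 1 and (1 - x) + y ⩾ 1. Adding the two constraints and using x + (1 - x) = 1 gives 2y ⩾ 1, and dividing by 2 while rounding up yields y ⩾ 1, which is exactly the resolvent y. The division (rounding) rule is also what gives cutting planes its extra power over resolution, for instance allowing short refutations of pigeonhole formulae.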

  • Friday Aug 21 at 10:15
    Using proofs to analyze SAT solvers
    (Janne Kokkala, Lund University and University of Copenhagen)

    The main idea of this seminar is to give an overview of what one can ask and what one can learn about SAT solvers when analyzing the proof to estimate what part of the work was useful. I will start by giving a 17-ish minute alpha test run for my presentation for our CP paper [1]. After that, I will discuss earlier works that use the same or a similar idea for general insights [2], analyzing VSIDS usefulness [3], and clause exchange for parallel solvers [4,5]. To conclude, we can discuss how this approach could be used for pseudo-Boolean solvers.

    [1] J. I. Kokkala, J. Nordström. Using Resolution Proofs to Analyse CDCL SAT solvers.
    [2] L. Simon. Post Mortem Analysis of SAT Solver Proofs.
    [3] S. Malik, V. Ying. On the efficiency of the VSIDS decision heuristic. (Workshop presentation.)
    [4] G. Audemard, L. Simon. Lazy clause exchange policy for parallel SAT solvers.
    [5] G. Katsirelos, A. Sabharwal, H. Samulowitz, L. Simon. Resolution and parallelizability: Barriers to the efficient parallelization of SAT solvers.

  • Wednesday Aug 19 at 13:15
    On computational aspects of the antibandwidth problem
    (Markus Sinnl, Johannes Kepler University Linz)

    In this talk, we consider the antibandwidth problem, also known as the dual bandwidth problem, the separation problem, and the maximum differential coloring problem. Given a labeled graph (i.e., a numbering of the vertices of a graph), the antibandwidth of a node is defined as the minimum absolute difference between its label and the labels of its adjacent vertices. The goal in the antibandwidth problem is to find a labeling maximizing the minimum antibandwidth over all nodes. The problem is NP-hard in general graphs and has applications in diverse areas like scheduling, radio frequency assignment, obnoxious facility location and map-coloring.

    In recent years there has been much work on deriving theoretical bounds for the problem and also on the design of metaheuristics. However, the optimality gaps between the best known solution values and the reported upper bounds for the Harwell-Boeing matrix instances, which are the commonly used benchmark instances for this problem, were often very large (e.g., up to 577%).

    We present new mixed-integer programming approaches for the problem, including one approach that does not formulate the problem directly as an optimization problem, but as a series of feasibility problems. We also discuss how these feasibility problems can be encoded with various SAT encodings, including a new and specialised encoding which exploits a certain staircase structure occurring in the problem formulation. We present computational results for all the algorithms, including a comparison of the MIP and SAT approaches. Our approaches allow us to find the proven optimal solution for eight instances from the literature for which the optimum was previously unknown, and also provide reduced gaps for eleven additional instances, including improved solution values for seven instances; the largest remaining optimality gap is now 46%. Instances based on the problem were submitted to the SAT Competition 2020.
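
    To make the definition above concrete, a small sketch (an illustration added for context, not the speaker's code) that computes the antibandwidth of a given labeling:

        # Antibandwidth of a labeling: the minimum, over all vertices, of the smallest
        # absolute label difference between the vertex and any of its neighbours.
        def antibandwidth(labels, edges):
            """labels: dict vertex -> distinct integer label; edges: iterable of (u, v) pairs."""
            best = {v: float("inf") for v in labels}
            for u, v in edges:
                d = abs(labels[u] - labels[v])
                best[u] = min(best[u], d)
                best[v] = min(best[v], d)
            return min(best.values())

        # Example: labeling the path 0-1-2-3 with labels 2, 0, 3, 1 gives antibandwidth 2,
        # which is optimal for this graph; the approaches in the talk search for such labelings.
        print(antibandwidth({0: 2, 1: 0, 2: 3, 3: 1}, [(0, 1), (1, 2), (2, 3)]))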

Video seminars spring 2020

  • Wednesday Jul 15 at 15:00
    Naïve algorithm selection for SAT solving
    (Stephan Gocht, Lund University and University of Copenhagen)

    Although solving propositional formulas is an NP-complete problem, state-of-the-art SAT solvers are able to solve formulas with millions of variables. To obtain good performance it is necessary to configure parameters for heuristic decisions. However, there is no single parameter configuration that is perfect for all formulas, and choosing the parameters is a difficult task. The standard approach is to evaluate different configurations on some formulas and to choose the single configuration that performs best overall. This configuration, which is called the single best solver, is then used to solve new unseen formulas. In this paper we demonstrate how random forests can be used to choose a configuration dynamically based on simple features of the formula. The evaluation shows that our approach is able to outperform the single best solver on formulas that are similar to the training set, but not on formulas from completely new domains.
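
    A minimal sketch of the underlying idea (the feature names and data below are hypothetical placeholders, not the paper's actual setup):

        # Train a random forest to map simple formula features to the parameter
        # configuration that worked best on that formula, then query it for new formulas.
        from sklearn.ensemble import RandomForestClassifier

        # One row per training formula, e.g. [number of variables, number of clauses, clause/variable ratio].
        X_train = [[1.2e6, 5.0e6, 4.2], [3.0e4, 1.1e5, 3.7], [8.0e5, 3.4e6, 4.3]]
        # Label: index of the configuration that solved that formula fastest.
        y_train = [0, 2, 0]

        selector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

        # For an unseen formula, use the predicted configuration instead of the single best solver.
        features_new = [[5.5e5, 2.3e6, 4.1]]
        chosen_configuration = selector.predict(features_new)[0]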

  • Friday Jun 26 at 10:00
    Behind the scenes of chronological CDCL
    (Sibylle Möhle and Armin Biere, Johannes Kepler University Linz)

    Combining conflict-driven clause learning (CDCL) with chronological backtracking is challenging: multiple invariants considered crucial in modern SAT solvers are violated if, after conflict analysis, the solver does not jump back to the assertion level but to a higher decision level instead. In their SAT'18 paper "Chronological Backtracking", Alexander Nadel and Vadim Ryvchin provide a fix for this issue. Moreover, their SAT solver implementing chronological backtracking won the main track of the SAT Competition 2018. In our SAT'19 paper, "Backing Backtracking", we present a formalization and generalization of this method. We prove its correctness and provide an independent implementation.

    In this seminar, we demonstrate how chronological CDCL works by means of an example. In this example, after a conflict the conflicting clause contains only one literal at the conflict level. It is therefore used as a reason for backtracking, thus saving the effort of conflict analysis. We further show which invariants are violated and present new ones, followed by a discussion of the rules of our formal framework. We also shed light on implementation details, including those which are not mentioned in our SAT'19 paper.

  • Monday Jun 22 at 13:30
    McSplit: A partitioning algorithm for maximum common subgraph problems
    (Ciaran McCreesh and James Trimble, University of Glasgow)

    We will give a short introduction to McSplit, an algorithm for the maximum common (connected) subgraph problem coauthored with Patrick Prosser and presented at IJCAI 2017. McSplit resembles a forward-checking constraint programming algorithm, but uses a partitioning data structure to store domains, which greatly reduces memory use and time per search node. We will also present our recent work with Stephan Gocht and Jakob Nordström on adding proof logging to the algorithm, turning McSplit into a certifying algorithm whose outputs can be independently verified.

  • Thursday Jun 18 at 20:30
    A pseudo-Boolean approach to nonlinear verification
    (Vincent Liew, University of Washington)

    We discuss some new experimental results showing the promise of using pseudo-Boolean solvers, rather than SAT solvers, to verify bit-vector problems containing multiplication. We use this approach to efficiently verify the commutativity of a multiplier output bit by output bit. We also give some examples of simple bit-vector inequalities where the pseudo-Boolean approach significantly outperformed SAT solvers and even bit-vector solvers. Finally, we give some of our observations on the strengths and weaknesses of different methods of conflict analysis used by pseudo-Boolean solvers.

  • Friday May 8 at 13:15
    Pseudo-Boolean solvers for answer set programming
    (Bart Bogaerts, Vrije Universiteit Brussel)

    Answer set programming (ASP) is a well-established knowledge representation formalism. Most ASP solvers are based on (extensions of) technology from Boolean satisfiability solving. While these solvers have proven very successful in many practical applications, their strength is limited by their underlying proof system, resolution. In this research, we present a new tool, LP2PB, that translates ASP programs into pseudo-Boolean theories, for which solvers based on the (stronger) cutting planes proof system exist. We evaluate our tool, and the potential of cutting-plane-based solving for ASP, on traditional ASP benchmarks as well as on benchmarks from pseudo-Boolean solving. Our results are mixed: overall, traditional ASP solvers still outperform our translational approach, but several benchmark families are identified where the balance shifts the other way, suggesting that further investigation into stronger proof systems for ASP is valuable.

Published by: Jakob Nordström <jakobn~at-sign~kth~dot~se>
Updated 2021-05-05