Configurable SAT Solver Challenge (CSSC) 2014

CSSC 2014 is a competitive event that assesses the peak performance of solvers for the Boolean satisfiability (SAT) problem. Many SAT solvers expose parameters that enable automated customization for different instance distributions, and such customization often yields large improvements over the solver defaults. The competition therefore recognizes that the value of a SAT solver often lies in its customizability rather than just its performance in a default configuration.

Flexibility in a solver is most useful if it can be exploited automatically to customize the solver to a given application domain (defined by a set of training benchmark instances). This year's competition follows last year's inaugural event, CSSC 2013.

News

Mechanics

Solver developers will submit their solvers as they would for the SAT Competition, but they will also submit a file listing their solver's parameters and possible values, and they will ensure that these parameters can be set in a command line call to the solver. (Please see "Solver Submission".) On each of several sets of homogeneous instances, the organizers will then apply an automated algorithm configuration procedure to identify good parameter settings and will score solvers by their performance with these optimized settings. Solver developers who submit an entry to the competition will receive feedback on which parameters turned out to be empirically important and which values yielded high performance.

In line with the emphasis of the SAT Competition 2014, this year’s competition focuses on core solvers. Proprietary solvers are eligible for submission, but prizes will only be awarded to solvers whose developers agree to make the source code (and parameter specification / solver wrapper) available on the competition website. Solvers will be ranked as in the SAT Competition 2014, with the important difference that CSSC 2014 measures peak performance (i.e., performance with optimized parameters) rather than the performance of the solver defaults.

Categories

As in previous SAT competitions, there will be four categories:

  1. Industrial SAT+UNSAT
  2. Crafted SAT+UNSAT
  3. Random SAT+UNSAT
  4. Random SAT (new this year)

At submission time, you will have to specify which categories you want to compete in. Each category will include multiple sub-categories of benchmark instances, where each such sub-category is relatively homogeneous (e.g., instances created with the same generator) and is split into a training and a test set. For each sub-category, submitted solvers will be automatically configured on the training set portion and the best configuration found will then be evaluated on the test set portion. The performances on the respective test sets (which are of equal size across sub-categories) will then be merged and used to score solvers just like in the SAT Competition. For simplicity, in this event, parallel solvers will not be supported.
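
To make the evaluation protocol concrete, here is a minimal Python sketch of the per-sub-category pipeline described above. The functions configure and evaluate are hypothetical placeholders (filled with random stand-ins so the sketch runs), and scoring is simplified to a solved-instance count; none of this is part of the actual competition tooling.

    import random

    # Hypothetical stand-ins: a real `configure` would run an algorithm
    # configurator on the training instances, and a real `evaluate` would run
    # the solver with the chosen configuration on every test instance.
    def configure(solver, train_set):
        return {"seed": random.randint(0, 1000)}  # placeholder configuration

    def evaluate(solver, config, test_set):
        return [random.random() < 0.5 for _ in test_set]  # True = solved

    def score_solver(solver, sub_categories):
        """Configure per sub-category on the training split, evaluate the
        resulting configuration on the test split, and merge the solved counts."""
        total_solved = 0
        for train_set, test_set in sub_categories:
            best_config = configure(solver, train_set)
            total_solved += sum(evaluate(solver, best_config, test_set))
        return total_solved

    # Two sub-categories with 100 training and 50 test instances each.
    print(score_solver("my_solver", [(range(100), range(50)),
                                     (range(100), range(50))]))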

Prizes

In each category, gold, silver, and bronze medals will be awarded to the open-source solvers whose customized versions perform best, provided that the track receives enough entries. (Specifically, a gold medal will be awarded only if at least three solvers participate in a track; silver and bronze require at least four and five participating solvers, respectively.) A solver will be disqualified in a track if its final, automatically determined configuration returns a wrong solution. Closed-source solvers are welcome to participate, but are not eligible for prizes.

Important Dates (all 2014; coordinated to come just after the SAT Competition 2014)

Automated Algorithm Configuration

To obtain representative results for the peak performance achievable through automated algorithm configuration, we will use a combination of several state-of-the-art algorithm configuration methods. Specifically, we plan to perform at least 4 runs of each of the configuration procedures ParamILS and SMAC, and one run of GGA using 4 processor cores (GGA is inherently parallel). We will use the best configuration found by any of these runs, where "best" refers to training performance; test runs will only be performed for that one configuration. The time budget for each individual configuration run will be 48 wall clock hours.
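
As a small illustration of the selection step, the following Python sketch picks the one configuration that proceeds to the test set from a collection of independent configurator runs. The run data and the training-performance numbers (lower is better, e.g., a penalized average runtime) are invented for illustration.

    # Results of independent configurator runs (invented numbers); "train_perf"
    # is the training-set performance of the best configuration each run found.
    configurator_runs = [
        {"tool": "ParamILS", "config": {"restart": "luby"},      "train_perf": 41.2},
        {"tool": "SMAC",     "config": {"restart": "geometric"}, "train_perf": 38.7},
        {"tool": "GGA",      "config": {"restart": "fixed"},     "train_perf": 44.0},
    ]

    # Only the configuration with the best (lowest) training performance is
    # carried forward to the test set.
    best_run = min(configurator_runs, key=lambda run: run["train_perf"])
    print(best_run["tool"], best_run["config"])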

Solver Submission

Configurators operate by running a solver hundreds or thousands of times, on different instances and with different assignments to its parameters. Thus, in addition to source code (or a binary), solver designers need to provide a specification of their solver's parameters and their possible values. Since every solver has its own format for specifying parameter values on the command line (and since we don't want to force algorithm designers to support a format we define), we also require a wrapper for translating an assignment of parameter values into a command line call.
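
For illustration only, here is a minimal sketch of such a wrapper in Python. It assumes a ParamILS/SMAC-style call convention (instance, instance-specific information, cutoff time, run length, seed, followed by -name value parameter pairs) and a fictional solver binary ./mysolver that takes --name=value flags; the actual interface to implement is the one described in the test environment's README.txt.

    #!/usr/bin/env python3
    # Hypothetical wrapper sketch, assuming the call convention
    #   wrapper.py <instance> <instance-info> <cutoff-time> <run-length> <seed> -param value ...
    # and a fictional solver binary "./mysolver".
    import subprocess
    import sys
    import time

    instance, _info, cutoff, _runlength, seed = sys.argv[1:6]
    params = sys.argv[6:]  # flat list: -name value -name value ...

    # Translate the configurator's parameter assignment into the solver's own
    # command line syntax (assumed here to be "--name=value").
    solver_cmd = ["./mysolver", instance, "--seed=" + seed]
    for name, value in zip(params[::2], params[1::2]):
        solver_cmd.append("--%s=%s" % (name.lstrip("-"), value))

    start = time.time()
    try:
        output = subprocess.run(solver_cmd, capture_output=True, text=True,
                                timeout=float(cutoff)).stdout
    except subprocess.TimeoutExpired:
        output = ""
    runtime = time.time() - start

    # Map the solver's output to the result line that ParamILS/SMAC-style
    # configurators expect.
    if "s UNSATISFIABLE" in output:
        status = "UNSAT"
    elif "s SATISFIABLE" in output:
        status = "SAT"
    else:
        status = "TIMEOUT"
    print("Result for ParamILS: %s, %f, 0, 0, %s" % (status, runtime, seed))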

We provide a test environment that mirrors the environment we will use in the competition. This test environment contains several examples of parameterized solvers, each in a subdirectory of the solvers/ directory. Your submission will simply be another subdirectory of solvers/ following the interface described in the README.txt. That README also describes how to define the parameter file for your solver (using the .pcs format that ParamILS, SMAC, and GGA support), how to define your solver wrapper, and how to test and debug your wrapper and your parameterized solver. Once you can complete the example SMAC runs described below with your solver, please zip up your solver subdirectory and submit it (along with directions for building your solver from source) via email to cssc.organizers@gmail.com. (If the file is too large for an attachment, you can email us a link to it instead.) After the competition, we will make all submissions publicly available on this website; for this purpose, please provide a reference (e.g., a BibTeX entry) for your solver at submission time.
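
For a rough idea of what a parameter specification in the .pcs format looks like, here is a small invented example; the parameter names, ranges, defaults, and the conditional clause are purely hypothetical, and the README.txt in the test environment remains the authoritative description of the format.

    # categorical parameter: name {allowed values} [default]
    restart_strategy {luby, geometric, fixed} [luby]
    # integer parameter: name [min, max] [default], "i" = integer, "l" = log scale
    restart_base [10, 10000] [100]il
    # continuous parameter: name [min, max] [default]
    var_decay [0.8, 0.999] [0.95]
    # conditional: restart_base is only active for these values of restart_strategy
    restart_base | restart_strategy in {luby, geometric}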

Apart from your solver archive, please also include the following information in your email:

Contribution of Benchmarks

We invite you to contribute benchmarks. We are particularly interested in benchmark generators with settings that produce instances of which at least 20-40% can be solved within ~300 CPU seconds on a recent machine by the default configuration of a SAT solver that would perform reasonably well in the SAT competition. Benchmark generators and sets should come with ground truth (satisfiability status), where possible. Pre-made sets of benchmark instances should contain at least 150 (preferably 500 to 2000) CNF formulae. Please contact us by email (see below) to submit benchmarks.

Execution Environment and Resources

For each solver run, we will use a single core of the META cluster in Freiburg, using Olivier Roussel's runsolver tool to limit each run to 3 GB of memory and 300 seconds of runtime.

The cluster nodes have the following specification:

Organizers

Please send submissions, as well as any questions, concerns or comments to cssc.organizers@gmail.com.