The Configurable SAT Solver Challenge (CSSC) 2014 is a competitive event that assesses the peak performance of solvers for the Boolean satisfiability (SAT) problem. A broad range of SAT solvers expose parameters that enable automated customization to different instance distributions, and such customization often yields large improvements over the solver defaults. The competition therefore recognizes that the value of a SAT solver often comes from its customizability rather than just its performance in a default configuration.
Flexibility in a solver is most useful if it can be exploited automatically to customize the solver to a given application domain (defined by a set of training benchmark instances). This year's competition follows last year's inaugural event, CSSC 2013.
News
- July 24: results, log files, solvers, incumbents and instance sets are online
- May 27: list of participants and benchmarks is online
- May 15: by popular demand we extended the submission deadline to May 25
- April 28: final website published
- February 6: initial website published
Solver developers will submit their solvers as usual in the SAT Competition, but they will also submit a file listing solver parameters and their possible values and ensure that these parameters can be set in a command line call to their solver. (Please see "Solver Submission".) On each of several sets of homogeneous instances, the organizers will then apply an automated algorithm configuration procedure to identify good parameter settings and score solvers by their performance with these optimized settings. Solver developers who submit an entry to the competition will be given feedback on which parameters turned out to be important empirically, and which values yielded high performance.
In line with the emphasis of the SAT Competition 2014, this year's competition focuses on core solvers. Proprietary solvers are eligible for submission, but prizes will only be awarded to solvers whose developers agree to make source code (and parameter specification / solver wrapper) available on the competition website. Solvers will be ranked as in the SAT Competition 2014, with the important difference that CSSC 2014 will measure peak performance (i.e., performance with optimized parameters) rather than performance of the solver defaults.
Similar to previous SAT competitions, there will be four categories:
- Industrial SAT+UNSAT
- Crafted SAT+UNSAT
- Random SAT+UNSAT
- Random SAT (new this year)
At submission time, you will have to specify which categories you want to compete in. Each category will include multiple sub-categories of benchmark instances, where each such sub-category is relatively homogeneous (e.g., instances created with the same generator) and is split into a training and a test set. For each sub-category, submitted solvers will be automatically configured on the training set portion and the best configuration found will then be evaluated on the test set portion. The performances on the respective test sets (which are of equal size across sub-categories) will then be merged and used to score solvers just like in the SAT Competition. For simplicity, in this event, parallel solvers will not be supported.
In each category, gold, silver, and bronze medals will be awarded to the open-source solvers whose customized versions perform best, provided that tracks receive enough entries. (Specifically, a gold medal will be awarded only if there are at least three solvers participating in a track; for silver and bronze, respectively, four and five solvers will need to participate.) A solver will be disqualified for a track if the final configuration determined by algorithm configuration returns a wrong solution. Closed-source solvers are welcome to participate, but are not eligible for prizes.
Important Dates (all 2014; coordinated to come just after the SAT Competition 2014)
- 28 April : Solver registration and testing period opens
- 28 April : Benchmark and generator submission opens
- 25 May : Solver registration and testing period closes
- 25 May : Benchmark and generator submission closes
- 26 May - 30 June : Execution of the competition
- 14-17 July : Announcement of results at the SAT 2014 Conference
Automated Algorithm Configuration
To obtain results representative of the peak performance achievable by means of automated algorithm configuration, we will use a combination of several state-of-the-art algorithm configuration methods. Specifically, we plan to perform at least 4 runs of each of the configuration procedures ParamILS and SMAC, and one run of GGA using 4 processor cores (GGA is inherently parallel). We will use the best configuration found by any of these runs (best with respect to training performance; test runs will only be performed for that one configuration). The time budget for each individual configuration run will be 48 wall clock hours.
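Selecting among the independent configurator runs amounts to an argmin over training cost. A minimal sketch (the configurations and costs shown are purely illustrative):

```python
def pick_incumbent(runs):
    """Given (configuration, mean training cost) pairs from independent
    configurator runs, return the configuration with the lowest cost."""
    return min(runs, key=lambda rc: rc[1])[0]

# Hypothetical results of three configuration runs:
runs = [({"restart": 100}, 41.2),   # a ParamILS run
        ({"restart": 500}, 37.8),   # a SMAC run
        ({"restart": 50},  44.0)]   # a GGA run
best = pick_incumbent(runs)
print(best)  # the configuration with training cost 37.8
```

Only this single incumbent is then evaluated on the held-out test set.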
Configurators operate by running a solver hundreds or thousands of times, on different instances and with different assignments to its parameters. Thus, next to source code (or a binary), solver designers need to provide a specification of their solver's parameters and their possible values. Since every solver has its own format for specifying parameter values on the command line (and since we don't want to force algorithm designers to support a format we define), we also require a wrapper method for translating an assignment of parameter values into a command line call.
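Such a wrapper is usually only a few lines long. The sketch below shows the general idea in Python; the flag syntax (`--name=value`) and all parameter names are hypothetical, and each solver substitutes its own conventions:

```python
import shlex

def build_command(solver_binary, instance, seed, params):
    """Translate a parameter assignment (as produced by a configurator)
    into a command-line call for a particular solver.

    `params` maps parameter names, as declared in the .pcs file, to the
    values the configurator chose for this run.
    """
    args = [solver_binary, instance, "--seed={}".format(seed)]
    for name, value in sorted(params.items()):  # deterministic order
        args.append("--{}={}".format(name, value))
    return args

cmd = build_command("./mysolver", "bench/uf250-01.cnf", 42,
                    {"restart-interval": 100, "phase-saving": "true"})
print(" ".join(shlex.quote(a) for a in cmd))
# → ./mysolver bench/uf250-01.cnf --seed=42 --phase-saving=true --restart-interval=100
```

The wrapper also typically parses the solver's output and reports status (SAT/UNSAT/TIMEOUT) and runtime back to the configurator; the README in the test environment describes the exact interface.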
We provide a test environment that mirrors the environment we will use in the competition. This test environment contains several examples of parameterized solvers, each of them in a subdirectory of the solvers/ directory. Your submission will simply be another subdirectory of solvers/ following the interface described in the README.txt. That README also describes how to define the parameter file for your solver (using the .pcs format ParamILS, SMAC, and GGA support), how to define your solver wrapper, and how to test and debug your wrapper and your parameterized solver. Once you can run the example SMAC runs below with your solver, please simply zip up your solver subdirectory and submit it (along with directions for building your solver from source) via e-mail to firstname.lastname@example.org. (If the file is too large for an attachment, you can also email us a link to it instead.) After the competition, we will make all submissions publicly available on this website; for this purpose, please provide a reference (e.g., bibtex entry) for your solver at submission time.
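To give a flavor of the .pcs format, here is a short example with purely illustrative parameter names; the syntax (categorical values in braces, numeric ranges in brackets, a trailing `i` marking integer parameters, and `|` introducing conditionals) follows the format supported by ParamILS, SMAC, and GGA, as documented in the README:

```
# categorical parameter: possible values in braces, default in brackets
restart_strategy {luby, geometric, fixed} [luby]
# integer parameter: range in brackets, trailing 'i' marks it as integral
restart_interval [10, 10000] [100]i
# conditional: restart_interval is only active for fixed restarts
restart_interval | restart_strategy in {fixed}
```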
Apart from your solver archive, please also include the following information in your email:
- name(s) of author(s) and affiliation(s)
- contact email address
- name of your solver
- indication of which competition categories you want to participate in
- literature reference as bibtex entry (if possible)
Contribution of Benchmarks
We invite you to contribute benchmarks. We are particularly interested in benchmark generators with settings that produce instances of which at least 20-40% can be solved within ~300 CPU seconds on a recent machine by the default configuration of a SAT solver that would perform reasonably well in the SAT competition. Benchmark generators and sets should come with ground truth (satisfiability status), where possible. Pre-made sets of benchmark instances should contain at least 150 (preferably 500 to 2000) CNF formulae. Please contact us by email (see below) to submit benchmarks.
Execution Environment and Resources
For each solver run, we will use a single core on the META cluster in Freiburg, using Olivier Roussel's runsolver tool to limit the memory of each solver run to 3GB and its time to 300 seconds.
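For concreteness, these limits can be expressed as a runsolver invocation; the sketch below builds such a command line in Python. The flag spellings (`-C` for the CPU-time limit in seconds, `-M` for the memory limit in MiB, `-w`/`-o` for watcher and solver output files) reflect runsolver's usual interface, but you should verify them against `runsolver --help` on your installation:

```python
def runsolver_command(solver_cmd, cpu_limit_s=300, mem_limit_mib=3072,
                      watcher="watcher.out", solver_out="solver.out"):
    """Wrap a solver command line in a resource-limited runsolver call.

    -C limits CPU time (seconds), -M limits memory (MiB); the watcher
    file records resource usage, the solver file captures solver output.
    """
    return (["runsolver", "-C", str(cpu_limit_s), "-M", str(mem_limit_mib),
             "-w", watcher, "-o", solver_out] + list(solver_cmd))

cmd = runsolver_command(["./mysolver", "instance.cnf"])
print(" ".join(cmd))
```

The resulting list can be passed directly to, e.g., Python's `subprocess.call`.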
The cluster nodes have the following specification:
- Operating System: Ubuntu 14.04 LTS; 64bit
- Processor: 2 × eight-core Intel Xeon E5-2650v2, 2.60 GHz
- Memory: 64 GB per node
- Cache: 20 MB shared L3 per 8-core CPU
- Compilers: GCC 4.6.3, GCC 4.8.2, javac
- Interpreters: perl 5.14.2, python 2.7.6, ruby 1.9.1
- Libraries: Although most standard libraries are available, some less frequently used ones (for example, readline and termcap) are not available by default.
Organizers
- Frank Hutter (Freiburg University; Co-Chair)
- Marius Lindauer (Freiburg University; Co-Chair)
- Sam Bayless (University of British Columbia)
- Holger Hoos (University of British Columbia)
- Kevin Leyton-Brown (University of British Columbia)
Please send submissions, as well as any questions, concerns or comments to email@example.com.