QBFEVAL’20 is the 2020 competitive evaluation of QBF solvers and the fifteenth event in a series aimed at assessing the performance of QBF solvers. QBFEVAL’20 awards solvers that stand out as particularly effective on specific categories of QBF instances.
Registration is open; for more information about the call, please visit the following website: http://www.qbflib.org/qbfeval20.php
The 2020 MaxSAT Evaluation (MSE 2020) is the 14th edition of the MaxSAT Evaluations, the primary competition-style event for evaluating MaxSAT solvers, organized yearly since 2006. The main goals of MaxSAT Evaluation 2020 are:
- to assess the state of the art in the field of MaxSAT solvers,
- to collect and re-distribute a heterogeneous MaxSAT benchmark set for further scientific evaluations, and
- to promote MaxSAT as a viable option for solving instances of a wide range of NP-hard optimization problems.
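To make the underlying problem concrete: given a set of (possibly weighted) clauses, MaxSAT asks for an assignment maximizing the total weight of satisfied clauses. The following toy brute-force sketch is illustrative only, nowhere near competition-grade; the clause encoding and example formula are assumptions for the sake of the demo.

```python
from itertools import product

def max_sat(num_vars, weighted_clauses):
    """Return the maximum total weight of simultaneously satisfiable
    clauses, found by exhaustive enumeration (exponential in num_vars).

    weighted_clauses: list of (weight, clause) pairs, where a clause is
    a list of DIMACS-style non-zero integers: literal v means variable
    v is true, -v means it is false.
    """
    best = 0
    for bits in product([False, True], repeat=num_vars):
        # Sum the weights of clauses with at least one satisfied literal.
        weight = sum(
            w for w, clause in weighted_clauses
            if any(bits[abs(l) - 1] == (l > 0) for l in clause)
        )
        best = max(best, weight)
    return best

# Conflicting unit clauses x1 and not-x1 cannot both hold, so the
# optimum keeps the heavier one (weight 3) plus (x1 or x2) (weight 1).
print(max_sat(2, [(3, [1]), (2, [-1]), (1, [1, 2])]))  # → 4
```

Real MaxSAT solvers avoid this exhaustive search entirely, typically iterating calls to a SAT solver over relaxations of the formula.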
The 1st International Competition on Model Counting (MC 2020) is a competition that aims to deepen the relationship between the latest theoretical and practical developments in model counting and their practical applications. It targets the problem of counting the number of models (satisfying assignments) of a Boolean formula. MC 2020 aims to identify new challenging benchmarks, to promote new solvers for the problem, and to compare them with state-of-the-art solvers.
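The core task of model counting can be illustrated with a minimal brute-force counter; this is a sketch for intuition only (competition solvers use far more sophisticated techniques such as component caching and knowledge compilation), and the example formula is made up, not an MC 2020 benchmark.

```python
from itertools import product

def count_models(num_vars, clauses):
    """Count the assignments satisfying a CNF formula by enumerating
    all 2^num_vars assignments.

    Clauses are lists of DIMACS-style non-zero integers: literal v
    means variable v is true, -v means it is false.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        # The assignment is a model if every clause contains
        # at least one satisfied literal.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3): 4 of the 8 assignments are models.
print(count_models(3, [[1, 2], [-1, 3]]))  # → 4
```

Exact counters must do better than this exponential enumeration, which is what makes the problem (complete for the class #P) a compelling competition target.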
The 2020 SAT Competition is a competitive event for solvers of the Boolean Satisfiability (SAT) problem. It is organized as a satellite event to the 23rd International Conference on Theory and Applications of Satisfiability Testing and stands in the tradition of the yearly SAT Competitions and SAT-Races/Challenges. New this year is a cloud track, sponsored by AWS, which evaluates distributed solvers on 100 machines with 16 cores each on a suite of hard benchmarks. Another novelty is the planning track, with 200 benchmarks originating from automated planning.