*Termination and Complexity Competition 2025*
http://www.termination-portal.org/wiki/Termination_Competition_2025
*Call for Participation*
Since the beginning of the millennium, many research groups have
developed tools for fully automated termination and complexity analysis.
After a tool demonstration at the 2003 Termination Workshop in Valencia,
the community decided to start an annual termination competition to spur
the development of tools and termination techniques.
The termination and complexity competition focuses on automated
termination and complexity analysis for all kinds of programming
paradigms, including categories for term rewriting, imperative
programming, logic programming, and functional programming. In all
categories, we also welcome the participation of tools providing
certifiable proofs. The goal of the termination and complexity
competition is to demonstrate the power of the leading tools in each of
these areas.
The competition will be affiliated with the International Workshop on
Termination (WST 2025, https://www.imn.htwk-leipzig.de/WST2025/). It
will be run on the RWTH Aachen University HPC cluster
<https://help.itc.rwth-aachen.de/service/rhr4fjjutttf/>. The final run
and a presentation of the final results will be live at WST.
We strongly encourage all developers of termination and complexity
analysis tools to participate in the competition. We also welcome the
submission of termination and complexity problems, especially problems
that come from applications.
A category is only run in the competition if there are at least 2
participants and at least 40 examples for this category in the
underlying termination problem database. If no existing category suits
your tool, please contact the organizers; additional categories can be
considered if enough participants are guaranteed.
For further information, we refer to the website of the termination and
complexity competition:
https://termination-portal.org/wiki/Termination_Competition_2025
_Important dates_
* Tool and Benchmark Submission: August 6, 2025
* First Run: August 20, 2025
* Bugfix Deadline: August 27, 2025
* Final Run: September 3/4, 2025
* Presentation of the results at WST: September 4, 2025
Dear all,
Thanks again for participating in CHC-COMP 2025.
While the competition was generally successful, not everything went
perfectly, and the model validation track is still in preparation.
1. Expected Verdicts:
There was disagreement among the solvers on some benchmarks.
Martin Blicha has kindly looked into this and prepared pull requests
that determine the ground truth and include justifications for each
change. If you have a stake in this, please review and comment on the
pull requests so that we can merge them:
https://github.com/chc-comp/chc-comp25-benchmarks/pulls
The community should also agree on how to deal with tools that produce
wrong results. SV-COMP has a scoring scheme with heavy penalties, but
there are other options. At a minimum, wrong results should be marked
prominently in the tables.
2. Missing logfiles
I am aware that the logfiles are currently not linked correctly in the
result table; however, you can access them by specifying the URL
manually, as in
https://www.cip.ifi.lmu.de/~ernstg/chc-comp2025/golem-x64-linux/test/result…
I am not yet sure what causes this, but I will fix the links.
3. Missing results and Tool updates
Due to an oversight, MuCyc was not considered in LIA. My apologies to
the tool authors; it appears the omission was introduced (by me) only
after the final results had been checked. I will add the results from
the existing evaluation, which means the ranking may change.
Moreover, some tool authors have submitted updated tool archives in
response to the competition results. I would therefore like to offer
*all* tool authors the chance to update their archives. In addition,
please consider uploading your tool to Zenodo, in which case we can
link it on the competition website.
4. Tool versions
Some tools currently lack version information or output dummy versions.
You can see this in the benchmark tables and in the absence of version
information on the main CHC-COMP page. It would be great if this could
be fixed, in particular for re-submissions, e.g. by supporting a
"--version" flag or similar.
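As a sketch, such a flag takes only a few lines with Python's argparse;
the tool name "mysolver" and the version string below are placeholders,
not a real tool:

```python
# Minimal sketch of a "--version" flag using argparse.
# "mysolver" and "1.2.3" are placeholders, not a real tool or release.
import argparse

TOOL_VERSION = "1.2.3"

parser = argparse.ArgumentParser(prog="mysolver")
parser.add_argument("--version", action="version",
                    version=f"%(prog)s {TOOL_VERSION}")
parser.add_argument("file", nargs="?", help="input benchmark")

# A normal run parses the remaining arguments as usual;
# "mysolver --version" prints the version string and exits.
args = parser.parse_args(["input.smt2"])
```

The built-in "version" action prints the string and exits immediately,
so it cannot interfere with a normal solver run.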
5. Model validation (yay)!
We really want to have a model validation track ready by HCVS.
The following script takes the benchmark file as its first argument and
reads the solver output from stdin ("sat"/"unsat" followed by a model).
The script outputs an SMT-LIB file with a single assertion and expected
status "unsat"; if that file is indeed unsatisfiable, the model is
confirmed.
https://github.com/chc-comp/chc-comp25-scripts/blob/main/validate-model.py
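For orientation, the core idea behind such a check can be sketched as
follows. This is a hypothetical simplification, not the actual script:
it assumes the model arrives as a plain sequence of (define-fun ...)
commands, and the function names below are made up.

```python
# Hypothetical sketch: combine a CHC benchmark's clauses with the
# solver's model into one SMT-LIB query that should be unsat.
# This is NOT the actual validate-model.py.

def sexprs(text):
    """Split SMT-LIB text into top-level s-expressions, ignoring ; comments."""
    exprs, depth, buf = [], 0, []
    for line in text.splitlines():
        line = line.split(";", 1)[0]
        for ch in line:
            if depth == 0 and ch.isspace():
                continue
            buf.append(ch)
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth == 0:
                    exprs.append("".join(buf))
                    buf = []
        if depth > 0:
            buf.append(" ")  # keep a separator across line breaks
    return exprs

def combine(benchmark, solver_output):
    """Return an SMT-LIB script that is unsat iff every clause of the
    benchmark holds under the model from the solver output."""
    verdict, _, model = solver_output.partition("\n")
    if verdict.strip() != "sat":
        raise ValueError('expected a "sat" verdict followed by a model')
    defines = [e for e in sexprs(model) if e.startswith("(define-fun")]
    clauses = [e[len("(assert"):-1].strip()
               for e in sexprs(benchmark) if e.startswith("(assert")]
    kept = [e for e in sexprs(benchmark)
            if not e.startswith(("(assert", "(declare-fun", "(check-sat",
                                 "(set-", "(get-", "(exit"))]
    body = clauses[0] if len(clauses) == 1 else "(and %s)" % " ".join(clauses)
    lines = ["(set-info :status unsat)"]
    lines += kept      # e.g. sort declarations
    lines += defines   # the model replaces the predicate declarations
    lines += ["(assert (not %s))" % body, "(check-sat)"]
    return "\n".join(lines)
```

The resulting file negates the conjunction of all clauses with the
uninterpreted predicates replaced by their model definitions, so any
off-the-shelf SMT solver answering "unsat" confirms the model.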
Currently, only Eldarica and Golem have indicated that they can produce
models. It would be great if more tools could join! In that case, please
confirm that your tool outputs a model that can be parsed by the script
above (there is already some variation in the output of Z3, Golem, and
Eldarica). We plan to cross-check the models with at least Z3
and CVC5.
6. Proof validation (nay!)
Probably not this year. I don't feel confident making a proposal for a
common format. Martin, Philipp, any thoughts?
Best,
--
Gidon Ernst
Software and Computational Systems Lab, LMU Munich
https://www.sosy-lab.org/people/ernst/
12th Workshop on Horn Clauses for Verification and Synthesis (HCVS)
**Deadline Extension**
22 July 2025, Zagreb, Croatia
Co-located with CAV 2025
https://www.sci.unich.it/hcvs25/
---
* Important Dates *
Paper submission deadline **extended**: 30 May 2025
Paper notification: mid June 2025
Workshop: 22 July 2025
* Scope *
Many Program Verification and Synthesis problems of interest can be
modeled directly using Horn clauses, and many recent advances in the CLP
and CAV communities have centered around efficiently solving problems
presented as Horn clauses.
This series of workshops aims to bring together researchers working in
the communities of Constraint/Logic Programming (e.g., ICLP and CP),
Program Verification (e.g., CAV, TACAS, and VMCAI), and Automated
Deduction (e.g., CADE, IJCAR), on the topic of Horn clause based
analysis, verification, and synthesis.
Horn clauses for verification and synthesis have been advocated by these
communities at different times and from different perspectives, and HCVS
is organized to stimulate interaction and a fruitful exchange and
integration of experiences.
Topics of interest include, but are not limited to, the use of Horn
clauses, constraints, and related formalisms in the following areas:
- Analysis and verification of programs and systems of various kinds
(e.g., imperative, object-oriented, functional, logic, higher-order,
concurrent, transition systems, Petri nets, smart contracts)
- Program synthesis
- Program testing
- Program transformation
- Constraint solving
- Type systems
- Machine learning and automated reasoning
- CHC encoding of analysis and verification problems
- Resource analysis
- Case studies and tools
- Challenging problems
* Submission Guidelines *
We solicit regular papers describing theory and implementation of
Horn-clause based analysis and tool descriptions. We also solicit
extended abstracts describing work-in-progress, as well as presentations
covering previously published results, extended abstracts of doctoral
theses, and overviews of research projects that are of interest to the
workshop. At least one author of each accepted paper will be required to
attend the workshop to present the contribution.
Submissions must be in one of the following formats:
- Extended abstracts (up to 3 pages in EPTCS format), which describe
work in progress or aim to initiate discussions.
- Presentation-only papers, i.e., papers already submitted or presented
at a conference or another workshop. Such papers can be submitted in
any format and will not be included in the workshop post-proceedings.
- Regular papers (up to 12 pages plus bibliography in EPTCS
(http://www.eptcs.org/) format), which should present previously
unpublished work (completed or in progress), including descriptions of
research, tools, and applications.
- Tool papers (up to 4 pages in EPTCS format), including the papers
written by the CHC-COMP participants, which can outline the theoretical
framework, the architecture, the usage, and experiments of the tool.
All submitted papers will be refereed by the program committee and will
be selected for inclusion in accordance with the referee reports. If
enough regular papers are accepted, both regular papers and extended
abstracts will be published electronically. The publication of a paper
is not intended to preclude later publication. Full versions of extended
abstracts, or substantial revisions, may later be published elsewhere.
Papers must be submitted through the EasyChair system using the web
page: https://easychair.org/conferences/?conf=hcvs2025
* Committees *
Program Chairs:
Emanuele De Angelis, IASI-CNR, Rome, Italy
Florian Frohn, RWTH Aachen University, Aachen, Germany
Program Committee:
Nikolaj Bjørner, Microsoft, USA
Martin Blicha, University of Lugano, Switzerland
Konstantin Britikov, University of Lugano, Switzerland
Catherine Dubois, ENSIIE-Samovar, France
Gidon Ernst, Ludwig Maximilian University of Munich, Germany
Zafer Esen, Uppsala University, Sweden
Grigory Fedyukovich, Florida State University, USA
Carsten Fuhs, Birkbeck, University of London, UK
Hossein Hojjat, Tehran Institute for Advanced Studies, Iran
Petra Hozzová, Czech Technical University, Czechia
Lorenz Leutgeb, Max Planck Institute for Informatics, Germany
Pedro Lopez-Garcia, IMDEA Software Institute and Spanish Council for
Scientific Research (CSIC), Spain
Dale Miller, INRIA and LIX/Institut Polytechnique de Paris, France
Jose F. Morales, IMDEA Software Research Institute, Spain
Sabina Rossi, Dipartimento di Informatica, Università Ca' Foscari di
Venezia, Italy
Philipp Rümmer, University of Regensburg, Germany
Jonas Schöpf, University of Innsbruck, Austria
Wim Vanhoof, University of Namur, Belgium
German Vidal, MiST, VRAIN, Universitat Politecnica de Valencia, Spain
Dear all,
please find a detailed and brand-new presentation of the results of
CHC-COMP 2025 on the website: https://chc-comp.github.io/
Congratulations to all participants for making this competition
exciting! As you can see, results are close in some categories, whereas
in others there is still lots of potential for improvement.
I will shortly add an acknowledgement of all tool authors.
Action item: Solver archives!
- it would be really really cool to have all solver archives available
on Zenodo, and I strongly encourage all participants to do this
- ensure that the archive includes a license and a README
- please patch your submitted archive with these files if present:
https://github.com/chc-comp/chc-comp25-scripts/tree/main/participants
- consider entering your tool here:
https://gitlab.com/sosy-lab/benchmarking/fm-tools/ with an entry for the
Zenodo archive
Follow-up: decide on ground truth, model validation, parallel track (TBD)
Benchmarks are available here:
https://github.com/chc-comp/chc-comp25-benchmarks
Scripts are available here (pending documentation):
https://github.com/chc-comp/chc-comp25-scripts
Please let me know if you have any questions.
Kind regards,
--
Gidon Ernst
Software and Computational Systems Lab, LMU Munich
https://www.sosy-lab.org/people/ernst/
Dear all,
please find below the Zoom link to remotely join the CHC-COMP 2025
presentation that will take place at 15:30 Toronto time (1.5h from now):
https://mcmaster.zoom.us/j/93507005710
Meeting ID: 935 0700 5710
Passcode: 927387
Kind regards,
--
Gidon Ernst
Software and Computational Systems Lab, LMU Munich
https://www.sosy-lab.org/people/ernst/
Dear all,
To participate in CHC-COMP 2025, we kindly ask you to register your
solver via the form below, the earlier the better so that we can check
for technical issues, but no later than Friday, April 25:
https://forms.gle/Lh6VmRjRDaknZ4xU9
The submission should include a download link to the tool archive,
such as a public release on a code-hosting platform.
Note that you may update your response later, for example if you want
to make a new release between the initial submission and the final
version for the competition.
Please also indicate the desired command line, as well as a command
line that produces evidence (models, proofs); this can be the same as
the main command line.
Please also indicate the categories you wish to be evaluated in.
The LRA and BV categories depend on the availability of benchmarks.
If you still wish to contribute benchmarks, please get in touch.
Best,
--
Gidon Ernst
Software and Computational Systems Lab, LMU Munich
https://www.sosy-lab.org/people/ernst/