RV2026: Runtime Verification 2026
Kingston, Canada, October 6-9, 2026
| Conference website | https://rv2026.smithengineering.queensu.ca/ |
| Submission link | https://easychair.org/conferences/?conf=rv2026 |
| Abstract registration deadline | May 31, 2026 |
| Submission deadline | May 31, 2026 |
International Conference on Runtime Verification (RV 2026)
We are pleased to invite you to submit papers to the 26th International Conference on Runtime Verification (RV 2026).
Dates
- Paper submission: 31 May 2026
- Tutorial proposal submission: 31 May 2026
- Notification: 16 July 2026
- Camera-ready: 27 July 2026
- Conference: 6-9 October 2026
All deadlines are Anywhere on Earth (AoE).
Paper Categories
Papers may be submitted in four categories: regular, short, tool demonstration, and benchmark papers. Papers in each category will be reviewed by at least three members of the Program Committee in a single-blind review process.
Regular Papers (up to 16 pages, not including references)
Regular Papers should present original unpublished results. We welcome theoretical papers, system papers, papers describing domain-specific variants of RV, and case studies on runtime verification.
The Program Committee of RV 2026 will give a Springer-sponsored Best Paper Award to one eligible regular paper.
Short Papers (up to 8 pages, not including references)
Short Papers may present novel but not necessarily fully worked-out ideas, for example, emerging runtime verification techniques and applications, or techniques and applications that establish relationships between runtime verification and other domains.
Tool Demonstration Papers (up to 8 pages, not including references)
Tool Demonstration Papers should present a new tool, a new tool component, or novel extensions to existing tools supporting runtime verification.
The paper must include information on tool availability, maturity, and selected experimental results, and it should provide a link to a website containing the theoretical background and a user guide. Furthermore, we strongly encourage authors to make their tools and benchmarks available with their submission.
The Program Committee of RV 2026 will give a Formal Methods Europe-sponsored Best Tool Award to one eligible tool demonstration paper.
Benchmark Papers (up to 8 pages, not including references)
Benchmark Papers should describe a benchmark, suite of benchmarks, or benchmark generator useful for evaluating RV tools.
Papers should describe what the benchmark consists of and its purpose (i.e., the application domain), how to obtain and use the benchmark, and an argument for its usefulness to the broader RV community; they may also include any existing results produced using the benchmark.
We are interested both in benchmarks pertaining to real-world scenarios and in those containing synthetic data designed to achieve interesting properties. Broader notions of benchmarks, e.g., for generating specifications from data or diagnosing faults, are also within scope. We encourage benchmarks that are tool-agnostic, especially if they have been used to evaluate multiple tools.
We also welcome benchmarks that contain verdict labels and rigorous arguments for the correctness of these verdicts, as well as benchmarks that are demonstrably challenging with respect to state-of-the-art tools. Benchmark papers must be accompanied by an easily accessible and usable benchmark submission.
Papers will be evaluated by a separate benchmark evaluation panel, which will assess the benchmarks' relevance, clarity, and utility as communicated by the submitted paper.
Objectives and Scope
Runtime verification is concerned with the monitoring and analysis of the runtime behavior of software and hardware systems. Runtime verification techniques are crucial for system correctness, reliability, and robustness; they provide an additional level of rigor and effectiveness compared to conventional testing and are generally more practical than exhaustive formal verification.
Runtime verification can be used prior to deployment, for testing, verification, and debugging purposes, and after deployment for ensuring reliability, safety, and security, and for providing fault containment, recovery, and online system repair.
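For context, a runtime monitor is typically a small state machine that consumes a trace of events emitted by an instrumented system and produces a verdict about a property of interest. The following is a minimal, illustrative sketch only; the property, event format, and all names are hypothetical and not drawn from any particular RV tool. It checks the safety property "a file is never read after it has been closed" online over an event stream.

```python
from enum import Enum

class Verdict(Enum):
    INCONCLUSIVE = "inconclusive"  # no violation observed so far
    VIOLATION = "violation"        # property irrecoverably violated

class NoReadAfterCloseMonitor:
    """Hypothetical online monitor for: no file is read after being closed."""

    def __init__(self):
        self.closed = set()                  # files currently observed as closed
        self.verdict = Verdict.INCONCLUSIVE

    def step(self, action, file):
        """Consume one (action, file) event and return the current verdict."""
        if self.verdict is Verdict.VIOLATION:
            return self.verdict              # safety verdicts are irrevocable
        if action == "close":
            self.closed.add(file)
        elif action == "open":
            self.closed.discard(file)
        elif action == "read" and file in self.closed:
            self.verdict = Verdict.VIOLATION
        return self.verdict

# Example trace: the final "read" occurs after "close" and violates the property.
trace = [("open", "a"), ("read", "a"), ("close", "a"), ("read", "a")]
monitor = NoReadAfterCloseMonitor()
for event in trace:
    verdict = monitor.step(*event)
print(verdict)  # Verdict.VIOLATION
```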
The topics of the conference include, but are not limited to:
- specification languages for monitoring
- monitor construction techniques
- program instrumentation
- logging, recording, and replay
- combination of static and dynamic analysis
- specification mining and machine learning over runtime traces
- monitoring techniques for concurrent and distributed systems
- runtime checking of privacy and security policies
- metrics and statistical information gathering
- program/system execution visualization
- fault localization, containment, resilience, recovery, and repair
- monitoring systems with learning-enabled components
- dynamic type checking
- runtime verification for autonomy and runtime assurance
- runtime verification for assurance cases
- out-of-distribution and anomaly detection in ML-based systems
- safe reinforcement learning
- runtime verification of large language model (LLM) agents
Organization
General Chairs
- Sean Kauffman (Queen’s University, Canada)
- Giulia Pedrielli (Arizona State University, USA)
Publicity Chair
- Lars Lindemann (ETH Zurich, Switzerland)
Invited Speakers
- Ruzica Piskac (Yale University)
- Mauricio Castillo-Effen (Lockheed Martin)
- Jean-Baptiste Tristan (Amazon Web Services)
Venue
The conference will be held at Queen’s University in Kingston, Canada.
Contact
All questions about submissions should be emailed to the General Chairs.
