ForML is a research project funded by the French National Research Agency (ANR).
The ForML project will investigate the use of well-known paradigms from software analysis in the context of ML reasoning. Building on preliminary results by the research team, two specific paradigms will be studied: abstract interpretation and counterexample-guided abstraction refinement (CEGAR). These paradigms will be applied to robustness, fairness and explainability. Furthermore, ForML will make inroads into the problem of certifying rigorous algorithms for reasoning about ML models, thus ensuring that implementations produce results that are guaranteed to be correct.
The overarching research hypothesis is that reasoning about large, complex ML models can be achieved through the use of formal methods.
To validate the main research hypothesis, the ForML project will investigate a number of important research topics, which also serve to create research synergies between the teams participating in the project.
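As a flavour of the abstract-interpretation paradigm mentioned above, here is a minimal sketch, not project code, of interval bound propagation used to certify the local robustness of a toy ReLU network. The network, its weights and all function names are illustrative assumptions, not part of any ForML tool.

```python
# Illustrative sketch of abstract interpretation for local robustness;
# not ForML project code. We propagate interval bounds through a tiny
# ReLU network and check that the predicted class cannot change within
# an L-infinity ball of radius eps around the input x0.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Sound interval bounds for y = W @ x + b, via interval arithmetic:
    split W into positive and negative parts."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def forward(x, layers):
    """Concrete forward pass (ReLU on all layers except the last)."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def certify_robust(layers, x0, eps):
    """True  => every input within eps (L-infinity) of x0 gets the same class.
       False => unknown: the interval abstraction may simply be too coarse."""
    lo, hi = x0 - eps, x0 + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:
            # ReLU is monotone, so it maps interval bounds to interval bounds.
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    c = int(np.argmax(forward(x0, layers)))
    # Robust if the predicted class's lower bound beats every rival's upper bound.
    return all(lo[c] > hi[j] for j in range(len(lo)) if j != c)

# Toy usage: a fixed 2-4-2 network and a small perturbation radius.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]
x0 = np.array([0.5, -0.2])
print(certify_robust(layers, x0, eps=0.01))
```

Note that a failed check means "unknown", not "non-robust": the abstraction may be too coarse. This is precisely where the second paradigm, CEGAR, comes in, by refining the abstraction instead of giving up.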
Team
Toulouse
- Aurélie Hurault (project leader) - IRIT – Toulouse INP
- Martin Cooper - IRIT – Toulouse III
- Imane Bousdira (PhD student) - IRIT
- Rosalie Defourné (postdoctoral researcher) - IRIT
Paris
- Caterina Urban - INRIA Paris
- Antoine Miné - LIP6 – Sorbonne Université
- Priyanka Maity (PhD student) - INRIA Paris
Open positions
In Toulouse: funding for Master 2 internships
Explanation of models derived by machine learning (ML) is crucial to help humans understand the decisions made by these models. Indeed, providing explanations with a guarantee of correctness is one of the major challenges facing the use of ML techniques in safety-critical systems. The ANR project ‘ForML’ will focus on the search for new families of explanation problems that are computationally tractable, followed by the construction of reliable explanation software. The user must be able to trust these programs, so they will either be proven correct or generate a certificate justifying correctness. The ultimate goal is to make efficient and reliable software publicly available.

Practical information: funding is available for a postdoc, a PhD and Master internships. Candidates are expected to have a background in theoretical computer science and/or mathematics, particularly in complexity theory and/or formal methods. Successful candidates will be co-supervised by Aurélie Hurault (Toulouse INP) and Martin Cooper (University of Toulouse 3). For further information, please contact: aurelie.hurault @ enseeiht.fr, cooper @ irit.fr
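To give a flavour of the kind of explanation problem involved, here is an illustrative sketch, not the certified code from the publications below: for a classifier that is monotone non-decreasing in every feature, a subset-minimal sufficient reason (a set of features whose values alone force the classifier's verdict) can be computed by a simple greedy loop, because sufficiency reduces to evaluating a single worst-case completion. All function names and the toy classifier are assumptions made for the example.

```python
# Illustrative sketch only; not the project's certified tool.
# Assumes f is non-decreasing in each feature and feature i
# ranges over the known interval [lb[i], ub[i]].

def sufficient(f, x, S, lb, ub):
    """Does fixing the features in S to their values in x force f's verdict?
    By monotonicity it suffices to test one worst-case completion."""
    cls = f(x)
    worst = [x[i] if i in S else (lb[i] if cls else ub[i]) for i in range(len(x))]
    return f(worst) == cls

def minimal_explanation(f, x, lb, ub):
    """Greedy deletion: drop a feature whenever the rest stays sufficient.
    The result is subset-minimal, not necessarily minimum-cardinality."""
    S = set(range(len(x)))
    for i in range(len(x)):
        if sufficient(f, x, S - {i}, lb, ub):
            S.discard(i)
    return sorted(S)

# Toy monotone classifier: a weighted vote with non-negative weights.
f = lambda v: int(2 * v[0] + v[1] + v[2] >= 2)
x, lb, ub = [1, 0, 1], [0, 0, 0], [1, 1, 1]
print(minimal_explanation(f, x, lb, ub))  # -> [0]: x[0] = 1 alone forces class 1
```

The project's goal is to make such procedures trustworthy: either proven correct, or emitting a certificate that an independent checker can verify.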
Publications
- Xuanxiang Huang, Martin C. Cooper, Antonio Morgado, Jordi Planes, Joao Marques-Silva: Feature Necessity and Relevancy in Machine Learning Explanations. To appear in JAR.
- Imane Bousdira, Aurélie Hurault, Martin Cooper: Formally Correct Search for Interpretable DNFs. FASE 2026.
  https://github.com/hurault/FASE26
- Clément Carbonnel, Martin Cooper, Emmanuel Hébrard, D. Morales, Joao Marques-Silva: Explaining Multivariate Decision Trees: Characterising Tractable Languages. JAIR 2026.
- Leila Amgoud, Martin Cooper: Axiomatic Foundations of Counterfactual Explanations. AAMAS 2026.
- Clément Contet, Rosalie Defourné, Aurélie Hurault: Certified Enumeration of AI Explanations: A Focus on Monotonic Classifiers. ICECCS 2025: 123-142. DOI: 10.1007/978-3-032-00828-2_8
  https://github.com/hurault/iceccs25
- Martin Cooper, Imane Bousdira, Clément Carbonnel: Interpretable DNFs. IJCAI 2025: 4985-4993.
  https://arxiv.org/abs/2505.21212
Meetings
Kick-off – April 2, 2024 · Toulouse
- Welcome
- Martin Cooper, Explaining Decisions via Formal Reasoning
- Caterina Urban, Machine Learning Interpretability and Verification
- Antoine Miné, The MOPSA Project
- Aurélie Hurault, Certified Logic-Based Explainable AI – The Case of Monotonic Classifiers
- Discussions
T+12 – March 25, 2025 · online
- Welcome
- Imane Bousdira and Martin Cooper, Interpretable DNFs
- Rosalie Defourné and Aurélie Hurault, Certified Enumeration of AI Explanations: A Focus on Monotonic Classifiers
- Antoine Miné, Classifier stability verification via abstract interpretation
- Caterina Urban, Neural Network Explanations
- Rosalie Defourné, SAT encoding of Random Forest explainability
- Discussions
T+24 – January 19–20, 2026 · Paris
- Welcome
- Imane Bousdira, Formally Correct Search for Interpretable DNFs
- Antoine Miné, Formal Verification of the Stability of Decision Tree Models
- Rosalie Defourné, Formal Explanation of Random Forests
- Caterina Urban, Faster Verified Explanations for Neural Networks
- Priyanka Maity, Initial work on formal explanations based on activations
- Martin Cooper, Fairness of Classifiers in the Presence of Constraints between Features
- Discussions
