About
Why This Workshop
Modern AI systems can accelerate real research, but using them effectively remains nontrivial. This workshop will develop a community resource of workflows to drive progress at the AI–Math/CS/ML interface — especially in machine learning, optimization, statistics, algorithms, and adjacent areas.
While some researchers focus on whether AI can conduct research independently, others are eager to share how AI can help humans in their research. However, it is often difficult to extract and reproduce the specific, frequently complex workflows and “tricks” these researchers use. Because the usefulness of generative AI depends heavily on the workflow, many scientists who currently use AI only for basic tasks could benefit substantially from access to these methods.
Our workshop aims to help researchers at ICML – especially in machine learning, optimization, statistics, algorithms, and adjacent areas – integrate AI tools more effectively into their core research workflows. The workshop will cover:
AI-Assisted Workflows
Iterative verification loops, failure modes and how to detect them, prompting patterns that improve correctness, decomposition and self-critique, multi-agent strategies, and when to switch from informal to formal reasoning.
Tool-Augmented Reasoning
Integrating LLMs with computation (code, symbolic algebra, numerics), literature navigation, and proof assistants (e.g., Lean) to reduce hallucinations.
Research Acceleration
Using AI for derivations, counterexample search, and experiment design — with an emphasis on methods that transfer across subfields.
Call for Papers
Contribute Your Work
We welcome submissions that highlight workflows using AI for machine learning, math, and computer science research more generally. Your contribution should illustrate, in a way accessible to non-experts, how a simple workflow has proven useful for a cognitive research task (e.g., by saving time or effort, or by strengthening results).
The workflow should be reproducible by ML researchers within a few hours and with academic-level financial resources. Therefore, the workflow should either involve simple prompting-based strategies, or more sophisticated strategies where the submitters provide a package/agent/repository that can be readily integrated into a chat interface or an API call. All accepted submissions will be made publicly accessible, creating a shared repository of AI-assisted research workflows.
We also welcome submissions that illustrate interesting failure modes, to improve the community's understanding of the limitations of AI assistance.
We focus on the following tasks where AI can help, and challenges associated with AI-assisted research:
- AI-assisted research problem formulation
- AI-assisted experiment design for ML research
- Solving mathematical research problems with AI assistance
- Formalization of mathematical research, especially as it is relevant for machine learning
- Verification of AI-generated proofs
- Automation of iterative loops
- Other tasks that are integral to an AI-assisted research workflow
The focus is on how researchers can become more productive and more rigorous, and do better research with the help of AI; it is less on autonomous AI research, which is the focus of another exciting workshop: AI4Math.
To maintain focus, our workshop will not consider tasks that are not research-centered, such as routine writing, basic literature search (aka “deep research”), slide and poster creation, and pure software engineering (e.g., installing packages, resolving version compatibility). Many resources for such tasks already exist.
What to include in your submission:
- Explain the cognitive task that arose in your research for which AI either significantly saved you time or improved the results you could otherwise obtain. Estimate the time savings or performance improvements.
- Describe the AI workflow you used in a way that is reproducible and usable by other researchers in a few hours.
  - This may include specific prompting techniques, code to call the API (once or repeatedly), a detailed description of how you set up the agentic frameworks, etc.
  - If your workflow relies on web-based prompting, share the exact prompts and ideally the exact transcripts of your conversations.
  - If your workflow relies on API-based interaction and/or agentic research, provide a link to a repository with your code.
- If applicable, discuss failure modes and what you learned from them.
- Ensure that submissions are as close as possible to the working workflow itself. We aim to convert as many submissions as possible into workflows to test their performance. Make sure that the workflow is reproducible, including a README explaining how to install packages, and include any code that is not publicly available.
- Explain how you verified the correctness of the results.
The contributions will be evaluated according to accessibility, reproducibility, and correctness.
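For illustration, here is a minimal sketch of the kind of generate–verify–retry loop an API-based submission might package. All names are hypothetical, and the model call is stubbed out; a real submission would replace it with an actual chat-API request and its own verification checks.

```python
# Hypothetical sketch of an iterative verification workflow: ask a model for a
# candidate solution, check it programmatically, and re-prompt with feedback
# until the checks pass or a retry budget is exhausted.

def query_model(prompt: str) -> str:
    """Stub standing in for an LLM API call.

    A real workflow would send `prompt` to a hosted model and return the
    text of its reply.
    """
    return "def square(x):\n    return x * x"

def verify(candidate: str) -> bool:
    """Programmatic check of the model's output: execute it and unit-test it."""
    namespace = {}
    try:
        exec(candidate, namespace)          # run the generated code
        return namespace["square"](3) == 9  # test the generated function
    except Exception:
        return False

def solve(task: str, max_rounds: int = 3):
    """Generate-verify-retry loop: re-prompt with feedback until checks pass."""
    prompt = task
    for _ in range(max_rounds):
        candidate = query_model(prompt)
        if verify(candidate):
            return candidate
        # Feed the failed attempt back so the model can correct it.
        prompt = f"{task}\nYour previous attempt failed verification:\n{candidate}"
    return None

result = solve("Write a Python function square(x) returning x squared.")
```

A submission packaging a loop like this would document the exact prompts, the verification code, and the retry policy, so that another researcher can rerun it end to end.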
Submission Format
- 4-page paper (excluding references and supplementary materials); detailed walkthroughs, screenshots, and conversation transcripts can be included in the appendix
- Must follow ICML 2026 format
- Indicate preferred presentation: computer demo or poster
- Submissions via OpenReview
Policies
- Non-archival: papers published at other venues are welcome
- Double-blind review: submissions must be anonymized
- Accepted papers will be presented as demos or posters; select papers may be invited for contributed talks
- Review criteria: accessibility, reproducibility, and correctness
Important Dates
- Submission deadline: May 13, 2026
- Notification of acceptance: May 31, 2026
- Workshop date: July 10 or 11, 2026
All deadlines are Anywhere on Earth (AoE)
Invited Speakers
Sergei Gukov is the John D. MacArthur Professor of Theoretical Physics and Mathematics at Caltech, Director of the Merkin Center for Pure and Applied Mathematics, and Consulting Director of the American Institute of Mathematics. His research interests include mathematics and machine learning, quantum topology, gauge theory, and knot and 3-manifold invariants.
Remy Degenne is a tenured researcher in the Scool team at the Inria centre at the University of Lille. He works on sequential machine learning, especially bandit theory, and is interested in online and reinforcement learning, statistics, and optimization. He is also a maintainer of Mathlib for the Lean theorem prover.
Damek Davis is an Associate Professor in Wharton's Department of Statistics and Data Science. His research interests are optimization and machine learning, and he also works on AI for mathematics. He is currently an associate editor at Mathematical Programming and Foundations of Computational Mathematics.
Rachel Ward is the W. A. "Tex" Moncrief Distinguished Professor in Computational Engineering and Sciences - Data Science and professor of mathematics at UT Austin. She is recognized for contributions to sparse approximation, stochastic optimization, and numerical linear algebra. Her research lies broadly in the mathematics of data.
Mehtaab Sawhney is a Clay Research Fellow and a tenure-track assistant professor at Columbia University. His research interests are broadly within combinatorics, probability, analytic number theory, and theoretical computer science.
Schedule
Full-Day Program
All times are local time in Seoul, South Korea (KST, UTC+9)
Debate
Structured Discussion
Four of the speakers will debate in teams of two (Affirmative vs. Negative) on the motion:
"Within five years, researchers at ICML today will consider AI-generated analyses, results, and written conclusions as reliable as those from leading theoretical researchers."
The debate follows a structured format with strict timing:
- Opening speeches, 4 minutes each, presenting the main arguments
- Crossfire, 2 minutes, with alternating questions (10 sec) and answers (20 sec)
- Rebuttals and new arguments, 3 minutes each
- Second crossfire round, 2 minutes
- Another round of rebuttals, 3 minutes each
- Closing speeches summarizing the debate, 3 minutes each
After the debate, questions from the audience will follow, moderated by the organizers.
Venue & Attendance
Logistics
Workshop Dates
July 10 or 11, 2026
(ICML 2026 workshop days)
Full-day, in-person
Registration
Workshop attendance requires ICML 2026 registration.
A workshop-only registration is sufficient. Please register through the main conference website.
Organizers
Workshop Committee
HDSI Endowed Chair Professor in Artificial Intelligence at UC San Diego. His research spans artificial intelligence, machine learning, and high-dimensional statistics.
Member of Technical Staff at OpenAI. His work includes large language models, convex optimization, online algorithms, and adversarial robustness.
Associate Professor of Statistics and Computer & Information Science at Penn. His research sits at the interface of statistics, machine learning, and AI.
Professor at the Halicioglu Data Science Institute at UC San Diego. His research interests are in optimization, high-dimensional statistics, machine learning, and AI.
Robert Grimmett Professor of Mathematics at Stanford University and President of the American Mathematical Society. He works in algebraic geometry.
Assistant Professor of Computer Science at ETH Zurich. She works on high-dimensional and robust machine learning.
Volunteers
Get Involved
Call for volunteers: our workshop aims to serve the needs of the community and to be community-driven. If you are interested in volunteering, please contact us. Sign up to help review contributions, build a platform for sharing workflows that makes them easily accessible and searchable, or support the workshop in other ways.
Interested in volunteering? Sign up here.
Federico Di Gennaro is a PhD student at ETH Zürich, advised by Prof. Fanny Yang. His research interests include statistical learning and trustworthy ML.
Sunay Joshi is a PhD student at the University of Pennsylvania, advised by Prof. Edgar Dobriban and Prof. Hamed Hassani. His research interests include conformal prediction and uncertainty quantification for AI.
Qingsong Wang is a postdoctoral researcher at UC San Diego, working with Prof. Mikhail Belkin and Prof. Yusu Wang. His research interests include diffusion and flow-matching generative models, data geometry, and representation learning.
Tao Wang is a PhD student in Statistics and Data Science at the University of Pennsylvania, advised by Prof. Edgar Dobriban. His research interests include uncertainty quantification, optimal transport, and LLM post-training.