News

  • [4/26/2026] Workshop Day is Here — Looking forward to seeing you in Room 204 A/B! Check out our X and LinkedIn posts.
  • [4/10/2026] Schedule Posted — The final workshop schedule is now available, featuring invited and contributed talks, poster sessions, and a panel discussion.
  • [4/1/2026] AIWILD @ ICML 2026 — The Second Workshop on Agents in the Wild has been accepted to ICML 2026! See you in Seoul!
  • [2/9/2026] Deadline Extended — The paper submission and reviewer application deadlines have both been extended to February 12, 2026 AoE.
  • [2/4/2026] Final Call for Papers — The final call for papers is posted! Check out our X and LinkedIn posts.
  • [2/4/2026] News Coverage — We're excited that the workshop was featured in Agentic AI Weekly!
  • [2/2/2026] Call for Reviewers — The call for reviewers is posted! Check out our X and LinkedIn posts. We invite researchers to serve as reviewers for the workshop. Prior reviewing experience is helpful but not required; familiarity with workshop topics is expected. Please submit your application by February 9th. We plan to present Best Reviewer Awards!
  • [1/11/2026] Call for Papers — The call for papers is posted! Check out our X and LinkedIn posts. And the submission portal is now open!
  • [12/1/2025] Workshop Accepted — The workshop has been accepted by ICLR! See you in Rio!

About

Recent advances in agentic AI have enabled powerful AI systems capable of reasoning, acting, and adapting in diverse real-world environments. However, deploying such "agents in the wild" introduces profound challenges in safety, security, and trustworthiness.

This workshop aims to bring together researchers and practitioners across academia and industry to discuss emerging research directions and practical challenges in developing safe and secure agents. By fostering collaboration across disciplines, the workshop will help chart the next steps toward reliable and trustworthy agent systems that can operate responsibly in open environments.

Call for Papers

The Workshop on Agents in the Wild at ICLR 2026 invites submissions from researchers and practitioners exploring how intelligent agents can reason, act, and adapt safely and securely in open-ended real-world environments.

As agentic AI systems grow more capable, their deployment in dynamic, unpredictable settings introduces challenges in safety, security, and general trustworthiness. This workshop aims to spark discussion across academia and industry on methods, benchmarks, and frameworks for building reliable and trustworthy agents that can operate responsibly "in the wild."

Scope

We welcome contributions on a wide range of topics related to AI agents, including but not limited to:

  • Agentic safety and alignment
  • Agent security, privacy, and robustness
  • Agentic hallucination and factuality
  • Agentic interpretability and transparency
  • Agentic fairness and bias
  • Evaluating and benchmarking agents
  • Multimodal and computer-use agents
  • Multi-agent coordination and long-horizon safety
  • Post-training and adapting agents
  • Agent systems and infrastructure
  • Interdisciplinary agentic considerations
  • Ethics, society, and governance of agents

Important Dates

  • Paper Submission Open: January 5, 2026 AoE
  • Paper Submission Deadline: February 12, 2026 AoE (extended from February 5, 2026)
  • Paper Notification Deadline: March 1, 2026 AoE
  • Camera-ready Version Deadline: April 15, 2026 AoE

Submission Guidelines

Format: This workshop offers two separate submission tracks:

  • Regular Papers Track: The workshop welcomes submissions of research and position papers (up to 10 pages). References and supplementary materials do not count against the page limit.
  • Short Papers Track: We encourage submission of short papers (4 pages) to make the workshop more accessible to researchers outside the ML conference publication circuit. These submissions can present implementations of unpublished ideas, modest theoretical results, follow-up experiments, or fresh perspectives on existing work. References and supplementary materials do not count against the 4-page limit.

Since 2025, ICLR has discontinued the separate "Tiny Papers" track and instead requires each workshop to accept short paper submissions (3–5 pages in ICLR format; the exact page limit is determined by each workshop), with an eye towards inclusion; see https://iclr.cc/Conferences/2025/CallForTinyPapers for the history of the ICLR Tiny Papers initiative. Authors of these papers may be considered for funding from ICLR, but they must submit a separate application for Financial Assistance, which evaluates their eligibility. The application for Financial Assistance to attend ICLR 2026 will open on https://iclr.cc/Conferences/2026/ at the beginning of February and close in early March.

Submission site: Submit papers through the Workshop Submission Portal on OpenReview.

Style file: You must format your submission using the ICLR 2026 LaTeX style file. For convenience, we provide a modified template that refers to our workshop: ICLR 2026 Style Files. Submissions that violate the ICLR style or page limits may be desk-rejected.

Dual-submission policy: The workshop will adopt a non-archival policy, welcoming ongoing and unpublished work, as well as papers under review or recently accepted at other venues (provided they do not breach dual-submission or anonymity policies of the other venue). Workshop submissions can be subsequently or concurrently submitted to other venues.

Visibility: Accepted papers will be made public, but rejected submissions and reviews will not.

Double-blind reviewing: Submissions must be fully anonymized. This policy applies to any supplementary or linked material as well, including code. Any papers found to be in violation of this policy may be desk-rejected.

LLM usage policy: AI-generated papers are not allowed. AI assistance is permitted, but submissions must be primarily human-authored, reflecting original thought and analysis.

Contact: For any questions, please contact us at agentwild-workshop@googlegroups.com.

Schedule

All times are local.

9:00–9:10   Opening Remarks — Dawn Song
9:10–9:40   Invited Talk 1 — Bing Liu
From LLMs to Agents: The Evaluation Challenge
9:40–10:10  Invited Talk 2 — Chi Wang
Frontiers of Agentic AI
10:10–11:00 Poster Session 1 + Morning Snack Break
11:00–11:30 Invited Talk 3 — Bo Li
Guarding the Age of Agents: Advancing Risk Assessment, Guardrails, and Security Certification
11:30–12:00 Invited Talk 4 — Devina Jain
Securing Agents in the Wild: Lessons from Large-Scale Attacker-Defender Competitions
12:00–1:00  Lunch Break
1:00–1:45   Panel Discussion
The Future of Safe and Secure Agents
Moderator: Ruoxi Jia
Panelists: Jared Quincy Davis, Devina Jain, Bo Li, Bing Liu, Yizhou Sun
1:45–1:50   Contributed Talk 1 — Ilham Wicaksono
Mind the Gap: Evaluating Model- and Agentic-Level Vulnerabilities in LLMs with Action Graphs
1:50–1:55   Contributed Talk 2 — Weijie Xu
Visual Exclusivity Attacks: Automatic Multimodal Red Teaming via Agentic Planning
1:55–2:00   Contributed Talk 3 — Qiushi Sun
OS-Sentinel: Towards Safety-Enhanced Mobile GUI Agents via Hybrid Validation in Realistic Workflows
2:00–2:30   Invited Talk 5 — Yoshua Bengio
Avoiding Uncontrolled Agency with the Scientist AI
2:30–3:00   Invited Talk 6 — Jared Quincy Davis
Compound AI Systems: Design Patterns for Secure Multi-Agent Deployments
3:00–3:50   Poster Session 2 + Afternoon Snack Break
3:50–4:20   Invited Talk 7 — Yunyao Li
Building & Querying Enterprise Knowledge Bases: From Declarative Languages to Secure Agents
4:20–4:50   Invited Talk 8 — Yizhou Sun
ARLArena: Training Framework for Agentic Reinforcement Learning
4:50–5:00   Awards & Closing Remarks — Chenguang Wang

Speakers and Panelists

Yoshua Bengio
Mila, LawZero, Université de Montréal
Jared Quincy Davis
Mithril, Stanford
Bo Li
Virtue AI, University of Illinois Urbana-Champaign
Devina Jain
Scale AI
Yizhou Sun
Amazon, UCLA

Workshop Organizers

Dawn Song
UC Berkeley
Chenguang Wang
UC Santa Cruz
Nicholas Crispino
UC Santa Cruz
Ruoxi Jia
Virginia Tech
Kyle Montgomery
UC Santa Cruz
Yujin Potter
UC Berkeley
Vincent Siu
UC Santa Cruz
Zhun Wang
UC Berkeley

Sponsors

Platinum

Scale AI
Skywork

Gold

Lambda
AG2