Generative AI models such as large language models (LLMs) are transforming how we create and access information, while also raising concerns about manipulation, deception, and the integrity of public discourse at unprecedented scale.
The AI Manipulation and Information Integrity (AIMII) workshop will bring together researchers from computer science, cognitive science, philosophy, political science, and policy to clarify core concepts, evaluate the evidence on AI's persuasive and manipulative capabilities, and explore implications for society and democracy.
The workshop will feature three panel discussions with leading researchers as well as a poster session showcasing new work from the broader community.
Schedule
| Time | Session |
| --- | --- |
| 12:45 - 1:15 | Lunch |
| 1:15 - 2:15 | Poster Session |
| 2:15 - 2:20 | Opening Remarks from the Organizing Committee |
| 2:20 - 3:20 | Panel 1: What is AI manipulation? When and why is it bad? (Carina Prunkl, Elizabeth Edenberg, Gökhan Onel) |
| 3:20 - 4:20 | Panel 2: Measuring manipulative capabilities and behaviors (Maurice Jakesch, Hannah Kirk, Kobi Hackenburg, Jason Hoelscher-Obermaier) |
| 4:20 - 4:45 | Break |
| 4:45 - 5:45 | Panel 3: Societal impacts and information integrity (Dan Williams, Hugo Mercier, Chloé Bakalar, Dino Pedreschi) |
| 5:45 - 6:00 | Closing Remarks & Next Steps |
Topics
We welcome submissions on topics including (but not limited to):
Conceptual & Philosophical Foundations
- Definitions and taxonomies of persuasion, manipulation, and deception
- Moral and epistemic dimensions of AI influence
- Autonomy, consent, and the ethics of personalized persuasion
- Boundary and edge cases (e.g., when does influence become manipulation?)
Measurement & Evaluation
- Benchmarks and evaluations of persuasive or manipulative capabilities
- Ecological validity of current measurement approaches
- Sycophancy, reward hacking, and training dynamics that produce manipulative behaviors
- Detecting deception, sandbagging, or strategic behavior in AI systems
- Human studies of AI persuasion (attitude change, belief updating, behavioral effects)
Psychology & Cognitive Science
- Human susceptibility to AI-generated persuasion
- Trust, overreliance, and calibration in human-AI interaction
- Cognitive and affective mechanisms of AI influence
- Individual differences in vulnerability to AI manipulation
Societal & Political Impacts
- AI and misinformation/disinformation
- Effects on journalism, media ecosystems, and information environments
- Implications for democratic deliberation and political discourse
- Manipulation in AI companions, chatbots, and productivity tools
- Targeted advertising, recommender systems, and algorithmic influence
Mitigations & Governance
- Technical approaches to reducing manipulative capabilities or behaviors
- Transparency, disclosure, and labeling interventions
- Regulatory frameworks (EU AI Act, DSA, etc.) and their effectiveness
- Red-teaming, auditing, and third-party evaluation
- Platform governance and content moderation
Broader Perspectives
- Historical and comparative perspectives on information manipulation
- Human-AI co-evolution in communication
- Manipulation in multi-agent and agentic AI systems
- Dual-use concerns and beneficial applications of persuasive AI
Organizers