TRIDENT: Tri-modal Deepfake Perception, Detection, and Hallucination Grand Challenge



In conjunction with ACM MM 2026.

Rio de Janeiro, Brazil


The rapid maturation of generative artificial intelligence has fundamentally altered the digital ecosystem, moving beyond simple image synthesis toward a sophisticated, multi-modal reality. While these advances drive creative innovation, they have also given rise to a Synthetic Media Paradox: as deepfakes become visually and acoustically indistinguishable from authentic recordings, our detection systems grow increasingly powerful yet alarmingly less transparent. In the 2026 landscape, where disinformation campaigns leverage high-fidelity images, temporal video sequences, and cloned audio in tandem, traditional binary detection that labels content as simply Real or Fake is no longer a sufficient defense. The TRIDENT Grand Challenge is born from the urgent need to transition from black-box detection toward a white-box forensic paradigm that prioritizes accountability, interpretability, and explainability.

The foundational philosophy of TRIDENT is rooted in the Forensic Triad: the belief that a trustworthy AI must not only reach the correct binary conclusion but must do so for the correct reasons. Previous challenges have often overlooked the Interpretability Gap, in which models achieve respectable accuracy by exploiting dataset biases or non-semantic shortcuts rather than identifying actual generative artifacts. This gap feeds the Hallucination Dilemma, a critical failure mode in which a detector justifies a correct classification with fabricated evidence, pointing to phantom artifacts that do not exist. In legal, journalistic, or security contexts, such correct but ungrounded decisions are inadmissible and dangerous. TRIDENT addresses this gap by requiring models to demonstrate a triad-based deepfake examination:

Perception

Can the model identify and localize fine-grained manipulation artifacts across image, video, and audio?

Detection

Does the model maintain high classification accuracy across diverse forgery families?

Hallucination

Is the model’s explanation grounded in reality, or is it fabricating evidence?

Participation Guidelines

1. Overview

The TRIDENT Challenge 2026 is an ACM MM 2026 Grand Challenge on tri-modal deepfake perception, detection, and hallucination analysis. Participants are invited to compete across three independent tracks: Image, Video, and Audio, using the TRIDENT dataset. These guidelines govern all aspects of participation, from eligibility to submission and evaluation.

2. Eligibility

Participation is open to any research team worldwide, subject to the following conditions:

  • Each team must have a responsible team leader who is a faculty member, researcher, or staff member with formal affiliation. The team leader is responsible for the team's compliance with these guidelines and the TRIDENT End User License Agreement (EULA).
  • Students may participate as team members but may not serve as the team leader unless otherwise approved by the organizers.
  • Each participant may only be a member of one team per track.
  • Teams must register via the official registration form and receive approval from the organizers before accessing the dataset. Participation prior to approval is not permitted.

3. Tracks

Participants may enter one or more of the following tracks:

  • Image Track
  • Video Track
  • Audio Track

Each track includes the following task types:

Structured VQA

Participants must answer True/False Questions (TFQ) and Multiple-Choice Questions (MCQ). Required outputs are "True" or "False" for TFQ, and the selected option label for MCQ.

Type-A Open-Ended Questions

Given that the sample is known to be manipulated, the model must provide a structured description of observable artifacts and manipulation evidence.

Type-B Open-Ended Questions

Given a sample of unknown authenticity, the model must provide an authenticity label ("Likely Authentic" or "Likely Manipulated") and a short reasoning paragraph supporting the decision.
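The authoritative output schema ships with the starter kit (see Section 5.1); the following minimal Python sketch merely illustrates one plausible shape for per-sample prediction records covering the task types above, with all field names and values hypothetical.

import json

# Hypothetical per-sample prediction records for the four task types;
# the official starter kit defines the actual required schema.
predictions = [
    {"sample_id": "img_0001", "task": "TFQ", "answer": "True"},
    {"sample_id": "img_0002", "task": "MCQ", "answer": "B"},
    {"sample_id": "img_0003", "task": "open_ended_a",
     "answer": "Blending boundary along the jawline; mismatched specular highlights between the eyes."},
    {"sample_id": "img_0004", "task": "open_ended_b",
     "label": "Likely Manipulated",
     "reasoning": "Frame-to-frame flicker around the mouth suggests face swapping."},
]

# Write predictions following the naming convention in Section 5.3.
with open("TeamAlpha_image.json", "w") as f:
    json.dump(predictions, f, indent=2)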

4. Challenge Phases

Phase 1 – Training and Validation Data Released

The organizers will release the Training Set and the Validation Set. These datasets are provided for model training, method development, and local evaluation. No official leaderboard will be provided during this phase.

Phase 2 – Testing Data Released; Submission Start

The organizers will release the Test Set (without ground-truth labels). Participants must run inference on the test set and submit their predictions through the official competition platform.

5. Submission

5.1 Platform
All submissions must be made through the official Codabench competition page. Links to each track's competition page will be provided upon dataset access approval. A starter kit will be provided after registration is validated, containing submission format examples and instructions to help participants get started.

5.2 Submission Limits
Each team is allowed a maximum of 3 submissions per day per track. Submissions exceeding this limit will be automatically rejected by the platform.

5.3 File Format
Submissions must be in JSON format. Files must follow the naming convention: {teamname}_{track}.json
For example: TeamAlpha_image.json, TeamAlpha_video.json, TeamAlpha_audio.json.
Only submissions that pass automatic format validation will be counted as valid. Invalid submissions will not appear on the leaderboard and will count toward the daily limit.
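Since invalid submissions consume the daily limit, teams may wish to pre-check files locally before uploading. The sketch below, assuming team names contain only letters and digits, checks the naming convention and JSON well-formedness; the platform's own validator remains authoritative.

import json
import re
from pathlib import Path

TRACKS = {"image", "video", "audio"}

def check_submission(path: str) -> None:
    # Assumes an alphanumeric team name; adjust the pattern if yours differs.
    p = Path(path)
    m = re.fullmatch(r"(?P<team>[A-Za-z0-9]+)_(?P<track>[a-z]+)\.json", p.name)
    assert m is not None and m.group("track") in TRACKS, f"bad filename: {p.name}"
    with p.open() as f:
        json.load(f)  # raises an error if the file is not well-formed JSON
    print(f"{p.name}: filename and JSON syntax look OK")

check_submission("TeamAlpha_image.json")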

5.4 Abstract Requirement
Each submission must be accompanied by a 1–2 page abstract describing the team's method. The abstract must be submitted as a PDF file named {teamname}_abstract.pdf and sent to trident.at.mm26.mgc@gmail.com before or alongside the team's final submission on Codabench. Submissions without a valid abstract will appear on the public leaderboard but will not be considered for the final award decision.

6. Evaluation Criteria and Metrics

6.1 Evaluation Dimensions
Each track is evaluated across three dimensions:

Detection

Binary classification performance distinguishing real from fake content.

Perception

Fine-grained identification of manipulation artifacts using human-annotated evidence.

Hallucination Robustness

Reliability of the model's reasoning and its resistance to producing incorrect or unsupported claims.

6.2 Official Ranking Metric
The official ranking metric is the Tri-Metric Composite Score (TCS), defined as:

TCS = 0.4 × Detection + 0.3 × Hallucination Robustness + 0.3 × Perception

Teams are ranked by TCS in descending order within each track.

6.3 Tie-Breaking
In the event of a tie in TCS, rankings will be determined in the following order:

  1. Higher Detection score
  2. Higher Hallucination Robustness score
  3. Higher Perception score
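For concreteness, here is a minimal Python sketch of the ranking rule, assuming per-dimension scores are floats in [0, 1] (team names and values are toy examples):

# Tri-Metric Composite Score as defined in Section 6.2.
def tcs(detection: float, hallucination: float, perception: float) -> float:
    return 0.4 * detection + 0.3 * hallucination + 0.3 * perception

teams = [
    # (name, detection, hallucination robustness, perception)
    ("TeamAlpha", 0.90, 0.80, 0.70),
    ("TeamBeta", 0.85, 0.90, 0.75),
]

# Sort descending by TCS; ties fall through to the tie-breakers in order.
ranking = sorted(
    teams,
    key=lambda t: (tcs(t[1], t[2], t[3]), t[1], t[2], t[3]),
    reverse=True,
)
for rank, (name, d, h, p) in enumerate(ranking, start=1):
    print(rank, name, round(tcs(d, h, p), 4))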

7. External Data and Models

7.1 Permitted Resources
The use of external resources is permitted, including: external datasets, pretrained models, foundation models, large language models (LLMs) and multimodal large language models (MLLMs), and other publicly available resources. All external resources used must be clearly disclosed in the submitted abstract and method description.

7.2 Prohibited Practices
The following are strictly prohibited:

  • Use of test set labels or private annotations
  • Any form of data leakage from the test set
  • Any attempt to reconstruct or infer hidden ground truth
  • Use of any resources not disclosed in the method description

8. Leaderboard

  • A public leaderboard will be maintained on Codabench throughout the submission period, updated automatically after each validated submission.
  • The leaderboard displays TCS scores for each track independently.
  • The final leaderboard is determined at the close of the submission period (June 10, 2026). Results submitted after the deadline will not be considered.
  • The organizers reserve the right to verify the top-ranked submissions before confirming final results. Winners will be announced on June 15, 2026.
  • If a participant believes there is a scoring error or evaluation issue, they may flag the concern by emailing trident.at.mm26.mgc@gmail.com before the final results are announced on June 15, 2026.

9. Code and Model Release

Winners are strongly encouraged to release their code and model weights to support reproducibility and benefit the research community. While release is not mandatory, teams that do so will be acknowledged in the official challenge report. All participants are required to retain their code and model weights for at least 30 days after the results announcement. Winners will be asked to provide their code to the organizers for validation purposes.

10. Awards and Recognition

  • One winner will be selected per track (Image, Video, Audio) based on the final TCS ranking.
  • Each winning team will be invited to submit a paper describing their method to the ACM MM 2026 main conference.
  • Acceptance of the invited paper is subject to review by the challenge organizers. The organizers do not guarantee acceptance.
  • At least one full registration to ACM MM 2026 is required for the paper to be published in the proceedings.

11. Presentation Policy

ACM Multimedia 2026 is an on-site event only. All papers and contributions must therefore be presented in person by one of the authors; remote presentations will not be hosted or allowed. Papers and contributions not presented on-site will be considered a no-show and removed from the conference proceedings. Further details will be provided on how to handle exceptional situations in which none of the authors is able to attend the conference in person.

12. Disqualification

The organizers reserve the right to disqualify any team found to have:

  • Violated these Participation Guidelines or the TRIDENT EULA
  • Used prohibited resources or engaged in data leakage
  • Submitted manipulated, fraudulent, or misleading results
  • Colluded with other teams to circumvent submission limits or evaluation procedures
  • Failed to disclose external resources used in their method
  • Failed to provide a valid abstract with their submission

Disqualified teams will be removed from the leaderboard and will not be eligible for awards.

13. Contact

For questions regarding these guidelines, please contact:
Email: trident.at.mm26.mgc@gmail.com

Registration

The challenge registration is open from April 3, 2026 to June 1, 2026. Please note that each team must be registered by a responsible team leader (e.g., a faculty member, researcher, or staff member with formal affiliation).

Procedure

Step 1: Registration

Please fill out the official Registration Form and review the EULA before submitting. Registration must be submitted by a responsible team leader (e.g., a faculty member, researcher, or staff member with formal affiliation). Registration closes on June 1, 2026.

Step 2: Download Dataset

Data for the TRIDENT Challenge can be acquired from HuggingFace after approval from the TRIDENT Organizing Committee.

Step 3: Download Starter Kit

Participants must download the starter kit from the official GitHub repository and follow the instructions.

Step 4: Submit to Codabench

Participants must run inference on the released test set and submit their predictions through the official competition platform.

Important Dates

All deadlines are at 11:59 p.m. Anywhere on Earth (AoE).

Registration Opens: Apr 3, 2026
Test Data Released; Result Submission Opens: May 8, 2026
Registration Closes: Jun 1, 2026
Result Submission Deadline: Jun 10, 2026
Winners Announced: Jun 15, 2026
Paper Submission Deadline: Jun 25, 2026
Paper Decision: Jul 16, 2026
Camera-Ready Deadline: Aug 6, 2026
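AoE corresponds to UTC-12, so a deadline of 11:59 p.m. AoE falls at 11:59 a.m. UTC on the following day; a quick Python check for the result-submission deadline:

from datetime import datetime, timedelta, timezone

# Anywhere on Earth (AoE) is UTC-12.
AOE = timezone(timedelta(hours=-12))

deadline = datetime(2026, 6, 10, 23, 59, tzinfo=AOE)
print(deadline.astimezone(timezone.utc))  # 2026-06-11 11:59:00+00:00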

Organizers

Wen-Huang Cheng
National Taiwan University

Hong-Han Shuai
National Yang Ming Chiao Tung University

Khoa D. Doan
VinUniversity

Hongxia Xie
Jilin University

Ling Lo
National Yang Ming Chiao Tung University

Jian-Yu Jiang-Lin
National Taiwan University

Kang-Yang Huang
National Taiwan University

Ling Zou
National Taiwan University

Sponsors

  • Artificial Intelligence Center of Research Excellence, National Taiwan University (NTU AI-CoRE)
  • NVIDIA Academic Grant Program
