In conjunction with ACM MM 2026.
Rio de Janeiro, Brazil
The rapid maturation of generative artificial intelligence has fundamentally altered the digital ecosystem, moving beyond simple image synthesis toward a sophisticated, multi-modal reality. While these advancements drive creative innovation, they have also given rise to a Synthetic Media Paradox: as deepfakes become visually and acoustically indistinguishable from authentic recordings, our detection systems grow increasingly powerful yet alarmingly less transparent. In a 2026 landscape where disinformation campaigns leverage high-fidelity images, temporal video sequences, and cloned audio in tandem, traditional binary detection that labels content as simply Real or Fake is no longer a sufficient defense. The TRIDENT Grand Challenge is born from the urgent need to transition from black-box detection toward a white-box forensic paradigm that prioritizes accountability, interpretability, and explainability.
The foundational philosophy of TRIDENT is rooted in the Forensic Triad: the belief that a trustworthy AI must not only reach the correct binary conclusion but must do so for the correct reasons. Previous challenges have often overlooked the Interpretability Gap, where models achieve respectable accuracy by exploiting dataset biases or non-semantic shortcuts rather than identifying actual generative artifacts. This gives rise to the Hallucination Dilemma, a critical failure mode in which a detector justifies a correct classification with fabricated evidence, pointing to phantom artifacts that do not exist. In legal, journalistic, or security contexts, such correct but ungrounded decisions are inadmissible and dangerous. TRIDENT closes this gap by requiring the model to demonstrate a triad-based deepfake examination:
Can the model identify and localize the fine-grained manipulation artifacts across image, video, and audio?
Does the model maintain high classification accuracy across diverse forgery families?
Is the model’s explanation grounded in reality, or is it fabricating evidence?
The TRIDENT Challenge 2026 is an ACM MM 2026 Grand Challenge on tri-modal deepfake perception, detection, and hallucination analysis. Participants are invited to compete across three independent tracks: Image, Video, and Audio, using the TRIDENT dataset. These guidelines govern all aspects of participation, from eligibility to submission and evaluation.
Participation is open to any research team worldwide, subject to the following conditions:
Participants may enter one or more of the following tracks:
Each track includes the following task types:
Participants must answer True/False Questions (TFQ) and Multiple-Choice Questions (MCQ). Required outputs are "True" or "False" for TFQ, and the selected option label for MCQ.
Given that the sample is known to be manipulated, the model must provide a structured description of observable artifacts and manipulation evidence.
Given a sample of unknown authenticity, the model must provide an authenticity label ("Likely Authentic" or "Likely Manipulated") and a short reasoning paragraph supporting the decision.
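The four task types above each call for a different output shape. As a purely illustrative sketch, the prediction entries might be assembled as below; all field names (`id`, `task`, `answer`, `label`, `reasoning`) and sample IDs here are hypothetical, and the authoritative schema is the one defined in the starter kit.

```python
import json

# Hypothetical prediction entries for the four task types described above.
# Field names and IDs are illustrative only; follow the starter kit's schema.
predictions = [
    # TFQ: the answer must be the string "True" or "False"
    {"id": "tfq_0001", "task": "TFQ", "answer": "True"},
    # MCQ: the answer is the selected option label
    {"id": "mcq_0001", "task": "MCQ", "answer": "B"},
    # Artifact description for a sample known to be manipulated
    {"id": "desc_0001", "task": "description",
     "answer": "Blending boundary along the jawline; inconsistent specular "
               "highlights across the two eyes."},
    # Authenticity judgment with a short supporting rationale
    {"id": "judge_0001", "task": "judgment",
     "label": "Likely Manipulated",
     "reasoning": "Temporal flicker around the mouth suggests frame-wise "
                  "face swapping."},
]

# Serialize all entries into a single JSON submission file
with open("TeamAlpha_image.json", "w") as f:
    json.dump(predictions, f, indent=2)
```

The same structure would carry over to the video and audio tracks, with only the sample IDs and evidence descriptions changing.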
The organizers will release the Training Set and the Validation Set. These datasets are provided for model training, method development, and local evaluation. No official leaderboard will be provided during this phase.
The organizers will release the Test Set (without ground-truth labels). Participants must run inference on the test set and submit their predictions through the official competition platform.
5.1 Platform
All submissions must be made through the official Codabench competition page. Links to each track's competition page will be provided upon dataset access approval. A starter kit will be provided after registration is validated, containing submission format examples and instructions to help participants get started.
5.2 Submission Limits
Each team is allowed a maximum of 3 submissions per day per track. Submissions exceeding this limit will be automatically rejected by the platform.
5.3 File Format
Submissions must be in JSON format. Files must follow the naming convention: {teamname}_{track}.json
For example: TeamAlpha_image.json, TeamAlpha_video.json, TeamAlpha_audio.json.
Only submissions that pass automatic format validation will be counted as valid. Invalid submissions will not appear on the leaderboard but will still count toward the daily limit.
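Because invalid files still consume the daily quota, it is worth sanity-checking a submission locally before uploading. The following sketch checks only the two rules stated here (the `{teamname}_{track}.json` naming convention and JSON well-formedness); the `check_submission` helper is hypothetical and does not replace the platform's own validator, which may enforce additional schema rules.

```python
import json
import re
from pathlib import Path

def check_submission(path: str) -> bool:
    """Local pre-upload sanity check: filename convention and JSON parse.
    Mirrors, but does not replace, the platform's own format validator."""
    name = Path(path).name
    # Expected naming convention: {teamname}_{track}.json
    if re.fullmatch(r".+_(image|video|audio)\.json", name) is None:
        return False
    try:
        with open(path) as f:
            json.load(f)  # file must contain well-formed JSON
    except (OSError, json.JSONDecodeError):
        return False
    return True
```

Running the check before each upload helps avoid burning one of the three daily attempts on a formatting mistake.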
5.4 Abstract Requirement
Each submission must be accompanied by a 1–2 page abstract describing the team's method. The abstract must be submitted as a PDF file named {teamname}_abstract.pdf and sent to trident.at.mm26.mgc@gmail.com before or alongside the team's final submission on Codabench. Submissions without a valid abstract will appear on the public leaderboard but will not be considered for the final award decision.
6.1 Evaluation Dimensions
Each track is evaluated across three dimensions:
Binary classification performance distinguishing real from fake content.
Fine-grained identification of manipulation artifacts using human-annotated evidence.
Reliability of model reasoning and resistance to producing incorrect or unsupported predictions.
6.2 Official Ranking Metric
The official ranking metric is the Tri-Metric Composite Score (TCS), defined as:
Teams are ranked by TCS in descending order within each track.
6.3 Tie-Breaking
In the event of a tie in TCS, rankings will be determined in the following order:
7.1 Permitted Resources
The use of external resources is permitted, including: external datasets, pretrained models, foundation models, large language models (LLMs) and multimodal large language models (MLLMs), and other publicly available resources. All external resources used must be clearly disclosed in the submitted abstract and method description.
7.2 Prohibited Practices
The following are strictly prohibited:
Winners are strongly encouraged to release their code and model weights to support reproducibility and benefit the research community. While release is not mandatory, teams that do so will be acknowledged in the official challenge report. All participants are required to retain their code and model weights for at least 30 days after the results announcement. Winners will be asked to provide their code to the organizers for validation purposes.
ACM Multimedia 2026 is an on-site event only. This means that all papers and contributions must be presented by a physical person on-site; remote presentations will not be hosted or allowed. Papers and contributions not presented on-site will be considered a no-show and removed from the proceedings of the conference. More details will be provided to handle unfortunate situations in which none of the authors would be able to attend the conference physically.
The organizers reserve the right to disqualify any team found to have:
Disqualified teams will be removed from the leaderboard and will not be eligible for awards.
For questions regarding these guidelines, please contact:
Email: trident.at.mm26.mgc@gmail.com
The challenge registration is open from April 3, 2026 to June 1, 2026. Please note that each team must be registered by a responsible team leader (e.g., a faculty member, researcher, or staff member with formal affiliation).
Please fill out the official Registration Form and review the EULA before submitting. Registration closes on June 1, 2026.
Data for the TRIDENT Challenge can be acquired from HuggingFace after approval from the TRIDENT Organizing Committee.
Participants must download the starter kit from the official GitHub repository and follow the instructions.
Participants must run inference on the released test set and submit their predictions through the official competition platform.