The second Workshop on AI for Cyber Threat Intelligence (WAITI 2025) will be held on
Monday, December 8, 2025, in conjunction with the Annual Computer Security Applications
Conference (ACSAC, https://www.acsac.org/), which takes place in Hawaii, USA. We have
merged the IoT Security and Cyber Threat Intelligence (IoT-SCTI) Workshop with WAITI to
cater to a wider audience.
Call for Papers
The cybersecurity landscape is in constant flux, inundating security professionals with an
overwhelming and ever-growing stream of data. Hidden within this torrent, which ranges
from textual indicators on social media and technical reports to discussions on dark web
forums, are critical insights that, if properly harnessed, can offer a strategic edge. Cyber
Threat Intelligence (CTI) has traditionally relied on manual analysis or rudimentary
keyword-based methods, resulting in significant inefficiencies, delayed responses, and
missed threat signals. Today, security analysts are challenged not only by the sheer volume
of data but also by increasingly sophisticated adversarial techniques such as code
obfuscation, misinformation campaigns, and advanced social engineering. The rapid pace of
threat evolution demands not just faster but smarter ways to extract intelligence and respond
effectively.
This is where Natural Language Processing (NLP) and, in particular, Large Language
Models (LLMs) are proving revolutionary. LLMs represent a paradigm shift in how we
process and understand unstructured text data. With their ability to comprehend context,
generate insights, and reason over language, LLMs have become indispensable in the CTI
pipeline: they enable scalable automation, accurate threat interpretation, and real-time
intelligence extraction from diverse and complex textual sources. By leveraging the deep
understanding and generative capabilities of LLMs, organizations can move beyond reactive
defense mechanisms to develop proactive, anticipatory strategies that adapt to emerging
threats with unprecedented speed and precision.
This workshop aims to spotlight the transformative potential of Artificial Intelligence, NLP,
and especially LLMs in revolutionizing cybersecurity, focusing on their application to CTI
gathering and analysis. It will provide a vibrant platform for researchers, practitioners, and
enthusiasts to explore cutting-edge approaches, share breakthroughs, and foster
collaboration in shaping the future of intelligent cyber defense.
We encourage original, high-quality contributions, preliminary work, and novel ideas on topics including, but not limited to:
Information extraction in cyber threat intelligence
Deep learning architectures for threat detection and analysis
Visualization techniques for CTI
Large Language Models for CTI
Intelligence-driven threat hunting
Attribution
Sharing of CTI
Hunting and tracking adversaries
Threat quantification and prioritization
Explainable AI in cybersecurity
Dynamic threat adaptation with LLMs
Multimodal threat intelligence fusion
LLMs for malware detection
Bias mitigation in LLMs for cyber threat intelligence
Federated learning for threat detection
LLMs for social media threat analysis
LLMs and visual content
Multimodal Large Language Models (MLLMs)
Understanding technical language for CTI
Cross-lingual threat intelligence using LLMs
Misinformation detection in CTI with LLMs
LLM-powered threat scenario generation
Human-in-the-loop systems for LLM-based CTI
Explainable threat intelligence reports with LLMs
Benchmarking LLM performance
Legal and ethical considerations for AI
Zero Trust and CTI
CTI in the IoT domain
AI/GenAI for user behavior analysis and inference
GenAI for mobility management and network control
AI/GenAI within 6G networks
Blockchain-based approaches for CTI
Applying CTI / case studies
CTI for IoT systems
Using IoT for sourcing CTI
Network- and host-based intrusion detection systems for IoT
Web crawlers and scrapers for IoT threat information
Cyber threat intelligence feeds focusing on IoT
Static and dynamic malware analysis for IoT
Machine learning and data mining techniques for IoT threat analysis
IoT-specific threat actor profiling and attribution
Domain generation algorithm analysis for IoT malware
Identifying trends and patterns in IoT attacks and vulnerabilities
Correlating IoT threat data from multiple sources
Evaluating the reliability and accuracy of IoT CTI sources
Addressing false positives and noise in IoT threat data
Collaborative platforms for sharing and responding to IoT threats
LLM-driven anomaly and fault detection in networks
LLMs for network security and privacy
LLMs for detecting malware in network traffic
QoS/QoE prediction and optimization using LLMs
Federated learning with LLMs
LLMs and network data analytics
Summarization of network incidents and logs via LLMs
LLM applications in 6G, IoT, and space-terrestrial integrated networks
Ethical considerations, fairness, and bias in LLM-driven systems
LLMs for cybersecurity education and training in networked environments
Submissions should consist of a PDF of no more than 6 double-column pages, excluding references and appendices (at most 2 additional pages combined). The total PDF must not exceed 8 pages.
Anonymity: Submissions must be anonymous; author names and affiliations should not be included. Authors may cite their own work but must do so in the third person.
Submission Website: https://cmt3.research.microsoft.com/WAITI2025/. (The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft, and they bore all expenses, including costs for Azure cloud services as well as for software development and support.)
We also encourage Systematization-of-Knowledge (SoK) papers that distill previously published work at the intersection of LLMs and cybersecurity.
Publication
Accepted papers will be published by IEEE Computer Society Conference Publishing Services (CPS) and will appear in the Computer Society Digital Library and IEEE Xplore® in an ACSAC Workshops 2025 volume alongside the main ACSAC 2025 proceedings.
A small additional publication fee will be required.