Tools

This section presents all the tools developed within the framework of the TELEMETRY project. For each tool, you will find a brief description along with information about the partner organisation responsible for its development.

Secure Information Event Acquisition (SIEA) Pipeline 

Responsible partner: NOKIA

The Secure Information Event Acquisition (SIEA) Pipeline ingests events from different sources, aggregates them and distributes them. First, the aggregation stage collects information from the different TELEMETRY tools, such as anomaly detection. During aggregation it is ensured that the event rate is adapted to the requirements of the downstream tools. The aggregated, prioritized and filtered events are forwarded via the distributor to the SIEA, which then delivers the events to the downstream tools, such as the SSM tool or the ACRAM. The SIEA is aware of the status of these follow-up tools and controls the message flow by polling their status. If the message flow is stalled, new incoming events are queued.
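
A minimal sketch of this flow, assuming illustrative class and method names rather than the actual NOKIA implementation: events are buffered, rate-limited, and only forwarded while the polled downstream tool reports that it is ready.

```python
import queue
import time

class SieaPipeline:
    def __init__(self, downstream, max_events_per_sec=100):
        self.downstream = downstream            # e.g. an SSM or ACRAM client (assumed interface)
        self.min_interval = 1.0 / max_events_per_sec
        self.buffer = queue.Queue()             # stalled events wait here

    def ingest(self, event):
        """Aggregation step: collect events from TELEMETRY tools such as anomaly detection."""
        self.buffer.put(event)

    def distribute(self):
        """Distributor step: poll downstream status and forward at a bounded rate."""
        while not self.buffer.empty():
            if not self.downstream.is_ready():  # status polling
                break                           # flow stalled: keep events queued
            self.downstream.deliver(self.buffer.get())
            time.sleep(self.min_interval)       # enforce the rate limit
```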

Responsible partner: University of Southampton

Spyderisk is an asset-based, knowledge-based risk simulator of socio-technical systems that follows an ISO 27005 approach and has been developed by the University of Southampton. It has a core schema that describes the fundamental concepts used within asset-based risk modelling, and domain-specific knowledge that uses this schema to encode knowledge of a given domain, such as the different types of assets, threats and consequences that need to be considered.

A user of Spyderisk builds a model of their socio-technical system under test, which can comprise IoT devices, data, people, places, and ICT hardware and software. Spyderisk then takes this model together with the knowledge base to determine the likelihood of different risks being present within the system, along with their risk levels, based on the calculated likelihood levels and user-asserted impact levels. The user can then explore the different risks and threats in the system, along with controls that can be applied to block or mitigate them, lowering their likelihoods. Spyderisk can also recommend controls to reduce the likelihood of risks.
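
As a rough illustration of how a likelihood level and an impact level combine into a risk level in an ISO 27005-style matrix (a toy lookup, not Spyderisk's actual calculation):

```python
# Ordinal scales for likelihood (calculated) and impact (user-asserted).
LEVELS = ["very_low", "low", "medium", "high", "very_high"]

def risk_level(likelihood: str, impact: str) -> str:
    # Toy combination rule: risk is (roughly) the ceiling of the average level.
    score = LEVELS.index(likelihood) + LEVELS.index(impact)
    return LEVELS[min((score + 1) // 2, len(LEVELS) - 1)]

print(risk_level("high", "medium"))  # -> "high"
```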

The knowledge base developed for Spyderisk contains knowledge of assets, asset-based relationships, threats, risks, consequences and controls within a Device Under Test (DUT), as well as the cyber-physical systems in which it is deployed. TELEMETRY has extended this to include cybersecurity risks by considering a DUT as a system in its own right that is deployed in a wider system. As an example, a Residential Gateway (RGW) Router is a complex system in its own right, comprising software and hardware from many different sources, but the RGW is deployed within the wider system of a domestic environment, where it provides internet access and private networking.

Responsible partner: NOKIA

The Nokia Anomaly Detection (NAD) Pipeline has a training phase, in which data is transformed, enhanced and tested in various ways. In this phase the Time Series Processor (TSP) and the Automated Model Builder (AMB) work together to automatically identify and train the best-fitting model. The operator can decide whether to run fully automated, with no manual control, or to influence the type of final model. The final step of training is to fine-tune the model output to the domain and to the events that the specific use case requires. After training, the NAD pipeline applies the model to the real-time stream of data and the security assessment (SA) outputs the events. The task of this pipeline is to report anomalies in near real-time during the operational phase of the DUT.
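
A hedged sketch of the two phases (illustrative code, not Nokia's TSP/AMB implementation): a training step that automatically picks the best-fitting model from candidates, and an operational step that scores a real-time stream and emits events.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

def train_best_model(window: np.ndarray):
    """AMB-style selection: fit candidate models, keep the best by a simple criterion."""
    candidates = [IsolationForest(random_state=0), OneClassSVM(nu=0.05)]
    # Toy criterion: prefer the model whose scores spread the training data most.
    return max((m.fit(window) for m in candidates),
               key=lambda m: np.std(m.decision_function(window)))

def operate(model, stream, threshold=-0.2):
    """SA-style output: report anomalies in near real-time from the data stream."""
    for sample in stream:
        if model.decision_function(sample.reshape(1, -1))[0] < threshold:
            yield {"event": "anomaly", "sample": sample.tolist()}
```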

Responsible partner: ENG 

BACON (Anomaly-Based Component for intrusion detection) leverages Federated Learning to detect anomalies in IoT ecosystem traffic patterns, enabling decentralized training across devices while preserving data privacy. It uses Federated Learning to train a generative model across distributed IoT devices, enabling each device to learn and share model updates without transmitting raw data: a central server aggregates these updates to build a global model that captures the typical behaviour of IoT devices. The global model is then used locally to detect deviations from normal behaviour. This approach helps preserve the privacy of IoT devices while still effectively detecting and mitigating potential threats, creating robust models based on potentially heterogeneous features and patterns.
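
A minimal federated averaging (FedAvg) sketch, assuming a toy weight vector and training step; BACON's actual aggregation and model architecture are more elaborate. Each device computes a local update on traffic data that never leaves the device, and the server only averages the resulting weights.

```python
import numpy as np

def local_update(global_weights, local_traffic):
    # Placeholder for on-device training: raw traffic stays on the device,
    # only the updated weights are shared with the server.
    gradient = np.random.randn(*global_weights.shape) * 0.01  # illustrative
    return global_weights - gradient

def federated_round(global_weights, devices):
    updates = [local_update(global_weights, d) for d in devices]
    return np.mean(updates, axis=0)   # server-side aggregation: FedAvg

weights = np.zeros(8)
for _ in range(5):                    # five communication rounds
    weights = federated_round(weights, devices=range(3))
```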

An anomaly detection model usually generates a binary classification, normal or abnormal, for each record in the analysed dataset, but the TELEMETRY toolkit aims to provide more expressive outcomes that help security teams better understand and address incidents. BACON currently provides an initial binary classification of network traffic data, normal or abnormal; in the future it will offer more detailed and informative output, representing deviations from normal activity both in terms of communication with unrecognized patterns and in terms of packet quantity.

Responsible partner: i4RI  

The Misuse Detection ML Toolkit is an innovative solution designed to enhance the security and resilience of modern digital systems. Developed within the context of cutting-edge research, the toolkit combines advanced machine learning techniques with practical tools to detect and respond to misuse in real time. 

At its foundation, the toolkit learns what “normal” system behavior looks like – whether through patterns in user activities or the typical operation of devices and sensors. By continuously comparing real-world behavior against this baseline, it can quickly identify deviations that may indicate security threats, operational anomalies, or misuse. 

Initially envisioned as a set of libraries for training AI models, the toolkit has since evolved. It now also orchestrates and manages the execution of these models, while providing intuitive visualization tools that help security analysts interpret the results. When suspicious behavior is detected, the toolkit generates timestamped alerts highlighting the factors that triggered the detection, offering clear and actionable insights. 

Designed with both research and practical application in mind, the Misuse Detection ML Toolkit leverages a range of machine learning methods – from decision trees and support vector machines to deep learning techniques suited for analyzing complex data patterns. Models are trained and validated with real-world scenarios to ensure reliability, accuracy, and a low false-positive rate. 

To support real-time performance at scale, the toolkit integrates with high-throughput data ingestion technologies like Apache Kafka, allowing it to process large volumes of data efficiently. An integrated dashboard enables security teams to monitor system status, review alerts, and access explanations of AI decision-making, increasing transparency and trust. 
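
A hedged sketch of that ingestion path, with the topic name, record fields and scoring threshold all assumed for illustration: records are consumed from Kafka, scored against the learned baseline, and turned into timestamped alerts.

```python
import json
from datetime import datetime, timezone
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "telemetry-events",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode()),
)

def is_misuse(record) -> bool:
    # Stand-in for the trained misuse-detection model.
    return record.get("score", 0.0) > 0.9

for msg in consumer:
    if is_misuse(msg.value):
        print({"ts": datetime.now(timezone.utc).isoformat(),
               "alert": "suspicious behaviour", "record": msg.value})
```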

Through its combination of intelligent detection, scalable architecture, and user-friendly design, the Misuse Detection ML Toolkit represents a significant advancement in cybersecurity research – helping to bridge the gap between emerging AI technologies and their real-world application in safeguarding digital environments. 

Responsible partner: ATC (Athens Technology Center) 

r-Monitoring is an all-in-one, lightweight monitoring solution designed for a wide range of computing systems, including those with limited resources or diverse architectures such as IoT devices, edge nodes, and servers. It provides continuous monitoring of system performance, resource utilization, process activity (including scanning of running processes), sensitive file monitoring, CVE-based vulnerability detection, and network activity, ensuring full visibility and early detection of anomalies or security threats. Built on a modular architecture, it features a resource-efficient agent, an analysis engine, and a real-time dashboard interface. With support for multiple CPU architectures, including ARM and x86, and seamless integration via APIs and data pipelines, r-Monitoring fits easily into existing environments. It helps users reduce operational complexity and respond quickly to performance issues or security risks. 
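
As a rough illustration of the kind of lightweight collection loop such an agent runs (illustrative code, not the actual r-Monitoring agent), using the cross-platform psutil library:

```python
import time
import psutil  # pip install psutil

def snapshot():
    """Collect the kinds of signals the tool monitors: resource use and processes."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "processes": [p.info["name"] for p in psutil.process_iter(["name"])],
    }

while True:
    sample = snapshot()
    # In the real tool, samples feed the analysis engine and the dashboard.
    print(sample["cpu_percent"], sample["memory_percent"], len(sample["processes"]))
    time.sleep(30)
```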

Responsible partner: WRCVE 

ACRAM (Access Control Risk Assessment Model) is an advanced system designed to evaluate the risks associated with access control in IT systems by integrating vulnerability data, user behavior analytics, and access policy analysis. The core idea of ACRAM is to provide a comprehensive and dynamic assessment of the security posture of access control mechanisms by analyzing the interaction between subjects (users) and objects (services, resources) in an information system. 

The system is implemented entirely in MATLAB, using the Fuzzy Logic Toolbox to model and evaluate imprecise or ambiguous data. This choice of methodology allows ACRAM to reflect the inherent uncertainty in real-world environments, where exact parameters are often unavailable or fluctuate over time. ACRAM evaluates risk for each subject–object pair based on several key factors such as authentication level, access rights, behavioral anomalies, and the current vulnerability state of system components. 

A distinctive feature of ACRAM is its use of indicators derived from both software and hardware (data from tools integrated by TELEMETRY project partners). This data is used to quantify the severity and likelihood of potential vulnerabilities, forming the basis of a normalized risk score. These scores help system administrators prioritize mitigation actions and adapt access control policies accordingly.

The ACRAM approach defines a knowledge base built on a set of “if-then” rules generated through fuzzy logic. These rules describe how input parameters—such as system vulnerabilities, user behavior, and access levels—influence the risk level. Each input factor is categorized into three to five tiers of significance, based on industry standards like CIS Benchmarks and known vulnerabilities (CVEs), as well as practical expertise from system administrators.

ACRAM supports incident detection and response by monitoring access activity and identifying abnormal user behavior. It provides actionable recommendations for incident response, such as modifying access rights, retaining logs, quarantining suspicious processes or devices, and notifying responsible personnel. This makes it a valuable tool not only for real-time security management but also for auditing and long-term policy refinement.
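
A Python analogue of such fuzzy "if-then" evaluation may look as follows (the real system is implemented in MATLAB with the Fuzzy Logic Toolbox; the tiers and rules here are purely illustrative):

```python
def tier(x: float) -> str:
    """Map a normalised input in [0, 1] to one of three significance tiers."""
    return "low" if x < 0.33 else "medium" if x < 0.66 else "high"

RULES = {
    # (vulnerability state, behavioural anomaly, access level) -> risk level
    ("high", "high", "high"): "critical",
    ("high", "medium", "high"): "high",
    ("low", "low", "low"): "low",
}

def risk(vuln: float, anomaly: float, access: float) -> str:
    """Evaluate one subject-object pair against the rule base."""
    key = (tier(vuln), tier(anomaly), tier(access))
    return RULES.get(key, "medium")   # default tier for uncovered combinations

print(risk(0.9, 0.5, 0.8))  # -> "high"
```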

By leveraging a systematic approach to risk modeling and combining it with adaptive access policy mechanisms, ACRAM offers a modern solution for organizations seeking to improve the effectiveness of their access control systems. It not only helps detect weaknesses but also actively contributes to the dynamic and automated management of access policies in response to evolving threats. 

Responsible partner: MTU 

The Auditable Data Infrastructure (ADI) is a core element of the TELEMETRY architecture. It provides a common data infrastructure where cybersecurity indicators that are reported by the various TELEMETRY tools can be recorded and made available for other tools that perform risk and trust assessment. Further, it provides a reliable and tamper-proof way of recording so that the indicators are available for cybersecurity audits as well as for incident analysis. 

The ADI is based on Distributed Ledger Technology (DLT) as its technological basis. DLT provides key features that enable data sharing among the TELEMETRY tools and auditable record-keeping: 

  • Decentralised: There is no single database in a single location, but a ledger that is shared among several nodes, avoiding a single point of failure and reducing the risk of manipulation as multiple nodes maintain synchronised copies of the ledger. 
  • Immutable and append-only: Any data that is committed to the ledger cannot be altered or deleted. New data does not overwrite previous data but is appended to the ledger. This is a critical feature for auditability, as it ensures that whatever is recorded is being kept on the record and cannot be tampered with.  
  • Consensus-based: As multiple nodes maintain copies of the ledger, any addition to the ledger must be based on a consensus among the participating nodes, preventing single rogue actors from inserting false information. 
  • Transparent: Every participating node can see all transactions, when they occur and by whom they are made. This ensures that a reliable history is being kept for auditing purposes. 

While DLT is the underlying basis, the ADI provides a RESTful API on top which abstracts from the underlying DLT, providing the tools with an interface that is independent from the actual DLT implementation that is used underneath. Further, the ADI adds user management and access control to ensure only authorised users have access, which is essential as cybersecurity testing and monitoring results may contain sensitive information about the system’s current security posture. 

Data transactions that are sent to the ADI have to comply with a predefined “context”. A context is a JSON format which defines the structure and meaning of a data transaction, similar to a database schema in a conventional database. This includes structures for data as well as metadata. In addition, it defines permissions related to this context, i.e. which users can write or read data transactions for this context.  

Any data transaction must include a context id, indicating the context it relates to. It is then validated against that context, ensuring that the user is permitted to submit this data transaction and that the transaction complies with the structure that is given in the context. This also allows the consumers of those data, such as the risk or trust assessment tools as well as system operators or auditors, to query based on context to receive and understand all transactions that have been seen for that specific context. 
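
A minimal sketch of this mechanism, with field names and the permission structure assumed for illustration rather than taken from the ADI's actual schema: a context defines the structure and permissions, and every transaction is validated against the context it names.

```python
# Hypothetical context: structure, meaning and permissions for one data type.
context = {
    "id": "anomaly-events-v1",
    "fields": {"tool": str, "severity": str, "timestamp": str},
    "write": ["nad-pipeline"],
    "read": ["trust-analyser", "auditor"],
}

def validate(tx: dict, ctx: dict, user: str) -> bool:
    if tx.get("context_id") != ctx["id"]:
        return False                       # transaction names the wrong context
    if user not in ctx["write"]:
        return False                       # user not permitted to write this context
    data = tx.get("data", {})
    return set(data) == set(ctx["fields"]) and all(
        isinstance(data[k], t) for k, t in ctx["fields"].items()
    )

tx = {"context_id": "anomaly-events-v1",
      "data": {"tool": "NAD", "severity": "high",
               "timestamp": "2025-01-01T00:00:00Z"}}
assert validate(tx, context, user="nad-pipeline")
```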

Responsible partner: SINTEF 

The TELEMETRY fuzzing tool is a powerful, user-friendly fuzz testing solution built on top of boofuzz, a proven open-source Python framework designed for network protocol fuzzing. Fuzz testing (or fuzzing) is a software testing technique where a system is exposed to a wide array of unexpected, malformed, or invalid inputs. This approach helps uncover hidden vulnerabilities, crashes, or unexpected behavior, ultimately enhancing the resilience and security of software systems – in our case those handling network communications.  

The fuzzing tool leverages the robust capabilities of boofuzz while adding an intuitive interface, making advanced fuzz testing accessible to developers and testers, including those without deep security expertise. By incorporating automation scripts and detailed documentation, the fuzzing tool reduces technical barriers and enables consistent, repeatable testing of firmware and network protocol implementations. 
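
The underlying boofuzz pattern looks like the following minimal script (the target address and the FTP USER command are illustrative; the TELEMETRY tool wraps this kind of definition in its own interface and automation):

```python
from boofuzz import (Session, Target, TCPSocketConnection,
                     s_initialize, s_static, s_string, s_get)

# Point the session at the device or service under test.
session = Session(target=Target(connection=TCPSocketConnection("192.168.1.10", 21)))

s_initialize("user")        # one protocol request definition
s_static("USER ")           # fixed part of the message
s_string("anonymous")       # fuzzable field: boofuzz mutates this value
s_static("\r\n")

session.connect(s_get("user"))
session.fuzz()              # run the mutations and log crashes or hangs
```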

Designed to support both physical and virtualized network devices, the fuzzing tool gives developers the flexibility to integrate fuzz testing into diverse development and testing environments. It allows for efficient detection of edge-case vulnerabilities early in the development cycle – well before firmware is released – greatly reducing the risk of post-deployment security flaws. 

For network device manufacturers, the fuzzing tool serves as a key component in a secure development lifecycle. It empowers teams to proactively identify and mitigate vulnerabilities, improve protocol handling robustness, and ensure compliance with industry security standards. The result is a more secure, reliable product that inspires trust among end-users and stakeholders. 

In summary, the TELEMETRY fuzzing tool enhances the vulnerability detection capabilities of network device developers by offering a user-friendly, accessible, and powerful fuzz testing solution – bridging the gap between open-source innovation and enterprise-grade usability. 

Responsible partner: SINTEF 

Gaining visibility into the software inside IoT devices is often difficult, especially when vendors do not provide a Software Bill of Materials (SBOM). Many IoT products rely on proprietary, closed-source software, leaving organizations with little insight into potential security risks.

Our SBOM generation tool makes it easier to create detailed SBOMs even when official documentation or any vendor support is unavailable. It helps security teams and system operators identify the software components running within devices, improving transparency and supporting better risk management.
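
One basic technique for gathering such evidence when no vendor SBOM exists is to scan a firmware image for printable version strings; the sketch below illustrates the general idea and is not SINTEF's implementation (the patterns shown are examples).

```python
import re

def component_hints(firmware_path: str):
    """Return version strings found in a firmware image as candidate SBOM entries."""
    data = open(firmware_path, "rb").read()
    text = data.decode("latin-1")   # map every byte so regex search works
    pattern = re.compile(r"(OpenSSL [\d.]+[a-z]?|BusyBox v[\d.]+|dnsmasq-[\d.]+)")
    return sorted(set(pattern.findall(text)))

for hit in component_hints("firmware.bin"):
    print(hit)   # each hit is a candidate component to verify and record
```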

Designed for the unique challenges of IoT environments, the tool offers a practical solution for maintaining up-to-date SBOMs, helping organizations meet growing security and regulatory expectations. By enabling a clearer understanding of device software, the tool empowers users to take proactive steps toward securing their technology ecosystems and ensuring greater trust in connected systems.

Responsible partner: KU Leuven (COSIC Research Group) 

The Secure Software Updates tool is a cryptographic library developed by KU Leuven’s COSIC research group within the TELEMETRY project. It provides a secure and lightweight mechanism for performing firmware and software updates on resource-constrained IoT and embedded devices. These devices often lack the capacity to use traditional update protocols that are too computationally heavy or bandwidth-intensive, leaving them exposed to vulnerabilities and outdated firmware. 

The tool addresses this by integrating lightweight Message Authentication Code (MAC) algorithms, optimised key management, and dynamic access control policies into a modular and efficient update framework. It ensures the authenticity, integrity, and authorised distribution of update packages, even in environments characterised by low power, intermittent connectivity, and decentralised infrastructure. 
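
A minimal sketch of MAC-protected update verification using Python's standard library (the actual tool uses lightweight MAC algorithms suited to constrained devices; HMAC-SHA256 is a stand-in here):

```python
import hmac
import hashlib

def verify_update(package: bytes, tag: bytes, device_key: bytes) -> bool:
    """Accept an update package only if its MAC tag verifies under the device key."""
    expected = hmac.new(device_key, package, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

key = b"per-device-secret-key"                  # illustrative key material
package = b"firmware v2.1 payload..."
tag = hmac.new(key, package, hashlib.sha256).digest()  # attached by the update server
assert verify_update(package, tag, key)                # device accepts the update
```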

Designed with scalability in mind, the Secure Software Updates tool is protocol-agnostic and can be easily embedded into existing firmware update workflows. It is compatible with diverse communication stacks and supports a decentralised architecture, making it suitable for modern IoT deployments where devices may be distributed across different administrative domains. 

The tool is currently being developed and evaluated in simulated IoT environments to fine-tune its performance and security properties. Upon completion, it is expected to be released as a cryptographic library, under an open-source licence, to support integration by research and developer communities working on secure device management. This work is built upon COSIC’s long-standing expertise in cryptographic protocol engineering and contributes to TELEMETRY’s broader goals of enhancing trust, security, and resilience in decentralised IoT ecosystems. 

Responsible partner: MTU 

The purpose of a Trust Analyser (TA) is to assess the trustworthiness of a device under test (DUT) or system under test (SUT). In the context of TELEMETRY, the TA developed in the project focuses on the long-term trustworthiness of the SUT in several Trust Evaluation Categories (TECs): 

  • Safety: Ensures that the system operates without causing harm to users, the environment, or other systems under both normal and abnormal conditions. 
  • Security: Protects the system from unauthorized access, malicious attacks, and data breaches, ensuring integrity, confidentiality, and availability. 
  • Privacy: Protects personal and sensitive information, ensuring compliance with data protection regulations and user consent. 
  • Reliability: Measures the system’s ability to perform its intended functions consistently and accurately over time. 
  • Resilience: Evaluates the system’s capacity to recover and maintain functionality during and after disruptions or failures. 
  • Uncertainty and Dependability: Assesses the system’s behavior under uncertain conditions and its ability to deliver expected outcomes despite variability. 
  • Goal Analysis: Examines the alignment of the system’s objectives with user needs, ensuring that trust-related goals are met effectively.

The TA uses outputs of other TELEMETRY tools as inputs to its trust analysis. Once it has all the inputs, it aggregates them, performs the analysis of the SUT and calculates a trust score that depends on predefined weights assigned to each TEC. This trust score represents the current trustworthiness of the SUT. 
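
A minimal sketch of that aggregation step, with the weights and the [0, 1] scale assumed for illustration (the TA's actual scoring model is more detailed):

```python
# Hypothetical predefined weights per Trust Evaluation Category; they sum to 1.
TEC_WEIGHTS = {"safety": 0.2, "security": 0.25, "privacy": 0.15,
               "reliability": 0.15, "resilience": 0.1,
               "uncertainty": 0.1, "goal_analysis": 0.05}

def trust_score(tec_scores: dict) -> float:
    """tec_scores: per-category values in [0, 1] derived from other tools' outputs."""
    return sum(TEC_WEIGHTS[tec] * score for tec, score in tec_scores.items())

print(trust_score({"safety": 0.9, "security": 0.8, "privacy": 0.95,
                   "reliability": 0.85, "resilience": 0.7,
                   "uncertainty": 0.6, "goal_analysis": 0.9}))
```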

Responsible partner: SINTEF 

The firmware analysis platform offers an environment for analyzing firmware (the software side of the device), enabling developers and security researchers to assess IoT device vulnerabilities without requiring physical hardware.

Unlike traditional digital twin approaches that rely on full device replication, this platform centers on firmware analysis, focusing on the software aspect of IoT security without the need for the original hardware. This streamlined approach provides a focused starting point for researchers and developers to conduct in-depth security assessments, offering visibility into software components that may pose security risks. By making firmware content more accessible and manageable, it enhances risk evaluation and enables organizations to strengthen their IoT security posture in a practical, scalable way.

Responsible partner: i4RI  

The Secure Deployment Platform is a modern, lightweight solution designed to make deploying applications safer, easier, and more reliable. Built on top of K3s, Docker, Helm charts, and a service mesh, it provides strong security guarantees without sacrificing simplicity or performance. 

At its core, the platform is powered by K3s, a streamlined, production-grade distribution of Kubernetes. K3s simplifies deployment while ensuring key security features such as built-in encryption (TLS), reduced attack surfaces, fine-grained access controls (RBAC), and secure communication between services. These features make it ideal for a wide range of environments, from edge devices and IoT to full-scale deployments.

The platform uses Docker containers to package applications securely and consistently, no matter where they are run. Helm charts add powerful package management, allowing teams to deploy applications quickly, version their configurations, and roll back if needed, all while maintaining best practices in security and maintainability.

A service mesh is integrated into the platform to enable encrypted service-to-service communication through automatic mutual TLS (mTLS). It also introduces traffic control, observability, and policy enforcement, supporting a “zero-trust” approach to network security inside the deployment. 

Managing the platform is made easier with Rancher, a user-friendly interface for controlling K3s clusters, and NGINX, which acts as the Ingress controller. NGINX handles incoming web traffic securely, providing advanced features like TLS termination, load balancing, and traffic filtering. 

By combining these components, the Secure Deployment Platform offers a robust foundation for securely running modern applications. It simplifies the complex task of Kubernetes-based deployment, giving research teams and operational users alike a safe and efficient environment for launching and managing services across different scenarios and use cases. 

Through this approach, the platform advances the state of secure deployments in research and innovation, helping to bring stronger cybersecurity practices into everyday application management. 

Responsible partner: ATC (Athens Technology Center) 

The Robot Anomaly Detection tool monitors robotic systems in real time to identify abnormal behavior and potential malfunctions. It uses advanced AI models to analyze multivariate sensor data and detect deviations from expected patterns. The tool includes a user-friendly dashboard that displays live robot status, severity levels of detected anomalies, and the specific sensors involved, using explainable AI techniques such as SHAP (SHapley Additive Explanations). By combining accurate time-series analysis with clear root cause insights, the tool enhances robotic system reliability, supports fast troubleshooting, and reduces operational risks. 
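
A hedged sketch of that explanation step (the model, sensor names and data are illustrative assumptions, not the tool's actual pipeline): an anomaly detector flags a reading, and SHAP attributes the detection to specific sensors.

```python
import numpy as np
import shap                                   # pip install shap
from sklearn.ensemble import IsolationForest

sensors = ["joint_torque", "motor_temp", "vibration"]
X = np.random.randn(500, 3)                   # stand-in for recorded sensor telemetry

model = IsolationForest(random_state=0).fit(X)
explainer = shap.TreeExplainer(model)         # IsolationForest is tree-based

sample = np.array([[4.0, 0.1, 3.5]])          # suspicious multivariate reading
if model.predict(sample)[0] == -1:            # -1 marks an anomaly in scikit-learn
    contributions = explainer.shap_values(sample)[0]
    culprit = sensors[int(np.argmax(np.abs(contributions)))]
    print(f"anomaly detected: dominant sensor = {culprit}")
```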

Responsible partner: ATC (Athens Technology Center) 

The Network Anomaly Detection tool identifies abnormal traffic patterns and potential cyber threats by analyzing network behavior rather than relying on predefined attack signatures. It detects suspicious activities such as unexpected traffic spikes, unauthorized access attempts, or unknown communication patterns. The tool processes network data at the stream level and enriches it with contextual metadata such as IP geolocation, ISP, and ASN. It leverages AI-driven models for behavioral analysis and SHAP-based explanations to pinpoint the specific traffic features responsible for anomalies. This enables more accurate detection and greater transparency.
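
A small sketch of the enrichment step (the lookup source and field names are assumptions): each flow record gains contextual metadata before it reaches the detection model.

```python
def enrich(flow: dict, geo_lookup) -> dict:
    """Attach geolocation, ISP and ASN metadata to a flow record."""
    meta = geo_lookup(flow["src_ip"])          # e.g. a local GeoIP/ASN database
    return {**flow,
            "country": meta.get("country"),
            "isp": meta.get("isp"),
            "asn": meta.get("asn")}

flow = {"src_ip": "203.0.113.7", "dst_port": 22, "bytes": 14800}
print(enrich(flow, lambda ip: {"country": "NL", "isp": "ExampleNet", "asn": 64500}))
```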

European Cyber Security Community Initiative (ECSCI)

The European Cyber Security Community Initiative (ECSCI) brings together EU-funded cybersecurity research and innovation projects to foster cross-sector collaboration and knowledge exchange. Its aim is to align technical and policy efforts across key areas such as AI, IoT, 5G, and cloud security. ECSCI organizes joint dissemination activities, public workshops, and strategic dialogue to amplify the impact of individual projects and build a more integrated European cybersecurity landscape.

Supported by the European Commission, ECSCI contributes to shaping a shared vision for cybersecurity in Europe by reinforcing connections between research, industry, and public stakeholders.

European Cluster for Cybersecurity Certification

The European Cluster for Cybersecurity Certification is a collaborative initiative aimed at supporting the development and adoption of a unified cybersecurity certification framework across the European Union. Bringing together key stakeholders from industry, research, and national authorities, the cluster facilitates coordination, knowledge exchange, and alignment with the EU Cybersecurity Act.

Its mission is to contribute to a harmonized approach to certification that fosters trust, transparency, and cross-border acceptance of cybersecurity solutions. The cluster also works to build a strong stakeholder community that can inform and support the work of the European Union Agency for Cybersecurity (ENISA) and the future European cybersecurity certification schemes.

CertifAI

CertifAI is an EU-funded project aimed at enabling organizations to achieve and maintain compliance with key cybersecurity standards and regulations, such as IEC 62443 and the EU Cyber Resilience Act (CRA), across the entire product development lifecycle. Rather than treating compliance as a one-time activity or post-development task, CertifAI integrates compliance checks and evidence collection as continuous, embedded practices within daily development and operational workflows.

The CertifAI framework provides structured, practical guidance for planning, executing, and monitoring compliance assessments. It supports organizations in conducting gap analyses, building compliance roadmaps, collecting evidence, and preparing for formal certification. The methodology leverages best practices from established cybersecurity frameworks and aligns with Agile and DevSecOps principles, enabling continuous and iterative compliance checks as products evolve.

A central feature of CertifAI is the use of automation and AI-driven tools—such as Retrieval-Augmented Generation (RAG) systems and Explainable AI—to support the interpretation of complex requirements, detect non-conformities, and generate Security Assurance Cases (SAC) with traceable evidence. The approach is organized into five main phases: preparation and planning, evidence collection and mapping, assessment execution, reporting, and ongoing compliance monitoring.

CertifAI’s methodology is designed to be rigorous yet adaptable, offering organizations a repeatable process to proactively identify, address, and document compliance gaps. This supports organizations not only in meeting certification requirements, but also in embedding a culture of security and compliance into daily practice.

Ultimately, CertifAI’s goal is to make compliance and security assurance continuous, transparent, and integrated, helping organizations efficiently prepare for certification while strengthening their overall cybersecurity posture.

DOSS

The Horizon Europe DOSS – Design and Operation of Secure Supply Chain – project aims to improve the security and reliability of IoT operations by introducing an integrated monitoring and validation framework to IoT Supply Chains.

DOSS elaborates a “Supply Trust Chain” by integrating key stages of the IoT supply chain into a digital communication loop to facilitate security-related information exchange. The technology includes security verification of all hardware and software components of the modelled architecture. A new “Device Security Passport” contains security-relevant information for hardware devices and their components. Third-party software, open-source applications, as well as in-house developments are tested and assessed. The centrepiece of the proposed solution is a flexibly configurable Digital Cybersecurity Twin, able to simulate diverse IoT architectures. It employs AI for modelling complex attack scenarios, discovering attack surfaces, and elaborating the necessary protective measures. The digital twin provides input for a configurable, automated Architecture Security Validator module, which assesses and provides pre-certification for the modelled IoT architecture with respect to relevant, selectable security standards and KPIs. To also ensure adequate coverage for the back end of the supply chain, the operation of the architecture is also protected by secure device onboarding, diverse security and monitoring technologies, and a feedback loop to the digital twin and the actors of the supply chain, sharing security-relevant information.

The procedures and technology will be validated in three IoT domains: automotive, energy and smart home.

The 12-member strong DOSS consortium comprises all stakeholders of the IoT ecosystem: service operators, OEMs, technology providers, developers, security experts, as well as research and academic partners.

EMERALD: Evidence Management for Continuous Compliance as a Service in the Cloud

The EMERALD project aims to revolutionize the certification of cloud-based services in Europe by addressing key challenges such as market fragmentation, lack of cloud-specific certifications, and the increasing complexity introduced by AI technologies. At the heart of EMERALD lies the concept of Compliance-as-a-Service (CaaS) — an agile and scalable approach aimed at enabling continuous certification processes in alignment with harmonized European cybersecurity schemes, such as the EU Cybersecurity Certification Scheme for Cloud Services (EUCS).

By focusing on evidence management and leveraging results from the H2020 MEDINA project, EMERALD will build on existing technological readiness (starting at TRL 5) and push forward to TRL 7. The project’s core innovation is the development of tools that enable lean re-certification, helping service providers, customers, and auditors to maintain compliance across dynamic and heterogeneous environments —including Cloud, Edge, and IoT infrastructures.

EMERALD directly addresses the critical gap in achieving the ‘high’ assurance level of EUCS by offering a technical pathway based on automation, traceability, and interoperability. This is especially relevant in light of the emerging need for continuous and AI-integrated certification processes, as AI becomes increasingly embedded in cloud services.

The project also fosters strategic alignment with European initiatives on digital sovereignty, supporting transparency and trust in digital services. By doing so, EMERALD promotes the adoption of secure cloud services across both large enterprises and SMEs, ensuring that security certification becomes a practical enabler rather than a barrier.

Ultimately, EMERALD’s vision is to provide a robust, flexible, and forward-looking certification ecosystem, paving the way for more resilient, trustworthy, and user-centric digital infrastructures in Europe.

SEC4AI4SEC

Sec4AI4Sec is a project funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101120393.

This project aims to create a range of cutting-edge technologies, open-source tools, and new methodologies for designing and certifying secure AI-enhanced systems and AI-enhanced systems for security. Additionally, it will provide reference benchmarks that can be utilized to standardize the evaluation of research outcomes within the secure software research community.

The project is divided into two main phases, each with its own name.

  • AI4Sec – stands for using artificial intelligence for security. It aims to democratize security expertise with AI-enhanced systems that reduce development costs and improve software quality; this part of the project uses AI to improve secure coding and testing.

  • Sec4AI – stands for security for AI-enhanced systems. These systems carry risks that make them vulnerable to new security threats unique to AI-based software, especially when fairness and explainability are essential.

The project considers the economic and technological impacts of combining AI and security.

On the economic side, the project focuses on leveraging AI to drive growth, productivity, and competitiveness across industries. This includes developing new business models, identifying new market opportunities, and driving innovation across various sectors.