CASE STUDY

Securing and Accelerating
AI/ML Development by >45%

German Secure IT Provider

Client

German IT Services Provider

Industry

Government AI/ML Implementation

Use Case

AI/ML project lifecycle security

Product

Open source KitOps

Executive Summary

This case study covers a trusted IT security provider for critical government services. They are the go-to provider for delivering AI/ML projects for security-conscious public sector agencies and private sector companies. Their customers have come to expect high-velocity innovation and development without compromising on security.

As a leader in secure IT services, their goal was to standardize and secure every stage of their AI/ML development lifecycle — from experimentation to deployment — while ensuring compliance, reproducibility, and operational scalability.

With their in-house development team, they considered building their own tools or extending existing ones, but chose instead to adopt KitOps ModelKits for packaging, versioning, and storing both their customers’ and their own internal AI/ML projects. Using an off-the-shelf open source project rather than building their own tooling immediately saved them weeks of time. However, the larger savings came with each of their project cycles: since adopting KitOps they have seen project times reduced by over 13% per cycle.

As a next step, adding the Jozu Hub will accelerate development cycles by an additional 34%.

The Challenge

A Lack of Central Control

To cover everything from data preparation, to model training, to testing and deployment to inference servers, the customer had brought together a dozen tools, including Jupyter Notebooks, MLflow, and DVC. They also had existing application management tools like Git, CI/CD pipelines, and Kubernetes.

Their portfolio of AI/ML tooling, while functional, didn’t provide the security and compliance their customers demanded. As is common in AI/ML, their toolchain was made up of a dozen tools that didn’t tightly integrate with each other, adding delays and risk to handoffs and coordination within their teams.

Their initial AI/ML projects had been successful, but lacked the centralized control and compliance that the high-security organizations they worked with demanded:

  • Fragmented Asset Management: Sensitive assets like datasets, models, and code were stored across disparate systems (Git, Amazon S3, Jupyter Notebooks, MLflow, and Kubernetes).
  • Security Gaps: Existing tools and storage solutions were vulnerable to tampering, lacked fine-grained and consistent authorization, and had no unified versioning or auditability.
  • Sluggish Handoffs: Development and deployment teams struggled to move models from experimentation to production efficiently and securely.

These limitations caused delays, created needless compliance risks, and hindered the customer’s ability to scale their secure AI/ML offerings.

They considered forking the open source Jupyter and MLflow projects to add secure storage to each independently, but quickly dismissed the idea because of the high initial and ongoing investment it would require to build and maintain these bespoke tools.

The Solution

Secure Packaging Using KitOps Plus Their Existing Toolchain

After investigating their options they found KitOps ModelKits to be the clear winner:

  • KitOps ModelKits include versioned components — model weights, code, datasets, metrics, documentation — tracked in a single manifest file.
  • ModelKits are tamper-proof, signed, and traceable, forming a cryptographically verifiable chain of provenance that can be audited at any point.
  • All assets are published to a central enterprise registry with policy-based access control.
  • ModelKits are OCI Artifacts, so they are compatible with any tool that can use containers. This makes integrating with existing test, CI/CD, or deployment systems easy.

This not only secured AI/ML assets but also standardized how teams built, shared, and reused components across projects. Using the open source KitOps project backed by Jozu, they implemented a standard and secure packaging process for all AI/ML projects across their development cycle.
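To make the packaging concrete, a ModelKit is defined by a single Kitfile manifest that declares every asset in one place. The sketch below follows the Kitfile layout from the KitOps documentation; the package name, version, and paths are hypothetical:

```yaml
# Kitfile: the single manifest that defines a ModelKit
# (illustrative; names, versions, and paths are hypothetical)
manifestVersion: "1.0"
package:
  name: fraud-detector
  version: 1.2.0
  description: Example of a model packaged with its data, code, and docs
model:
  name: fraud-detector
  path: ./model.safetensors
datasets:
  - name: training-data
    path: ./data/train.csv
code:
  - path: ./src
docs:
  - path: ./README.md
```

Because every component is listed in one manifest, versioning the ModelKit versions the model, data, code, and documentation together.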

"We have MLFlow, but executives need a tamper proof and auditable solution.
We were considering customizing MLFlow to add secure storage but now we don’t have to!"
MLOps Tech Lead

Building a Secure Data Pipeline Using ModelKits

The customer was able to reuse many of their existing data pipelines, while adding KitOps packaging to give teams reusable base datasets as well as tuned datasets specific to each project. This maximizes reuse and sharing, allowing teams to move faster and with greater confidence.

At a high level, there are three main stages to their data pipelines:

  1. Extract & Clean Raw Data → Shared as General Use Base Dataset ModelKits
  2. Tune Base Datasets for Specific Projects → Published as Tuning Dataset ModelKits
  3. Validate with Project-Specific Data → Stored as Validation Dataset ModelKits
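The three stages above can be scripted against the kit CLI. The sketch below only builds the commands (the registry address, tags, and directory layout are hypothetical; actually running them would require the kit CLI on the PATH, e.g. via `subprocess.run`):

```python
REGISTRY = "registry.example.com/datasets"  # hypothetical central registry


def pack_cmd(context_dir: str, tag: str) -> list[str]:
    """Build a `kit pack` command for a directory containing a Kitfile."""
    return ["kit", "pack", context_dir, "-t", f"{REGISTRY}/{tag}"]


def push_cmd(tag: str) -> list[str]:
    """Build a `kit push` command for a previously packed ModelKit."""
    return ["kit", "push", f"{REGISTRY}/{tag}"]


# One tagged ModelKit per pipeline stage
stages = [
    ("./base", "base:v1"),                        # 1. cleaned base dataset
    ("./tuning", "project-a-tuning:v1"),          # 2. project-specific tuning set
    ("./validation", "project-a-validation:v1"),  # 3. validation set
]

for ctx, tag in stages:
    # Print the commands; in practice you would execute them with
    # subprocess.run(pack_cmd(ctx, tag), check=True)
    print(" ".join(pack_cmd(ctx, tag)))
    print(" ".join(push_cmd(tag)))
```

Each stage ends with a pushed, versioned ModelKit, so downstream teams pull a tagged dataset rather than copying files by hand.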

Secure Model Development using ModelKits

During development, most data science teams use Python in Jupyter Notebooks or other development tools.
KitOps provides a Python library for building and managing ModelKits, so teams simply import that package into their code and then package up a ModelKit for each milestone or checkpoint that they (or other teams) might want to come back to.

For example, a ModelKit is created for each new version of the model (usually through a CI/CD build pipeline), but also whenever teams reach a point where they are choosing between two significant paths; this allows them to quickly load a ModelKit and investigate the other path if the first choice doesn’t work out.

At a high level this process involves four steps:

  1. Pull or Author a Base Model → Encapsulated as the Base ModelKit
  2. Tune & Experiment → Each version or experiment is stored as a Tuned ModelKit
  3. Integrate with Agents / Services → Completed integrations are stored as a Ready-for-Production ModelKit
  4. Deploy to Production → Teams can have the Jozu Hub generate Kubernetes or container deployments that match the infrastructure and requirements of the specific customer

The entire workflow ensures reproducibility, security, and speed from initial model development through production rollout. It’s easy for teams to create their own secure data and model pipelines using templates.
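The four steps above map onto a handful of kit CLI invocations. As with the dataset sketch, the code below only assembles the commands rather than executing them, and the registry, model names, and experiment tags are hypothetical (KitOps also ships a Python library, but the CLI is used here to avoid assuming its API):

```python
REGISTRY = "registry.example.com/models"  # hypothetical model registry


def kit(*args: str) -> list[str]:
    """Build (but don't execute) a kit CLI invocation."""
    return ["kit", *args]


# 1. Pull or author a base model
steps = [kit("pull", f"{REGISTRY}/base-llm:v1")]

# 2. Tune & experiment: one tagged ModelKit per experiment or checkpoint
for run in ("exp-001", "exp-002"):
    steps.append(kit("pack", ".", "-t", f"{REGISTRY}/tuned:{run}"))

# 3. Integration complete: promote the winning experiment to a
#    ready-for-production tag
steps.append(kit("tag", f"{REGISTRY}/tuned:exp-002", f"{REGISTRY}/prod-ready:v1"))

# 4. Push for deployment (Jozu Hub can then generate matching
#    Kubernetes or container deployments)
steps.append(kit("push", f"{REGISTRY}/prod-ready:v1"))

for cmd in steps:
    print(" ".join(cmd))
```

Because every checkpoint is a tagged ModelKit, rolling back to an earlier experiment is a `kit pull` of the older tag rather than a hunt through notebooks and object storage.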

Measurable Impact

Measurable Impact with KitOps

The customer chose to start with KitOps because they favour open source technologies for their internal systems. This resulted in significant and immediate improvements:

Metric | Before | After KitOps
Time Saved per Iteration | — | 1.7 days saved (13%) per cycle
Asset Reuse | Word of mouth | Template ModelKits simplify reuse
Deployment | Manual and risk-prone | Automated with existing pipelines
Auditability | Difficult, days of work | Integrated into existing auditing tools

Additionally, the customer avoids vendor lock-in because KitOps is the market’s only vendor-neutral solution for secure packaging, versioning, and storage of AI/ML projects. It is even governed by the Cloud Native Computing Foundation (CNCF), which ensures critical projects like Kubernetes, Prometheus, and Envoy are maintained, secure, and vendor-neutral.

At the same time, Jozu supports KitOps and ensures that bugs or features that the customer needs are swiftly addressed.

Improve Security and Speed

How They Will Further Improve their Security and Speed with Jozu Hub

As a next stage, the customer is planning to use the Jozu Hub as their secure service catalogue for AI/ML projects. This will allow them to easily stay on top of public open source model advances, while ensuring that their development teams are only using base models that have acceptable licenses, lineage, and security posture.

Storing KitOps ModelKits in Jozu Hub will give them even greater benefits:

  • A centralized service catalogue for AI/ML projects, surfacing approved datasets and models with metadata and lineage.
  • Guided paths for development teams, making it easier for teams to use approved and vetted ModelKit-packaged datasets and models than to pull them from Hugging Face, where provenance and security are unknown.
  • Vulnerability scanning and provenance tracking for every model and pipeline stage.
  • Automated model deployment generation reducing handoff time and ensuring consistency across Kubernetes environments.

Together, KitOps and Jozu Hub will give the customer complete visibility, control, and agility over their AI/ML lifecycle — from data ingestion to production inference.

Benefits

Benefits of KitOps + Jozu Hub

Adding Jozu Hub further accelerates project cycle time while significantly reducing handoff friction between teams and between the customer and its clients.

Jozu’s Rapid Inference Containers speed deployment to Kubernetes by >7x, even compared to “fast” deployment solutions like NVIDIA NIM or TensorFlow Serving. By using Jozu Hub in conjunction with an internal developer portal, teams can be given “golden paths” for AI/ML development and avoid costly license and security issues.

Metric | Before | After KitOps | After KitOps + Jozu
Average Iteration Time Saved | — | 1.7 days saved (13%) | 5.9 days saved (45%)
Security Posture | Fragmented | Consistent & Centralized | End-to-End Provenance & Scanning
Asset Reuse | Word of mouth | Template ModelKits simplify reuse | Central service catalogue maximizes safe reuse
Deployment | Manual and risk-prone | Automated with existing pipelines | 7x increase in deployment speed to Kubernetes
Auditability | Difficult, days of work | Instantly available | Instantly available

Conclusion

KitOps and Jozu Accelerate and Secure AI/ML Projects

As an IT services provider, the customer faces strict expectations from their clients on the security, speed, and accuracy of the work they perform. Thanks to KitOps and Jozu, the customer transformed its AI/ML operations into a secure, scalable, and high-velocity development environment. The result is a model for how IT organizations can:

  • Safely operationalize AI
  • Meet stringent security and compliance requirements
  • Shorten project development times
  • Reduce costs
  • Protect themselves against vendor lock-in

Get Started by
Speaking with our
Engineering Team

Connect with our team to start a conversation. We're ready to collaborate, troubleshoot, and help you move forward.

Let's talk