Erasure Code Settings

This page covers settings that configure the erasure code parity used for objects written to the MinIO AIStor cluster. Parity determines how MinIO AIStor uses drive space and how it recovers objects after drive loss or similar failures.

You can establish or modify settings by defining:

  • an environment variable on the host system prior to starting or restarting the AIStor Server. Refer to your operating system’s documentation for how to define an environment variable.
  • a configuration setting using mc admin config set.

If you define both an environment variable and the similar configuration setting, MinIO AIStor uses the environment variable value.
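As a sketch of the two approaches, the following sets the standard storage class parity both ways. The alias name `myminio` and the parity value `EC:4` are illustrative; substitute your own alias and a parity appropriate for your erasure set size.

```shell
# Option 1: environment variable, defined before starting or restarting
# the AIStor Server. EC:4 is an example value.
export MINIO_STORAGE_CLASS_STANDARD="EC:4"

# Option 2: runtime configuration setting, assuming an alias named 'myminio'.
mc admin config set myminio storage_class standard=EC:4
```

If both are defined, the environment variable value takes precedence, as described above.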

Some settings have only an environment variable or a configuration setting, but not both.

Each configuration setting controls fundamental MinIO AIStor behavior and functionality. Test configuration changes in a lower environment, such as DEV or QA, before applying to production.

Standard storage class

MinIO AIStor ‘storage classes’ are distinct in function from AWS S3 storage classes. In the context of MinIO AIStor, changing the storage class changes the erasure code parity applied to objects in the deployment.

This setting defines the parity level for the deployment. MinIO AIStor shards objects written with the default STANDARD storage class using this parity value.

MinIO AIStor references the x-amz-storage-class header in request metadata for determining which storage class to assign an object. The specific syntax or method for setting headers depends on your preferred method for interfacing with the MinIO AIStor server.

Specify the value using EC:M notation, where M refers to the number of parity blocks to create for the object.
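The following illustrates the shard arithmetic implied by EC:M notation, using assumed example values (erasure set size of 16, parity of EC:4): each object is split into data shards and M parity shards across the set.

```shell
# Illustrative values, not a recommendation:
SET_SIZE=16
PARITY=4                          # the M in EC:M
DATA=$(( SET_SIZE - PARITY ))     # remaining shards hold object data
echo "data=${DATA} parity=${PARITY}"    # prints: data=12 parity=4
```

Larger M values increase fault tolerance at the cost of usable capacity.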

The following table lists the default values based on the erasure set size of the initial server pool in the deployment:

Erasure Set Size    Default Parity (EC:M)
1                   EC:0
2-3                 EC:1
4-5                 EC:2
6-7                 EC:3
8-16                EC:4
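The table above can be sketched as a small lookup function, shown here purely for illustration:

```shell
# Returns the default parity for a given erasure set size,
# mirroring the table above (illustrative sketch).
default_parity() {
  case "$1" in
    1)   echo "EC:0" ;;
    2|3) echo "EC:1" ;;
    4|5) echo "EC:2" ;;
    6|7) echo "EC:3" ;;
    *)   echo "EC:4" ;;   # erasure set sizes 8-16
  esac
}

default_parity 16    # prints: EC:4
default_parity 5     # prints: EC:2
```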

The minimum supported value is 0, which disables erasure coding protections entirely. Such deployments rely on the underlying storage controller or resource for availability and resiliency.

The maximum value depends on the erasure set size of the initial server pool in the deployment, where the upper bound is ERASURE_SET_SIZE/2. For example, a deployment with erasure set stripe size of 16 has a maximum standard parity of 8.

You can change this value after startup to any value between 0 and the upper bound for the erasure set size. MinIO AIStor only applies the changed parity to newly written objects. Existing objects retain the parity value in place at the time of their creation.

Reduced redundancy storage class

This setting defines the parity level for objects written with the REDUCED storage class.

MinIO AIStor references the x-amz-storage-class header in request metadata for determining which storage class to assign an object. The specific syntax or method for setting headers depends on your preferred method for interfacing with the MinIO AIStor server.

Specify the value using EC:M notation, where M refers to the number of parity blocks to create for the object.

This value must be less than or equal to MINIO_STORAGE_CLASS_STANDARD.

You cannot set this value for deployments with an erasure set size less than 2. The default is EC:1 for deployments with an erasure set size greater than 1, and EC:0 for deployments with an erasure set size of 1.
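As a sketch, the reduced-redundancy parity can be set the same two ways as the standard parity. The alias name `myminio` and the value `EC:2` are illustrative; the value must not exceed the standard parity.

```shell
# Environment variable, set before starting the server (EC:2 is an example):
export MINIO_STORAGE_CLASS_RRS="EC:2"

# Or as a runtime configuration setting, assuming an alias named 'myminio':
mc admin config set myminio storage_class rrs=EC:2
```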

Parity retention optimization

Controls how MinIO AIStor handles parity when one or more drives in an erasure set are offline at the time of a write operation.

Valid values: upgrade | ignore

Defaults to upgrade.

upgrade (default)

When one or more drives in the target erasure set are offline but write quorum is maintained, MinIO AIStor automatically increases the parity of the object by one for each offline drive.

For example, consider a deployment with an erasure set size of 16 and a configured parity of EC:4. If 2 drives in the target erasure set are offline at the time of write, MinIO AIStor writes the object with EC:6 parity instead of the configured EC:4. MinIO AIStor records this upgraded parity in the object’s metadata.
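The arithmetic in the example above is simply one additional parity shard per offline drive, sketched here with the example's values:

```shell
# Illustrative values from the example above:
CONFIGURED_PARITY=4      # the deployment's configured EC:4
OFFLINE_DRIVES=2         # drives offline at the time of write
UPGRADED=$(( CONFIGURED_PARITY + OFFLINE_DRIVES ))
echo "EC:${UPGRADED}"    # prints: EC:6
```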

This is the recommended default because it ensures every object has the same level of data protection as objects written to a fully healthy erasure set. Without parity upgrade, objects written during a drive or node outage have fewer parity shards and are at greater risk of data loss if additional drives fail before the cluster heals.

Capacity planning for write-heavy deployments

Parity upgrades increase the per-object storage footprint. On clusters with heavy write workloads, this can cause measurable capacity growth on the affected erasure sets while drives or nodes remain offline.

When a node goes offline for a prolonged period, every new object written to the affected erasure sets carries additional parity shards. On write-heavy clusters that already operate at high utilization (above 80% per drive), this additional parity can:

  • Cause uneven drive usage across the cluster, where drives in the affected erasure sets fill faster than drives in healthy sets.
  • Accelerate the approach toward full capacity on those drives, potentially triggering write rejections before other parts of the cluster are full.

To operate safely with the default upgrade behavior:

  • Keep per-drive utilization at or below 70%. This provides headroom for parity upgrades during extended outages without risking drive-full conditions.
  • Use lifecycle management expiration rules to remove expired or unnecessary content. Proactively reclaiming space reduces the impact of temporarily elevated parity on cluster capacity.
  • Plan for capacity expansion before utilization consistently exceeds 70%. Adding a new server pool provides additional capacity to absorb the overhead of parity upgrades during node-level failures.
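A rough sense of the capacity impact: raw bytes stored per logical byte is SET_SIZE / (SET_SIZE - PARITY). The sketch below compares the configured EC:4 against an upgraded EC:6 on a 16-drive erasure set, using integer math scaled by 100 to avoid floating point (values are illustrative):

```shell
SET_SIZE=16
# Overhead as a percentage: 100 * raw / logical, rounded down.
OVERHEAD_EC4=$(( 100 * SET_SIZE / (SET_SIZE - 4) ))   # ~1.33x raw overhead
OVERHEAD_EC6=$(( 100 * SET_SIZE / (SET_SIZE - 6) ))   # ~1.60x raw overhead
echo "EC:4=${OVERHEAD_EC4}% EC:6=${OVERHEAD_EC6}%"    # prints: EC:4=133% EC:6=160%
```

In this example, objects written during the outage consume roughly 20% more raw capacity than objects written to a healthy set, which is why the headroom guidance above matters.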

ignore

MinIO AIStor writes the object with the configured parity regardless of the state of drives in the erasure set. MinIO AIStor does not create additional parity shards for the object.

This prioritizes overall cluster capacity at the cost of reduced fault tolerance for objects written while drives are offline. For example, with EC:4 on a 16-drive erasure set where 2 drives are already offline, an object written with ignore still has only 4 parity shards spread across 14 available drives. That object can tolerate 2 more drive failures before losing read availability, compared to 4 more if parity had been upgraded to EC:6.
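The tolerance comparison in the example above can be sketched as follows: shards that were never written (because their drives were offline) count against the parity budget from the start.

```shell
# Illustrative values from the example above:
PARITY=4; MISSING=2                      # configured EC:4, 2 drives offline at write

echo $(( PARITY - MISSING ))             # with 'ignore': prints 2 (more failures tolerated)

UPGRADED=$(( PARITY + MISSING ))         # with 'upgrade': parity becomes EC:6
echo $(( UPGRADED - MISSING ))           # prints 4 (more failures tolerated)
```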

When to use ignore

Consider ignore for capacity-centric deployments where maximizing usable storage is more important than maintaining full fault tolerance during transient drive or node failures. This includes deployments that:

  • Store data that is reproducible or backed by an external source of truth, such as analytics outputs, derived datasets, or cache tiers.
  • Operate with higher base parity (for example, EC:8 on a 16-drive erasure set) that already provides substantial fault tolerance even without upgrades.
  • Run near capacity by design and cannot absorb the overhead of upgraded parity without risking drive-full conditions.

Objects written with ignore while drives are offline have fewer parity shards than objects written to a healthy erasure set. These objects tolerate fewer subsequent drive failures before becoming unavailable. Ensure your operational procedures account for this reduced protection during outages.

Legacy environment variable

MINIO_STORAGE_CLASS_OPTIMIZE

MINIO_STORAGE_CLASS_OPTIMIZE provides the same functionality as MINIO_ERASURE_PARITY_FAILURE and is supported for backward compatibility. Use MINIO_ERASURE_PARITY_FAILURE for new deployments.

Valid values: availability | capacity

Defaults to availability.

  • availability is equivalent to MINIO_ERASURE_PARITY_FAILURE=upgrade
  • capacity is equivalent to MINIO_ERASURE_PARITY_FAILURE=ignore

If both MINIO_ERASURE_PARITY_FAILURE and MINIO_STORAGE_CLASS_OPTIMIZE are set, parity upgrade occurs if either setting enables it.
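The legacy-to-current value mapping above can be sketched as a small translation function, shown here purely for illustration:

```shell
# Maps legacy MINIO_STORAGE_CLASS_OPTIMIZE values to their
# MINIO_ERASURE_PARITY_FAILURE equivalents (illustrative sketch).
translate_optimize() {
  case "$1" in
    availability) echo "upgrade" ;;
    capacity)     echo "ignore"  ;;
  esac
}

translate_optimize availability   # prints: upgrade
translate_optimize capacity       # prints: ignore
```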

Storage class comment

Adds a comment to the storage class settings.
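For example, assuming an alias named `myminio` and an arbitrary comment string:

```shell
mc admin config set myminio storage_class comment="parity reviewed for Q3 capacity plan"
```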