
Experiments


Datadog Experiments is in Preview. Complete the form to request access.

Overview

Datadog Experiments helps teams run and analyze randomized experiments, such as A/B tests. These experiments help you understand how new features affect business outcomes, user behavior, and application performance, so you can make confident, data-backed decisions about what to implement.

Datadog Experiments consists of two components:

Getting started

To start using Datadog Experiments, configure at least one of the following data sources:

After configuring a data source, follow these steps to launch your experiment:

  1. Create a metric to evaluate your experiment.
  2. Create an experiment to define your hypothesis and optionally calculate a sample size.
  3. Create a feature flag and implement it using the SDK to assign users to the control and variant groups. A feature flag is required to launch your experiment.
  4. Launch your experiment to see the impact of your change on business outcomes, user journey, and application performance.
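As a rough illustration of the sample-size calculation mentioned in step 2, the standard two-proportion formula can be sketched in Python. This is a stdlib-only approximation for intuition, not the calculator built into Experiments:

```python
import math
from statistics import NormalDist

def sample_size_per_group(baseline: float, mde: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-group sample size for a two-proportion test.

    baseline: control conversion rate (for example, 0.10)
    mde: minimum detectable effect, absolute (0.02 means 10% -> 12%)
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)
```

Smaller minimum detectable effects require substantially larger samples, which is why defining the hypothesis and effect size before launch matters.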
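The assignment in step 3 is typically deterministic: a stable user ID is hashed so that each user always lands in the same group. A minimal sketch of that idea, independent of any particular feature-flag SDK (the function and key names here are hypothetical, not Datadog's API):

```python
import hashlib

def assign_variant(flag_key: str, user_id: str,
                   variants=("control", "variant")) -> str:
    """Deterministically bucket a user so repeat visits get the same group."""
    # Hash the flag key together with the user ID so different experiments
    # bucket the same user independently.
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always receives the same assignment:
assert assign_variant("new-checkout", "user-42") == assign_variant("new-checkout", "user-42")
```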

(Figure: the Experiments metrics view, showing business, funnel, and performance metrics with control and variant values and the relative lift for each metric. A tooltip on the Revenue metric shows non-CUPED values for Revenue per User, Total Revenue, and User Assignment Count across the control and variant groups.)
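The relative lift shown for each metric is the variant's change over the control, expressed as a fraction of the control value. An illustrative computation (not Datadog's implementation, which also supports CUPED variance reduction):

```python
def relative_lift(control_mean: float, variant_mean: float) -> float:
    """Relative lift of the variant over the control (0.10 == +10%)."""
    return (variant_mean - control_mean) / control_mean

# Revenue per user of $11.00 in the variant vs $10.00 in the control:
lift = relative_lift(10.00, 11.00)  # 0.10, i.e. a +10% relative lift
```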

Further reading

Additional helpful documentation, links, and articles: