A/B testing is the lifeblood of data-driven product development. You launch an experiment, eagerly anticipating the results that will validate your hypothesis and improve user experience. But what comes next is often a tedious process of manual data pulling, spreadsheet gymnastics, and dashboard-checking that slows you down.
What if you could treat experiment analysis not as a manual chore, but as an automated, programmable service? What if your key A/B test metrics were defined, version-controlled, and tracked right alongside your application code?
This is the power of "KPIs as Code." With a platform like KPIs.do, you can transform your A/B test analysis from a reactive, manual process into a proactive, automated workflow that delivers insights directly where you work.
For most teams, the cycle of A/B test analysis is filled with friction:

- Manually pulling raw event data from the warehouse
- Wrangling results in one-off spreadsheets
- Repeatedly refreshing dashboards, waiting for significance

This manual approach is a bottleneck. It keeps valuable data locked in silos and prevents your team from making decisions at the speed of development.
KPIs.do enables you to manage your experimentation metrics just like you manage your infrastructure: with code. By defining your A/B test KPIs as a service, you create a single source of truth that automates calculation, tracks performance, and integrates seamlessly into your existing tools.
Before you even ship your experiment, you can define the metrics that will determine its success. Using a simple, version-controlled configuration file, you establish the "what" and "how" of your analysis. This creates a transparent and repeatable contract for your experiment's performance.
Imagine you're testing a new signup flow. You can define a KPI that tracks the conversion rate for both the control and challenger variants.
Example `kpi-config.yml`:

```yaml
kpi: signup-conversion-rate
description: "Tracks conversion rate for the new signup page A/B test."
metric: conversionRate

# Define the logic for calculating the metric
calculation:
  numerator: unique(userId where event='signup_complete')
  denominator: unique(userId where event='view_signup_page')
  window: 7d # Calculate over a 7-day rolling window

# Define the dimensions we can slice this KPI by
dimensions:
  - experimentId
  - variantName # 'control' or 'challenger'

# Set up automated alerts
triggers:
  - name: "Significant Winner Alert"
    condition: |
      kpi(variantName='challenger').value > kpi(variantName='control').value * 1.05 &&
      statisticalSignificance > 0.95
    action:
      type: slack
      channel: '#growth-experiments'
      message: "🚀 A/B Test `{{ kpi.dimensions.experimentId }}` has a winner! Challenger is up by {{ (kpi(variantName='challenger').value / kpi(variantName='control').value - 1) | percentage }}."
  - name: "Negative Impact Alert"
    condition: |
      kpi(variantName='challenger').value < kpi(variantName='control').value * 0.9
    action:
      type: pagerduty
      service: 'product-on-call'
      summary: "🚨 Critical Drop: A/B test challenger variant is significantly underperforming!"
```
This configuration file is now the heart of your experiment's analysis. It's readable, portable, and can be checked into your Git repository right alongside the feature code.
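To make the trigger logic concrete, here's a minimal TypeScript sketch of the kind of calculation such a trigger implies. The event counts are hypothetical, and the two-proportion z-test shown is one standard way to compute something like `statisticalSignificance`; it is an illustrative assumption, not KPIs.do's documented internals.

```typescript
// A hypothetical snapshot of 7-day counts per variant; in production these
// come from the events you track with the SDK, not hand-entered numbers.
interface VariantStats {
  views: number;   // unique users who viewed the signup page (the denominator)
  signups: number; // unique users who completed signup (the numerator)
}

function conversionRate(v: VariantStats): number {
  return v.signups / v.views;
}

// Abramowitz-Stegun approximation of the error function, used for the normal CDF.
function erf(x: number): number {
  const t = 1 / (1 + 0.3275911 * Math.abs(x));
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t +
      0.254829592) * t;
  const y = 1 - poly * Math.exp(-x * x);
  return x >= 0 ? y : -y;
}

// Two-proportion z-test: the two-sided confidence that the rates truly differ.
function statisticalSignificance(control: VariantStats, challenger: VariantStats): number {
  const pooled =
    (control.signups + challenger.signups) / (control.views + challenger.views);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.views + 1 / challenger.views)
  );
  const z = Math.abs(conversionRate(challenger) - conversionRate(control)) / se;
  return erf(z / Math.SQRT2);
}

const control: VariantStats = { views: 10_000, signups: 1_000 };    // 10.0%
const challenger: VariantStats = { views: 10_000, signups: 1_100 }; // 11.0%

const lift = conversionRate(challenger) / conversionRate(control) - 1; // 0.10
const confidence = statisticalSignificance(control, challenger);       // ≈ 0.98

// The "Significant Winner Alert" fires only when both clauses hold.
const fires = lift > 0.05 && confidence > 0.95; // true
console.log({ lift, confidence, fires });
```

With these illustrative numbers, the challenger shows a 10% lift at roughly 98% confidence, so both clauses of the "Significant Winner Alert" condition hold and the Slack message fires.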
With your KPI defined, your application simply needs to send the raw event data to the KPIs.do platform. Instead of complex ETL pipelines, your engineers can use a straightforward SDK to track user actions as they happen.
The KPIs.do agentic platform listens for these events and handles all the complex calculations defined in your config.
```typescript
import { kpis } from 'kpis.do';

// When a user sees a page in the experiment,
// this contributes to the 'denominator' in our calculation.
await kpis.track({
  name: 'signup-conversion-rate', // The KPI we are tracking
  data: {
    event: 'view_signup_page',
    userId: 'user-xyz-123'
  },
  dimensions: {
    experimentId: 'q3-new-signup-flow',
    variantName: 'challenger'
  }
});

// When that same user successfully signs up,
// this contributes to the 'numerator'.
await kpis.track({
  name: 'signup-conversion-rate',
  data: {
    event: 'signup_complete',
    userId: 'user-xyz-123'
  },
  dimensions: {
    experimentId: 'q3-new-signup-flow',
    variantName: 'challenger'
  }
});
```
Your code's responsibility is simple: report events. The KPIs.do service takes care of the rest.
Once your data is flowing, KPIs.do becomes your automated analysis engine. It continuously:

- Aggregates incoming events into the numerator and denominator you defined
- Calculates the conversion rate for each variant over the 7-day rolling window
- Slices the results by `experimentId` and `variantName`
- Evaluates your trigger conditions against the latest values

The moment a variant reaches statistical significance or a key metric drops unexpectedly, your configured actions are fired. Your team gets an immediate Slack message. A PagerDuty alert is triggered. A winning feature flag could even be rolled out automatically via a webhook. The manual loop of checking dashboards is broken forever.
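That last step deserves a sketch. Here's a minimal webhook receiver in TypeScript that promotes the winning variant when a trigger fires. The payload shape, the `/webhooks/kpi-winner` path, and the `rollOutVariant` helper are all hypothetical stand-ins for your own feature-flag setup, not a documented KPIs.do contract.

```typescript
import { createServer } from 'node:http';

// Hypothetical payload; the real shape depends on how you configure the trigger.
interface WinnerEvent {
  kpi: string;            // e.g. 'signup-conversion-rate'
  experimentId: string;   // e.g. 'q3-new-signup-flow'
  winningVariant: string; // e.g. 'challenger'
}

// Stand-in for your feature-flag system (LaunchDarkly, Unleash, a config table...).
async function rollOutVariant(experimentId: string, variant: string): Promise<void> {
  console.log(`Rolling out '${variant}' to 100% for experiment '${experimentId}'`);
  // e.g. await flags.update(experimentId, { serve: variant, rollout: 100 });
}

createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/webhooks/kpi-winner') {
    res.writeHead(404).end();
    return;
  }
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', async () => {
    // Auth, signature verification, and error handling omitted for brevity.
    const event: WinnerEvent = JSON.parse(body);
    await rollOutVariant(event.experimentId, event.winningVariant);
    res.writeHead(204).end();
  });
}).listen(8080);
```

In practice you'd add authentication and retries, but the shape of the loop really is this simple: the experiment analyzes itself and ships its own winner.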
Adopting a "KPIs as Code" approach for A/B testing fundamentally changes how you operate:

- A single source of truth: metric definitions live in one reviewable file, not scattered across dashboards
- Version control: experiment metrics are diffed, reviewed, and shipped alongside the feature code
- Automation: calculation, significance checks, and alerting run without anyone pulling data by hand
Your A/B tests are a critical part of your development process; their analysis should be too. By moving your experiment metrics out of brittle dashboards and into version-controlled code, you unlock a new level of speed, collaboration, and automation.
Ready to turn your A/B test analysis into an automated, measurable service? Explore KPIs.do and start treating your business performance as code.