Bucketing

Bucketing is the process of randomly assigning users to experiment branches. When a user is “bucketed” into an experiment, it means that the configuration in one of its branches (such as a change to part of the UI) can be activated, and that any interactions we record from that moment on can be associated with the experiment and branch identifier. The concept is explained in this video / presentation.

Which experiments?

This documentation applies to experiments launched to Desktop, iOS, and Android Firefox through the "Nimbus" or "Normandy" systems. Differences between platforms are noted when relevant.

Assumptions

In order to support the analysis of controlled experiments, we must be able to satisfy the following functional requirements:

  • We can randomly assign users to one or more branches of an experiment.
  • A single user can enroll in multiple experiments simultaneously.
  • We can specify characteristics that a client must have in order to bucket into an experiment, such as region.
  • We can assign users to unevenly distributed branches (e.g., 10% to A, 90% to B).
  • We can control interactions between experiments (i.e. ensure experiments do not overlap) when we want to.
  • We can observe which users have bucketed into which experiments/branches and when.

We assume the following statistical requirements:

  • Assignment of targeted clients to branches is uniformly random with respect to all observables. If we were to look at the set of users for each branch (where unique users are identified by the randomization unit), we should see roughly the same distribution of locale, location, profile age, etc.
  • Branch assignment must not depend on anything the user can influence.
  • Actual enrollment is probabilistically equal to the percentage of total traffic allocated to that branch. For example, if we configure an experiment with two equal branches to enroll 10% of the population, we should see 5% of the total population enroll in each branch.
  • Enrollment in a branch is deterministic. Given the same experiment configuration, interaction rules, and user identifier, the result should always be the same. Shipping a new experiment must not change the basis for assigning a client to a branch.
  • Enrollment in a branch is persistent. Once a user is bucketed into a branch, they should continue to see the same branch for the duration of the experiment.
  • We should be able to control undesired interactions between experiments based on the specific requirements of our system. For example, as a first step, we can’t enroll users in more than one branch that contains configuration for the same feature.

Implementation

At a high level, we bucket users into experiments client-side by taking a hash of a randomly generated user id and some configuration delivered from our experimentation servers. Assignment happens on the client when configuration is synced, and the client sends enrollment telemetry to record it.

Configuration

note

This example uses the Nimbus experiment format. While the Normandy format is different, the client-side algorithm is almost identical. Many fields have been omitted for brevity.

{
  "slug": "my-cool-test",
  "targeting": "browserSettings.update.channel == 'release'",
  "bucketConfig": {
    "start": 5000,
    "count": 2000,
    "total": 10000,
    "namespace": "aboutwelcome-1",
    "randomizationUnit": "normandy_id"
  },
  "branches": [
    { "slug": "control", "ratio": 1 },
    { "slug": "treatment", "ratio": 1 }
  ]
}
  • targeting specifies conditions that must be met before the client can be considered. In this case, the user must be in the release channel (beta or nightly users will not be considered).
  • count is a fraction of total representing the chance of getting bucketed. In this case, the chance is 20%.
  • start is an integer representing a "range" of buckets, which allows for isolation of experiments along a single namespace. In this example, the start is set to 5000, which would isolate it from users in an existing experiment with a start of 0 and a count of 5000.
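
Putting these fields together, here is a quick illustration (plain arithmetic, not client code) of how the example configuration above yields a 20% chance of bucketing:

# Simple illustration of the "my-cool-test" bucketConfig above: buckets
# 5000 through 6999 are eligible, i.e. count / total = 2000 / 10000 = 20%.
start, count, total = 5000, 2000, 10000
eligible = range(start, start + count)   # buckets 5000..6999
print(len(eligible) / total)             # 0.2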

Randomization Unit

Bucketing uses a stable unique identifier generated at startup. Note that this identifier is not client_id, which is the standard unit for aggregation for most data analysis in Firefox.

Desktop experiments use the normandy_id, a unique stable identifier generated by the ClientEnvironment module during first run and stored in a preference (see implementation). It differs from client_id in that it is not exposed to Telemetry and it is not synced across profiles / accounts.

Mobile experiments use the nimbus_id, a unique identifier generated by the Nimbus client during first run and stored in the experiments database (see implementation).

Experiment assignment

In order to randomize clients into experiments, we take a SHA-256 hash of the namespace and the randomization_unit, truncate it to 12 characters, and check whether the resulting value falls within the bucket range configured in the experiment.

Consider this example:

{
  "slug": "experiment-B",
  "bucketConfig": {
    "start": 3000,
    "count": 2000,
    "total": 10000,
    "namespace": "rutabaga",
    "randomizationUnit": "normandy_id"
  }
}

A client will be bucketed into the experiment if the input hash falls in the range 3000 to 4999:

              start    hash
                v       v
[0, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000]
                            ^
                           end
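
To make this concrete, here is a minimal sketch of the check in Python. It is not the actual Firefox implementation: the exact way the randomization unit and namespace are combined, truncated, and mapped onto buckets is assumed here (a hyphen-joined string whose truncated digest is taken modulo total), and the normandy_id value is invented for illustration.

import hashlib

def assigned_bucket(randomization_unit, namespace, total):
    # Assumed scheme: hash "<randomization unit>-<namespace>", truncate the hex
    # digest to 12 characters, and map it onto [0, total).
    digest = hashlib.sha256(f"{randomization_unit}-{namespace}".encode()).hexdigest()
    return int(digest[:12], 16) % total

def is_bucketed(randomization_unit, bucket_config):
    bucket = assigned_bucket(randomization_unit,
                             bucket_config["namespace"], bucket_config["total"])
    start = bucket_config["start"]
    # Bucketed if the value falls in [start, start + count), i.e. 3000-4999 here.
    return start <= bucket < start + bucket_config["count"]

config = {"start": 3000, "count": 2000, "total": 10000, "namespace": "rutabaga"}
print(is_bucketed("51a40bbe-11ac-4e3f-a4e1-e2d3f0e5b62f", config))  # hypothetical normandy_id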

Namespace rollovers

When a namespace is fully consumed (i.e., an experiment requests a bucket range beyond 9999), the namespace "rolls over". The namespace ID is updated from something like <application>-<feature>-<channel>-1 to <application>-<feature>-<channel>-2 and the clients are effectively rehashed to form 10000 new buckets. When an experiment launches on this new namespace, some fraction of the requested client range is already enrolled in an experiment (unlike a non-rollover situation, where the requested client range is guaranteed to yield available clients), so experiments will under-enroll relative to their desired amount.

As an example, suppose that 75% of the namespace has been consumed by experiments that have ended and that 20% of the namespace is consumed by an active experiment called A (i.e., A is live with a bucketConfig of {"start": 7500, "count": 2000, "total": 10000}). Suppose further that another 20% experiment called B is launched. In that situation, the namespace will roll over and buckets 0 through 1999 of the new namespace will be allocated to B. Some fraction of B's clients (approximately 20%) will still be enrolled in A, so even though they meet the enrollment criteria, they will not enroll. Thus, B will actually enroll 16% of the available userspace, even though it targeted 20%.
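
A back-of-the-envelope version of this example, where the 20% overlap with A is the assumption carried over from the scenario above:

requested_share = 0.20        # B's bucket range on the rolled-over namespace
blocked_by_A = 0.20           # fraction of B's range assumed still enrolled in A
effective_share = requested_share * (1 - blocked_by_A)
print(effective_share)        # 0.16 -> B enrolls about 16% of the userspace, not 20%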

In practice, experiments can run indefinitely and namespaces can roll over indefinitely, so one must account for any running experiment on any prior iteration of the namespace, not just the previous iteration. That said, the impact of this effect is moderated by retention (or lack thereof): a 1-year-old experiment using 20% of the clients will block a smaller fraction, perhaps 10%, because clients have churned and many of the clients in that experiment have dropped off while new clients have joined after that experiment enrolled. Further, the impact of this effect depends on the targeting itself. For example, experiments targeting new clients do not suffer from this problem (all clients who meet the targeting are guaranteed not to be in any previous experiments because they are new), while experiments that target the same client subset over and over are most impacted.

Branch assignment

Assuming a client has satisfied all targeting conditions and bucketed into an experiment, we will randomly assign a branch. Unlike experiments, branches cannot specify targeting conditions, and hashes are re-randomized for every experiment. We do this by:

  1. Assigning buckets equal to the ratios specified in each branch
  2. Taking a SHA-256 hash of the randomization unit and the experiment identifier (which is unique per experiment)
  3. Checking which range the input hash falls into

For example, given the following branch ratios:

{
  "slug": "experiment-123",
  "branches": [
    { "slug": "a", "ratio": 2 },
    { "slug": "b", "ratio": 5 },
    { "slug": "c", "ratio": 3 }
  ]
}

We will assign 20% of the buckets to branch a, 50% to b, and 30% to c. We take a hash of the client's normandy_id and the experiment slug (experiment-123) and see which bucket range it falls into:

           hash
             v
[a, a, b, b, b, b, b, c, c, c]
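
Here is a minimal sketch of branch assignment under the same assumptions as the bucketing sketch above: the hash input is assumed to be the randomization unit joined with the experiment slug, and each branch owns a run of consecutive buckets proportional to its ratio.

import hashlib

def pick_branch(randomization_unit, experiment_slug, branches):
    # Total number of branch buckets equals the sum of the ratios (2 + 5 + 3 = 10 here).
    total = sum(branch["ratio"] for branch in branches)
    digest = hashlib.sha256(f"{randomization_unit}-{experiment_slug}".encode()).hexdigest()
    bucket = int(digest[:12], 16) % total
    upper = 0
    for branch in branches:
        upper += branch["ratio"]          # this branch owns buckets [upper - ratio, upper)
        if bucket < upper:
            return branch["slug"]

branches = [{"slug": "a", "ratio": 2}, {"slug": "b", "ratio": 5}, {"slug": "c", "ratio": 3}]
print(pick_branch("51a40bbe-11ac-4e3f-a4e1-e2d3f0e5b62f", "experiment-123", branches))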

Controlling interactions

By default, all experiments are allowed to interact and clients can bucket into multiple experiments simultaneously. However, sometimes we do want experiments to be exclusive, such as when they change the same set of variables.

In practice, we have three methods of preventing interactions between experiments:

Bucket range exclusion

Experiments that configure the same namespace will bucket identically for the same user identifier. This means we can make experiments mutually exclusive by giving them the same namespace and having them specify non-overlapping ranges (start / count).

Consider two experiments with the following configurations:

{
  "slug": "experiment-A",
  "bucketConfig": {
    "start": 0,
    "count": 3000,
    "total": 10000,
    "namespace": "rutabaga",
    "randomizationUnit": "normandy_id"
  }
},
{
  "slug": "experiment-B",
  "bucketConfig": {
    "start": 3000,
    "count": 2000,
    "total": 10000,
    "namespace": "rutabaga",
    "randomizationUnit": "normandy_id"
  }
}

Say we generate a value of 4562 from our hash on a given client. The client is bucketed into experiment-B because this falls in the range for that experiment (which is 3000 to 4999).

Note that we always re-randomize branch assignment, so we can't isolate based on branch.
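
A small self-contained demonstration of why this works, reusing the same assumed hashing scheme as the earlier sketch: a client gets exactly one bucket per namespace, so disjoint ranges can never both match.

import hashlib

def assigned_bucket(unit, namespace, total):
    # Same assumed scheme as the earlier sketch; not the actual client code.
    digest = hashlib.sha256(f"{unit}-{namespace}".encode()).hexdigest()
    return int(digest[:12], 16) % total

experiment_a = {"start": 0,    "count": 3000}
experiment_b = {"start": 3000, "count": 2000}

unit = "51a40bbe-11ac-4e3f-a4e1-e2d3f0e5b62f"       # hypothetical normandy_id
bucket = assigned_bucket(unit, "rutabaga", 10000)   # one bucket per client per namespace
in_a = experiment_a["start"] <= bucket < experiment_a["start"] + experiment_a["count"]
in_b = experiment_b["start"] <= bucket < experiment_b["start"] + experiment_b["count"]
assert not (in_a and in_b)   # disjoint ranges on one namespace are mutually exclusive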

Client-side rules

In Nimbus, clients are prevented from enrolling into two experiments that target the same feature with a simple check during enrollment. For example, a user cannot be enrolled in two experiments that change the aboutwelcome feature.

In Normandy, clients are prevented from enrolling in two experiments that change the same preference.
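
Below is a minimal sketch of the Nimbus feature-overlap rule described above, using a hypothetical in-memory data model (the real client stores and checks enrollments differently):

def can_enroll(candidate_feature, active_enrollments):
    # Refuse enrollment if any active enrollment already configures this feature.
    return all(e["feature"] != candidate_feature for e in active_enrollments)

active = [{"slug": "welcome-copy-test", "feature": "aboutwelcome"}]  # hypothetical enrollment
print(can_enroll("aboutwelcome", active))   # False: aboutwelcome is already in use
print(can_enroll("newtab", active))         # True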

Targeting exclusion

For specific experiments that should be excluded from others, a targeting expression can be included with a specific experiment identifier:

{
"targeting": "!activeExperiments['some-experiment']"
}