AB Testing as a Managed Service
Running meaningful AB tests often requires dedicated resources and expertise. To make it easier, we offer AB Testing as a managed service. This allows your team to focus on business priorities while we handle the entire process — from setup to evaluation.
As part of this service, our team covers the following key tasks:
- **Analytics implementation:** Ensure proper tracking and measurement are set up on the current website.
- **Traffic split configuration & experiment setup:** Define the experiment design and group allocation, and configure traffic splitting. → See Traffic Splitting & Group Assignment
- **Displaying logic & component control:** Implement or review the logic for showing or hiding components based on the variant. → See Displaying Logic & Component Control
- **Evaluation & reporting:** Analyze the collected metrics, perform significance testing, and provide a clear evaluation of experiment outcomes.
- **Continuous support:** Provide guidance and recommendations for further iterations or for scaling successful variants.
1. Introduction: What is AB Testing & What Can Be Measured
What is AB Testing?
AB testing (sometimes called split testing) is a method of comparing two or more variants (A, B, C, …) to see which one performs better on defined metrics. Traffic is split into groups (e.g. 50% sees variant A, 50% sees variant B), and their outcomes are compared.
In the context of e-commerce personalization, AB testing allows you to measure how different personalization strategies, UI placements, ranking logic, or component exposure affect business and engagement metrics.
What Can Be Measured
When you run AB tests on recommendation and search components, you can measure a variety of metrics. Common examples include:
| Metric category | Specific metrics | Why it matters |
|---|---|---|
| Engagement / usage | Click-through rate (CTR), widget interactions, and view rate | Indicates how compelling the variant is |
| Conversion / Sales | Add-to-cart, product views, purchases, revenue per visitor, average order value (AOV) | Shows direct business impact |
| Recommendation metrics | CTR from recommendation, conversion, attribution from recommendation | Evaluates recommendation efficiency |
| Search metrics | Search conversion, zero-result rate, CTR from search | Evaluates search efficiency |
| Behavioral metrics | Bounce rate, session duration, pages per session | Indicates user satisfaction |
| Uplift metrics | Difference (uplift) between variants, statistical significance | Proves which variant wins |
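The uplift row above can be made concrete: a two-proportion z-test is one common way to check whether a conversion-rate difference between variants is statistically significant. Below is a minimal sketch; the function names are illustrative and not part of any Perselio API.

```javascript
// Standard normal CDF via the Abramowitz & Stegun 7.1.26 approximation.
function normalCdf(z) {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989422804014327 * Math.exp(-z * z / 2);
  const p = d * t * (0.319381530 + t * (-0.356563782 +
    t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
  return z > 0 ? 1 - p : p;
}

// Compare conversion rates of control (cA of nA) and treatment (cB of nB).
function twoProportionZTest(cA, nA, cB, nB) {
  const pA = cA / nA;
  const pB = cB / nB;
  const pPool = (cA + cB) / (nA + nB);          // pooled conversion rate
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  const z = (pB - pA) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z))); // two-sided test
  return { upliftAbs: pB - pA, z, pValue };
}

// Example: 5.0% vs 6.0% conversion on 10,000 visitors per group.
const result = twoProportionZTest(500, 10000, 600, 10000);
```

With these example numbers the p-value falls well below the conventional 0.05 threshold, so the observed 1-percentage-point uplift would be declared significant.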
2. Traffic Splitting / Group Assignment
To enable AB testing, Perselio provides logic to allocate visitors into groups at specified ratios. This is handled by the same JavaScript Data Collector that performs tracking.
- **Define variants and ratios.** Example: 50% control, 50% treatment (or 30% / 70%).
- **The JavaScript script handles assignment:**
  - Checks whether a visitor already has a group assignment (via cookie); see Cookies.
  - If not, assigns one randomly according to the configured ratios.
  - Stores the assignment in a cookie for consistency.
  - The group name typically consists of the experiment name and the group, e.g. `bremen??0`.
- **Cookie persistence ensures consistency:**
  - The user remains in the same group across sessions.
  - This avoids flicker and ensures reliable test data.
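The ratio-based random assignment in the steps above can be sketched as follows; the function name and the shape of the ratios object are illustrative, not the actual Data Collector internals.

```javascript
// Pick a group at random according to configured ratios,
// e.g. { control: 0.3, treatment: 0.7 }. Ratios should sum to 1.
function assignGroup(ratios) {
  const r = Math.random(); // uniform in [0, 1)
  let cumulative = 0;
  for (const [group, ratio] of Object.entries(ratios)) {
    cumulative += ratio;
    if (r < cumulative) return group;
  }
  // Guard against floating-point rounding: fall back to the last group.
  return Object.keys(ratios).pop();
}

const group = assignGroup({ control: 0.5, treatment: 0.5 });
```

The result would then be written to the assignment cookie so the visitor keeps the same group on every subsequent visit.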
We use user-level randomization based on a pseudo-anonymous identifier: a long-lived browser cookie (see Cookies). Detailed pseudocode can be found below.
3. Displaying Logic & Component Control
Once assigned to a variant, components adapt their behavior accordingly.
How Components Are Controlled
- Each component checks the variant cookie and decides how to render.
- Control variant → baseline experience (default recommendations, default search).
- Treatment variants → show Perselio personalization, hide components, or change UI logic.
Implementation Approaches
- **Front-end controlled (default):** Your code inspects the variant and renders accordingly.
- **Perselio-driven toggling (customization):** Our script can show/hide components automatically based on the variant.
Considerations
- Control should mirror “business as usual.”
- Run variant check early to avoid flicker.
- For dynamic UI frameworks, ensure variant check happens before component mount.
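A minimal sketch of the front-end controlled approach, assuming the assignment lives in a cookie named `_saexp` with an `<experiment>?<group>` value; the exact cookie format and the function names are assumptions for illustration.

```javascript
// Extract the group for a given experiment from the cookie string.
// In the browser, pass document.cookie as cookieString.
function readVariant(cookieString, experiment) {
  const entry = cookieString
    .split(';')
    .map(c => c.trim())
    .find(c => c.startsWith('_saexp='));
  if (!entry) return null;
  const value = decodeURIComponent(entry.slice('_saexp='.length));
  const [name, group] = value.split('?');
  return name === experiment ? group : null;
}

// Decide what to render; run this early, before components mount,
// to avoid flicker.
function renderExperience(group) {
  if (group === '1') {
    return 'personalized'; // treatment: show Perselio personalization
  }
  return 'baseline'; // control (or unassigned): business-as-usual experience
}
```

An unassigned visitor falls through to the baseline branch, which keeps the control experience as the safe default.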
4. Analytics & Measurement
Components carry an item_id identifier in the HTML/DOM layer, so events can be accurately captured and attributed.
What is measured?
- Click-through rate of recommendation & search widgets
- Revenue attribution (purchase conversions attributed to widget interactions)
- Uplift metrics comparing control vs treatment
These metrics are available in the AB Testing dashboard.
Attachments
Pseudo-code of group assignment.
```
visitor goes to webpage
after the JS script is loaded:
    check whether the long-lived cookie _saexp is set
    if not set:
        use a random number (Math.random()) to pick a group and store it in _saexp
    else:
        if the experiment is not active:
            delete the old experiment from _saexp
            set the new experiment and use randomization to pick the group
        else:
            use the already-set group from the cookie
            prolong cookie validity
```
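The pseudocode above could be sketched as JavaScript. To keep the sketch testable outside a browser, the cookie is abstracted behind a small store object; in production the store would wrap `document.cookie`. The helper names and the `<experiment>?<group>` value format are assumptions for illustration (only the `_saexp` cookie name comes from the pseudocode).

```javascript
// Ensure the visitor has a stable group for the active experiment.
// `store` must provide get(key) -> string|null and set(key, value).
function ensureGroup(store, experiment, ratios) {
  const existing = store.get('_saexp');
  if (existing !== null) {
    const [name, group] = existing.split('?');
    if (name === experiment) {
      // Experiment still active: keep the group, prolong cookie validity.
      store.set('_saexp', existing);
      return group;
    }
    // Cookie holds an inactive experiment: fall through and re-randomize.
  }
  const r = Math.random();
  let cumulative = 0;
  let group = Object.keys(ratios).pop(); // fallback guards rounding errors
  for (const [g, ratio] of Object.entries(ratios)) {
    cumulative += ratio;
    if (r < cumulative) { group = g; break; }
  }
  store.set('_saexp', `${experiment}?${group}`);
  return group;
}

// In-memory stand-in for the browser cookie, usable for testing.
const memoryStore = {
  data: {},
  get(key) { return key in this.data ? this.data[key] : null; },
  set(key, value) { this.data[key] = value; },
};

const assigned = ensureGroup(memoryStore, 'bremen', { '0': 0.5, '1': 0.5 });
```

Calling `ensureGroup` again with the same experiment returns the stored group unchanged, which is exactly the persistence property the pseudocode describes.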
