AB Testing Managed Service

Running meaningful AB tests often requires dedicated resources and expertise. To make it easier, we offer AB Testing as a managed service. This allows your team to focus on business priorities while we handle the entire process — from setup to evaluation.

As part of this service, our team covers the following key tasks:

  1. Analytics implementation: Ensure that proper tracking and measurement are set up on the current website.

  2. Traffic split configuration & experiment setup: Define the experiment design and group allocation, and configure traffic splitting. → See Traffic Splitting & Group Assignment

  3. Displaying logic & component control: Implement or review the logic for showing or hiding components based on the assigned variant. → See Displaying Logic & Component Control

  4. Evaluation & reporting: Analyze the collected metrics, perform significance testing, and provide a clear evaluation of experiment outcomes.

  5. Continuous support: Provide guidance and recommendations for further iterations or scaling of successful variants.


1. Introduction: What is AB Testing & What Can Be Measured

What is AB Testing?

AB testing (sometimes called split testing) is a method of comparing two or more variants (A, B, C, …) to see which one performs better on defined metrics. Traffic is split into groups (e.g. 50% sees variant A, 50% sees variant B), and their outcomes are compared.

In the context of e-commerce personalization, AB testing allows you to measure how different personalization strategies, UI placements, ranking logic, or component exposure affect business and engagement metrics.

What Can Be Measured

When you run AB tests on recommendation and search components, you can measure a variety of metrics. Common examples include:

Metric category        | Specific metrics                                                                      | Why it matters
-----------------------|---------------------------------------------------------------------------------------|-----------------------------------------
Engagement / usage     | Click-through rate (CTR), widget interactions, view rate                              | Indicates how compelling the variant is
Conversion / sales     | Add-to-cart, product views, purchases, revenue per visitor, average order value (AOV) | Shows direct business impact
Recommendation metrics | CTR from recommendations, conversion and attribution from recommendations             | Evaluates recommendation efficiency
Search metrics         | Search conversion, zero-result rate, CTR from search                                  | Evaluates search efficiency
Behavioral metrics     | Bounce rate, session duration, pages per session                                      | Indicates user satisfaction
Uplift metrics         | Difference (uplift) between variants, statistical significance                        | Proves which variant wins
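To make the uplift row concrete: one common way to test whether a conversion-rate difference between control and treatment is statistically significant is a two-proportion z-test. A minimal sketch in JavaScript — the function name and the sample numbers below are illustrative, not part of the Perselio product:

```javascript
// Two-proportion z-test for conversion-rate uplift.
// convA/nA: conversions and visitors in control (A);
// convB/nB: conversions and visitors in treatment (B).
function upliftZTest(convA, nA, convB, nB) {
  const pA = convA / nA;
  const pB = convB / nB;
  // Pooled conversion rate under the null hypothesis (no difference)
  const pPool = (convA + convB) / (nA + nB);
  // Standard error of the difference of the two proportions
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  return {
    uplift: (pB - pA) / pA, // relative uplift of B over A
    z: (pB - pA) / se,      // |z| > 1.96 ≈ significant at the 5% level
  };
}
```

For example, 100 conversions out of 1,000 visitors in control versus 130 out of 1,000 in treatment gives a 30% relative uplift with z ≈ 2.1, which clears the conventional 5% significance threshold.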

2. Traffic Splitting / Group Assignment

To enable AB testing, Perselio provides logic to allocate visitors into groups at specified ratios. This allocation is handled by the same JavaScript Data Collector script.

  1. Define variants and ratios (e.g. 50% control, 50% treatment, or 30% / 70%).

  2. The JavaScript script handles assignment

    • Checks whether the visitor already has a group assignment (via cookie; see Cookies).
    • If not, assigns a group randomly according to the ratios.
    • Stores the assignment in a cookie for consistency.
    • The group name typically consists of the experiment name and the group, e.g. bremen??0
  3. Cookie persistence ensures consistency

    • The user remains in the same group across sessions.
    • This avoids flicker and ensures reliable test data.
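As a sketch, the variant/ratio definition from step 1 and the random draw from step 2 could look like the following. The experiment object shape and all names here are illustrative assumptions, not the actual Data Collector format:

```javascript
// Illustrative experiment definition: groups and their traffic ratios.
const experiment = {
  name: 'bremen',
  variants: [
    { group: 0, label: 'control', ratio: 0.5 },
    { group: 1, label: 'treatment', ratio: 0.5 },
  ],
};

// Pick a group with a single random draw in [0, 1) by walking the
// cumulative ratios; ratios are expected to sum to 1.
function pickGroup(exp, rnd) {
  let acc = 0;
  for (const v of exp.variants) {
    acc += v.ratio;
    if (rnd < acc) return v.group;
  }
  // Guard against floating-point rounding at the top of the range
  return exp.variants[exp.variants.length - 1].group;
}

// Usage: pickGroup(experiment, Math.random())
```

This cumulative-ratio walk generalizes directly to uneven splits (30% / 70%) and to more than two variants.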

We use user-level randomization based on a pseudo-anonymous identifier stored in a long-lived browser cookie (see Cookies). Detailed pseudocode can be found in the Attachments below.


3. Displaying Logic & Component Control

Once assigned to a variant, components adapt their behavior accordingly.

How Components Are Controlled

  • Each component checks the variant cookie and decides how to render.
  • Control variant → baseline experience (default recommendations, default search).
  • Treatment variants → show Perselio personalization, hide components, or change UI logic.

Implementation Approaches

  1. Front-end controlled (default): Your code inspects the variant and renders accordingly.
  2. Perselio-driven toggling (customization): Our script can show/hide components automatically based on the variant.
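A minimal sketch of the front-end-controlled approach: map the assigned variant to a rendering decision. The variant and component names here are illustrative, not a fixed contract:

```javascript
// Decide what each component should render based on the variant.
// 'control' and 'perselio' are illustrative labels.
function resolveLayout(variant) {
  if (variant === 'control') {
    // Baseline experience: default recommendations and default search
    return { recommendations: 'default', search: 'default' };
  }
  // Treatment: expose Perselio personalization instead
  return { recommendations: 'perselio', search: 'perselio' };
}

// Run this check as early as possible, before components mount,
// so the page does not flicker between the two experiences.
```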

Considerations

  • Control should mirror “business as usual.”
  • Run variant check early to avoid flicker.
  • For dynamic UI frameworks, ensure variant check happens before component mount.

4. Analytics & Measurement

Note
To calculate metrics for the control group, proper analytics tracking must be implemented. This requires including the appropriate item_id identifier in the HTML/DOM layer, so events can be accurately captured and attributed.
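For illustration, a tracked click event carrying the item_id might be assembled as below. The payload field names, the data-item-id attribute, and the sendEvent/readVariant helpers are assumptions for this sketch, not the exact tracking schema:

```javascript
// Build a click event payload that carries the item_id so the event
// can be attributed to a specific item. Field names are illustrative.
function buildClickEvent(itemId, variant) {
  return {
    type: 'widget_click',
    item_id: itemId,  // read from the HTML/DOM layer, e.g. a data attribute
    variant: variant, // experiment group the visitor belongs to
    ts: Date.now(),   // client-side timestamp of the interaction
  };
}

// In the browser this would be wired up roughly like:
// document.querySelectorAll('[data-item-id]').forEach(function (el) {
//   el.addEventListener('click', function () {
//     sendEvent(buildClickEvent(el.dataset.itemId, readVariant()));
//   });
// });
```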

What is measured?

  • Click-through rate of recommendation & search widgets
  • Revenue attribution (purchase conversions attributed to widget interactions)
  • Uplift metrics comparing control vs treatment

These metrics are available in the AB Testing dashboard.



Attachments

Pseudo-code of group assignment.

visitor opens a webpage
after the JS script is loaded:
  check whether the long-lived cookie _saexp is set
    if not set:
      use a random number (via Math.random()) to pick the group and store it in the cookie _saexp
    else:
      if the stored experiment is no longer active:
        delete the old experiment from _saexp
        set the new experiment and use randomization to pick the group
      else:
        use the group already stored in the cookie
      prolong cookie validity
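The steps above can be sketched as runnable JavaScript. The cookie store is modeled here as a plain object, and the "experiment:group" encoding of the _saexp value is an assumption for illustration, not the actual cookie format:

```javascript
// Sketch of the per-page-view assignment flow from the pseudocode.
// cookies: a plain object standing in for real browser cookie access.
function handlePageView(activeExperiment, treatmentRatio, cookies) {
  const saved = cookies['_saexp']; // long-lived assignment cookie
  let experiment = null;
  let group = null;
  if (saved) {
    const parts = saved.split(':');
    experiment = parts[0];
    group = parts[1];
  }
  if (!saved || experiment !== activeExperiment) {
    // No assignment yet, or the stored experiment is no longer active:
    // drop the old value and randomize a fresh group ('0' = control).
    experiment = activeExperiment;
    group = Math.random() < treatmentRatio ? '1' : '0';
  }
  // Re-store the value in all cases, which also prolongs cookie validity.
  cookies['_saexp'] = experiment + ':' + group;
  return group;
}
```

In a real browser the read/write would go through document.cookie with a far-future expiry, so the visitor stays in the same group across sessions.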