Navigating cross-platform A/B testing

Shay Alaluf

Imagine running a simple A/B test: two pages, one winner, clear results. Easy, right? But what happens when those two pages live on completely separate website platforms, each with its own technical stack? Suddenly, something that should be straightforward turns into a much bigger challenge.

That’s exactly what we ran into at monday.com. What started as a routine experiment quickly turned into a juggling act, as the team tried to keep test assignments, user experience, and data in sync across both systems.

In this post, you’ll see how we handled cross-platform A/B testing, the unexpected problems we encountered, and how we ultimately managed to maintain accurate results without compromising speed or flexibility.

Why we needed a new website platform

Our goal was simple: enable designers to build and launch pages quickly, without involving developers at every step. Originally, our site was built with Next.js, providing powerful integrations but requiring significant developer involvement for updates.

To allow designers more autonomy and speed up launches, we introduced Webflow, a no-code visual website builder. Webflow simplified page creation and maintenance, allowing non-developers to directly contribute to our website.

However, Webflow lacked built-in A/B testing and analytics integrations, which complicated tests that needed to span both platforms. Off-the-shelf solutions didn’t fit our constraints, so we had to build something custom.

Why the obvious solutions didn’t work

A few approaches come up regularly for A/B testing across platforms, but none of them suited our needs.

Custom redirects based on test variants

Redirecting the user to the matching page with a 301 redirect might sound simple: just send the visitor to the variant page they were assigned to. It works with existing routing and doesn’t require a complex setup.

Problem: Since the new platform lacked server-side capabilities, redirects had to occur on the client side. This meant the decision to redirect happened only after the initial page had loaded, causing noticeable delays and negatively impacting user experience. Early variant assignment became essential, but client-side redirects still felt slow and disruptive.
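To make the drawback concrete, here’s a simplified sketch of what such a client-side redirect might look like (the cookie and path names are illustrative). Because it can only run after the original page has started loading, the visitor briefly sees the wrong page before being sent to the variant:

```typescript
// Hypothetical client-side redirect: by the time this runs, the control page
// has already been requested, parsed, and partially rendered.
const assignment = document.cookie
  .split('; ')
  .find((c) => c.startsWith('ab_variant=')) // illustrative cookie name
  ?.split('=')[1];

if (assignment === 'variant-b' && !location.pathname.startsWith('/variant-b')) {
  // The visitor sees a flash of the original page before this navigation fires.
  location.replace('/variant-b' + location.search);
}
```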

iframes

Embed the external variant as a full-page iframe within the main platform. When a user is assigned to the external variant, the main platform serves a shell page that loads the external page inside an iframe. This keeps users on the same domain and lets you control the A/B test logic and analytics from the parent platform.

Problem: iframes disrupt browser navigation and analytics tracking, and they introduce SEO, styling, and cross-origin issues, ultimately resulting in a poor user experience.
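For illustration, a shell page of this kind boils down to something like the sketch below (the external URL is a placeholder). Even this minimal version hints at the problems: navigation inside the frame never updates the address bar, and cross-origin restrictions limit what the parent page can observe:

```typescript
// Hypothetical shell page script: fill the viewport with the external variant.
const frame = document.createElement('iframe');
frame.src = 'https://external-platform.example.com/landing-variant'; // placeholder URL
frame.style.cssText = 'position:fixed;inset:0;width:100%;height:100%;border:0;';
document.body.replaceChildren(frame);

// Navigation inside the frame never changes the parent URL, and cross-origin
// restrictions keep the parent from inspecting the frame for analytics.
```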

These solutions might work in simpler setups, but for what we were aiming for, they simply weren’t an option.

Our solution: seamless integration via Lambda@Edge

Why Lambda@Edge was the right choice

Lambda@Edge is an AWS service that lets you run code at CloudFront’s edge locations, close to your users. This means it executes extremely quickly, significantly reducing latency. Because it operates before your application receives the request, it’s ideal for tasks that need to happen early, like assigning users to A/B test variants, modifying headers, or handling redirects.

We specifically chose Lambda@Edge because:

  • Speed: Decisions happen immediately, minimizing delays.
  • Timing: Variant decisions occur before the client loads any content, ensuring the right content is fetched immediately.
  • Cookie and Header Management: It can read and set cookies and headers, which are crucial for our testing logic.
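To give a feel for what this looks like in practice, here’s a simplified sketch of a viewer-request function that reads an assignment cookie and, if none exists, assigns a variant before any content is fetched (the cookie name, variant names, and 50/50 split are illustrative, not our production code):

```typescript
import { CloudFrontRequestHandler } from 'aws-lambda';

// Illustrative cookie name and variants; not our production values.
const COOKIE = 'ab_variant';
const VARIANTS = ['control', 'webflow'] as const;

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  const cookieHeader = request.headers['cookie']?.[0]?.value ?? '';

  // Reuse an existing assignment if the visitor already has one.
  let variant = cookieHeader
    .split(';')
    .map((part) => part.trim())
    .find((part) => part.startsWith(`${COOKIE}=`))
    ?.split('=')[1];

  if (!variant) {
    // New visitor: assign a variant right here at the edge, before any content
    // is fetched. Persisting it via Set-Cookie happens on the response side
    // (not shown in this sketch).
    variant = Math.random() < 0.5 ? VARIANTS[0] : VARIANTS[1];
    request.headers['cookie'] = [{
      key: 'Cookie',
      value: cookieHeader ? `${cookieHeader}; ${COOKIE}=${variant}` : `${COOKIE}=${variant}`,
    }];
  }

  // Later steps read this assignment to rewrite the URI and pick the origin.
  return request;
};
```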

How we integrated the platforms seamlessly 

Step 1: reverse proxy integration

We integrated the new platform into our domain using a reverse proxy setup via AWS CloudFront and Lambda@Edge. This allowed us to serve the new platform’s content seamlessly under our existing URLs, ensuring a consistent user experience and unified tracking across all platforms.
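In infrastructure terms, this amounts to a single CloudFront distribution sitting in front of both platforms, with the edge functions attached to its behavior. Here’s a rough AWS CDK sketch of that shape (domain names, asset paths, and handler names are placeholders):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

// Sketch only: domains, paths, and handler locations are placeholders.
export class AbProxyStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Lambda@Edge functions must live in us-east-1; EdgeFunction handles that.
    const viewerRequestFn = new cloudfront.experimental.EdgeFunction(this, 'ViewerRequest', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'viewer-request.handler',
      code: lambda.Code.fromAsset('edge'),
    });
    const originRequestFn = new cloudfront.experimental.EdgeFunction(this, 'OriginRequest', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'origin-request.handler',
      code: lambda.Code.fromAsset('edge'),
    });

    // One distribution fronts both platforms: the default origin is the existing
    // Next.js app, and the edge functions reroute requests meant for Webflow.
    new cloudfront.Distribution(this, 'Site', {
      defaultBehavior: {
        origin: new origins.HttpOrigin('nextjs.example.internal'), // placeholder
        edgeLambdas: [
          { functionVersion: viewerRequestFn.currentVersion, eventType: cloudfront.LambdaEdgeEventType.VIEWER_REQUEST },
          { functionVersion: originRequestFn.currentVersion, eventType: cloudfront.LambdaEdgeEventType.ORIGIN_REQUEST },
        ],
      },
    });
  }
}
```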

Step 2: Adjusting the URI for cross-platform testing (viewer-request)

When someone lands on our site, the viewer-request Lambda checks the assigned variant. If the user is assigned to a variant other than the control (meaning they should see a page different from the one they initially requested), it adjusts the URI so the correct page is fetched from the appropriate platform.
This way, the user sees the same URL in the address bar and never feels they’re in an A/B test, which is exactly the experience we wanted.
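The rewrite itself is a small step inside the viewer-request function. A simplified sketch, with a hypothetical variant-to-path mapping:

```typescript
import { CloudFrontRequest } from 'aws-lambda';

// Hypothetical mapping from test variant to the path that serves it.
const VARIANT_PATHS: Record<string, string> = {
  webflow: '/webflow-pages/pricing', // page built on the new platform
  control: '/pricing',               // original page on the main platform
};

// Rewrite the URI so the correct page is fetched, while the address bar
// keeps showing the URL the visitor originally requested.
export function rewriteUriForVariant(request: CloudFrontRequest, variant: string): void {
  const target = VARIANT_PATHS[variant];
  if (target && request.uri !== target) {
    request.uri = target;
  }
}
```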

Step 3: checking origin differences and setting headers (viewer-request)

After adjusting the URI, the viewer-request Lambda evaluates whether the new URI points to a different origin (either our Next.js app or Webflow) by checking URI prefixes. If it does, this information is passed along through request headers.
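Here’s a simplified sketch of that check (the URI prefix and header name are illustrative):

```typescript
import { CloudFrontRequest } from 'aws-lambda';

// Illustrative convention: URIs under this prefix are served by Webflow,
// everything else by the Next.js app.
const WEBFLOW_PREFIX = '/webflow-pages/';

export function tagTargetOrigin(request: CloudFrontRequest): void {
  const origin = request.uri.startsWith(WEBFLOW_PREFIX) ? 'webflow' : 'nextjs';
  // Pass the decision along to the origin-request function via a custom header.
  request.headers['x-target-origin'] = [{ key: 'X-Target-Origin', value: origin }];
}
```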

Step 4: routing to the correct origin (origin-request)

In this step, the origin-request Lambda reads the origin information from the headers and routes the request to the appropriate backend platform, ensuring users consistently receive the correct variant.
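A minimal sketch of the origin-request function, using the same illustrative header and a placeholder Webflow domain; it swaps the request’s origin and Host header when the earlier step marked the request for the other platform:

```typescript
import { CloudFrontRequestHandler } from 'aws-lambda';

// Placeholder domain for the Webflow-hosted site.
const WEBFLOW_DOMAIN = 'our-site.webflow.io';

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  const target = request.headers['x-target-origin']?.[0]?.value;

  if (target === 'webflow' && request.origin?.custom) {
    // Point the request at the other platform and keep the Host header in sync,
    // so CloudFront fetches the variant page from Webflow.
    request.origin.custom.domainName = WEBFLOW_DOMAIN;
    request.headers['host'] = [{ key: 'Host', value: WEBFLOW_DOMAIN }];
  }
  return request;
};
```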

Unified tracking & analytics

To ensure consistent analytics, we added custom attributes to elements on the new platform. Event listeners on these elements triggered analytics events across all A/B tests. Additionally, the Lambda@Edge functions stored test assignment details in cookies, enabling client-side scripts to reliably trigger split-testing events.
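On the client side, the tracking layer stays small. Here’s a simplified sketch of the idea (the attribute name, cookie name, and track function are placeholders for whatever analytics SDK is in use):

```typescript
// Placeholder for the analytics SDK call actually used on the site.
declare function track(eventName: string, props: Record<string, string>): void;

// Read the test assignment the edge functions stored in a cookie
// (cookie name is illustrative).
const variant = document.cookie
  .split('; ')
  .find((c) => c.startsWith('ab_variant='))
  ?.split('=')[1] ?? 'control';

// Elements on the Webflow pages carry a custom attribute (illustrative name)
// naming the analytics event they should fire.
document.querySelectorAll<HTMLElement>('[data-ab-event]').forEach((el) => {
  el.addEventListener('click', () => {
    track(el.dataset.abEvent!, { variant });
  });
});
```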

Taken together, these integration steps gave us full control without compromising performance, user experience, or data quality. By leveraging the flexibility of Lambda@Edge and unified analytics practices, we maintained a smooth, consistent experience across platforms, confidently scaling our cross-platform experiments.

A new problem: unbalanced traffic between variants

Just when we thought we had cross-platform A/B testing figured out, we hit another unexpected problem: traffic wasn’t split 50/50 between the test variants as expected. Our main platform automatically redirected visitors to localized pages based on their language, a feature the new platform lacked. As a result, visitors on the main platform often never reached the tested page, significantly reducing traffic and shrinking the sample sizes for those variants. We first noticed the issue when we saw unusually large differences in sample sizes in non-English-speaking countries, a clear signal that our A/B test accuracy was compromised.

Tackling the sample size issue

To address this issue, we implemented an automated S3 mapping for each new Webflow page, including its published localized versions, along with a fallback URL to our main platform. This allowed us to replicate the same localization logic used by our main platform directly within our Lambda@Edge functions, ensuring visitors are consistently routed to the appropriate localized variant. By doing this, we restored balanced traffic distribution across all variants.
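Conceptually, the mapping is just a lookup the edge functions consult before rewriting the URI. Here’s a simplified sketch of that lookup (the mapping shape, bucket, key, and locale handling are illustrative):

```typescript
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

// Illustrative shape: for each Webflow page, its published locales and a
// fallback URL on the main platform.
interface LocaleMapping {
  [pagePath: string]: { locales: string[]; fallback: string };
}

const s3 = new S3Client({ region: 'us-east-1' });
let cached: LocaleMapping | undefined; // reused across warm invocations

async function loadMapping(): Promise<LocaleMapping> {
  if (cached) return cached;
  const res = await s3.send(
    new GetObjectCommand({ Bucket: 'ab-test-config', Key: 'webflow-locales.json' }), // placeholders
  );
  cached = JSON.parse(await res.Body!.transformToString()) as LocaleMapping;
  return cached;
}

// Route to the localized Webflow page when it exists; otherwise fall back to
// the main platform so no visitor lands on an untranslated page.
export async function resolveLocalizedUri(pagePath: string, locale: string): Promise<string> {
  const mapping = await loadMapping();
  const entry = mapping[pagePath];
  if (entry?.locales.includes(locale)) {
    return `/${locale}${pagePath}`;
  }
  return entry?.fallback ?? pagePath;
}
```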

What we learned (and would do again)

Getting cross-platform A/B testing right took additional effort, but the results were well worth it. We learned that taking the extra step to build custom integrations can significantly simplify complex processes, improve data reliability, and empower non-developer teams. The right architecture made cross-platform testing not just possible, but invisible to the user.