How do you know if a test is winning or not?

Written by Mason Pastro on November 3rd 2022

This was a hard lesson for me to learn.

I used to look at test data, and if I saw an improvement, I would immediately assume it was a winner.

But here’s the thing: if you don’t know how to properly analyze test results, you could be implementing LOSING TESTS on your site and hurting your conversion rate!

After running 1000+ split tests, I’ve developed a few rules of thumb for looking at test data.

Let’s break it down:

1. Let a test run for 14 days minimum. I don’t care if you see a 50% lift. Let it run. Just like FB ads might crush it on the weekend but flop during the week, test results can swing from day to day. The more data you have, the more accurate the results.

2. Aim for a minimum of 100 transactions per variant. Same principle as before: the more data, the better.

A scientist wouldn’t bring together a group of 10 people (5 boys, 5 girls) and ask them all if caffeine affects their sleep. That’s far too small a sample; little to no conclusions can be drawn from it.
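To put numbers on why sample size matters: the uncertainty in a measured conversion rate shrinks with the square root of the sample size. Here’s a minimal Python sketch using the standard normal-approximation formula for a 95% margin of error (the 3% conversion rate and visitor counts below are made-up example figures):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a conversion rate p
    measured on n visitors (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# The same 3% conversion rate, measured on different sample sizes:
for n in (100, 1_000, 10_000):
    moe = margin_of_error(0.03, n)
    print(f"n = {n:>6}: 3.0% +/- {moe * 100:.2f} percentage points")
```

With only 100 visitors, the true rate could plausibly be anywhere from roughly 0% to 6%, which is why a handful of transactions tells you almost nothing.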

3. Check both mobile and desktop results of your test. A test may be winning on mobile but losing on desktop. This is crucial information. You don’t want to roll out a change on a device where it hurt your conversion rate.

4. Use a statistical significance calculator to determine whether a test is winning. I like to use https://abtestguide.com/bayesian/ and plug my data in there. I typically look for a confidence score of 90% or higher before calling a test a winner.
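The calculator linked above does the math for you, but if you’re curious what’s under the hood of a Bayesian confidence score, here’s a rough Python sketch. The function name, the example numbers, and the uniform Beta(1, 1) prior are all illustrative assumptions, not necessarily how abtestguide.com computes it:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    """Monte Carlo estimate of P(variant B's true conversion rate
    beats variant A's), using Beta posteriors with a uniform prior."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Made-up example: 100 vs 130 transactions on 5,000 visitors each
confidence = prob_b_beats_a(100, 5_000, 130, 5_000)
print(f"Chance variant B really is better: {confidence:.1%}")
```

If that number comes back at 90%+ you have a real winner on your hands; anywhere near 50% means the “lift” is indistinguishable from noise.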

What is statistical significance?

Let me explain it like you’re 5:

If you flip a quarter 50 times, the chance of landing on exactly 25 heads and 25 tails is surprisingly low (only about 11%).

Your results will be random, but they’ll be in the ballpark of a 50/50 split.
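You can see this for yourself with a quick simulation (the seed below is arbitrary, just to make the run repeatable):

```python
import random

rng = random.Random(7)

# Flip a fair coin 50 times, five times over:
for trial in range(5):
    heads = sum(rng.random() < 0.5 for _ in range(50))
    print(f"Trial {trial + 1}: {heads} heads / {50 - heads} tails")
```

Every run hovers around 25/25, but rarely hits it exactly. Split tests behave the same way: two identical pages will still show slightly different conversion rates.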

When we run a split test, we need to know that the conversion increase we see is due to the change we made and not to random chance.

So we calculate the likelihood, or “confidence score,” that the conversion increase is a direct result of the change we made.

And not due to the fact that 80% of people in variant 1 were feeling extra frisky to buy that day.

The more data the better!

In simpler terms (taken from Reddit):

“Another way of saying it is: if a result is not statistically significant, then we would probably not be able to replicate the result reliably. It is more like a random blip than a really meaningful pattern.”

That’s it.

This is what I look at when determining if a test is a winner or not.

Hope you got value from this post!

If you want to launch your DTC brand’s conversion rate to the moon, schedule a call here: www.conversiontime.io

We will give you a full LIVE CRO audit for FREE!

Talk soon,

- M

Mason Pastro

Mason helps DTC founders unlock hidden revenue and scale through his conversion rate optimization and average order value boosting methods. Having been in ecom since 2016, he has a thorough background in web design, direct response marketing, and data analytics. If you're interested in increasing your store's conversion rate and average order value, book a call with us!

Break Through The Wall.

Book a no-pressure discovery call to find out how the team at ConversionTime can help you increase your conversion rate by 20% or more, guaranteed.

Copyright © 2023, ConversionTime.io | All Rights Reserved