Note: this article about A/B Testing and Multivariate Testing methodology was updated from the 2018 original on February 12, 2021.
Learning how to perform A/B and Multivariate Testing is of paramount importance for any webmaster. Here we’ll see what these techniques are, and how to use them.
A true user experience designer's skill set includes the ability to test premises with users and/or stakeholders before, during, and after the launch of a product, service, or website.
For this purpose, there are countless techniques, tools, approaches, methods, etc.
Although there are sometimes several ways of measuring something, there is usually ONE way that is the most appropriate, whether because of its methodology or because of the clarity of the results it yields.
Or it could be determined by the available possibilities (budget, audience reach, lack of baseline data, etc.).
Availability example: suppose I am designing a user experience for users located in France while I am based in Japan. It will be very difficult for me to use guerrilla testing techniques such as hallway testing, even if I know that this technique is the most suitable for my needs.
However, if I have the financial means, I can hire someone in France to do it, or replace this test with another to which I have access.
Since there are countless methods we can use, let's focus on this simple tutorial where we'll learn the basics of A/B Testing and Multivariate Testing (also known as MVT). Keep in mind that unlike other tutorials that deal with testing in marketing or SEO, this one is made specifically for UX and UI (which may include marketing and SEO, but not necessarily). We'll see some A/B testing examples at the end of the article.
A/B Testing and Multivariate Testing (MVT)
- A/B Testing
- Multivariate Test
Both tests are easy to perform at a very low cost, and they return excellent and very valuable information.
First, because of their simplicity.
Second, because both basically compare one element with another.
Because of this, the two methodologies are often confused, specifically regarding when to use one or the other.
Let’s keep in mind:
A/B Testing and Multivariate Testing are not the same and may return conflicting results
A/B Testing and when to use it
An A/B test is basically a comparison of results between two versions with minimal differences. To be clear: only one difference.
Examples of these differences are: colors, slogans, CTA (Call to Action), typography, layout, UI design, etc. However, to perform the test correctly, only one parameter must be modified at a time.
For the purposes of this article, we'll only deal with A/B Testing in the context of web pages or user interfaces. A web page is a simple and common scenario for any designer, developer, or UI researcher, so it will be easy and useful to work with such an example.
A/B testing is widely used in landing pages and email marketing because they are very dynamic types of communication actions. They can also be modified very quickly, allowing almost immediate multiple comparisons. With just a few changes to your HTML markup, some custom CSS, or tools such as various WordPress plugins, you can quickly modify the web design to display two different versions you can compare. You'll see this below, in the A/B testing examples section.
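Whichever tool you use, visitors must be split consistently between the two versions. A common approach is deterministic bucketing: hashing a visitor identifier so the same user always sees the same variant without storing any state. This is a minimal sketch, not tied to any specific tool; the function and experiment names are our own illustration.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "hero-cta") -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing (experiment + user_id) keeps each visitor in the same
    variant across visits, with no server-side state required.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number from 0 to 99
    return "A" if bucket < 50 else "B"  # 50/50 split

print(assign_variant("visitor-42"))
```

Changing the experiment name reshuffles the buckets, so a new test is not biased by the assignments of a previous one.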
The procedure to perform this test is as simple as comparing the results of two versions that are identical except for one detail. From this comparison, we can define what worked best based on the parameter that best suits our needs, such as CTR (click-through rate) or total sales. For example, if version A has a CTR of 15% and version B has a CTR of 25%, version B is the best for our purposes.
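Before declaring a winner, it is worth checking that the difference is not just noise from a small sample. A standard way to do this is a two-proportion z-test; the sketch below implements it with only the standard library (the function name and sample sizes are illustrative, not from a real campaign).

```python
from math import sqrt, erf

def ctr_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is B's CTR significantly higher than A's?

    Returns (z, p) where p is the one-sided p-value for CTR_B > CTR_A.
    A small p (conventionally below 0.05) suggests a real difference.
    """
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided, via normal CDF
    return z, p_value

# The article's example: 15% vs 25% CTR, here assuming 400 views per version
z, p = ctr_z_test(60, 400, 100, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With 400 views per version this difference is clearly significant; with only 20 views per version, the same percentages could easily be chance.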
DOs and DON’Ts of A/B Testing
What to do
- We must always measure a single variable
- We should always try to make the contexts as similar as possible. Comparing a sales action at the beginning of the month with another at the end of the month will obviously give very different (and perhaps misleading) results.
- We must always be clear about what we are measuring with an A/B test. There is no such thing as multi-page or site-wide A/B testing.
- It is a good idea to have an automated system for generating A/B versions, which is easy to set up with email services such as Mailchimp, or by creating special templates in WordPress.
- The text and slogans are usually of fundamental importance. Not everything is buttons and colours.
- A well-performed A/B test can generate revenue increases of up to 3000%. Obviously, a poorly performed test throws away that opportunity and may even lose us money in a misdirected campaign.
- Let’s always stick to the data, no matter if we like it or not.
What NOT to do
- Never measure more than one item at a time.
- Never believe that a version is final. Once we find a winning version, we must test it in a variety of contexts.
- It is preferable to avoid long and complex versions in favour of simple and smaller versions.
- We must avoid taking the data as one-dimensional. The surface data obtained in an A/B test may hide another subset of data that contradicts our first impression.
- Never trust that our instincts will be correct: the purpose of these tests is to confirm or refute our hypotheses.
A/B testing examples
Here are a couple of examples of A/B testing, explaining what was done in order to test the hypotheses.
These are real cases carried out by us for our clients, so the data is accurate and does not rely on external third parties or examples from the internet.
Healthy Wage Experiment
These are old examples from 2016, but these tests marked incredible growth for the company, which decided to move in a data-driven direction. The examples below are for a landing page's hero section.
“B” Hypothesis Version
While these tests are closer to MVT than A/B, this is a great example. You'll see there are several changes in the designs, but in reality there are only two big changes. The text in the middle would change according to the challenge, so we didn't measure it (in this campaign).
So there are two grouped changes: the text in the circle and the text in the button. Our hypothesis was that Version B would attract more clicks and engagement.
And so it was… that we were wrong!
Version A performed 13.8% better.
We wondered why, since according to all UX theories, version A should perform better than B.
The answer: user psychology. With further testing, we found that the question of knowing what was going to happen was more attractive than knowing the exact amount, since the expectation was that if more people got together, a bigger prize would be achieved.
For those interested in the psychological side of the experience, this behavior pattern is called the Optimism Bias, and it is related to other biases such as wishful thinking, the valence effect, and positive outcome bias.
As we can see, not all hypotheses are proven correct. The right approach in A/B testing is to use scientific methodology to validate the tests factually. A failed hypothesis means absolutely nothing; the important thing is to discover which hypothesis or design DOES work.
Ready to continue with the second part: multivariate tests?
Disclaimer: This content was translated to English from the original we wrote in Spanish, available in UXpañol