My UX Toolkit: A/B Testing
My UX Toolkit is a series of posts exploring different tools and techniques used in the user experience design and research process, my understanding of them, and when they can be applied. UX is a broad and varied space that can range from quantitative statistical analysis to graphic design, from branding and content strategy to storyboarding. Here I am trying to scratch the surface of how UXers UX, to share my knowledge, and to further my own understanding of this vast career field.
A/B Testing
If you are someone who regularly uses major search engines, social media apps, or even travel websites like booking.com, you have probably unknowingly taken part in an A/B test. A/B testing is a technique commonly used in UX and product design, and also in marketing. It was popularized, and used to great success, by companies like Facebook and Google.
An A/B test study starts with a hypothesis: that changing one variable (the independent variable) will directly affect another (the dependent variable). For example, an online e-commerce site may have a theory that a larger “checkout” button (independent variable) on its shopping cart page will increase the number of click-throughs (dependent variable). It can test this theory by running two versions of the shopping cart page and comparing the resulting data.
Process
To design your A/B test study, you start out with your theory, choosing which independent variable you want to test. These are generally simple variables like button size, font, color scheme, small changes in copy, or basic changes to the interface design.
The second part of your theory is how you will change that variable, and what you think the effect of the change will be. Generally you will be trying to have a positive effect on things like conversion rates, bounce rates, or time spent with the product.
You now create a control test and a challenge test. For the “checkout” button example above, you would have two versions of the shopping cart page. The control page is unchanged from what you started with. The challenge page has the bigger “checkout” button.
In order for the data to be most useful, you need an equal number of control and challenge tests. For a website, this can be done by randomly showing half of your traffic the control page and the other half the challenge page. Smaller companies can use software like Optimizely to facilitate this process for them.
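As a rough illustration of how that 50/50 split can work under the hood, here is a minimal sketch (the function name and user IDs are my own hypothetical examples, not from any particular tool): hashing the user ID makes the assignment both evenly distributed and stable, so a returning visitor always sees the same version of the page.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to 'control' or 'challenge'.

    Hashing the user ID (rather than flipping a coin per visit)
    keeps the split roughly 50/50 across many users while ensuring
    the same user always lands in the same group.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "challenge"

# A returning user gets the same page every time:
print(assign_variant("user-1234"))
print(assign_variant("user-1234"))  # same result as the line above
```

Dedicated A/B testing tools handle this assignment (plus logging and analysis) for you, which is exactly the convenience they sell.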
From there, you will continue to run the test until you hit your pre-determined sample size, and then analyze the data. You can then decide if your hypothesis was correct, and if the change you are testing will have a positive impact on your product.
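The analysis step usually boils down to asking whether the difference between the two groups is bigger than chance alone would produce. A common way to check this for conversion rates is a two-proportion z-test; the sketch below uses hypothetical click-through counts for the “checkout” button example (the numbers are invented for illustration).

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of control (A) and challenge (B).

    Returns the z statistic and a two-sided p-value; a small p-value
    (conventionally below 0.05) suggests the difference is unlikely
    to be due to chance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: 480 of 10,000 control users clicked "checkout",
# versus 540 of 10,000 challenge users who saw the bigger button.
z, p = two_proportion_z_test(480, 10_000, 540, 10_000)
```

This is also why the sample size should be fixed in advance: peeking at the p-value repeatedly and stopping as soon as it dips below your threshold inflates the odds of a false positive.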
There are a lot of benefits to A/B testing. It is inexpensive to conduct, in both time and money, and the data it returns is simple to understand and easily actionable. The biggest con of A/B testing is that though it will illuminate which changes result in the desired outcome, it does not convey why; the test gives no context. In order to better understand the user's mindset and thought process, you would need to conduct a qualitative research study such as moderated usability tests or diary studies.
It is inadvisable to test multiple variables in one test. It might be tempting to test a whole new shopping cart page, but due to the nature of A/B testing and the incomplete story it tells, you will never know which changes had a positive effect on, say, bounce rates and which had a negative impact.
An A/B testing study can be run at any time on an existing product and is an entire UX process in itself. It doesn't have a conventional home in the UX Double Diamond methodology, but it can be useful both for a complete product overhaul and for investigating smaller tweaks.