Split Testing Overview
Split testing (also referred to as A/B testing or multivariate testing) is a method of running automated, randomized experiments with the goal of improving a predetermined website metric, such as clicks on a specific element (e.g., an add-to-cart button, a register button, a video play, or a "more info" link), form completions (e.g., registrations, opt-ins, contact requests), or purchases. Incoming traffic to the website is distributed between the original (the control) and one or more variations. This is transparent to visitors, who do not know they are part of an experiment. You, the tester, wait for a statistically significant difference in behavior to emerge; the more traffic the page gets, the faster you can obtain statistically meaningful results. The results from each variation are then compared to determine which version showed the greatest improvement. If there is a clear winner, you would likely update your site to incorporate the elements from the winning variation. (And then start a new experiment to improve it further!)
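The traffic-splitting step above can be sketched in a few lines of code. This is a minimal illustration, not a production implementation: the variant names and the hash-based bucketing scheme are assumptions, but the key idea is real and widely used, namely that each visitor should be assigned a variation deterministically so they see the same version on every return visit.

```python
import hashlib

# Illustrative variant names; a real test would define these in your testing tool.
VARIANTS = ["control", "variation_a", "variation_b"]

def assign_variant(visitor_id: str) -> str:
    """Bucket a visitor into one variation, deterministically.

    Hashing the visitor ID (e.g., a cookie value) means the same visitor
    always lands in the same bucket, while traffic overall is spread
    roughly evenly across the variations.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]
```

Hashing is preferred over a simple random choice here because it requires no stored state: the visitor's own identifier determines their bucket on every request.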
What Types of Elements Can Be Split Tested?
Nearly any element can be varied for a split test. For example:
- Visual elements: pictures, videos, and colors (Does a photo of a smiling man help convert better than a serious man? Does an orange button convert better than a green one?)
- Text: headlines, calls to action, and page copy (Does “Get Your Free Quote” convert better than “Receive an Instant Quote”? Do bullet points convert better than a paragraph? Does adding a customer testimonial help increase conversions?)
- Layout: arrangement and size of buttons, menus, and forms (Does a form with 3 fields convert better than one with 5? Does a register button on the left convert better than having it on the right?)
- Traffic flow: how a user gets from point A to B (Does a 2 page checkout convert better than a 3 page checkout? Do more people sign up for a free trial on page 3 after visiting page 1, then 2, then 3? Or Page 1, then 4, then 3?)
Split Testing Email
Split testing isn’t only for webpages; it can be very useful for testing your emails as well. You can answer questions such as: What’s the best day to send your email marketing campaign? What time? What kind of subject line works best? Something promotional? (Save 10% on Widgets Today Only!) Or something more subtle and informative? (Find Out How Our New Widget Can Save You Time.) You can set goals such as opens, clicks, or ROI for each email variation. First, make sure you have integrated analytics into your emails. Many email service providers such as Mailchimp and AWeber can do this for you automatically and also offer reporting capabilities, such as the number of opens and clicks, directly in their system. You can then set up segments of your email list, perhaps sending one version of an email to 10% of your list and another version (or the same version at a different time) to a different 10%. You can then see which performed better and send the winning version, or send at the winning time, to the remaining 80%.
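The 10% / 10% / 80% split described above is easy to sketch. This is a hedged example: the function name and the fixed seed are assumptions for illustration, and in practice your email service provider would handle the segmentation for you.

```python
import random

def split_list(subscribers, test_fraction=0.10, seed=42):
    """Split a subscriber list into two equal test segments plus a holdout.

    The list is shuffled (with a fixed seed, so the split is reproducible),
    then the first two slices become the test segments for versions A and B;
    the remainder is held back for whichever version wins.
    """
    rng = random.Random(seed)
    shuffled = list(subscribers)
    rng.shuffle(shuffled)
    n = int(len(shuffled) * test_fraction)
    segment_a = shuffled[:n]          # gets email version A
    segment_b = shuffled[n:2 * n]     # gets email version B
    holdout = shuffled[2 * n:]        # gets the winning version later
    return segment_a, segment_b, holdout
```

Shuffling before slicing matters: slicing an unshuffled list could bias the segments (for example, the oldest subscribers would all land in segment A).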
Some Split Testing Best Practices
- Simplify: generally, fewer page elements create fewer distractions from the conversion goal.
- Don’t forget about the overall business goals: test with the overarching goal of the website in mind, not just the goals of individual pages. For example, one page might generate more clicks, but fewer checkout completions. Or, one version of an email might generate more opens, but fewer sales.
- Test one element at a time: Testing one element at a time is called A/B testing, while testing multiple elements on a single page is called multivariate testing. Unless you have a testing tool capable of interpreting more complex multivariate results, it’s important to test one element at a time (although you can have multiple variations of that one element). For example, change just the call to action, or an image on the page, or the position of your testimonials. Otherwise you can’t be sure which element had an impact, or by how much. If you test two elements at once and one produces a 10% increase in performance while the other produces a 10% decrease, you might conclude that the net benefit of your changes is zero and that there was no difference in page performance. In reality, if you had added only the positive-performing element, you would have seen a 10% increase, which could be a big win.
- It’s not all about drastic changes: Don’t be seduced by the idea that all variations in an A/B test have to be huge, obvious transformations. Even subtle changes can have a demonstrable effect, such as slightly editing a list of product features to persuade users to request more information, or phrasing a call to action differently to drive user engagement. It’s a gradual, granular process. Keep iterating until your conversion rate for that page is maxed out.
- Don’t make assumptions: Just like a scientist tests a hypothesis, that should be your approach to split testing. Use hard A/B test data to make informed business decisions – no matter how much it surprises you.
- Resist the temptation to jump to a conclusion: Even if you’re getting strong initial results, let the test run its course. Economists and data scientists rely on a principle known as statistical significance to ensure that the data has a high probability of accuracy, and this requires a large sample. Without it, you run the risk of making business decisions based on bad data.
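The statistical-significance check mentioned in the last point can be illustrated with a standard two-proportion z-test, which most testing tools apply (in some form) under the hood. This is a simplified sketch under textbook assumptions, not the exact method any particular platform uses; the sample numbers in the comments are made up for illustration.

```python
import math

def z_score(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test for comparing conversion rates.

    Returns the z score for variation B versus control A. As a rule of
    thumb, |z| >= 1.96 corresponds to roughly 95% confidence that the
    difference is real rather than random noise.
    """
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (rate_b - rate_a) / std_err
```

For example, with hypothetical results of 100 conversions from 1,000 control visitors versus 150 from 1,000 variation visitors, the z score comfortably exceeds 1.96, so you could act on that result; with only a handful of visitors per variation, the same conversion rates would not be significant, which is exactly why you let the test run its course.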
Split testing doesn’t have to be complicated. Services such as Visual Website Optimizer, AB Tasty, and Optimizely offer interfaces that allow you to create page variations without touching your website code. Conversion Rate Experts has a great breakdown of the numerous split testing tools and platforms that are available. With easy ways to split test your website and emails, there is no excuse for settling for the status quo. There is always room for improvement throughout your site. So once you find improvements, don’t stop there. Keep testing!