Written by Jeff Burslem, Analytics Manager at Springbox
What is Test and Learn?
Great question! Test and Learn is a structured way to optimize the performance of your website and help you achieve your business objectives.
You may have heard Test and Learn referred to as Website Testing, Conversion Rate Optimization (CRO for short) or A/B and Multivariate Testing. These terms are interchangeable - but what remains consistent is the value that these initiatives will bring to your organization.
For example, you may look after an e-commerce website and need ideas for how to get people to buy more stuff. On the other hand, you might not sell anything at all and just want visitors to browse more content to improve your website engagement numbers.
A cursory web search will point you in the direction of case studies that showcase uplifts in conversion thanks to testing – WhichTestWon.com is a great place to look, for example. But please don’t do that right now – we’re about to discuss how to get your very own culture up and running!
Is Test and Learn more like a project? Or a process?
That is another great question. I can tell we’re going to get along just fine!
It should be thought of as a process, rather than a one-off project. It needs to be conducted on an iterative basis to answer business questions and improve understanding of visitor behaviour.
Test ideas should lead to hypotheses about what your visitors are likely to do (e.g. “I think visitors will click through to product pages”) and the business outcomes (e.g. “we will sell more stuff”). When actually tested, these hypotheses provide us with insights and ideas about what to optimize next (e.g. “Let’s test the order summary page, because people are still abandoning their carts”).
Chances are, you’ve got Google Analytics - which means you have access to Content Experiments. It’s a cheap and cheerful way of doing basic tests to get you started. More sophisticated types of test, for example multivariate (also known as multivariant - you would be correct in thinking that Analytics terms have more varieties than Heinz), may be better served by dedicated testing tools. That’s a conversation for another blog post though. Contain your excitement!
In essence, the creation of a Test and Learn Culture should form a vital component of continuous improvement for your website and therefore your business.
If that isn’t enough to convince you, here’s a fancy strategic-looking chart:
Fancy Strategic Chart: Creating a Test and Learn Culture
We will now discuss the steps in turn. Are you browsing comfortably? Then let’s begin…
Step 1. Generating Test Ideas
First of all, you need to start compiling a list of test ideas. You probably have a few thoughts on how to improve the user experience already, although a blend of quantitative and qualitative data is the best way to definitively build a list.
Your analytics will give you the numbers, but it’s also important to align those with other data points – where are the quick wins? What are the pet peeves of your colleagues and stakeholders?
To that end, incorporating different perspectives across your organization is a very useful exercise when brainstorming - designers, developers, the receptionist and just about everyone else will have their own point of view.
Competitor websites can also serve as a guide on what to test. Ever wanted to replicate the calls to action on Competitor Site X to see if it works on your website? Now’s your chance! Ever wondered whether copying Competitor Y’s homepage layout could work for you? Knock yourself out. Not literally though! You still have to read Steps 2, 3 and 4.
Another key point to mention is that actionable tests are better than ‘interesting’ ones – experimenting with the wording on a button to see if people buy more stuff or read more articles is far more impactful than testing a blue slider against a red one. Or a puce one, for that matter. Or vermillion.
Step 2. Creating Hypotheses
Creating and subsequently testing a hypothesis will ensure that insights can always be taken away from whatever content experiments you wish to conduct on your site.
For example, do you have a hunch that visitors find certain Calls to Action (or CTAs for short) confusing? Would re-wording them improve click-through? Are there certain pieces of information that you think they’re unable to find? In which case, would greater prominence yield higher conversion? And so on.
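If it helps, each hunch can be written down in a consistent shape before you test it. Here’s a minimal sketch in Python – the field names and example values are our own invention, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str       # what we plan to alter on the site
    prediction: str   # what we think visitors will do as a result
    outcome: str      # the business result we'll measure
    result: str = "not yet tested"

# A hypothetical example based on the confusing-CTA hunch above.
cta_test = Hypothesis(
    change="Reword the confusing call to action",
    prediction="More visitors will click through",
    outcome="Higher conversion rate on the target page",
)
```

Writing hypotheses down like this keeps the prediction and the business outcome attached to one another, so there’s always a learning to take away whatever the result.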
It’s worth remembering that not everything you test will win – some hypotheses will be disproven. That’s no bad thing: a losing test still provides an opportunity to learn something new (e.g. “I guess the puce and vermillion sliders had no impact.”) and try again (e.g. “Let’s run a more actionable test next time!”).
As alluded to earlier, it’s important to keep your initial tests simple and get them live quickly. Once the basics have been mastered, you can start thinking bigger.
Step 3. Measuring Outcomes
It’s very important to set measurable goals for each test you run.
As discussed earlier, goals can either be direct conversion measures like purchases or downloads, or softer engagement measures, such as how deeply visitors explore the website.
Useful test metrics include Bounce Rate (i.e. the percentage of visitors who view a single page and then leave the site), the number of pages viewed in a visit, time spent on site and Conversion Rate. Your Key Performance Indicators (or KPIs for short) will vary according to what you’re intending to test, but these metrics offer a solid starting point.
It’s important to be wary of false positives – a small volume of traffic can suggest an uplift that isn’t really there. Your testing tool should be able to tell you whether a difference between your default and variant versions is statistically significant, and sample size calculators can tell you how much data you need (e.g. number of visitors) before making decisions (e.g. pushing a variant page live).
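To make that concrete, here’s a rough sketch of the arithmetic a sample size calculator performs for a conversion-rate test. The constants 1.96 and 0.8416 correspond to the common defaults of 95% confidence (two-sided) and 80% statistical power; the 3% and 4% conversion rates are made-up examples:

```python
import math

def sample_size_per_variant(baseline_rate, expected_rate,
                            z_alpha=1.96, z_beta=0.8416):
    """Visitors needed in each variant to detect the uplift.

    z_alpha=1.96 -> 95% confidence (two-sided test);
    z_beta=0.8416 -> 80% statistical power.
    """
    p1, p2 = baseline_rate, expected_rate
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    effect = abs(p2 - p1)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Example: baseline conversion of 3%, hoping the variant lifts it to 4%.
visitors_needed = sample_size_per_variant(0.03, 0.04)
print(visitors_needed)
```

In this example you’d want roughly 5,300 visitors in each variant before trusting the result – which is exactly why ‘wins’ declared on a trickle of traffic deserve suspicion.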
Step 4. Sharing Outcomes and Learnings
Once the test has concluded, it’s a case of ensuring that learnings and ideas are shared with all the relevant stakeholders. A process of regular meetings and general publicizing of test results should ensure buy-in for further optimization.
The more tests you conduct, the more ideas you’ll start to have about subsequent tests. In which case, it’s time to start thinking about creating a document where you can structure and prioritize tests based on the amount of effort required and their expected level of impact.
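That prioritization document can be as simple as a scored list. Here’s a hypothetical sketch – the test names, impact and effort numbers are invented for illustration:

```python
# A simple effort-vs-impact backlog: score each candidate test and
# run the highest-scoring ones first.
backlog = [
    {"test": "Reword checkout CTA",      "impact": 8, "effort": 2},
    {"test": "Redesign homepage layout", "impact": 9, "effort": 9},
    {"test": "Reorder product filters",  "impact": 5, "effort": 3},
]

for item in backlog:
    item["score"] = item["impact"] / item["effort"]  # higher = do sooner

ranked = sorted(backlog, key=lambda i: i["score"], reverse=True)
for item in ranked:
    print(f"{item['test']}: {item['score']:.1f}")
```

High-impact, low-effort tests rise to the top; the big redesign, for all its promise, waits its turn.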
Over to you!
Why not have a go yourself and let us know how you get on? Or alternatively, drop us a line if you’d like to know more about what our ManiTesto can do for your organization. Either way, we’d love to hear from you.