Identifying a Test & Creating a Hypothesis
A/B testing is one of the most effective mechanisms for improving a website. These simple experiments let your customers show you which experience they prefer, and actual actions and purchases are often more honest than surveys. The findings can meaningfully improve both your customer experience and your revenue.
The first step is deciding what you want to optimize. This may be time on site, page views, conversions, or even revenue per transaction. You can optimize essentially anything you can measure, and we'll publish another blog post on this topic later.
Early in the process, you will also want to decide what counts as success. After seeing the data, it will be tempting to call a $1 increase in revenue per transaction a success, but not if your real target is at least $5. These goals will shape your hypothesis tests.
Writing a Hypothesis
While writing out your hypothesis test may not seem that important, it will help you interpret your results later.
Let’s say we are running an experiment to substantially increase revenue per transaction. The site changes will be expensive, so we will only implement them if revenue increases by at least $25 per transaction. A first pass at the hypothesis:
- The Test (B) average transaction will be $25 greater than the Control (A) average transaction
- Test – Control > 25
Hypothesis tests in statistics are built around trying to disprove the null hypothesis, which makes the statement above (the one you want to be true) the alternative hypothesis. The null hypothesis always includes the equality. In this case, the null hypothesis is that the difference between Test and Control is $25 or less; in other words, the improvement falls short of the threshold. The final hypotheses look like this:
- H0: B – A ≤ 25
- H1: B – A > 25
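To make this concrete, here is a minimal sketch, in Python with made-up transaction samples and a hypothetical helper name `welch_t`, of the one-sided test for H1: B − A > 25. It computes a Welch t-statistic against the $25 threshold; a large positive value is evidence against the null hypothesis.

```python
import statistics

def welch_t(sample_a, sample_b, delta=25.0):
    """Welch t-statistic for the one-sided test H1: mean(B) - mean(A) > delta.

    A large positive value suggests the Test lift exceeds the delta threshold.
    """
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    n_a, n_b = len(sample_a), len(sample_b)
    # Standard error of the difference in means (unequal variances allowed)
    se = (var_a / n_a + var_b / n_b) ** 0.5
    return (mean_b - mean_a - delta) / se

# Toy transaction amounts in dollars (illustrative only)
control = [100, 110, 90, 105, 95]
test = [140, 150, 130, 145, 135]
print(welch_t(control, test))  # positive: observed lift ($40) exceeds $25
```

In a real experiment you would compare this statistic to a t critical value (or compute a p-value) before declaring a winner; that step is what the significance discussion below refers to.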
Since A/B testing relies on samples, you will want to verify that your results are statistically significant; we will cover this in a future A/B Testing Series blog post.
Deciding on How
Next, you will want to decide which site changes you believe will optimize this measurement and implement the technical tools necessary to create an A/B testing environment. A/B testing can be done with a variety of programs; popular ones include Google Content Experiments, Adobe Target, and Optimizely.
It is always better to test two experiences at once with randomly assigned visitors rather than simply making a change and watching to see if the measurement improves. Why? Concurrent testing with random assignment adds validity: when interpreting the results, you can have peace of mind that nothing other than the website change is affecting the outcome, such as a large ad campaign finally taking hold or a great referral from a popular blog.
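One common way to implement random assignment is to hash a stable visitor ID, so each visitor is randomly split between variants but always sees the same one on repeat visits. The sketch below assumes this approach; the function name and experiment label are illustrative, not from any particular testing tool.

```python
import hashlib

def assign_variant(visitor_id, experiment="checkout_redesign"):
    """Deterministically assign a visitor to Control ("A") or Test ("B").

    Hashing (experiment, visitor_id) gives a roughly 50/50 split while
    keeping each visitor's experience consistent across visits.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

print(assign_variant("visitor-42"))  # same visitor always gets the same variant
```

Including the experiment name in the hash means the same visitor can land in different groups across different experiments, which helps keep tests independent of one another.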
What steps have we covered so far?
- Identifying a measurement to optimize
- Deciding upon a success metric
- Developing a hypothesis test
- Implementing the tools for a proper A/B test