How to do CRO without being a wanker

This picture was chosen because this man came up a lot of times when you searched for conversions images. He knows how to CRO himself!


Lately, I have been doing a lot of AB tests (and ABn tests, and multivariate tests, etc, etc; for the ease of this article I am going to keep it at AB tests). It started, as most of my endeavours do, to prove that I was right about something. This is the exact opposite of the mentality you need going into AB testing, but at least it got us started (see http://www.moekiss.com/2016/09/analysis-competing-hypotheses-digital-analytics/ for more details). From there, people had other ideas they wanted to test, we started to see some quick wins, and it snowballed. Soon I realised what I was actually doing was CRO, and I cringed. CRO always reminds me of those quick-win YouTube videos, or people who run tests to see whether 2 for 1 converts better than 50% off. At the heart of what they are selling is not the overall user experience but how to scam people out of money using dark patterns.

With this revelation in mind, I had to think about how I could continue to help improve the website by testing and streamlining the checkout flow but make sure there was a focus and not just a random mess. I’ve been doing some research and used some of my own experience to figure out the steps in creating an AB testing plan.

1. Create a measurement strategy

Before you even start thinking of testing, you need a measurement strategy. Even if you aren’t going to test, get a measurement strategy. Get everyone together on the same page about which metrics you are trying to move and how your business values them. There are a few different types of strategies: they can be based around the company’s goals, or one single number, like the North Star strategy. Either way, it should break down all the different levers that can be pulled for the company to reach its goals.

Here is a very simple version for an online retail shop. The main focus is on revenue, and it shows what can be moved to increase the revenue. Yes, I know, I said I didn’t want to be one of those people who just look at increasing revenue, but hear me out: you can increase revenue by giving customers a good experience.
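That breakdown can be sketched in a few lines of Python. The numbers here are made up purely for illustration:

```python
# Hypothetical figures for a simple revenue breakdown:
# revenue = sessions * conversion rate * average order value
sessions = 100_000
conversion_rate = 0.02          # 2% of sessions place an order
average_order_value = 45.0      # dollars

revenue = sessions * conversion_rate * average_order_value
print(f"Baseline revenue: ${revenue:,.0f}")

# Each lever moves revenue on its own; e.g. a 10% relative lift in
# conversion rate lifts revenue by the same 10%:
lifted = sessions * (conversion_rate * 1.10) * average_order_value
print(f"With +10% conversion rate: ${lifted:,.0f}")
```

The point of the breakdown is that a test only needs to move one lever to move the top-line number.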


2. Do your research

Now that you have agreement on the levers you can pull to reach your overall goals, you can look into what can be done to move each lever. In essence, what is stuck in the gears, if you want to continue the metaphor. This can be done in two ways:

  • User research - crazy, I know, but users have opinions on your site: they can let you know where they get frustrated, where they are confused, and what they love about your site.

  • Data research - pose some questions to your data to find where users are struggling. Where do they drop off? What actions are they performing that your site doesn’t account for, e.g. what searches get 0 results? Also, map out common traits of the people who do and don’t move the numbers, e.g. what do returning visitors look like, and what is common about people who don’t add to cart?
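As a sketch of the data-research side, here is how the "what searches get 0 results" question might look in Python, assuming a hypothetical search log of (query, result_count) pairs:

```python
# Minimal sketch: find the most common dead-end searches in a
# hypothetical search log of (query, result_count) rows.
from collections import Counter

search_log = [
    ("red shoes", 14),
    ("blue jacket", 0),
    ("winter coat", 7),
    ("blue jacket", 0),
    ("xyzzy", 0),
]

# Count how often each query came back empty.
zero_result = Counter(q for q, hits in search_log if hits == 0)

# Most frequent dead-end searches first - candidates for synonyms,
# redirects, or new stock.
for query, times in zero_result.most_common():
    print(query, times)
```

In practice the log would come from your analytics tool rather than a hard-coded list, but the question posed to the data is the same.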

Collate all your findings, both user insights and data insights, for each of the levers.

One thing I realised is that it is rare for all the data and insights about an area of the site to be kept together. Most are gathered ad hoc and filed away in a report. Go through all those old reports, pull out any insights, and add them to the collection too.

3. What’s the problem?

For each of the areas, get a stakeholder group together and let them do what stakeholders do best: tell you their issues with the site. Try to structure it around the data and insights you have found. You want a list of all the problems you want to solve about attracting returning visitors, for example. Try to phrase each one around the problem the user is having: As a user, I want to …..

If you are the type to get out sticky notes, this is the perfect time for that. If you have a large stakeholder group, you might even get them to vote, agile style, squeeee. Depending on the volume you get, you may want to prioritise them, but if you have around 10 then I don’t think it is needed.

To show quick wins, I would pick one lever and take it through the entire process before moving on to the next, but if you are trying to build a full program of work then you could find ALL the problems. It may make you sad, though, and make you hate your site.

4. What can we do about it?

The clear problems you have outlined now need solutions. Get those stakeholders back and let them do their second favourite thing: give opinions on what should be done to fix the site. Each problem needs at least one possible solution. These don’t all have to be AB tested; a solution could be a piece of work that is already in the pipeline, or a new project that needs funding (and now you already have the data around why it is a good idea). What you will probably find is that some problems have multiple solutions, or a solution not everyone is sold on. These are the ones that get highlighted for your AB testing plan.

5. Map it out

I came across Optimizely’s Experimentation Roadmap and felt the joy only a project-manager-turned-analyst-turned-AB-tester can feel. It is perfect for figuring out the prioritisation of each test. Once a solution has been marked as an AB test, it belongs in here, and those same stakeholders need to fill in the ranking for:

  • Potential to win

  • Business impact

  • Technical effort

  • Love of the idea

You should also record the hypothesis you are testing (the problem), based on the data you collected, along with the solution, your reasoning, and the expected results from the work you have done up to this point.
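One way to turn those four rankings into a priority order is a simple score. The weights, 1-10 scales, and test names below are my own assumptions for illustration, not Optimizely’s formula:

```python
# Hypothetical stakeholder rankings (1-10) for two candidate tests.
tests = {
    "guest checkout":     {"win": 8, "impact": 9, "effort": 4, "love": 7},
    "sticky add-to-cart": {"win": 6, "impact": 5, "effort": 2, "love": 9},
}

def priority(t):
    # Higher win/impact/love is better; higher technical effort is worse.
    return t["win"] + t["impact"] + t["love"] - t["effort"]

# Highest-priority test first.
ranked = sorted(tests, key=lambda name: priority(tests[name]), reverse=True)
for name in ranked:
    print(name, priority(tests[name]))
```

Any scoring scheme will do as long as the stakeholders agree on it before the arguments start.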

This can then be mapped out into a testing schedule, giving you a clear plan of where your team is going.

6. Now what…

Depending on the volume of traffic to your site, tests might have to run from two weeks to two months to reach statistical significance. The world isn’t going to sit still and wait with bated breath till you have some results to give. I recommend setting up regular catch-ups with stakeholders so you can let them know how the tests are going, and they can let you know anything new going on with their work that might present new hypotheses or problems.
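To get a feel for why tests take that long, here is a rough Python sketch of the sample size needed per variant, using the standard normal-approximation formula for comparing two proportions. The baseline rate and lift are hypothetical:

```python
# Rough per-variant sample size for a two-proportion test,
# using the normal approximation (not an exact power calculation).
from statistics import NormalDist
import math

def sample_size(p_base, p_var, alpha=0.05, power=0.8):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return math.ceil((z_a + z_b) ** 2 * variance / (p_var - p_base) ** 2)

# Detecting a lift from a 2.0% to a 2.4% conversion rate:
n = sample_size(0.020, 0.024)
print(n, "visitors per variant")
```

At a few thousand visitors a day, tens of thousands of visitors per variant is easily a multi-week wait, which is where the two-weeks-to-two-months range comes from.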

Once a test is finished, circulate the results to all the stakeholders, even if it is bad news. This gives us insights. Also, don’t be worried if tests fail; it is expected that 90% of tests won’t give a positive result for the variant. Knowing that a change will have no effect, or a negative one, can be just as powerful as finding something that increases revenue. It means the dev team doesn’t need to waste time implementing it, and if you keep your insights in one place, when someone suggests that change a few years down the track you can show them the results of this test.
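When a test finishes, a two-proportion z-test is one common way to read out whether the variant’s lift is statistically significant. The counts below are hypothetical:

```python
# Minimal sketch of reading out a finished AB test with a
# two-proportion z-test on hypothetical control/variant counts.
from statistics import NormalDist
import math

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = z_test(conv_a=400, n_a=20_000, conv_b=470, n_b=20_000)
print(f"z={z:.2f}, p={p:.3f}")
```

Whatever the p-value says, write the result down in the same place as your research insights so it is findable years later.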

Remember: test, measure, learn. Your learnings should grow and lead to more tests, keeping you in a job for the rest of your life.

Conclusion

At the end of the day, it is up to you to steer this towards what is best for the user, not just what will make the most money. I like to believe that we are good people who try to do what is best, and then I watch things like this. TLDW: tax websites in the USA purposely hide the free version and trick people into buying the paid version of a tax filing form using dark patterns. Just be the best you can be, and then you won’t have to trick people (unless you’re a magician).

