Let’s talk about decision processes for stopping A/B experiments early.
By that, I don’t mean concluding an experiment, but rather stopping one because we suspect something is wrong.
This could be something related to the build, the data collection, or the design of the experiment. Or it could just be that a change is so dramatic that it impacts users in a concerning way.
Whatever the issue, it’s vital to know and act as early as possible because restarting an experiment usually means also losing the data that’s been collected so far. …
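One common signal that something is wrong with the build or data collection is a sample ratio mismatch: the observed split between variants deviates from the split the experiment was designed with. Here is a minimal sketch of such a check, assuming a 50/50 split; the function name and threshold are my own illustration, not something from the article:

```python
def srm_detected(control_n: int, treatment_n: int,
                 critical_value: float = 3.841) -> bool:
    """Chi-squared test of the observed counts against an expected 50/50 split.

    3.841 is the chi-squared critical value for one degree of freedom
    at p = 0.05. Returns True when the observed split deviates enough
    that the experiment should be stopped and investigated.
    """
    total = control_n + treatment_n
    expected = total / 2
    chi_sq = ((control_n - expected) ** 2) / expected \
           + ((treatment_n - expected) ** 2) / expected
    return chi_sq > critical_value

# A 50.5/49.5 split on 10,000 users is within normal variation...
assert srm_detected(5_050, 4_950) is False
# ...but a 52/48 split on the same volume is a red flag.
assert srm_detected(5_200, 4_800) is True
```

Running a check like this automatically, every time results refresh, is what makes acting early possible: the mismatch is visible long before the experiment has collected enough data to read.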
A simple sharing of experimentation benefits didn’t seem right, however, as the main problem facing “testing muggles” is that they’re often unaware of the issues with the approaches they’re currently using.
We first need to show the shortcomings of their current approach.
Some companies will do user research, perhaps even conducting limited user testing sessions…
It’s 08:41 a.m. on 6 November 2020 and I have a browser tab open to a map of the US showing the 2020 election results.
Right now, I’m thinking about the democratisation of experiment processes across an entire company. One of the principles of democratisation is providing everyone with the ability to view real-time experiment results.
Getting back to the US Election for a moment: I’m one of the millions of people around the world who are monitoring the results of the election with tense anticipation. …
I’ve seen firsthand how well-defined processes can transform teams into factories which efficiently deliver one effective experiment after another (I’ll define what I mean by “effective” later in the article).
The idea is to have documented processes for everything from hypothesising new ideas; to prioritising experiments; to determining what metrics we need for specific experiments; and more.
Having these processes means everyone involved with experimentation knows what they’re responsible for. They also know how to undertake their respective tasks so that they are done in a consistent and predictable way.
Since many of my articles involve processes, I thought I’d…
Is there such a thing as a failed experiment? The standard answer is ‘no, because you’ll still have learnt something’. And there’s undoubtedly some truth to that. But there’s only so much you can do with “well, that didn’t work” other than give up and try something else.
You see, there is such a thing as a failed experiment: it’s when you don’t learn anything actionable.
I share this diagram a lot:
Conversion Rate Optimisation can be a wildly chaotic exercise — like blindly throwing hundreds of darts hoping that something lands on a dartboard. It helps to be purposeful in terms of the approach to take. Utilising conversion levers to track the success of experiments is a step in the right direction.
There are eight conversion levers I’ve relied on for years with the experiments I’ve run with teams — they’ve served us well. These levers are:
How to use these? Firstly, it’s useful to mention these when writing hypotheses. …
How do you scale yourself to avoid becoming a bottleneck?
That was the question I needed answering when, by chance, I came across the Smart Passive Income podcast episode where Dan Norris explained the origin of his company, WP Curve.
In the podcast, Dan described how he once provided a WordPress development and support service on his own, which meant he could only support a limited number of clients. He wanted to scale so he could support more. His solution was to productise his services.
“Productising” involves taking a set of skills and…
By test big, I mean: having multiple changes, sometimes across multiple pages.
Testing this way is sometimes necessary (not to mention desirable), but we need to be mindful of the risk we’re taking when doing this — I’ve seen too many of these big tests fail.
Don’t get me wrong, a losing test is not a bad outcome; however, a losing test that also doesn’t tell us anything is the worst possible outcome for our experiments. Just so you know, an ideal experimentation process flow looks like this:
I’m a huge Star Trek fan, especially of The Next Generation and Deep Space 9. So, when the Picard show was unveiled, I was super excited (and a little nervous since I didn’t like the Discovery show at all).
So, Picard is an interesting show. It’s a mystery, for a start. And when I say that, I don’t mean a mystery like a whodunnit, but rather a show that relies on revealing information as characters discover what’s going on and why.
It’s also one long story told in ten episodes which is unusual for Star Trek.
CRO consultant and trainer. Graphic novelist and writer.