Understanding Split Testing: The Basics
Right, most people seem to think of split testing as some sort of mystical wizardry. Some elaborate, A Beautiful Mind-level mathematics that only the most seasoned and computer-brained marketers can ever hope to wield. There's also a belief that it only works if you're an expensive SaaS company looking to get an edge over your competitors.
But in reality, split testing is fairly straightforward. Just think of it as taking an existing web page or form and making subtle changes. You then check if people liked the new version better. That's literally it.
The truth is, though, it won't always go the way you expect. Sometimes you'll be surprised by how much more people like your new layout or messaging.
Sometimes they won't like it at all, and you'll end up with a lower conversion rate than before. But that's exactly what makes split testing so important.
Instead of keeping a dull, underperforming page around for months or years (or building entire websites around trends that have already passed), you can start experimenting right away and see what makes the biggest impact on your conversion rates. No need for drawn-out planning and strategising - just test two versions and go with the one that performs better.
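If you're curious what "test two versions" looks like mechanically, here's a minimal sketch in Python (the function and experiment names are made up for illustration). Hashing the visitor ID, rather than picking at random on every visit, keeps the same person in the same variant across sessions:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing (experiment, visitor) means the split is roughly 50/50
    overall, but any one visitor always sees the same version.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

In practice you'd record which variant each visitor saw alongside whether they converted, and compare the two groups once enough traffic has come through.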
Identifying Key Metrics for Success
People get split testing wrong all the time - at least when it comes to identifying which metrics to measure. There's a common impression that certain numbers matter more than others, like conversion rates and click-through rates. Sure, you can't overlook these stats, but they're not the be-all and end-all.
The thing about websites and businesses is that there's a lot more to track than just conversions. For instance, you should keep a close eye on bounce rates, engagement rates, and even the number of page sessions - especially if your business model hinges on session-based revenue. In that case, it's not enough for people to visit your website; you want them clicking all over the place, driving your revenue with every move.
There's also dwell time and customer lifetime value (CLV). Sure, people may be clicking your links and spending money on your website right now - but how long will they stick around? Having a smaller base of customers with high CLV is often better than having thousands of customers who each spend small amounts. So while it's tempting to think it's all about conversions and click-through rates, there's a lot more nuance to it than people realise.
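For anyone fuzzy on CLV, a common back-of-the-envelope estimate is average order value times purchase frequency times expected customer lifespan. A quick sketch (the function name and numbers are mine, purely illustrative):

```python
def customer_lifetime_value(avg_order_value: float,
                            orders_per_year: float,
                            years_retained: float) -> float:
    """Back-of-the-envelope CLV: spend per order x orders per year
    x expected years the customer keeps buying from you."""
    return avg_order_value * orders_per_year * years_retained

# A customer spending £40 per order, 3 times a year, for 2 years:
# 40 * 3 * 2 = £240 of expected lifetime revenue.
```

Real CLV models also discount future revenue and account for churn, but even this crude version makes the point: one loyal £240 customer can outweigh several one-off £20 buyers.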
And there's no shame in not knowing the right metrics to track, or in prioritising certain ones over others. As with everything else in life (and business), continuous improvement is key - so long as you actually measure it.
Idea 1: A/B Testing Your Headlines
People often think headlines are just five or six words meant to scream 'sale' or 'discount', or something that sounds clickbait-y. The way I see it, headlines are the beginning, middle, and end of marketing. Sure, they're mostly the beginning, but if your first experience with a brand's content is a weird headline, you're unlikely to give them your money - unless they're selling something you absolutely cannot resist, like chocolate or cold pizza.
A/B testing is an iterative process, with real-world insights informing your design and copy decisions. It helps marketers and creatives like us find the right rhythm when we're publishing new content and creating experiences for our readers. In my experience, you want to experiment with a range of options and possibilities, because your audience can be as diverse as the ideas in your head.
To test your headlines, create a control group and play around with variations - say things differently and see what sticks. The challenge often comes with knowing when to stop testing. Not everything in marketing is black or white; there's a lot left up to chance - bad timing on a particular day or even the weather could skew a test's results. You just never know.
In my experience, each headline should be tested for at least 7 days across at least 100 people, though those numbers vary from business to business. Testing headlines works for any digital business - whether it's an e-commerce store or a blog trying to maximise revenue through affiliate links. There's so much nuance in marketing, and split testing can cut through it by showing you exactly what resonates with your customers.
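The "7 days, 100 people" figure above is a practical starting heuristic; statistically, the traffic you actually need depends on your baseline conversion rate and the smallest lift you want to be able to detect. Here's a rough sketch using the standard two-proportion normal approximation (a common textbook formula, not anything specific to the advice above):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float,
                            min_detectable_lift: float,
                            alpha: float = 0.05,
                            power: float = 0.8) -> int:
    """Rough visitors needed per variant to detect an absolute lift
    in conversion rate, via the two-proportion normal approximation."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / min_detectable_lift ** 2)
```

For a page converting at 5%, detecting a 2-percentage-point lift at the usual 95% confidence and 80% power works out to roughly two thousand visitors per variant - far more than 100, which is why very small tests often give noisy answers.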
Idea 2: Experimenting with Call-to-Action Buttons
You know, most people get their Call to Action (CTA) buttons all wrong. They think the only thing that matters is the wording and, sometimes, the placement. But what they fail to realise is that many other elements come into play when experimenting with and optimising CTA buttons. For example, the colour of the button or the surrounding area can really draw - or distract - attention.
And of course, there's the copy that goes on the button - but it's not about what you want to say, it's about what your audience wants to hear. The copy needs to be in sync with how your target audience thinks and acts in order for them to click that button and take the desired action. Then again, there's no single formula for a CTA button that works across all types of businesses and websites. Some might work better with a short two-to-three-word CTA, while others might require a more convincing six-word statement on their button.
Sometimes capitalising every letter makes a difference; sometimes keeping it all lower case does. There's no clear right or wrong here, so you have plenty of room to experiment and test things out until you find something that works.
Idea 3: Variations in Pricing Strategies
Now, there's a common misconception that people only buy from the cheapest source, but that's really not true. I've noticed over the years that many factors come into play with pricing, and you'd be surprised how many people will fork out more money if you give them a good enough reason. These 'reasons' don't have to be physical - premium feel, trust signals, and authenticity can all count as reasons for someone to spend more on your products.
Most business owners are scared to raise prices because they worry about losing sales or facing backlash from loyal customers. That's fair, but if you only ever make decisions based on fear of risk, you'll never get any real rewards. Reaping rewards means taking risks - sometimes even crucial ones. That doesn't mean you should raise prices on everything at once and see what happens, though.
In my experience, when it comes to pricing, slow and steady wins the race. Split testing one or two products at different price points or in different combos helps build trust between your business and your customers - and confidence within yourself. It also gives you time and space to experiment with what works best for your brand without alienating your core audience.
There will never be a single right answer when it comes to pricing strategies - different things work for different brands and customer segments. You might see a massive improvement in one product by raising its price while another tanks entirely at higher prices. Test small and fail small, so that when you do go big, you already know exactly what needs doing.
Analyzing Results: Making Data-Driven Decisions
People tend to get excited when they see different numbers in split-testing results. They immediately jump to conclusions, assuming the higher-converting variant is the winner. It's easy to think this way, but there's more to it than that.
You have to look at the bigger picture and ask yourself some important questions. It seems like all you should care about is which variant brings in more revenue, right? But it can get more complicated than that. Sometimes you might see a significant difference in performance between your variants, but it could be down to random chance.
If your test hasn't run long enough, or if not enough people took part, your results might not be reliable. As much as we want clear answers and predictable results from split testing, that's not always going to happen. It can be frustrating when things don't go your way or don't make sense: you could do everything right and still have something completely unexpected happen, like a previously high-converting page suddenly performing worse than ever before.
A good rule of thumb is to keep a test running until you find a statistically significant difference between variants before making any decisions about which one works best for you. There are also calculators available online where you can enter the relevant numbers (such as visitors and conversions per variant) and see how reliable your data actually is, so that making informed decisions becomes easier over time.
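As a concrete illustration of what those online calculators do under the hood, here's a sketch of a pooled two-proportion z-test in Python (my own example, not taken from any particular calculator). Given visitors and conversions for each variant, it returns a two-sided p-value; a common convention is to call the difference significant when that value falls below 0.05:

```python
from math import sqrt
from statistics import NormalDist

def ab_p_value(conversions_a: int, visitors_a: int,
               conversions_b: int, visitors_b: int) -> float:
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pool the groups to estimate the standard error under the
    # null hypothesis that both variants convert at the same rate.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

For example, 48 conversions from 1,000 visitors versus 70 from another 1,000 gives a p-value of about 0.04 - just past the usual threshold - whereas 50 versus 52 on the same traffic doesn't come close, however tempting that small gap might look.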