All A/Bout Split Testing

A recent Validity report showed programs achieving 90% inbox placement rates or above are 25% more likely to have A/B split testing as part of their overall strategy. However, testing presents its own challenges: what hypotheses should be tested, how should tests be built, and how should results be measured?

In the latest episode of our popular State of Email Live webinar series, it was a pleasure to be joined by Charlie Wijen, Digital & CRM Specialist at Philips, and my Validity colleagues Tori Garcia and Laura Christensen, who provided expert answers to these questions.

Tori kicked things off with our regular analysis of global email metrics. There has been a recent surge in email volumes as businesses – especially travel and retail – anticipate post-pandemic life. Subscribers are responding with open rates trending upwards to around 26%. In turn, deliverability is also trending positively, with global inbox placement rates around 85%.

The torrent of audience questions reinforced what a popular topic split testing is, and our presenters obliged with fantastic insights, examples, and supporting data. Here are some of the highlights:

  • Charlie emphasised split testing is not just about subject lines. Philips has learnt factors like use of product vs people imagery, colour of products and buttons, and even showing different foods for different regions all impact performance.
  • On average, the winning versions in Philips’ tests deliver performance uplifts of around 33% in open rates and 20% in click-to-open rates. Little wonder that split testing is now mandatory for all new campaigns!
  • Tori showcased an innovative example from Dell, where predictive eye tracking was used to test optimal positioning of content elements. The winning version generated a healthier mix of clicks across both email and device creatives.
  • Laura wrapped up with 10 tips for effective testing, covering planning, execution, and analysis. Starting with clear goals and hypotheses, testing across multiple devices, and ensuring meaningful and repeatable results were just three of Laura’s common-sense pieces of advice.
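On that last point about meaningful, repeatable results: a common way to check whether a split’s winner is genuinely better (rather than noise) is a two-proportion z-test on the two variants’ open or click rates. The sketch below is purely illustrative – the function name and the subscriber counts are hypothetical, not figures from the webinar:

```python
from math import sqrt, erf

def ab_significance(opens_a, n_a, opens_b, n_b):
    """Two-proportion z-test comparing variant B against variant A.

    Returns (relative_uplift, two_sided_p_value).
    """
    p_a, p_b = opens_a / n_a, opens_b / n_b
    # Pooled rate under the null hypothesis that both variants perform equally.
    pooled = (opens_a + opens_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    relative_uplift = (p_b - p_a) / p_a
    return relative_uplift, p_value

# Hypothetical split: 10,000 recipients per variant,
# 24% opens for A vs 26% opens for B.
uplift, p = ab_significance(2400, 10_000, 2600, 10_000)
```

In this hypothetical example the ~8% relative uplift comes out statistically significant (p well below 0.05), so the result would be worth acting on; a p-value near 1 would suggest the difference is just noise.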

That’s just a sample (see what I did there!) of everything we covered. If we’ve whetted your appetite to watch the full recording, it can be viewed below.

If you’d like to learn more about how customers like Philips use Validity’s solutions to complement their testing programs, request a demo now.