Are You Sabotaging Your Testing Plan?

Most marketers agree testing is one of the most effective ways to optimize your email program and increase ROI. As outlined in Return Path’s recent eBook, All About A/B Testing, even a simple A/B test, such as testing subject lines or promotional offers, can provide valuable insights for future campaigns. However, in order to run a successful and impactful test, marketers need to avoid common pitfalls that can sabotage their results.

When running your test, always keep the following rules in mind:

  • Test one variable at a time. Limiting testing elements to one per campaign allows you to accurately measure the impact of each variable. For example, if you’re testing call-to-action language (such as “Shop Now” vs. “Click Here”) this should be the only difference between content in version A and version B of your test. If you change the call-to-action language and also alter call-to-action placement, it will be unclear which variable drove your results.
  • Send tests at the same time. As obvious as it may seem, we often forget this critically important detail that can skew results. Unless you are testing send time, make sure you deploy both versions of your tests simultaneously; otherwise, it will be difficult to tell which variable actually caused the results.
  • Use a statistically significant sample size. One of the key factors in running a successful test is making sure your test audience is set up correctly; otherwise, you'll make important business decisions based on inaccurate results. Use Return Path’s sample size calculator to ensure your test audience size is statistically significant. As a rule of thumb, aim to include 2,000-3,000 subscribers per test cell for a 95% confidence level and a 2% margin of error.
  • Be patient. Although you may be excited to incorporate test findings into your email program, tests need sufficient time to run in order to produce clear and accurate results. Wait 48 to 72 hours before determining a winner, and run the same test at least three times to confirm that results remain consistent before declaring a true winner.
  • Track your results. Create a detailed spreadsheet outlining each test and its results. Track audience size, testing variables, subject line, deployment time, and all metrics used to determine success, such as opens, clicks, and conversions. Not only does this allow you to review past tests, but it also makes it easy to share findings with your larger organization.
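If you'd like to see where the 2,000-3,000 subscriber rule of thumb comes from, here is a small sketch using the standard sample-size formula for estimating a proportion. (This is a generic statistical formula, not Return Path's calculator; the function name and defaults are illustrative.)

```python
import math

def sample_size(confidence_z: float = 1.96, margin: float = 0.02, p: float = 0.5) -> int:
    """Minimum subscribers needed per test cell.

    Standard formula n = z^2 * p * (1 - p) / E^2, where:
      - confidence_z: z-score for the confidence level (1.96 for 95%)
      - margin: the margin of error (0.02 for 2%)
      - p: expected response rate; 0.5 is the most conservative
        assumption when the true rate is unknown
    """
    return math.ceil(confidence_z ** 2 * p * (1 - p) / margin ** 2)

# 95% confidence, 2% margin of error -> 2,401 subscribers per cell,
# which lands squarely in the 2,000-3,000 rule of thumb above.
print(sample_size())
```

Note that a higher confidence level or a tighter margin of error increases the required audience size quickly, so check the numbers before splitting a small list into test cells.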

Not only is testing a helpful way to optimize your email strategy, but it's also the key to ensuring long-term program success. If it’s not already on there, add testing to your email to-do list and check out this post for 50 testing ideas to get you started today!
