A View of The Email Universe: Return Path's Q2 Reputation Benchmark Report

When we set out to build a reputation data network we had a strong sense that the volume of email being sent to top receivers was staggering. But sensing is one thing. Having empirical data is another.

Which is why I’m so excited about the Return Path Q2 Reputation Benchmark Report. Now we have actual email performance data that tells us what the email traffic really looks like.

You can read the report yourself here.

Here’s my high level take on what we found:

  1. Most of the servers sending email shouldn’t be. Only 20% of the IPs we studied were legitimate, well-configured, static email servers. It’s important to point out that this doesn’t speak at all to the quality of the messages from those servers – lots of horrible spammers know how to configure a mail server. The other 80% of IPs are servers that are either identifiably bad or unidentifiable, and probably bad. No wonder ISPs and other large receivers feel besieged.
  2. Servers with good reputations get their messages delivered. Servers with bad reputations don’t. This might seem obvious to those of you reading this blog, and of course it is. But again, having that empirical data is gratifying. We found a direct linear relationship between an IP’s Sender Score and that IP’s average delivered rate. Of course I have to point out here that it is not the low Sender Score that is causing the delivery problems, a common misconception. The reputation issues that give an IP a low Sender Score are what also cause that IP to be blocked from inboxes.
  3. Specific best practices have a direct effect on an IP’s delivery rates. IPs with even a single spam trap hit saw delivery rates 20 points lower than comparable IPs with none. For servers with unknown user rates above 9%, the gap was 23 points versus servers with cleaner data.
  4. Blacklists don’t cause blocking, they predict it. We found that servers listed on any one of nine public blacklists (the lists studied are noted in the report) had an average delivery rate of 35%, versus 58% for servers not on these lists. But the reason is not that receivers rely on those blacklists. In fact, some of them are not used very much at all. Much like with the Sender Score, the behaviors that land a server on a blacklist also cause that server to be blocked by many receivers.
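
For readers curious about the mechanics behind those lists: a public DNS blacklist (DNSBL) is consulted with an ordinary DNS query, built by reversing the IP’s octets and appending the list’s zone. Here is a minimal sketch in Python; the zone name used is illustrative, not one of the nine lists in the report.

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build a DNSBL lookup name: reverse the IPv4 octets, append the zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """Return True if the IP appears on the given DNS blacklist.

    A listed IP resolves (typically to a 127.0.0.x answer); an unlisted
    IP returns NXDOMAIN, which gethostbyname raises as socket.gaierror.
    """
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

# Example (hypothetical zone; substitute a real public DNSBL):
# is_listed("192.0.2.1", "dnsbl.example.org")
```

The point of the finding above stands either way: whether or not a given receiver performs this lookup, the behavior that got the IP listed is what gets its mail blocked.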

Read the report now. And if you haven’t already gotten your Sender Score, go to our reputation portal at www.senderscore.org. You can get your score for free, or register with us (still free!) for a more detailed report on your reputation.
