Email Sender and Domain Reputation

What’s an FBL?

by J.D. Falk
Director of Product Strategy, Receiver Services

In spite of the best efforts of anti-spam staff, end users — the account holders, recipients of email — still receive spam. And users want to complain about it, preferably to someone who’ll do something to make it stop. So, sometime in the late 1990s, mailbox providers created an easy way to complain directly to them.

Users click a “spam” button in their email client, and the mailbox provider automatically processes that message to improve its spam filters. This is the underpinning of complaint-fed signature-based filters like Cloudmark, as well as reputation systems including our own Sender Score. The earliest reputation systems used little more than complaint/volume ratios, though of course they’ve gotten more complex since.
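
For illustration only, here is a minimal sketch (in Python) of that complaint/volume ratio idea. This is my own toy example, not how Sender Score or any particular reputation system actually computes its scores:

    # Toy complaint-rate calculation: the ratio of "spam" clicks to messages
    # delivered from a given sending IP. Real reputation systems weigh many
    # more signals; this only illustrates the earliest approach.
    def complaint_rate(complaints: int, delivered: int) -> float:
        """Fraction of delivered mail that recipients reported as spam."""
        if delivered == 0:
            return 0.0
        return complaints / delivered

    # e.g. 250 "this is spam" clicks against 100,000 delivered messages
    print(f"{complaint_rate(250, 100_000):.2%}")  # prints 0.25%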

As useful and important as spam filters and reputation systems became, they still didn’t effectively address the root cause. The spam was no longer reaching inboxes, which is absolutely necessary for customer satisfaction — but it was still being sent, and the people sending it were still online with all the same rights and privileges as any other user. So, a few mailbox providers started forwarding these complaints to each other so that the offending users could be (ahem) dealt with appropriately. Later, ESPs, deliverability consulting services like Return Path’s offerings, and direct senders asked for feedback too.

After spammers gave up on trying to send through their mailbox provider’s SMTP server and switched to sending directly from their own IP addresses, the processing had to change a bit — but the feedback became even more necessary. It was important in the days of open relays and open proxies. Now that most spam comes from botnets, this feedback helps many access providers find the infected machines in their network so that they can help their users get cleaned up. And a lot of spam is still sent via legitimate SMTP relays or free webmail services, so this same feedback is used to fine-tune outbound spam filters and disable accounts.

Common types of Complaint Feedback Generators include mailbox providers, 3rd parties working on behalf of the mailbox provider (such as Return Path), and 3rd parties working on behalf of the recipient (such as SpamCop).

There are many more common types of Complaint Feedback Consumers: mailbox providers, access providers, hosting & colocation companies, ESPs, direct senders, 3rd parties working on behalf of the ESP or direct sender (such as Return Path), researchers, and legal enforcement authorities.

In nearly every case, the feedback flows from generator to consumer by prior arrangement: the consumer requests feedback, and the generator decides whether to approve that request. Feedback loops hosted by Return Path (and many others) include a confirmation step: the abuse@ or postmaster@ address at the domain name responsible for the original messages is asked to confirm whether that requester is authorized to receive feedback for that domain.
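
As a rough sketch of that confirmation step, a feedback generator might simply mail the standard role accounts at the responsible domain and wait for a reply. This is an assumed workflow, not Return Path’s actual system, and all addresses below are placeholders:

    # Sketch: ask abuse@ and postmaster@ at the responsible domain to confirm
    # an FBL application. Assumes a local SMTP relay; addresses are examples.
    import smtplib
    from email.message import EmailMessage

    def request_confirmation(domain: str, requester: str, relay: str = "localhost") -> None:
        for role in ("abuse", "postmaster"):
            msg = EmailMessage()
            msg["From"] = "fbl-confirmation@feedback-host.example"  # hypothetical
            msg["To"] = f"{role}@{domain}"
            msg["Subject"] = f"Feedback loop request for {domain}"
            msg.set_content(
                f"{requester} has applied to receive complaint feedback for "
                f"{domain}. Please reply to confirm they are authorized."
            )
            with smtplib.SMTP(relay) as smtp:
                smtp.send_message(msg)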

The feedback itself consists of one new email message (often called a report) per complaint, just like forwarding — which is indeed where the concept originated. Originally, the headers & body were dumped into the body portion of a message; formatting such as character encoding or whitespace could vary widely. Sometimes there was explanatory text at the top. Unique parsers had to be written for every feedback generator, but there weren’t many.

In 2005 a small group of MAAWG members, including myself, decided to work on a standard format for these reports. We wanted it to eventually become a standard, so we created a mailing list outside of MAAWG (which has strict confidentiality rules), invited a few additional experts to participate, and wrote the first draft of the Abuse Reporting Format (ARF).

By 2007 ARF was already the de facto standard; no new feedback loops have cropped up since then which aren’t using it. But then we started discussing some possible extensions to ARF, and knew it needed to go through the IETF standards process so that these changes could be discussed and tracked in the standard way of standards.

After the IETF leadership blessed the creation of a Mail Abuse Reporting Format (MARF) Working Group, draft-ietf-marf-base was published. This version removed all of the report types that appeared in earlier versions of ARF but weren’t actually used by feedback generators. Following a few improvements to the language, it has now been published as RFC 5965.
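
To make the shape of such a report concrete, here is a hedged Python sketch that assembles a minimal abuse report along the lines RFC 5965 describes: a multipart/report message carrying a human-readable part, a machine-readable message/feedback-report part, and the original message. The addresses, User-Agent string, and function name are my own placeholders:

    # Assemble a minimal ARF-style abuse report (see RFC 5965 for the full set
    # of report fields; only the required ones are shown here).
    import email
    from email.message import Message
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email.mime.message import MIMEMessage

    def build_abuse_report(raw_original: str) -> MIMEMultipart:
        report = MIMEMultipart("report")
        report.set_param("report-type", "feedback-report")
        report["From"] = "fbl@mailbox-provider.example"  # placeholder
        report["To"] = "fbl-reports@sender.example"      # placeholder
        report["Subject"] = "Abuse report"

        # Part 1: human-readable explanation of the report
        report.attach(MIMEText("This is an email abuse report for a message "
                               "received from your network.", "plain"))

        # Part 2: machine-readable feedback fields
        fields = Message()
        fields.set_type("message/feedback-report")
        fields.set_payload("Feedback-Type: abuse\r\n"
                           "User-Agent: ExampleGenerator/1.0\r\n"
                           "Version: 1\r\n")
        report.attach(fields)

        # Part 3: the complained-about message itself, as message/rfc822
        report.attach(MIMEMessage(email.message_from_string(raw_original)))
        return report

With a common structure like this, a consumer can parse the second part for complaint metadata and the third part for the offending message itself, rather than needing a unique parser for every generator.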

At the same time, MAAWG has published a Best Common Practices document which explains the rationale for either generating or consuming such feedback from some of the most common perspectives.

Today the list of feedback generators includes AOL, BlueTie/Excite, Comcast, Cox, Earthlink (though it may be on hiatus), Microsoft, RackSpace, Outblaze (restricted to IPs on their whitelist), Road Runner, Tucows/OpenSRS, USA.net, and Yahoo! (including other large providers they host).

Return Path helps with just over half of these, and we’re aware of about a dozen more in progress. We’re also starting to see some feedback mechanisms based on spam trap messages, rather than complaints — and there’s been some work towards making it easier for a desktop or device email client to send spam complaints to the user’s mailbox provider.

None of this entirely answers the question, though: what’s an FBL? Unfortunately, this has become difficult to answer. I’ve seen those three letters variously applied to feedback generators, feedback messages, feedback consumers, report creation software, report parsing software — absolutely anything related to the process. So, in an effort to communicate accurately, I’ve been trying to stop saying “FBL” entirely.

We’ll be publishing a high-level overview of the standard Abuse Reporting Format next week on our new blog, “Received:”. If you’d like to talk about either generating or consuming these reports, please contact us.