The Fiveten Blacklist: Not Accurate

“Fiveten” is a combination anti-spam blacklist run by Carl Byington, publishing under the name of “510 Software Group.” The blacklist has been available since at least February 2001.
It has a multitude of criteria for listings.
As of this writing, the website lists the following current criteria:
  • Individual spam sources: “These are generally taken from spam samples that have arrived here, and from discussions on…”
  • “Bulk mailers that don't require closed loop confirmed opt-in from all their customers, or that have allowed known spammers to become clients.”
  • “Networks that provide services to spammers.”
  • Web servers running software vulnerable to spam relay, such as FormMail.
  • Open relaying mail servers.
  • “Free mail providers.” One assumes this relates to sites like Yahoo or Hotmail.
  • “Systems that send virus notifications (klez, sobig, etc) to the supposed sender.” In other words, a specific type of backscatter.
  • “Systems that have delivered challenge-response” messages to Carl's mail server. Yet another type of backscatter.
  • “Systems that are owned by organizations that blatantly violate the TCPA.” This refers to what most would call phone spammers: entities that Carl is aware have sent pre-recorded telephone solicitations. (In other words, not email related.)
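For readers unfamiliar with how a list like this is actually consumed: blacklists of this kind are typically published as DNSBLs, which a receiving mail server queries by reversing the octets of the connecting IP address and looking up the result under the list's DNS zone. The sketch below illustrates that mechanism; the zone name `bl.example.com` is a placeholder, not Fiveten's actual zone.

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL lookup name: IP octets reversed, appended to the zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """Return True if the DNSBL answers with an A record for this IP (listed)."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN and similar lookup failures: not listed
        return False
```

By convention, a listed address resolves to something in 127.0.0.0/8, and an unlisted one returns NXDOMAIN.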
I've been tracking the effectiveness of the Fiveten blacklist since March 2007. It and Spamcop were two blacklists about whose current effectiveness I had little data, and I was intensely curious as to what Fiveten targeted and how well it succeeded at stopping spam.

Over the years, I've answered a lot of questions from a lot of companies trying to figure out how to do the right thing with regard to list management and application of abuse prevention best practices. One of the recurring themes in the many emails I receive is blacklisting. I'm blacklisted! What do I do? How do I get un-blacklisted? How do I prevent myself from being blacklisted? Interestingly, one of the blacklists I'm most frequently asked about is Fiveten. Why is that?

Well, after tracking the effectiveness of Fiveten for many months, I've figured out why: Fiveten is inexact and inaccurate. It blocks only a so-so level of spam, and, on a percentage basis, it tends to block more non-spam than spam.

The chart above shows the thirteen-week average effectiveness as measured against my spamtrap and hamtrap mail sources. Fiveten has an approximately 40% success rate at filtering spam. However, it incorrectly blocks a staggering 44% (approximately) of non-spam.
Analysis of the raw data suggests to me that Fiveten's poor (high) false positive rate is primarily due to its listing of “bulk mailers that don't require closed loop confirmed opt-in from all their customers.” As a result, Fiveten has thousands of senders listed that have never sent spam, specifically because they choose not to utilize double opt-in. This means that Fiveten is effectively a tool that blocks “things the maintainer doesn't like,” which is a wholly different criterion than blocking spam. Against my own data, it appears that there is no direct correlation between spam and the blacklist maintainer's chosen listing criteria.
There's nothing wrong with making a blacklist that requires that any sender not utilizing double opt-in be listed. It's fair to ask how accurate such a list would be, or is. Is there a correlation between lack of confirmed opt-in and spam? Double opt-in, or confirmed opt-in, is a practice that I have strongly promoted for many years. Indeed, I've designed and built a number of confirmed opt-in systems myself over the years, and continue to promote it to this day. However, ISPs generally do not block mail from senders only because they don't utilize double opt-in. What do they know that Fiveten doesn't know?

It's perfectly acceptable to create and publish a blacklist that operates on specific, arbitrary criteria. Blacklist operators clearly have the right to block any email, or any sender, even if only because their email messages might contain the letter “T” or the number “7.” Blacklists are opinions, and I support a blacklist publisher's right to define whatever listing criteria they feel appropriate. But how does arbitrary relate to accuracy? What if there were a blacklist that listed any IP address containing the number 7? I'm in a good position to test exactly how well a blacklist like that might work. Since March, over 1.6 million email messages (a combination of spam and non-spam) have crossed my tracking mechanism, and I've saved the IP address (and other data) for each. So, it's actually pretty easy for me to comb through that data and measure the effectiveness of this type of hypothetical, clearly arbitrary (and some would add, silly) blacklist.
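Measuring any blacklist this way reduces to simple counting: of the spam messages, how many came from listed IPs (the catch rate), and of the non-spam messages, how many also came from listed IPs (the false positive rate). A minimal sketch, assuming each saved message is an (IP, is_spam) pair and that `listed` is whatever membership test the blacklist defines:

```python
def effectiveness(messages, listed):
    """Compute (spam catch rate, ham false positive rate) for a blacklist.

    messages: iterable of (ip, is_spam) pairs
    listed:   predicate returning True if the blacklist lists this IP
    """
    spam_hit = spam_total = ham_hit = ham_total = 0
    for ip, is_spam in messages:
        if is_spam:
            spam_total += 1
            spam_hit += listed(ip)
        else:
            ham_total += 1
            ham_hit += listed(ip)
    catch_rate = spam_hit / spam_total if spam_total else 0.0
    fp_rate = ham_hit / ham_total if ham_total else 0.0
    return catch_rate, fp_rate
```

A good blacklist has a high catch rate and a false positive rate near zero; Fiveten's two numbers, as shown above, are uncomfortably close together.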

After a few minutes of coding and data compilation, here's what I've come up with: the “Luckyseven” blacklist. As the name suggests, any mail server “lucky” enough to have an IP address containing the number 7 is listed. When comparing Luckyseven to Fiveten, Luckyseven is approximately ten percentage points more accurate against spam (50% vs. 40%), and slightly less inaccurate against non-spam (43% vs. 44%).
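The Luckyseven membership test itself is as trivial as it sounds: a single string check against the dotted-quad form of the address. A sketch:

```python
def luckyseven_listed(ip: str) -> bool:
    """A mail server is 'listed' if its dotted-quad IP contains the digit 7."""
    return "7" in ip

# 10.1.2.3 contains no 7 and is not listed; 10.7.2.3 (and 172.16.0.1) are.
```

That a one-character test can out-perform a maintained blacklist is the entire point of the exercise.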

I think this exercise suggests that arbitrary listing criteria not based on direct correlation to spam can result in a blacklist that doesn't target spam accurately or successfully.
Any ISP that uses this list is going to block a lot of mail its users actually want to receive. As just a sampling, using Fiveten means rejecting various email messages from Microsoft, multiple public radio newsletters (from different radio stations in different states), travel notifications and newsletters from Expedia and Hotwire, lots of other newsletters and news updates from various newspapers and TV shows, and even the newsletter from my favorite pizza place back in my home town of Minneapolis.

Could any of these senders have list management issues? Could any of them be spammers, or be engaging in bad acts warranting blacklisting? Potentially, yes. I know nothing about the practices of any of the entities listed, and I do know that even the “good guys” can go off the rails once in a while and end up on a blacklist. But it seems unlikely that this is the case with all of them. To me, this is further indication that Fiveten is unsuitable for use as a spam blocking mechanism.
For the most up-to-date Fiveten accuracy data available from DNSBL Resource, visit the Fiveten data page at the Blacklist Statistics Center.

(Please note: the Luckyseven list is fictional, an exercise only; do not use it for spam filtering.)