Response to a Deliverability Rant
Justin Foster from WhatCounts, an email service provider based in Seattle, wrote a very lengthy posting about email deliverability on the WhatCounts blog yesterday. There’s some good stuff in it, but there are a couple of things I’d like to clarify from Return Path’s perspective.
Justin’s main point is spot-on. Listening to email service providers talk about deliverability is a little bit like eating fruit salad: there are apples and oranges, and quite frankly pineapples and berries as well. Everyone speaks in a different language. We think the most relevant metric to use from a mailer’s perspective is inbox placement rate. Let’s face it – nothing else matters. Being in a junk mail folder is as good as being blocked or bounced.
Justin’s secondary point is also a good one. An email service provider only has a limited amount of influence over a mailer’s inbox placement rate. Service providers can and must set up an ironclad email sending infrastructure; they can and must support dedicated IP addresses for larger mailers; they can and must support all major authentication protocols — none of these things is in any way a trivial undertaking. In addition, service providers should (but don’t have to) offer easy or integrated access to third-party deliverability tools and services that are on the market. But at the end of the day, most of the major levers that impact deliverability (complaint rates, volume spikiness, content, registration/data sources/processes) are pulled by the mailer, not the service provider. More on that in a minute.
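As an aside, since “support all major authentication protocols” can sound abstract, here’s a minimal sketch of one such piece of plumbing, assuming the dnspython library and a hypothetical sending domain (this is an illustration, not any particular provider’s implementation). It just checks whether a domain publishes an SPF policy, which is exactly the kind of infrastructure a good service provider handles on a mailer’s behalf:

```python
# Minimal sketch: does a sending domain publish an SPF policy?
# Assumes the dnspython library (pip install dnspython) and a
# hypothetical domain; real provider tooling does much more.
import dns.resolver

def get_spf_record(domain):
    """Return the domain's published SPF policy, or None if absent."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt
    return None

# A dedicated sending domain should publish something like
# "v=spf1 ip4:192.0.2.0/24 -all" before any mail goes out on it.
print(get_spf_record("example.com"))
```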
I’d like to clarify a couple of things Justin talks about when it comes to third-party deliverability services.
First, he’s correct that seed lists work from a sample of email addresses and therefore can’t tell a mailer with 100% certainty which individual messages reach the inbox or get blocked or filtered. However, when sampling is done correctly, it’s an incredibly powerful measurement tool. Deliverability sampling gives mailers significantly more data than any other source about the inbox placement rate of their campaigns. Since this kind of data is by nature post-event reporting, the most interesting thing to glean from it is change in inbox placement from one campaign to the next. As long as the sampling is done consistently, that tells a mailer the most critical need-to-know information about how the levers of deliverability are working.
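To make the mechanics concrete, here’s a hypothetical sketch (not any vendor’s actual implementation) of how seed-list measurement rolls up: each seed address at each ISP reports where a campaign landed, and the campaign-over-campaign change in the resulting rate is the number to watch:

```python
# Hypothetical sketch of seed-list measurement: each seed address
# reports where a campaign landed ("inbox", "bulk", or "missing"),
# and the results roll up into an inbox placement rate.
from collections import Counter

def inbox_placement_rate(seed_results):
    """Share of seed addresses whose copy of the campaign hit the inbox."""
    counts = Counter(seed_results.values())
    return counts["inbox"] / len(seed_results)

campaign_jan = {
    "seed1@hotmail.example": "inbox",
    "seed2@hotmail.example": "bulk",
    "seed3@aol.example": "inbox",
    "seed4@msn.example": "missing",
}
campaign_feb = dict.fromkeys(campaign_jan, "inbox")  # every seed hit the inbox

# Measured consistently, the delta between campaigns is what matters.
print(inbox_placement_rate(campaign_feb) - inbox_placement_rate(campaign_jan))
```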
For example, we released our semi-annual deliverability tracking study for the first half of 2005 yesterday (download the whitepaper with tracking study details here, or view the press release here). We don’t publicly release mailer-specific data, but the client-level data that went into this study is very telling. Clients who start working with us at, say, a 75% inbox placement rate, then work hard on the levers of deliverability and raise it to 95% on a sampled basis, see the improvement flow straight through as their sales and other key email metrics jump by 20%. A small margin of error on the sample doesn’t render the process useless.
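For the statistically curious, the margin of error on a seed sample is just the ordinary binomial one. A rough sketch, assuming a simple random sample of seeds and a 95% normal approximation (the seed count here is made up for illustration):

```python
# Rough sketch of the sampling error on a measured placement rate,
# assuming a simple random sample and a 95% normal approximation.
import math

def margin_of_error(rate, n_seeds, z=1.96):
    """95% margin of error for a placement rate measured on n_seeds seeds."""
    return z * math.sqrt(rate * (1 - rate) / n_seeds)

# A 75% rate measured on 400 seeds is good to within about 4 points,
# so a sampled move from 75% to 95% is far larger than the noise.
print(round(margin_of_error(0.75, 400), 3))  # -> 0.042
```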
Second, Justin issues a big buyer-beware warning about Bonded Sender and other “reputation” services (quotes deliberate; more on that in a minute as well). Back in June, we released a study of Bonded Sender clients which showed that mailers who qualified for Bonded Sender saw an average 21% improvement in inbox delivery rates (a range of 15% to 24%) at ISPs that use Bonded Sender, such as MSN, Hotmail, and Roadrunner. We were pretty careful about the data used in this analysis. We only looked at mailers who were clients both before and after joining the Bonded Sender program, for long enough on each side to be meaningful, and we looked at a huge number (100,000+) of campaigns. Yes, it’s still “early days” for accreditation programs, but given this data we think we’re off to a good start with them, and the program isn’t all that expensive relative to what mailers pay for just about everything else in their email deployment arsenal.
Finally, let me come back to the two “more on that in a minute” points from above. I’ll start with the second one: Bonded Sender is an accreditation program, or a whitelist, NOT a reputation service. Accreditation and reputation services are both critical components in the fight to improve inbox placement of legitimate, permissioned marketing email, but they’re very different kinds of programs (a little background on why they’re important and how they fit with authentication here).
Accreditation services like Bonded Sender work because, for the very best mailers, third parties like TRUSTe essentially vouch that a mailer is of such high quality that an ISP can feel comfortable putting that mailer’s messages in the inbox without subjecting them to the same level of scrutiny as random inbound mail.
There are no real, time-tested reputation services for mailers in the market today. We’re in the process of launching one now, called Sender Score. Sender Score (and no doubt the other reputation services that will follow it) is designed to help mailers measure the most critical levers of deliverability so they can work at solving the underlying root-cause problems that lead to low inbox placement. This is really powerful stuff, and it will ultimately prove our (and Justin’s) theory that mailers have much more control over their inbox placement rate and deliverability than service providers do.
Where does all this lead? Two simple messages: (1) If you outsource your email deployment to an email service provider, pick your provider carefully and make sure they do a good job with the infrastructure-related levers of email deliverability that they do control. (2) Whether you handle email deployment in-house or outsource it to a service provider, your inbox placement rate is largely in your control. Make sure you do everything you can to measure it and look closely at the levers, whether you work with a third-party deliverability service or not.
Apologies for the lengthy posting.