From Mark Edwards
A collaborative spam filter along the lines described in your article has run quite well for a number of years (14 May, p 24). The Cloudmark SafetyBar has well over 1 million users who identify messages as legitimate or spam; the users themselves are ranked in nine levels of trustworthiness (including negative levels). As the team in the article anticipated, such a social network is extremely accurate at identifying spam while generating very few false positives.
My experience with the system over about 18 months is that of the 49,252 emails I received, 2479 were spam. The software identified 2432 of these and left me to manually identify the remaining 47. Only 13 of these manual identifications happened in the last six months, so, as expected in the article, increased numbers of users are improving the effectiveness of the system. I now have to deal with only two spam messages per month myself.
Chapel Hill, Queensland, Australia
From Eric Solomon
Filters of any type, networked or not, can never beat spammers. First, it is so easy to disguise key words in any of a billion ways. Second, the idea of a network sharing information implies that each user will be prepared to fill his hard disc with a myriad of sample spams, or extracts therefrom. And third, the system proposed in the article would be extremely vulnerable to sabotage.
The only complete solution to the spam problem is for email users to embrace the concept of passwords. I proposed such a system to the All-Party Internet Group at the House of Commons in September 2003, and to other interested parties. The idea is that email clients (the programs handling email) will reject incoming messages that do not contain a password in the message body.
Each user selects, or changes, their password and may have different passwords for different purposes. Senders who need to contact a recipient whose password is unknown to them will simply send a message with a meaningful subject line, and the single word ‘password’ in the body of the message. The recipient can then decide whether to send his password, or not.
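The acceptance rule described above can be sketched in a few lines. This is a minimal illustration, not any real client's implementation; the password list, the `classify` function, and the single-word "password" convention for requests are assumptions drawn from the letter.

```python
# Hypothetical per-user passwords; in the proposed scheme each user
# selects these and may keep different ones for different purposes.
ACCEPTED_PASSWORDS = {"friends2005", "work-inbox"}

def classify(body: str) -> str:
    """Sort an incoming message by the letter's three-way rule:
    accept if the body contains a known password, hold it as a
    password request if the body is just the word 'password',
    and reject everything else."""
    if any(word in ACCEPTED_PASSWORDS for word in body.split()):
        return "accept"
    if body.strip().lower() == "password":
        return "request"   # sender is asking the recipient for a password
    return "reject"

print(classify("Hi! friends2005 Lunch on Friday?"))  # accept
print(classify("password"))                          # request
print(classify("Buy cheap meds now"))                # reject
```

The recipient would then decide whether to answer a "request" message with a password, exactly as the letter describes.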
London, UK
From Chris Jack
The proposal to look for spam collaboratively, with behind-the-scenes software sharing emails without user consent, has two key flaws. First, matching emails automatically is no longer straightforward: spammers add random text and variations to their messages to make direct comparison harder.
Second, the idea that potentially confidential emails might be sent to other people’s computers is anathema to most people’s privacy requirements.
Opt-in schemes, where people voluntarily indicate particular emails are spam and this information gets centralised, avoid at least the second problem and make the first problem easier to analyse. These are already in widespread use.
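The matching problem is easy to demonstrate. The toy comparison below (not Cloudmark's or the DCC's actual algorithm) shows why exact digests fail once a spammer appends random text, while a fuzzy measure such as character n-gram overlap can still link two variants of the same template.

```python
import hashlib

def ngrams(text: str, n: int = 4) -> set:
    """Break a message into overlapping character n-grams,
    after normalising case and whitespace."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Similarity between two messages as overlap of their n-gram sets."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

spam1 = "Buy cheap watches now! xkcd123"
spam2 = "Buy cheap watches now! qwerty9"

# Exact digests differ because of the random suffixes...
same_hash = (hashlib.sha256(spam1.encode()).hexdigest()
             == hashlib.sha256(spam2.encode()).hexdigest())
print(same_hash)                 # False
# ...but n-gram overlap still reveals the shared template.
print(jaccard(spam1, spam2))     # roughly 0.59
```

Production systems use more robust fuzzy digests, but the principle is the same: normalise, then compare approximately rather than exactly.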
St Albans, Hertfordshire, UK
From Morris Pearson
You say that filters would be a lot more effective if they pooled data. Further research would have led you to SPAM NET.
Owings, Maryland, US
From Rich Tietjens
Wonderful idea. It’s called the DCC, but instead of relying on hundreds of thousands of (frankly, bloody ignorant) end-users, it runs at the server level – which is where spam must be blocked, to be effective.
One hopes that the next development from your pundits is fire, or the wheel. We’ve never seen those before, either.
Newberg, Oregon, US
