Discussion:
A method to eliminate spam
m***@mail.SoftHome.net
2003-03-15 23:11:54 UTC
Permalink
I've set up a web site which outlines my proposal of a new method to send
E-Mail. The method makes use of Digital Signature algorithms, white lists,
and encryption, but it doesn't use them all in the standard way; please
give it a good read. I've spent a lot of time working on this method and
think it deserves some good intellectual review by people interested in
eliminating spam mail.

http://www.xwarzone.com/colin/overview.htm

Colin LeMahieu
Hans Spath
2003-03-16 02:02:10 UTC
Permalink
At 15.03.2003 17:11 -0600, ***@mail.SoftHome.net wrote:
>http://www.xwarzone.com/colin/overview.htm

Cool for bad guys who want to mount denial-of-service attacks on freemailers
like Hotmail, GMX, etc. You would just have to get an account from one
freemailer and send hundreds of mails to several mailboxes at your own
pseudo server. Your pseudo server then behaves as if the box at the
freemailer were untrusted and replies with an unencrypted random message and
a different encrypted random message. The freemailer's server will be unable
to find the right key. Even if there is a limit on how long the freemailer's
server will try to find the right key, it will take more (CPU) time than usual.

The situation would be even worse if someone took over a normal server and
replaced it with such a pseudo server. All servers sending messages to this
one would be slowed down.
Hans Spath
2003-03-16 04:23:29 UTC
Permalink
At 15.03.2003 21:06 -0600, ***@mail.SoftHome.net wrote:

>>Cool for bad guys who want to mount denial-of-service attacks on
>>freemailers like Hotmail, GMX, etc. You would just have to get an
>>account from one freemailer and send hundreds of mails to several
>>mailboxes at your own pseudo server. Your pseudo server then behaves as if
>>the box at the freemailer were untrusted and replies with an unencrypted
>>random message and a different encrypted random message. The freemailer's
>>server will be unable to find the right key. Even if there is a limit on
>>how long the freemailer's server will try to find the right key, it will
>>take more (CPU) time than usual.
>>
>>The situation would be even worse if someone took over a normal server
>>and replaced it with such a pseudo server. All servers sending messages
>>to this one would be slowed down.
>
>In the case of web clients, the computer that would have to carry the
>computation load would be different. Clearly, as you said, a free mail
>server should not have to perform the computations because, as you stated,
>that would be open to a DoS attack. A simple solution to this would be
>giving the web client an applet that performs the computations on
>the server's behalf. If the sender is required to perform some
>computation work, the work load could be encapsulated by this applet and
>given to the computer that is requesting the mail be sent. Once the
>solution is found, the result would be sent to the web mail client, which
>would in turn send it to the destination mail server. [...]

First of all, the problem is not limited to web clients. It's a problem for
everyone who offers a free email service, regardless of what clients their
"customers" use.

Your "simple solution" is bad.

Example:
Dial-In user ---> ISP's/freemailer's relay
relay --> final destination (temporarily unreachable)
Dial-In user goes offline
relay (retrying) --> final destination

If the final destination is a malicious pseudo server, the *relay* will
have to handle the requested, impossible authentication task.

Even if you solved this, so that an original sender could not take down
relays by using malicious pseudo servers, there remains a problem.
What if a formerly normal server were replaced by a malicious pseudo
server (by virus/worm or hack)? All clients would try to solve impossible
authentication tasks handed out by the "new" server, no matter whether they
were whitelisted at the original server or not.

In short: If the destination requests an impossible problem solution (as a
result of error, manipulation or whatever) the sender is the one who
suffers. And the sender will not be able to discover this before it has to
suffer.

>You do bring up a valid point that web clients would be harder to
>implement, but I truly believe the benefits of this method far outweigh a
>slightly harder web client implementation.

I don't think it's a good idea to "solve" problems of the concept in the
implementations. "Needs workaround by design" stinks.

Erm ... and what about replying to the list next time?
m***@mail.SoftHome.net
2003-03-17 04:36:04 UTC
Permalink
>First of all, the problem is not limited to web clients. It's a problem for
>everyone who offers a free email service, regardless of what clients their
>"customers" use.

No, the "problem" is that you're applying the current E-Mail system to a
different method. If you read the method through, you'll see that there
are no relay agents in the protocol. There are send clients (computer users
trying to send mail), mail hosts (servers which house mail messages for
later retrieval by an end user), and end users (people who are getting the
mail). When you send mail, you're sending to the mail host; you do not
send to an ISP SMTP server which in turn relays the message to the final
server. Like every other server/client setup on the Internet, if the mail
host is down, your message cannot be sent at that time; it does not get
"cached" somewhere until the SMTP server decides to bounce it back.

>Your "simple solution" is bad.

I think you're misunderstanding it.

>Example:
>Dial-In user ---> ISP's/freemailer's relay
>relay --> final destination (temporarily unreachable)
>Dial-In user goes offline
>relay (retrying) --> final destination
>
>If the final destination is a malicious pseudo server, the *relay* will
>have to handle the requested, impossible authentication task.

There are no relay servers. If the mail host gives you an impossible task,
only the send client will be tied up. Why would a mail host give a mail
sender an impossible task? And if one did, who's to say that the mail
sender cannot push a "Cancel" button? These tasks are designed to take on
the order of 10 seconds to 1 minute at most, so it will be very apparent to
a mail sender that a task is not completing correctly. First of all, it's
not practical for a mail host to give out impossible tasks; mail hosts want
mail to go through, not to be bounced. Second of all, with no relays, how
are erroneous problems debilitating in any way? When only the mail sender
has even the remote possibility of getting an impossible task, why would it
not be possible to cancel or retry sending?

>Even if you solved this, so that an original sender could not take down
>relays by using malicious pseudo servers, there remains a problem.
>What if a formerly normal server were replaced by a malicious pseudo
>server (by virus/worm or hack)? All clients would try to solve impossible
>authentication tasks handed out by the "new" server, no matter whether they
>were whitelisted at the original server or not.

No relay servers. Yes, what if a server was replaced by a malicious pseudo
server and all clients were given impossible tasks? This question is
analogous to asking what if a site's web server was replaced with a fake
one that was giving out bogus web pages. No admin should allow his
server to be replaced by a "malicious pseudo server"; and even if one was,
how is this problem specifically related to the implementation? This is a
problem with any untrusted network and has no bearing on what the overview
is trying to implement.


>In short: If the destination requests an impossible problem solution (as a
>result of error, manipulation or whatever) the sender is the one who
>suffers. And the sender will not be able to discover this before it has to
>suffer.

Yes, it is the sender who suffers; how is this an issue? No bandwidth is
being used, and no CPU time is being used by the mail host while the send
client is working; only sender computation time, which could be stopped with
the infamous "Cancel" button.
Yes, the sender can discover errors within a finite period of time. I'm
sorry for not giving explicit details in the overview, as it was trying
to portray an idea, not a specific implementation. In the case of a mail
sender not knowing when to stop searching for a key, the mail host would be
able to give the mail sender parameters as to how large the key is. If
the sender is unwilling or unable to find a key of the specified size, it
knows there was an error and only a few seconds of CPU time were lost.
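The overview doesn't specify the puzzle itself, but a bounded key search of
this sort can be sketched hashcash-style; the hash function, nonce encoding,
and difficulty parameter below are illustrative assumptions, not part of the
proposal:

```python
import hashlib

def solve_puzzle(message: bytes, bits: int, max_tries: int = 1_000_000):
    """Search for a nonce whose SHA-256 hash has `bits` leading zero
    bits. Returns the nonce, or None once the bound is exceeded (the
    "Cancel" button, in effect)."""
    target = 1 << (256 - bits)
    for nonce in range(max_tries):
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None

def verify(message: bytes, nonce: int, bits: int) -> bool:
    """The mail host's side: checking a solution is one hash, while
    finding it costs the sender ~2**bits hashes on average."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```

Because the host states the difficulty up front, the sender knows roughly how
long the search should take and can abandon a task that overruns it.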


>>You do bring up a valid point that web clients would be harder to
>>implement, but I truly believe the benefits of this method far outweigh a
>>slightly harder web client implementation.
>
>I don't think it's a good idea to "solve" problems of the concept in the
>implementations. "Needs workaround by design" stinks.

This is not a work around, you're misunderstanding the idea.


>Erm ... and what about replying to the list next time?

My mistake.
Chris Lewis
2003-03-17 05:01:06 UTC
Permalink
***@mail.SoftHome.net wrote:
>
>> First of all the problem is not limited to web clients. It's a problem
>> for everyone who offers a free email service, regardless of what clients
>> their "customers" use.
>
>
> No, the "problem" is that you're applying the current E-Mail system to a
> different method. If you read the method through, you'll see that there
> are no relay agents in the protocol. There are send clients(computer
> users trying to send mail), mail hosts(server which houses mail messages
> for later retrieval by an end user), and end users(people who are
> getting the mail). When you send mail, you're sending to the mail host,
> you do not send to an ISP SMTP server which in turn relays the message
> to the final server. Like every other server/client setup on the
> Internet, if the mail host is down, your message cannot be sent at that
> time; it does not get "cached" somewhere until the SMTP server decides to
> bounce it back.
>
>> Your "simple solution" is bad.
>
>
> I think you're misunderstanding it.
>
>> Example:
>> Dial-In user ---> ISP's/freemailer's relay
>> relay --> final destination (temporarily unreachable)
>> Dial-In user goes offline
>> relay (retrying) --> final destination
>>
>> If the final destination is a malicious pseudo server, the *relay*
>> will have to handle the requested, impossible authentication task.
>
>
> There are no relay servers. If the mail host gives you an impossible
> task, only the send client will be tied up.

Every mail client would have to become a full mail server with MX'ing,
queuing and the rest of the 9 yards. Sending an email to large numbers
of recipients (especially mailing lists) would become extremely unreliable.

There goes any chance for rate limiting, send filtering, or even logging
(especially in a corporate environment). Legitimate bulk mailing would
become excruciatingly painful.

I think this cure is worse than the disease. It'd never fly in a
corporate environment.
m***@mail.SoftHome.net
2003-03-17 05:58:12 UTC
Permalink
>Every mail client would have to become a full mail server with MX'ing,
>queuing and the rest of the 9 yards. Sending an email to large numbers of
>recipients (especially mailing lists) would become extremely unreliable.

I believe these problems are largely a result of comparing this technique to
how E-Mail is handled right now. Sending clients would not have to become
mail servers. The sending clients could use normal A-record lookups to
resolve DNS names instead of using MX records. The sending client would not
have to implement server queues, as it would only be sending mail from one
user.
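A sketch of what Colin is proposing: plain A/AAAA resolution of the
recipient's mail host needs nothing beyond the stock resolver (hostname and
port here are illustrative; a real client would still need the rest of the
protocol):

```python
import socket

def resolve_mail_host(hostname: str, port: int = 25):
    """Resolve the recipient's mail host directly via A/AAAA records,
    with no MX indirection, as the proposal suggests."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # deduplicate while preserving resolver order
    return list(dict.fromkeys(info[4][0] for info in infos))
```

The client would then connect to the first reachable address, and if none
answers, report the failure to the user immediately rather than queue.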

>There goes any chance for rate limiting, send filtering, or even logging
>(especially in a corporate environment).

I don't see how rate limiting would be affected by this method; I would
like to know exactly what you mean by this. Send filtering and logging,
however, are different topics; both of these are easy to get around by any
knowledgeable computer user. Could you give some examples of how send
filtering and logging are beneficial to corporations? Is this logging any
different from the normal usage logging that administrators
perform? Logging to see what users are doing with HTTP, FTP,
messaging systems, etc. Could logging programs not be adapted, if people
wanted to, to log new protocols like this one?

>Legitimate bulk mailing would become excruciatingly painful.

Well, right now legitimate bulk mailing is kind of broken. In order to
sign up to this list I had to send and receive 3 pieces of E-Mail. In the
overview I did address bulk mailing. If a legitimate bulk mailer is on
someone's white list, then the bulk mailer does not have to use any CPU time
to send the message; it is allowed to be sent as simply as it is right now.
This makes it perfectly possible for a bulk mailer to send out hundreds of
thousands of E-Mail messages to users who want to receive them. I don't see
how legitimate bulk mailing would be hindered by this method. On the
contrary, I think this method could help out bulk mailers, as end users
would clearly know whom they've placed on their list of accepted senders,
which would eliminate users accidentally reporting legitimate or solicited
bulk mail as unsolicited.

>I think this cure is worse than the disease.

I don't see how this is true. I believe this method is more robust than
the simple E-Mail system we use right now. As such it requires slightly
different methods, but even the current E-Mail system has its own methods
to get used to. Different is not the same as worse.
Chris Lewis
2003-03-17 06:41:01 UTC
Permalink
***@mail.SoftHome.net wrote:
> >Every mail client would have to become a full mail server with MX'ing,
> >queuing and the rest of the 9 yards. Sending an email to large numbers
> >of recipients (especially mailing lists) would become extremely unreliable.
>
> I believe these problems are largely a result of comparing this technique
> to how E-Mail is handled right now. Every sending client
> would not have to become a mail server. The sending clients could use
> normal A-record lookups to resolve DNS names instead of using MX
> records.

Why on earth would you want to abandon MXes? MXes serve an extremely
useful purpose even if your "solution" was adopted - failover,
prioritization, enabling systems not on the Internet to receive email
(hint: my home machine doesn't have, and has no need for, an A record.
Couldn't use one even if it had one. It's not Internet-connected.).

There's also the issue of having to expose internal email topology (eg:
which machine holds my mailbox? You can't tell. We don't want you to
be able to tell. We don't want anyone to be able to connect to it
directly at _all_.)
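For what it's worth, the failover that MXes buy can be sketched in a few
lines (host names and preference values below are made up):

```python
def delivery_order(mx_records):
    """Order MX hosts for delivery attempts: lowest preference value
    first, as MX failover (RFC 974) requires."""
    return [host for pref, host in sorted(mx_records)]

def first_reachable(hosts, is_up):
    """Walk the ordered host list and deliver to the first one that
    answers: MX failover in miniature."""
    for host in hosts:
        if is_up(host):
            return host
    return None

# hypothetical MX set for some domain
mxes = [(20, "backup.example.net"), (10, "mail.example.net"),
        (30, "last-resort.example.org")]
```

With only A records there is no such ordered fallback list: one address, one
chance, and a machine behind a firewall can't be reached at all.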

> The sending client would not have to implement server queues
> as it would only be sending mail from one user.

When faced with the option of not being able to hit send unless the
recipient's mailbox machine was currently online, sending clients would
have to. Dealing with sporadic machine outages would make mailing lists
(or even modest receiver lists) extremely unpleasant to operate if you
couldn't queue. Imagine the world-wide grief if a backhoe took out
AOL's connectivity for a short time, or an individual mailbox server
went down. My home machine would never get another email again...

> >There goes any chance for rate limiting, send filtering, or even
> >logging (especially in a corporate environment).

> I don't see how rate limiting would be affected with this method, I
> would like to know exactly what you mean by this.

How would an ISP (or a corporation) block outbound mailbombs and the like
if it doesn't get to see the traffic? Legal liabilities galore.

> Send filtering and
> logging however are different topics, both of these are easy to get
> around by any knowledgeable computer user.

Huh? I'd like to see any of our users get around our send filtering and
logging. Hint: we block direct outbound email simply by denying
outbound port 25. As do many ISPs with dialup pool router blocks. Short
of active collusion with outside entities (eg: port forwarding a la open
proxies) they can't.

> Could you give some examples
> of how send filtering and logging are beneficial to corporations?

Blocking outbound klez. I wish Verizon would do that...
Logging all email is becoming a legal necessity these days [Patriot Act
plus others, mutter].

> Is
> this logging any different than just normal usage logging that
> administrators perform? Logging to see what users are doing with
> HTTP, FTP, messaging systems, etc. Could logging programs not be
> adapted, if people wanted to, to log new protocols like this one?

Yeah, by implementing proxies as they do for HTTP and FTP. Which
defeats your proposal because they're nothing more than intermediate
mail servers. Packet logging is impractical or useless - it shows
nothing of the details of the email, such as recipient address.

> >Legitimate bulk mailing would become excruciatingly painful.

> Well, right now legitimate bulk mailing is kind of broken. In order
> to sign up to this list I had to send and receive 3 pieces of
> E-Mail. In the overview I did address bulk mailing. If a bulk mailer
> is on a white list of someone(legitimate), then the bulk mailer does not
> have to use any CPU time to send the message, it is allowed to be sent
> as simply as it is right now.

So now we have two email protocols, one for bulk and one for non-bulk. Ouch.

> I don't see how this is true. I believe this method is more robust than
> the simple E-Mail system we use right now. As such it
> requires slightly different methods, but even the current E-Mail system
> has its own methods to get used to. Different is not the same as worse.

Requiring the recipient mailbox machine to be online and operational at
the time of sending a piece of email is, by itself, considerably worse
than the status quo. Race conditions. Etc. Vastly more unreliable.
Matt Sergeant
2003-03-17 11:34:41 UTC
Permalink
On Monday, Mar 17, 2003, at 05:58 Europe/London, ***@mail.SoftHome.net
wrote:

> >Legitimate bulk mailing would become excruciatingly painful.
>
> Well, right now legitimate bulk mailing is kind of broken. In order
> to sign up to this list I had to send and receive 3 pieces
> of E-Mail. In the overview I did address bulk mailing. If a bulk
> mailer is on a white list of someone(legitimate), then the bulk mailer
> does not have to use any CPU time to send the message, it is allowed
> to be sent as simply as it is right now. This makes it perfectly
> possible for a bulk mailer to send out hundreds of thousands of E-Mail
> messages to users who want to receive them. I don't see how
> legitimate bulk mailing would be hindered by this method. On the
> contrary; I think this method could help out bulk mailers as end users
> would clearly know who they've placed on their list of accepted
> senders and would eliminate users accidentally reporting legitimate or
> solicited bulk mail as unsolicited.

If you sign up for C|Net's daily newsletters, who do you whitelist?
*@cnet.com? *@news.com?

Or do you have to wait for the newsletter to come in before you can
create a whitelist entry for them?

I ask because C|Net's newsletter doesn't come from anywhere you might
expect it to come from.

Matt.
Hans Spath
2003-03-17 12:02:27 UTC
Permalink
At 17.03.2003 11:34 +0000, you wrote:
>On Monday, Mar 17, 2003, at 05:58 Europe/London, ***@mail.SoftHome.net wrote:
>
>> >Legitimate bulk mailing would become excruciatingly painful.
>>
>>Well, right now legitimate bulk mailing is kind of broken. In order to
>>sign up to this list I had to send and receive 3 pieces of
>>E-Mail. In the overview I did address bulk mailing. If a bulk mailer is
>>on a white list of someone(legitimate), then the bulk mailer does not
>>have to use any CPU time to send the message, it is allowed to be sent as
>>simply as it is right now. This makes it perfectly possible for a bulk
>>mailer to send out hundreds of thousands of E-Mail messages to users who
>>want to receive them. I don't see how legitimate bulk mailing would be
>>hindered by this method. On the contrary; I think this method could help
>>out bulk mailers as end users would clearly know who they've placed on
>>their list of accepted senders and would eliminate users accidentally
>>reporting legitimate or solicited bulk mail as unsolicited.
>
>If you sign up for C|Net's daily newsletters, who do you whitelist?
>*@cnet.com? *@news.com?
>
>Or do you have to wait for the newsletter to come in before you can create
>a whitelist entry for them?
>
>I ask because C|Net's newsletter doesn't come from anywhere you might
>expect it to come from.

I suppose average users would soon become annoyed if they had to whitelist
every newsletter they subscribe to.
Vernon Schryver
2003-03-17 13:56:54 UTC
Permalink
> From: Hans Spath <ml-***@hans-spath.de>

> ...
> >If you sign up for C|Net's daily newsletters, who do you whitelist?
> >*@cnet.com? *@news.com?
> >
> >Or do you have to wait for the newsletter to come in before you can create
> >a whitelist entry for them?
> >
> >I ask because C|Net's newsletter doesn't come from anywhere you might
> >expect it to come from.

Users of the DCC have mentioned that problem.

> I suppose average users would soon become annoyed if they had to whitelist
> every newsletter they subscribe to.

Judging from the actions of the ISPs using the DCC, that does not seem
to be a problem. Users are amazingly tolerant of false positives on
bulk mail. It is the non-bulk mail that they insist on receiving.

However, there are schemes that could, without any user action, whitelist
every legitimate newsletter, or every newsletter that does not offer
unsolicited "gift subscriptions" and "sample issues." (Those with
experience as spam filter wranglers on behalf of well known users have
probably had the pleasure of arguing with newsletter publishers about
"gifts" and "samples"; I have.) Those schemes involve third parties
that attest to bulk mail legitimacy. For example, if you think spammers
would respect legal threats, then the scheme of http://habeas.com/
would work. Spammers have so far, and Habeas's claim in
http://habeas.com/about/debunk.htm that its header whitelists mail for
about half of all Internet mailboxes sounds like a powerful temptation.
If not, then you could replace Habeas's haiku with a public key.

That does not resolve all problems, as witness the recent controversy
about Habeas headers in some unsolicited bulk mail from Topica, but
problems like that would be tolerable if only most legitimate bulk mail
were marked. Of course, the major problem for such a solution is
the transition. Until almost all legitimate bulk is marked, there's
no reason for legitimate bulk senders to pay Habeas or whoever would
bond (http://www.google.com/search?q=bond+spam ) or certify their mail.

....

About central whitelists: yes, many users have had many addresses,
and some of us have not abandoned them to spammers. However, if there
were a central whitelist, what would you do? I would abandon and wire
as spam traps many of my extra or old addresses and do whatever is
necessary to whitelist the rest. Fuzzy name matching, whether sendmail's
ancient or newfangled LDAP, would be turned off for senders outside
your corporate firewalls, as is already often the case.

Central whitelists could be enforced with marking (e.g. Habeas or
public key) or bonding or with laws. Without laws, they have the
major transition problem.


Vernon Schryver ***@rhyolite.com
m***@mail.SoftHome.net
2003-03-17 18:34:42 UTC
Permalink
At 11:34 3/17/2003 +0000, you wrote:
>On Monday, Mar 17, 2003, at 05:58 Europe/London, ***@mail.SoftHome.net wrote:
>
>> >Legitimate bulk mailing would become excruciatingly painful.
>>
>>Well, right now legitimate bulk mailing is kind of broken. In order to
>>sign up to this list I had to send and receive 3 pieces of
>>E-Mail. In the overview I did address bulk mailing. If a bulk mailer is
>>on a white list of someone(legitimate), then the bulk mailer does not
>>have to use any CPU time to send the message, it is allowed to be sent as
>>simply as it is right now. This makes it perfectly possible for a bulk
>>mailer to send out hundreds of thousands of E-Mail messages to users who
>>want to receive them. I don't see how legitimate bulk mailing would be
>>hindered by this method. On the contrary; I think this method could help
>>out bulk mailers as end users would clearly know who they've placed on
>>their list of accepted senders and would eliminate users accidentally
>>reporting legitimate or solicited bulk mail as unsolicited.
>
>If you sign up for C|Net's daily newsletters, who do you whitelist?
>*@cnet.com? *@news.com?
>
>Or do you have to wait for the newsletter to come in before you can create
>a whitelist entry for them?
>
>I ask because C|Net's newsletter doesn't come from anywhere you might
>expect it to come from.
>
>Matt.
This is a good question, but there is an answer. When you sign up for a
list, you would do it similarly to how you signed up for this IRTF mailing
list. Each of you would send an E-Mail to the other, which would in turn
add each of you to the other's white list. When you white list someone, you
are white listing a single digital signature. This digital signature
ensures that you are getting E-Mail from the same person each time. This
digital signature is not for guaranteeing identity; it is only for
guaranteeing that you receive E-Mail from the same person every subsequent
time you receive mail. So if C|Net was sending out a newsletter, you would
exchange E-Mails (slowly the first time, because neither of you is white
listed with the other) and add each other to your white lists (indexed by
public keys). At this point you will be able to receive any E-Mail signed
by someone with the public key you received at full speed (the speed at
which mail is sent right now). Also, it is worth pointing out that these
public keys can be hidden from end users; this will not add complexity to
E-Mailing.
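One way to picture a white list indexed by public keys rather than
addresses (the fingerprint scheme and raw key bytes below are stand-ins for
a real signature algorithm, which the overview doesn't specify):

```python
import hashlib

class WhiteList:
    """White list indexed by public-key fingerprint instead of
    e-mail address, so a sender is recognised by the key that signs
    their mail, not by a forgeable From: header."""

    def __init__(self):
        self._keys = set()

    @staticmethod
    def fingerprint(public_key: bytes) -> str:
        # short, stable identifier for a key
        return hashlib.sha256(public_key).hexdigest()[:16]

    def add(self, public_key: bytes):
        self._keys.add(self.fingerprint(public_key))

    def accepts(self, public_key: bytes) -> bool:
        return self.fingerprint(public_key) in self._keys
```

The mail client would call `add` implicitly when the user first writes to a
correspondent, which is the "hidden from end users" behaviour described above.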
m***@mail.SoftHome.net
2003-03-17 18:39:09 UTC
Permalink
At 13:02 3/17/2003 +0100, you wrote:
>At 17.03.2003 11:34 +0000, you wrote:
>>On Monday, Mar 17, 2003, at 05:58 Europe/London, ***@mail.SoftHome.net
>>wrote:
>>
>>> >Legitimate bulk mailing would become excruciatingly painful.
>>>
>>>Well, right now legitimate bulk mailing is kind of broken. In order to
>>>sign up to this list I had to send and receive 3 pieces of
>>>E-Mail. In the overview I did address bulk mailing. If a bulk mailer
>>>is on a white list of someone(legitimate), then the bulk mailer does not
>>>have to use any CPU time to send the message, it is allowed to be sent
>>>as simply as it is right now. This makes it perfectly possible for a
>>>bulk mailer to send out hundreds of thousands of E-Mail messages to
>>>users who want to receive them. I don't see how legitimate bulk mailing
>>>would be hindered by this method. On the contrary; I think this method
>>>could help out bulk mailers as end users would clearly know who they've
>>>placed on their list of accepted senders and would eliminate users
>>>accidentally reporting legitimate or solicited bulk mail as unsolicited.
>>
>>If you sign up for C|Net's daily newsletters, who do you whitelist?
>>*@cnet.com? *@news.com?
>>
>>Or do you have to wait for the newsletter to come in before you can
>>create a whitelist entry for them?
>>
>>I ask because C|Net's newsletter doesn't come from anywhere you might
>>expect it to come from.
>
>I suppose average users would soon become annoyed if they had to whitelist
>every newsletter they subscribe to.

White listing isn't a complex operation. The ease of adding someone to your
white list is dependent on your E-Mail client. If, for instance, Outlook
tied white lists to your address book, the user would hardly know they're
even white listing someone. The simple act of sending an E-Mail to someone
could add them to your white list. As I said earlier, I had to send and
receive 3 E-Mails to sign up for this list. If the proposed method were
implemented, only one would have to be sent. The one E-Mail would add the
list's public key to my white list (implicitly, by sending them an E-Mail),
and the list owner would know that I signed up for it (no one signed me up
for it out of spite), because of my public key.
V***@vt.edu
2003-03-17 18:59:19 UTC
Permalink
On Mon, 17 Mar 2003 12:39:09 CST, ***@mail.SoftHome.net said:

> I had to send and receive 3 E-Mails to sign up for this list. If the
> proposed method was implemented, only one would have to be sent. The one
> E-Mail would add the list's public key to my white list (implicitly, by
> sending them an E-Mail), and the list owner would know that I signed up for
> it (no one signed me up for it out of spite), because of my public key.

This assumes the existence of a PKI. Without that, it's fairly trivial
for me to crank out a bogus digital signature claiming to be from
***@mail.softhome.net and forge subscriptions for you. And without any
mailback confirmation, you'd not even know what happened until you started
getting 300+ pieces of mail a day from the linux-kernel mailing list. ;)

Yes, a self-signed cert *will* prove that two somethings have the same source,
but that doesn't help in trying to confirm a subscription to an e-mail list...
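The forgery is easy to demonstrate: absent a CA, the address field of a
self-minted credential is just free text (the random bytes below stand in
for real key-pair generation):

```python
import hashlib
import os

def mint_credential(claimed_address: str) -> dict:
    """Mint a fresh key and staple any address we like onto it.
    Nothing binds the claimed address to the key's actual owner,
    which is exactly the gap a PKI is meant to close."""
    private_key = os.urandom(32)  # stand-in for real key generation
    key_id = hashlib.sha256(private_key).hexdigest()[:16]
    return {"address": claimed_address, "key_id": key_id}
```

Every call yields a distinct key, so the self-signed key does prove
continuity between messages, but the address it claims proves nothing.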
John Rumpelein
2003-03-17 22:40:48 UTC
Permalink
> This assumes the existence of a PKI. Without that, it's
> fairly trivial
> for me to crank out a bogus digital signature claiming to be
> from ***@mail.softhome.net and forge subscriptions for you.

It is already more or less required that organizations buy a CA-issued SSL
cert to operate a web site dealing in credit card transactions.

Maybe it is not so farfetched that they should do this (or maybe use the
same cert) to also operate a mail server?

-J
V***@vt.edu
2003-03-17 22:55:27 UTC
Permalink
On Mon, 17 Mar 2003 14:40:48 PST, John Rumpelein <***@jmrtech.com> said:
> It is already more or less required that organizations buy a CA-issued SSL
> cert to operate a web site dealing in credit card transactions.

Now, is that *legally* required, or is that simply the guys at Visa and
Mastercard saying "We won't clear transactions for you unless you...."

I believe it to be the latter.

> Maybe it is not so farfetched that they should do this (or maybe use the
> same cert) to also operate a mail server?

Hmm.. if AOL and Hotmail and Yahoo were to insist on it, it might have a
snowball's chance of flying. The big question is whether there's enough
supply of SSL accelerator cards, and whether certs would be economically
feasible.

Remember there are a lot of .com's and .org's that are 1 or 2 boxes in a colo,
or a box or two in a closet in somebody's basement (literally half my personal
mail goes to places that are at the skinny end of an ADSL or cable modem).
Can you think of a way to deploy this without bankrupting those places?
(They'd not need an SSL card for 100 smtp-over-ssl sessions a day, but a
full-blown .COM cert may put their budget over the edge.) Any ideas?
Damien Morton
2003-03-17 23:38:21 UTC
Permalink
Would all data between mail servers need to be encrypted? I would
imagine that only a secure handshake on connection would be required.

> -----Original Message-----
> From: asrg-***@ietf.org [mailto:asrg-***@ietf.org] On
> Behalf Of ***@vt.edu
> Sent: Monday, 17 March 2003 17:55
> To: John Rumpelein
> Cc: ***@ietf.org
> Subject: Re: [Asrg] A method to eliminate spam
>
>
> On Mon, 17 Mar 2003 14:40:48 PST, John Rumpelein
> <***@jmrtech.com> said:
> > It is already more or less required that organizations buy
> a CA-issued
> > SSL cert to operate a web site dealing in credit card transactions.
>
> Now, is that *legally* required, or is that simply the guys
> at Visa and Mastercard saying "We won't clear transactions
> for you unless you...."
>
> I believe it to be the latter.
>
> > Maybe it is not so farfetched that they should do this (or
> maybe use
> > the same cert) to also operate a mail server?
>
> Hmm.. if AOL and Hotmail and Yahoo were to insist on it, it
> might have a snowball's chance of flying. The big question
> is whether there's enough supply of SSL accelerator cards,
> and if certs were economically feasible.
>
> Remember there's a lot of .com's and .org's that are 1 or 2
> boxes in a colo, or a box or two in a closet in somebody's
> basement (literally half my personal mail goes to places that
> are at the skinny end of an ADSL or cable modem). If you can
> think of a way to deploy this without bankrupting those
> places (they'd not need an SSL card for 100 smtp-over-ssl a
> day, but a full-blown .COM cert may put their budget over the
> edge). Any ideas?
>
>
V***@vt.edu
2003-03-17 23:54:19 UTC
Permalink
On Mon, 17 Mar 2003 18:38:21 EST, Damien Morton said:
> Would all data between mail servers need to be encrypted? I would
> imagine that only a secure handshake on connection would be required.

Strictly speaking, yes. However, once you've done the handshake, keeping
the data connection encrypted is a relatively low overhead issue - all the
CPU cost is front-loaded on the handshake.

<rant>
Plus there are non-spam reasons for encrypting the whole transaction - it
makes life difficult for Echelon-like systems. Some of us don't trust our
governments and don't want to encrypt only our sensitive data, because if
only 5% of your traffic is encrypted, that 5% is a red flag. But if it's
ALL encrypted, their traffic analysis gets much harder.

Question: How many Nobel Peace Prize winners have terrorists and organized
crime wiretapped? And how many has the US government confessed to wiretapping?

</rant>
Kee Hinckley
2003-03-18 01:12:22 UTC
Permalink
At 5:55 PM -0500 3/17/03, ***@vt.edu wrote:
>(they'd not need an SSL card for 100 smtp-over-ssl a day, but a full-blown
>.COM cert may put their budget over the edge). Any ideas?

A .com cert goes for $99 semi-wholesale. QuickCerts are even
cheaper. That would be a per-year cost. Not too bad.

As for the encryption hit: a lot of sites are already using TLS and
self-signed certs. I use it for the hundreds of messages I
download/send every day, and other sites that support it use it
connecting to my server. Not a huge cost.
--
Kee Hinckley
http://www.puremessaging.com/ Junk-Free Email Filtering
http://commons.somewhere.com/buzz/ Writings on Technology and Society

I'm not sure which upsets me more: that people are so unwilling to accept
responsibility for their own actions, or that they are so eager to regulate
everyone else's.
V***@vt.edu
2003-03-18 04:18:59 UTC
Permalink
On Mon, 17 Mar 2003 20:12:22 EST, Kee Hinckley said:

> A .com cert goes for $99 semi-wholesale. QuickCerts are even
> cheaper. That would be a per-year cost. Not too bad.

And how much do said certs prove? What verification does QuickCert do to
make sure that they're issuing a cert to an actually identified user?

At $100 a pop, what's stopping spammers and other miscreants from getting
bogus certs?

> As for the encryption hit. A lot of sites are already using TLS and
> self-signed certs. I use it for the hundreds of messages I
> download/send every day, and other sites that support it use it
> connecting to my server. Not a huge cost.

As I said, it's not a hit when you're doing several hundred pieces of mail
a day. You do several hundred thousand, you'll be wanting an SSL card.. ;)

(And yes, I have a listserv box that cranks about 500K msgs/day, and it will
do SSL with a self-signed cert if the other end says STARTTLS. Not many
sites do.)
Vernon Schryver
2003-03-18 05:13:02 UTC
Permalink
> From: ***@vt.edu

>...
> (And yes, I have a listserv box that cranks about 500K msgs/day, and it will
> do SSL with a self-signed cert if the other end says STARTTLS. Not many
> sites do.)

I think I've seen sendmail Received headers indicating that the SMTP clients
of some spammers agree to STARTTLS when sending to my SMTP server.

I've no idea if they're using self-signed or commercial certs.

In this case, I'm talking about "whack-a-mole" spammers rather than
big outfits that send unsolicited bulk mail and that might be
expected to use commercial certs such as Verisign's.
Please consider the implications if they are using commercial certs.
Please note there is no reason except cost that they aren't using
commercial certs.


Vernon Schryver ***@rhyolite.com
Hadmut Danisch
2003-03-18 05:27:28 UTC
Permalink
On Mon, Mar 17, 2003 at 10:13:02PM -0700, Vernon Schryver wrote:

> Please note there is no reason except cost that they aren't using
> commercial certs.

On the contrary.

If a spammer used a commercial cert, the cert could be traced
back to the sender's identity, and that's what the spammer hates
like the devil hates holy water.

And a commercial certificate contains information the
spammer cannot arbitrarily change, so these certs could very
easily be blacklisted. That's also something the spammer fears.

Hadmut
John Johnson
2003-03-18 06:31:42 UTC
Permalink
On Mon, 17 Mar 2003, Hadmut Danisch wrote:

> If a spammer used a commercial cert, this cert could be tracked
> back to the senders identity, and that's what the spammer hates
> like the devil hates holy water.

That's the truth. With so much work put into forging headers, setting up
reverse DNS for 127.0.0.x, and using anonymous proxies, these
things hate the light.

That appears to be a good challenge: make senders authenticated
in a trackable, nonrepudiable fashion.

---
John Johnson - System Administrator; Sirius Systems Group
***@sirinet.net KJ5AA
(580) 355-6436
Kee Hinckley
2003-03-18 06:46:53 UTC
Permalink
At 11:18 PM -0500 3/17/03, ***@vt.edu wrote:
>And how much do said certs prove? What verification does QuickCert do to
>make sure that they're issuing a cert to an actually identified user?

Basically they seem to do a credit check and make sure that the whois
and contact information matches the organization information. I
assume Phillip could address that more completely.
--
Kee Hinckley
http://www.puremessaging.com/ Junk-Free Email Filtering
http://commons.somewhere.com/buzz/ Writings on Technology and Society

I'm not sure which upsets me more: that people are so unwilling to accept
responsibility for their own actions, or that they are so eager to regulate
everyone else's.
V***@vt.edu
2003-03-18 08:04:28 UTC
Permalink
On Tue, 18 Mar 2003 01:46:53 EST, Kee Hinckley said:
> At 11:18 PM -0500 3/17/03, ***@vt.edu wrote:
> >And how much do said certs prove? What verification does QuickCert do to
> >make sure that they're issuing a cert to an actually identified user?
>
> Basically they seem to do a credit check and make sure that the whois
> and contact information matches the organization information. I
> assume Phillip could address that more completely.

How proof is the system against identity theft? There are 50M .coms, and a
large portion of them are probably vanity domains - JoeRandom.com. The
whois probably gives enough info to start; if you can score an SSN
to match, you could probably get a cert. You might even be able to
do it without the SSN.

I admit ignorance of this stuff - does the $100-and-under segment at
least involve a phone callback or snail-mail exchange (both of which go
a long way toward nailing down physical location and all that)? If I have to
answer the phone at the number listed in the 'whois' for the domain,
that raises the stakes a lot, because there's a lot more paper trail
then (telcos and real estate rental offices both dislike people who
skip out.. ;)
John Rumpelein
2003-03-18 08:46:46 UTC
Permalink
Valdis,

[CA issued SSL certs]
> How proof is the system against identity theft? There's 50M
> .coms, a large portion of them are probably vanity domains -
> JoeRandom.com, the Whois probably gives enough info to start,
> if you can score an SSN to match, you could probably get a
> cert. Might even be able to do it without the SSN.

Certificates are generally not issued to individuals, and I am not sure what
kind of hoops you have to jump through to do that. I have only done it for
corporations, and I can tell you what is involved there.

The biggest requirement is paperwork on the establishment of the corp. as a
legal entity -- generally the Certificate of Incorporation issued by the
state that the corp. is set up in. Second you need to have an agreement
signed by an authorized person at the company (these, granted, could be
faked, but now you're committing fraud) saying that they are requesting an
SSL cert on behalf of the company. Third, of course, is the actual payment,
which creates a paper trail of its own.

I'm not saying this is a perfect solution. There are some companies out
there that would go through all this and then spam people anyway. But at
least then we'd know exactly who they were and where to find them.

-J
John Rumpelein
2003-03-18 04:12:18 UTC
Permalink
Valdis,

> > It is already more or less required that organizations buy
> a CA-issued
> > SSL cert to operate a web site dealing in credit card transactions.
>
> Now, is that *legally* required, or is that simply the guys
> at Visa and Mastercard saying "We won't clear transactions
> for you unless you...."
>
> I believe it to be the latter.

Actually, I'm not sure it is either of these. But people with a brain will
not use an e-commerce site which is not using encryption, and the vendor is
(theoretically at least) liable for negligence if someone manages to
intercept the credit card info because it was sent in the clear. I say
"theoretically" because 1) it seems more likely that credit cards are stolen
by server machines being broken into and 2) I've never heard of a lawsuit
happening because of this.

> > Maybe it is not so farfetched that they should do this (or
> maybe use
> > the same cert) to also operate a mail server?
>
> Hmm.. if AOL and Hotmail and Yahoo were to insist on it, it
> might have a snowball's chance of flying. The big question
> is whether there's enough supply of SSL accelerator cards,
> and if certs were economically feasible.

The question of CPU load is interesting. I'm not sure how much trouble this
would cause for anyone except the largest ISPs. Arguably it may decrease
the load on servers, since many (most? all? ;) spammers sending mail these
days would not be able to send mail at all under this scheme.

I am aware that some MTAs have support for this sort of thing currently, but
I have to confess ignorance of how it is actually implemented... this is
something I should read up on.

> Remember there's a lot of .com's and .org's that are 1 or 2
> boxes in a colo, or a box or two in a closet in somebody's
> basement (literally half my personal mail goes to places that
> are at the skinny end of an ADSL or cable modem). If you can
> think of a way to deploy this without bankrupting those
> places (they'd not need an SSL card for 100 smtp-over-ssl a
> day, but a full-blown .COM cert may put their budget over the
> edge). Any ideas?

CA-issued SSL certs are around $100/yr. Little guys like this could get
around the requirement by relaying mail through their ISP, maybe (who could
have a list of allowed relays by IP). If an ISP can't afford $100/yr for an
SSL cert, they have bigger problems. (Most ISPs will have one already for
doing secure HTTP anyway.)

Remember, I'm talking about requiring an SSL cert to *initiate* an SMTP
session; you could still receive mail without one.

-J
John R. Levine
2003-03-18 16:36:29 UTC
Permalink
>> It is already more or less required that organizations buy a CA-issued SSL
>> cert to operate a web site dealing in credit card transactions.
>
>Now, is that *legally* required, or is that simply the guys at Visa and
>Mastercard saying "We won't clear transactions for you unless you...."
>
>I believe it to be the latter.

Neither; it's the de facto implementation of web servers that accept
credit card orders, mostly to enrich cert vendors. As someone else
pointed out, it defends against card numbers being intercepted in
transit which is not a significant risk compared to bad guys breaking
into databases at the merchant and stealing all the numbers, or
phishers putting up fake sites with certs that correctly identify the
fake site and collecting credit card info directly from suckers.

Someone else asked how hard it is to get an SSL cert. It used to be a
pain in the neck requiring notarized letters and faxed copies of
business licenses and the like, but it's not any more. The last cert
I got required only that the WHOIS domain contact click through a "was
that really you?" challenge, and we all know how utterly valid WHOIS
info is. The wholesale price is now about $69/yr which isn't enormous
but I'm not eager to pay yet another nuisance fee for faux security.
(I brought up POP and IMAP servers yesterday with SSL certs and all of
my MUAs moan and groan about the self-signed certs. Phooey.)

It's certainly worth thinking about ways to make it easier to check
that mail is coming from a valid source, along the lines of Habeas or
Trusted Sender, but it's implausible to come up with a mail system
that would be forgery-proof and still be usable to communicate with an
interestingly large set of other people. If you want a closed system
that only communicates with people whose PGP keys are on your keyring,
you can have that now. But I don't know many people who'd want that.

--
John R. Levine, IECC, POB 727, Trumansburg NY 14886 +1 607 387 6869
***@iecc.com, Village Trustee and Sewer Commissioner, http://iecc.com/johnl,
Member, Provisional board, Coalition Against Unsolicited Commercial E-mail
m***@mail.SoftHome.net
2003-03-17 19:38:34 UTC
Permalink
At 13:59 3/17/2003 -0500, you wrote:
>On Mon, 17 Mar 2003 12:39:09 CST, ***@mail.SoftHome.net said:
>
> > I had to send and receive 3 E-Mails to sign up for this list. If the
> > proposed method was implemented, only one would have to be sent. The one
> > E-Mail would add the list's public key to my white list(implicitly by
> > sending them an E-Mail), and the list owner would know that I signed up
> for
> > it(no one signed me up for it out of spite), because of my public key.
>
>This assumes the existence of a PKI. Without that, it's fairly trivial
>for me to crank out a bogus digital signature claiming to be from
>***@mail.softhome.net and forge subscriptions for you. And without any
>mailback confirmation, you'd not even know what happened until you started
>getting 300+ pieces of mail a day from linux-kernel mailing list. ;)
>
>Yes, a self-signed cert *will* prove that two somethings have the same source,
>but that doesn't help in trying to confirm a subscription to an e-mail list...

Actually, this method would not need a PKI. Upon the mailing list's first
attempt to send mail to the end user, the mailing list would be told that the
public key/signature is not recognized, at which point it would be known
that the recipient mailbox did not request list subscription.
Even if you did have to send 3 E-Mails with the new method, as you do
with the current method, this does not disprove the ability to subscribe to
bulk mailing lists. The point I was trying to demonstrate was that white
lists are easy to implement, and digital signatures verify origin.
Hans Spath
2003-03-18 01:13:15 UTC
Permalink
At 17.03.2003 13:38 -0600, ***@mail.SoftHome.net wrote:
>At 13:59 3/17/2003 -0500, you wrote:
>>On Mon, 17 Mar 2003 12:39:09 CST, ***@mail.SoftHome.net said:
>>
>> > I had to send and receive 3 E-Mails to sign up for this list. If the
>> > proposed method was implemented, only one would have to be sent. The one
>> > E-Mail would add the list's public key to my white list(implicitly by
>> > sending them an E-Mail), and the list owner would know that I signed
>> up for
>> > it(no one signed me up for it out of spite), because of my public key.
>>
>>This assumes the existence of a PKI. Without that, it's fairly trivial
>>for me to crank out a bogus digital signature claiming to be from
>>***@mail.softhome.net and forge subscriptions for you. And without any
>>mailback confirmation, you'd not even know what happened until you started
>>getting 300+ pieces of mail a day from linux-kernel mailing list. ;)
>>
>>Yes, a self-signed cert *will* prove that two somethings have the same
>>source,
>>but that doesn't help in trying to confirm a subscription to an e-mail
>>list...
>
>Actually this method would not need a PKI. Upon the first attempt to send
>mail to the end user by the Mail list, the mail list would be told that
>the public key/signature is not recognized at which point it would be
>known that the recipient mail box did not request list subscription.
>Even if you did have to send 3 E-Mails with the new method, like you do
>with the current method, this does not disprove the ability to subscribe
>to bulk mailing lists. The point I was trying to demonstrate was that
>white lists are easy to implement, and digital signatures verify origin.

How would a newsletter sender find out whether he had tried to send mail to an
unsubscribed user, or to a user who had not yet whitelisted the sender?
m***@mail.SoftHome.net
2003-03-18 01:46:54 UTC
Permalink
>How would a newsletter sender find out whether he had tried to send mail to an
>unsubscribed user, or to a user who had not yet whitelisted the sender?

The white list exchange, as covered in my overview, works like this:

The list subscriber obtains the address of the E-Mail list distributor from a
web site (e.g. ***@cnn.com).
The list subscriber sends a subscription request E-Mail to the list's
address. Sending this E-Mail places the address ***@cnn.com in the
list subscriber's white list.
The mailing list attempts to send an E-Mail to the list subscriber.
The mail host sees that ***@cnn.com is on the white list and
allows the mail through without any computation time.

Keep in mind, as covered in the overview, that white lists do not exclude
people from sending mail to an end user; rather, they slow the ability to
rapidly send mail to unknown persons.
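The exchange above can be modeled as a toy sketch. The class, method names, and addresses are illustrative assumptions; the overview does not prescribe an API.

```python
# Toy model of the white-list exchange described above. Class, method
# names, and addresses are illustrative; the overview prescribes no API.

class MailHost:
    def __init__(self) -> None:
        self.whitelist: set[str] = set()

    def user_sends(self, to_addr: str) -> None:
        # Sending any mail implicitly whitelists the recipient,
        # which covers the subscription-request step.
        self.whitelist.add(to_addr)

    def accept_without_work(self, from_addr: str) -> bool:
        # Whitelisted senders skip the expensive key computation;
        # unknown senders would be slowed by it instead.
        return from_addr in self.whitelist

host = MailHost()
host.user_sends("list@example.com")                   # subscription request
print(host.accept_without_work("list@example.com"))   # True: list mail flows
print(host.accept_without_work("spam@example.net"))   # False: stranger slowed
```

Note this also makes Ronald Guilmette's later objection concrete: the check keys off the claimed sender address, so anything that can forge that identity inherits the trust.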
Ronald F. Guilmette
2003-03-18 04:00:46 UTC
Permalink
In message <***@mail.SoftHome.net>,
***@mail.SoftHome.net wrote:

>
>>How would a newsletter sender find out, if he tried to send a mail to an
>>unsubscribed user or to an user who has forgotten to whitelist the sender yet
>?
>
>The white list exchange, as covered in my overview, works like this:
>
>List subscriber obtains the address of the E-Mail list distributor from a
>web site(Ex: ***@cnn.com).
>List subscriber sends a subscription request E-Mail to the list's
>address. This sending of E-Mail places the address ***@cnn.com in the
>white list of the list subscriber.
>The mailing list attempts to send an E-Mail to the list subscriber.
>The mail host sees that ***@cnn.com has been added to the white list and
>allows mail to be sent without computation time...

... after which the most astute portion of the spammer community figures
out that they can increase their odds of delivery dramatically simply
by sending out their spam with the forged envelope sender address of
<***@cnn.com>.

>Keep in mind, as covered in the overview, white lists do not exclude people
>from sending mail to an end user, but rather slows the ability to rapidly
>send mail to unknown persons.

Except for the trusted ones... or anybody masquerading as a trusted one.
m***@mail.SoftHome.net
2003-03-17 19:08:11 UTC
Permalink
>Why on earth would you want to abandon MXes? MXes serve an extremely
useful purpose even if your "solution" was adopted - failover,
prioritization, enabling systems not on the Internet to receive email
(hint: my home machine doesn't have, and has no need for, an A record.
Couldn't use one even if it had one. It's not Internet-connected.).

There are other methods of failover and redundancy besides MX
records. Every other type of network communication that needs 24/7
uptime, but does not make use of MX records, has ways of staying up
despite machines failing. Every other type of network communication does
not have special records for its specific protocol, yet people still
achieve full uptime.
I don't know the personal situation with your machine at home, but I don't
see how it is able to send/receive mail if it is never connected to the
Internet. I also may be naive, but I don't see a resounding reason to
have a mail server that is not connected to the network from which it is
receiving mail.

>When faced with the option of not being able to hit send unless the
recipient's mailbox machine was currently online, sending clients would
have to. Dealing with sporadic machine outages would make mailing lists (or
even modest receiver lists) extremely unpleasant to operate if you couldn't
queue. Imagine the world-wide grief if a backhoe took out AOL's
connectivity for a short time, or an individual mailbox server went
down. My home machine would never get another email again...

If a backhoe took out AOL's connectivity, there are going to be more issues
at stake than just this method. If no one can connect to AOL's servers, AOL
itself will be down. None of its users will be able to use its services,
no one will be able to get to its web page, nothing.
Maybe I overlooked it, but do you believe that this method should protect
against catastrophic failures of this kind?


>How would ISP (or a corporation) block outbound mailbombs and the like if
they don't get to see the traffic? Legal liabilities galore.

Unless you're mail bombing someone who has white listed you (someone who
trusts you), your mail bomb isn't going to be nearly as effective
as mail bombs are right now. I also don't see why a filter could not be
developed for this protocol as well. Just because it isn't SMTP doesn't
mean it cannot be monitored by administrators.

>Huh? I'd like to see any of our users get around our send filtering and
logging. Hint: we block direct outbound email simply by denying outbound
port 25. As do many ISPs with dialup pool router blocks. Short of active
collusion with outside entities (eg: port forwarding ala open proxies) they
can't.

Yes, I was implying outside assistance.


>Blocking outbound klez. I wish Verizon would do that...
Logging all email is becoming a legal necessity these days [Patriot Act
plus others, mutter].

Again, I don't see how a filter could not be developed for this protocol if
one was so desired.


Is this logging any different from the normal usage logging that
administrators perform - logging to see what users are doing with
HTTP, FTP, messaging systems, etc.? Could logging programs not be adapted,
if people wanted to, to log new protocols like this one?

>Yeah, by implementing proxies as they do for HTTP and FTP. Which defeats
your proposal because they're nothing more than intermediate mail
servers. Packet logging is impractical or useless - it shows nothing of
the details of the email, such as recipient address.

I think packet logging is useful. If you monitor communications between
the client and the destination mail host, how could you not reconstruct who
it was to, who it was from, what public keys were exchanged, etc.? In the
case of mail bombing, it would be very obvious if someone opened several
hundred connections to the destination mail box (public key) within a
short period of time. I would say this method would actually help you,
because you _can_ find the recipient address.
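The passive-logging idea above can be sketched as a burst detector over connection records. The tuple shape and threshold are assumptions for illustration, not anything the overview specifies.

```python
from collections import Counter

# Illustrative sketch of the passive packet-logging argument above: if each
# logged connection records (sender, destination public key), a burst of
# connections to one key stands out. The threshold is an arbitrary assumption.

THRESHOLD = 100  # connections per monitoring window (assumed, not specified)

def flag_mailbombs(log):
    """log: iterable of (sender_ip, dest_pubkey) tuples from a packet trace."""
    counts = Counter(dest for _sender, dest in log)
    return {dest for dest, n in counts.items() if n >= THRESHOLD}

trace = [("10.0.0.5", "KEY_A")] * 150 + [("10.0.0.6", "KEY_B")] * 3
print(flag_mailbombs(trace))  # {'KEY_A'}
```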


>So now we have two email protocols, one for bulk and one for non-bulk. Ouch.

No, there are not two protocols. White listing is built into the
protocol; it is the basis for fast mail sending. Trusted users do not
have to compute a cipher key every time they send mail, only when they are
not on the white list. White lists can be tied to address books and to
sending mail. There are not two protocols; this was outlined in the overview.


>Requiring the recipient mailbox machine to be online and operational at
the time of sending a piece of email is, by itself, considerably worse than
the status quo. Race conditions. Etc. Vastly more unreliable.

I don't see how the mail server being off-line is worse than any other
server going off-line that needs 24/7 up time. If your web server goes
down and you have no redundancy, you have a problem. If your Internet
router box goes down and you have no redundancy, you have a problem. I
believe this method succumbs to the same issues as every other network service.
The difference between this method and the current E-Mail system is
that you are not wasting gigabytes' worth of bandwidth and disk space
transferring and storing unsolicited junk mail, because no one can make
money off of sending it anymore.
Hans Spath
2003-03-18 01:11:19 UTC
Permalink
At 17.03.2003 13:08 -0600, ***@mail.SoftHome.net wrote:
> >Why on earth would you want to abandon MXes? MXes serve an extremely
> useful purpose even if your "solution" was adopted - failover,
> prioritization, enabling systems not on the Internet to receive email
> (hint: my home machine doesn't have, and has no need for, an A record.
> Couldn't use one even if it had one. It's not Internet-connected.).
>
>There are other methods of fail safe and redundancy besides MX
>records. Every other type of network communication that needs 24/7 up
>time, but does not make use of MX records has ways of keeping up time
>despite machines failing. Every other type of network communication does
>not have special records for its specific protocol, yet people still
>achieve full uptime.
>I don't know the personal situation with your machine at home but I don't
>see how it is able to send/receive mail if it isn't connected to the
>Internet ever. I also may be naive but I don't see a resounding reason to
>have a mail server that is not connected to the network which it is
>receiving mail from.

What if you're a dial-in internet user?

What if there is a temporary routing problem which makes the sender
unable to reach your mailbox-"server"?

BTW, does "every other type of network communication" that has no special
records for its specific protocol make use of *static* addresses (like mail
addresses) which are independent of what FQDN the *real* destination host has?

The good thing about your "solution" is that it could fix the bad things
that came with SMTP. But it would also "fix" the good things about it; stop
ignoring that.

> >When faced with the option of not being able to hit send unless the
> recipient's mailbox machine was currently online, sending clients would
> have to. Dealing with sporadic machine outages would make mailing lists
> (or even modest receiver lists) extremely unpleasant to operate if you
> couldn't queue. Imagine the world-wide grief if a backhoe took out AOL's
> connectivity for a short time, or an individual mailbox server went
> down. My home machine would never get another email again...
>
>If a backhoe took out AOL's connectivity, there are going to be more
>issues at stake than just this method. If no one can connect to AOL's
>servers AOL itself will be down. None of it's users will be able to use
>it's services, no one will be able to get to it's web page, nothing.
>Maybe I overlooked it, but do you believe that this method should protect
>against catastrophic failures of this kind?

Damn, yes, it really should. Say a network administrator at AOL makes a
"little" mistake while configuring the main routers, so AOL is
completely down for... 10 minutes. Even in those 10 minutes *lots* of mail
*would* have been delivered to some of the millions(?) of AOL users. So
your protocol would force thousands of mail clients to retry their
delivery. If the clients were dial-in users, they would be forced to be
online for each retry. With a protocol that allows relay servers (like
SMTP) they can go offline and sit back without worrying about what happens to
their mail. And even when the relay server is unable to deliver the mail,
it is not lost - the sender will receive a notification when the relay server
gives up (after a reasonable period of time).

> >How would ISP (or a corporation) block outbound mailbombs and the like
> if they don't get to see the traffic? Legal liabilities galore.
>
>Unless you're mail bombing someone who has white listed you(someone who
>trusts you), your mail bomb isn't going to be able to nearly as effective
>as they are right now. I also don't see why a filter could not be
>developed for this protocol as well. Just because it isn't SMTP, doesn't
>mean it cannot be monitored by administrators.
>
> >Huh? I'd like to see any of our users get around our send filtering and
> logging. Hint: we block direct outbound email simply by denying outbound
> port 25. As do many ISPs with dialup pool router blocks. Short of active
> collusion with outside entities (eg: port forwarding ala open proxies)
> they can't.
>
>Yes, I was implying outside assistance.
>
>
> >Blocking outbound klez. I wish Verizon would do that...
>Logging all email is becoming a legal necessity these days [Patriot Act
>plus others, mutter].
>
>Again, I don't see how a filter could not be developed for this protocol
>if one was so desired.

The problem is that it has to be installed on *every* concerned client. In
a corporate environment you really want to have one central mail relay
server, because then it's no problem to update the filter rules for *all*
incoming and outgoing mail at that one point.

>[...]

> >Requiring the recipient mailbox machine to be online and operational at
> the time of sending a piece of email is, by itself, considerably worse
> than the status quo. Race conditions. Etc. Vastly more unreliable.
>
>I don't see how the mail server being off-line is worse than any other
>server going off-line that needs 24/7 up time. If your web server goes
>down and you have no redundancy, you have a problem. If your Internet
>router box goes down and you have no redundancy, you have a problem. I
>believe this method succumbs to the same issues as every other network service.
>The difference with this method opposed to the current E-Mail system is
>that you are not wasting gigabytes worth of bandwidth and disk space
>transferring and storing unsolicited junk mail because no one can make
>money off of sending it anymore.

AGAIN: what if TWO dial-in users want to send each other email? Both have
to be online at the same time! THIS SUCKS!!!
And you don't see how any of this could ever be a problem, because you become
f*cking ignorant when someone criticises your protocol.
m***@mail.SoftHome.net
2003-03-18 01:42:06 UTC
Permalink
>The problem is that it has to be installed on *every* concerned client.
>In a corporate environment you really want to have one central mail relay
>server, because then it's no problem to update the filter rules for *all*
>incoming and outgoing mail at that one point.

No, it would not. Although you would not be able to _stop_ the E-Mails from
being sent (unless the filter is on the Internet router and is able to block
the communication), you would be able to track the sender through
passive packet analysis, and probably terminate the employment of the
perpetrator.

>AGIAN. What if TWO dial-in users want to send each other email? Both have
>to be online at the same time! THIS SUCKS!!!
>And you don't see how this all ever could be a problem because you become
>f*cking ignorant when someone criticises your protocol.

Again, I don't think you've read my outline thoroughly. I said there are
three entities in this protocol: mail senders, mail hosts, and end
users. Mail hosts house the E-Mail for end users until they are ready to
retrieve it. The retrieval process is a simple authentication-based retrieval.

In the case of a dial-up user sending to another dial-up user, the
communication would go something like this (keep in mind that the
encryption-key finding section of this protocol is only used if the sender
is _not whitelisted_ by the receiver, so the only time this would happen is
if a dial-up user were sending a piece of mail to an unknown person):

Dial-up user composes the E-Mail.
Dial-up user connects to the mail host.
Mail host requests the Dial-up user to find the encryption key to the cipher.
Dial-up user can disconnect while this computation is being performed.
Dial-up user connects to the mail host.
Dial-up user sends the cipher key and sends the E-Mail message.
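The steps above can be sketched end-to-end. This is a toy illustration only: the overview doesn't specify the cipher, so a SHA-256 puzzle with a small brute-forceable key space stands in for it, and the function names and the 16-bit work factor are assumptions, not part of the proposal.

```python
import hashlib
import os

KEY_BITS = 16  # work factor the mail host imposes on non-whitelisted senders

def host_challenge(message: bytes):
    """Mail host picks a random small key and hands the sender the
    plaintext plus the keyed digest (standing in for the cipher)."""
    key = int.from_bytes(os.urandom(2), "big") % (1 << KEY_BITS)
    digest = hashlib.sha256(key.to_bytes(4, "big") + message).digest()
    return message, digest

def sender_find_key(message: bytes, digest: bytes) -> int:
    """Sender brute-forces the key offline (can disconnect meanwhile),
    then reconnects and submits it along with the E-Mail."""
    for key in range(1 << KEY_BITS):
        if hashlib.sha256(key.to_bytes(4, "big") + message).digest() == digest:
            return key
    raise ValueError("key not found in expected range")

msg, dig = host_challenge(b"random challenge")
key = sender_find_key(msg, dig)
```

The point of the sketch is only the shape of the exchange: the expensive search happens on the sender's side, and the host's verification is a single cheap hash.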

Please do not continue to address me as you have been. I've spent a lot of
time working on this idea and would appreciate constructive criticism
instead of cursing and name-calling. I believe this method is a _very_
good solution to our E-Mail problems and would appreciate some
brainstorming as to how to work out all the details.
Kee Hinckley
2003-03-19 05:09:18 UTC
Permalink
At 7:42 PM -0600 3/17/03, ***@mail.SoftHome.net wrote:
>Dial-up user composes the E-Mail.
>Dial-up user connects to the mail host.
>Mail hosts requests the Dial-up user to find the encryption key to the cipher.
>Dial-up user can disconnect while this computation is being performed.
>Dial-up user connects to the mail host.
>Dial-up user sends the cipher key and sends the E-Mail message.

I have no idea why your respondents have been so vitriolic; patience
seems to be in short supply on this list. But for what it's worth, here
are my comments.


So. If I install this system on my computer, I presumably still need
to interact with the existing mail system until we all convert,
correct?

1. What do you do with incoming mail from the gateway?
2. Do you change the email addressing system so that the MUA can tell
the difference between outbound mail for a user of the new system and
mail for an old user? If not, bear in mind that you are seriously
disincentivizing early adopters, because they'll be spending lots of time
generating keys that nobody will need. But at the same time they'll
see no benefit, because they can't yet block email from people who
don't use the system.

The fundamental problem with new protocols for email has nothing to
do with how good the protocol is. It has to do with how the
transition is made from the old system to the new.

So far, every new protocol proposal I've seen has one of two adoption plans.

1. Everyone will just agree it's the best and all adopt it at once.
2. A small group of people will gamble that this solution is going to
be the one that wins in the marketplace, and they'll put up with a
lot of pain and no benefit until everyone else adopts it as well.

Of those two, I'd actually give #1 a better chance of succeeding. But
it would take a unanimous decision from at least a dozen of the
biggest ISPs, along with a real threat to turn off all access to the
old system within a certain amount of time. And when it was all said
and done, you'd still have to run two systems if you needed to
correspond with anyone in the third world--the two just wouldn't
interoperate.

I don't give #2 a chance in hell unless you come up with a protocol
change that actually gives the early adopters an immediate benefit.
(In other words, it cuts their spam and it doesn't annoy their
non-converted customers.) Otherwise you're asking them to make a big
gamble in time and money with no immediate benefit. They'd just be
doing it for the long term good of the community. The same forces
that have led to spam (tragedy of the commons) make it certain that
people won't do that.
--
Kee Hinckley
http://www.puremessaging.com/ Junk-Free Email Filtering
http://commons.somewhere.com/buzz/ Writings on Technology and Society

I'm not sure which upsets me more: that people are so unwilling to accept
responsibility for their own actions, or that they are so eager to regulate
everyone else's.
Daniel Feenberg
2003-03-19 12:21:13 UTC
Permalink
On Wed, 19 Mar 2003, Kee Hinckley wrote:

>
> So far, every new protocol proposal I've seen has one of two adoption plans.
>
> 1. Everyone will just agree it's the best and all adopt it at once.
> 2. A small group of people will gamble that this solution is going to
> be the one that wins in the marketplace, and they'll put up with a
> lot of pain and no benefit until everyone else adopts it as well.
>

There is a third possibility. A method that helps those who invoke it, and
works better the more sites invoke it. For example, the real-time black
hole list reduces spam for the first user, and every subsequent user.
Furthermore, as more sites use it, open relays are closed, restricting
spammers to fewer relay hosts, each of which suffers even more. With fewer
open relays, sites are more willing to reject mail from them, as there are
fewer legitimate messages refused. As the remaining relay hosts get more
overloaded, even the most recalcitrant owners eventually close them. In
the end (the "Nash equilibrium") many sites subscribe to a black hole
list, nearly all open relays are closed, and there is no need for
universal agreement to get to that end. It may take a while though.

If MUAs and MTAs made it easier to use SMTP-AUTH, or relay after pop, then
this process would happen faster, but no change in the RFCs is required.

Content based anti-spam measures don't have a similar felicitous Nash
equilibrium. As more sites filter, the spammers get cageier, and content
filtering becomes harder, not easier.

Spam point scores (such as SpamAssassin's) can affect the ability to achieve
a desirable equilibrium. A site might not be willing to reject mail from
MTAs with the string "dial-up" in their host name, but might find it
helpful to add a point to the spam score. Likewise they might subtract a
point for seeing "smtp" or "mail" in the host name. Both those strings
indicate whether the DNS owner wishes to authorize mail from that host
(and no RMX record is necessary). The final equilibrium could easily be
that these naming conventions become more and more useful until they
are more widely adopted than most RFCs.
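The hostname-based point adjustments described above can be sketched as a tiny additive scorer, in the spirit of SpamAssassin-style rules. The weights and the function name are illustrative assumptions, not a real ruleset.

```python
def hostname_score(helo_host: str) -> int:
    """Adjust a spam point score based on naming conventions in the
    sending MTA's host name, per the heuristic described above."""
    score = 0
    host = helo_host.lower()
    if "dial-up" in host:
        score += 1   # DNS owner likely did not intend direct mail from here
    if "smtp" in host or "mail" in host:
        score -= 1   # naming convention suggests an authorized mail host
    return score
```

A scorer like this would feed into an overall point total alongside other tests, rather than rejecting mail outright.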

It is important to have a plausible adoption path for any scheme. To be
plausible, the scheme must benefit the first few users, and benefit them
more the more users there are.

Daniel Feenberg
Kee Hinckley
2003-03-19 16:56:41 UTC
Permalink
At 7:21 AM -0500 3/19/03, Daniel Feenberg wrote:
>On Wed, 19 Mar 2003, Kee Hinckley wrote:
>
>>
>> So far, every new protocol proposal I've seen has one of two adoption plans.
>>
>> 1. Everyone will just agree it's the best and all adopt it at once.
>> 2. A small group of people will gamble that this solution is going to
>> be the one that wins in the marketplace, and they'll put up with a
>> lot of pain and no benefit until everyone else adopts it as well.
>>
>
>There is a third possibility. A method that helps those who invoke it, and
>works better the more sites invoke it. For example, the real-time black
>hole list reduces spam for the first user, and every subsequent user.

That's a possibility. But none of the protocol changes I've seen do
that. As you point out, the blackhole solution doesn't require a
protocol change.

>overloaded, even the most recalcitrant owners eventually close them. In
>the end (the "Nash equilibrium") many sites subscribe to a black hole
>list, nearly all open relays are closed, and there is no need for
>universal agreement to get to that end. It may take a while though.

Why do you think it hasn't happened already? Those lists have been
around for years. Have open relays significantly decreased?

My guess is that too many people are reluctant to use them. As has
been discussed here, black hole lists have a reputation for lack of
accountability. If automated they have a serious problem with false
positives. If manual they cost money. While individuals may have
some degree of tolerance for false positives, most companies and ISPs
are not so tolerant--all it takes is one bad instance and you're all
over the press (college admissions notifications blocked, Mac.com
blocking domain renewal emails...).

>If MUAs and MTAs made it easier to use SMTP-AUTH, or relay after pop, then
>this process would happen faster, but no change in the RFCs is required.

At this point in the game I'm not aware of any major MUA that doesn't
support SMTP-AUTH, although until recently Microsoft's programs only
supported particularly lame versions. (Their renaming of AUTH-NTLM
to AUTH-MSN hasn't helped either.) POP-before-SMTP is supported in
most commercial email products. Unfortunately POP and SMTP tend to
be managed by different programs from different sources in the
open-source space, so integrating that has been more difficult.

>It is important to have a plausible adoption path for any scheme. To be
>plausible, the scheme must benefit the first few users, and benefit them
>more the more users there are.

We are in violent agreement on that point.
--
Kee Hinckley
http://www.puremessaging.com/ Junk-Free Email Filtering
http://commons.somewhere.com/buzz/ Writings on Technology and Society

I'm not sure which upsets me more: that people are so unwilling to accept
responsibility for their own actions, or that they are so eager to regulate
everyone else's.
Chris Lewis
2003-03-19 19:48:20 UTC
Permalink
Kee Hinckley wrote:
> At 7:21 AM -0500 3/19/03, Daniel Feenberg wrote:

>> overloaded, even the most recalcitrant owners eventually close them. In
>> the end (the "Nash equilibrium") many sites subscribe to a black hole
>> list, nearly all open relays are closed, and there is no need for
>> universal agreement to get to that end. It may take a while though.

> Why do you think it hasn't happened already? Those lists have been
> around for years. Have open relays significantly decreased?

As a percentage of spam? Absolutely.

Total? Well, given that spam itself is increasing exponentially, I'm
not sure whether we can measure that, especially since open proxy/socks
has become the technique du jour. But the numbers I'm going to show
below are suggestive that open relay is nowhere near the problem it once
was.

Your message prompted me to do something I should have done a while
ago: wire individual blacklist effectiveness into our metrics.

And here are the numbers for the past week - these are based on
recipient counts, not message counts.

The first table talks exclusively about the results of our spamtrap, and
shows relative effectiveness of the blacklists on a "pure spam" feed.

The second table talks exclusively about the results of the mail
addressed to our real users.

The individual lists are annotated when they first appear.

Numbers are counts for the corresponding entry, and percentage of total
email received.

Blacklist effectiveness spamtrap only:

BOPM 3666774 50.73 (open proxy/socks)
Flonetwork 233 0.00 (Flowgo/dartmail/doubleclick static list)
IP, NOT BL 101140 1.40 (local "hard" manual blacklist,
being phased out)
MONKEYPROXY 4579195 63.36 (open proxy/socks)
NTblack 905852 12.53 (local automated proxy/socks/relay [+])
NTmanual 326783 4.52 (manual blacklist, new version)
OBproxies 1459108 20.19 (proxies/socks)
OBrelays 462877 6.40 (relays)
OK 42 0.00 (whitelist)
OSinputs 836741 11.58 (Osirus relays)
OSproxy 136594 1.89 (Osirus proxies)
OSsocks 1798424 24.88 (Osirus socks)
SBL 562940 7.79 (SpamHaus spamsource BL)
TOTAL 7227413 100.00
TOTAL BLOCK 6063477 83.59 (total would-be blocked by blacklists)


Blacklist effectiveness on real email:
BOPM 100635 5.34
CONTENT 54802 2.91 (non-IP based filters, not used
on spamtrap)
Flonetwork 6096 0.32
IP, NOT BL 34946 1.85
MONKEYPROXY 135285 7.17
NTblack 38608 2.05
NTmanual 30370 1.61
OBproxies 46420 2.46
OBrelays 17419 0.92
OK 5330 0.28
OSinputs 31922 1.69
OSproxy 2121 0.11
OSsocks 54144 2.87
SBL 51825 2.75
TOTAL 1885655 100.00
TOTAL BLOCK 316567 16.79 (total blocked)

As you can see, relays are quite low. Notice how monkeyproxy and BOPM
both trap more than 50% of all inbound spam (to the spamtrap, which is
by definition 100% spam - bounces and viruses are already stripped out).

Notice how the blacklists catch 84% of _all_ spam. Pretty darn good
actually. But not perfect. That's why we do content-based too.
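For context, the DNSBL checks tallied in these tables all work the same way on the wire: reverse the IP's octets, append the list's zone, and look for an A record. A minimal sketch; `dnsbl.example.org` is a placeholder zone, not one of the lists above.

```python
import socket

def dnsbl_name(ip: str, zone: str) -> str:
    """Build the query name, e.g. 1.2.3.4 -> 4.3.2.1.dnsbl.example.org."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "dnsbl.example.org") -> bool:
    """Return True if the blacklist publishes any A record for this IP."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True            # any answer means "listed"
    except socket.gaierror:
        return False           # NXDOMAIN means "not listed"
```

A site doing zone transfers, as described below, would serve the same zone from a local resolver instead of querying the maintainer directly.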

> My guess is that too many people are reluncant to use them. As has been
> discussed here, black hole lists have a reputation for lack of
> accountability.

They have a reputation for that, but that's largely false. BOPM, OB*
(these two are private lists, but you'd know who it was and how to
contact them if you ever hit a OB* blacklist block), MONKEYS[*], OSIRUS
and SBL have _excellent_ reputations, and good
accountability/contactability.

> If automated they have a serious problem with false
> positives.

This is what the reputation is, but it's pure nonsense. While it is
true that "open relay" blacklists have a higher percentage of false
positives than the others, the numbers are still _extremely_ low.
Secondly, the automated testers are the most accessible ones for fixing
of false positives. ORDB is probably the very best of the group -
instant delist with subsequent retest and relist if necessary.

[We can't use ORDB, because we have to do zone transfers, and ORDB
doesn't permit that.]

And I can show that from the above tables.

First, a comment on the "OK" entry. Our procedure for a false positive
on a blacklist of any kind is to immediately enter a whitelist
entry and queue up a retest with each of the blacklists (where
appropriate). It's an automated, almost-single-keystroke process.

[We immediately whitelist, because our DNSBL implementation is by
zone-transfer and DNS zone file build. The average latency for a 3rd
party delist via these mechanisms can be well in excess of 24 hours.]

Furthermore, many of these whitelist entries are for whole ranges we do
in our local blacklist (like 200.148/16 and 200.158/16), and we've just
opened up a hole for the _only_ legit mailer in the whole block. [%]

What we don't have right at the moment is a mechanism for stripping out
whitelist entries once the original blacklist entry disappears. I'm
working on it, I'm working on it ;-)

So, the "OK" entries are _every_ mail server we've ever whitelisted,
despite the fact that the original blacklisting entry has probably long
disappeared - so, the "OK" entries are considerably _higher_ than our
blacklists would actually block. Further, many of them are not from
third party blacklists, but rather from our local listings. Only 42 for
the spamtrap. .28% for the production mail. If I were presently able
to remove the whitelist entries for the machines no longer open, the
numbers would probably be under .01% for our production systems too.

We get less than 5 false positive reports on average per day.

Spot checks show that at least 95% of all whitelist/retests we've issued
have taken effect on the corresponding 3rd party blacklist. Except
monkeys[*]

But again, it's true that open relay blacklists have higher false
positive rates. Despite being responsible for perhaps 3-4% of all of
our IP-based blocks, somewhat more than half of our IP-based false
positives are with open relay blacklists. And most of those are with
OBrelays.

Why is that? Simple:

1) machines that were open relays are more likely to have been intended
to send email than a simple open proxy or socks server, so, "legit"
users are more likely to hit a blacklist entry. Most open proxy or
socks hits are _not_ mail servers and were never intended to be. So
nobody notices. Nobody cares either (except the spammer, but they don't
notice).

2) Lesser used blacklists have higher FP rates, because fewer legit
senders hit them. OBrelays is only used by two sites: us, and its
maintainer. Despite being _large_ (OB is > 30 million mail addresses),
it's still small compared to the coverage of the other lists, hence the
relatively higher FP percentage.

3) Most of the open relay FPs are servers that are no longer open but
didn't have enough BL coverage to notice. Most of the open proxy/socks
hits are servers that are still open.

What does this all mean?

Well, what Joe said - perhaps our "filtering BCP" should _explicitly_
state that all mail filtering systems should use well-known and
reputable open relay and open proxy/socks blacklists.

In this way we encourage much greater coverage, so that (a) site owners
find out much quicker they have a problem and (b) stale entries are
cleaned up much faster. In other words, list accuracy is vastly
improved, and broken servers are fixed much faster. Open proxy/socks
blacklist usage is already "best practise" with IRC servers. See the
BOPM web site.

> If manual they cost money. While individuals may have some
> degree of tolerance for false positives, most companies and ISPs are not
> so tolerant--all it takes is one bad instance and you're all over the
> press (college admissions notifications blocked, Mac.com blocking domain
> renewal emails...).

Look at the above numbers, and remember who we are. Obviously, we're
VERY intolerant of false positives. We're doing fine.

[*] I have an issue with MONKEYSPROXY because the criteria for removal
isn't "just fix the open socks or proxy and ask for retest" - because
asking for the retest has other extraneous requirements. In effect, a
MONKEYSPROXY entry either means you have an open proxy/socks, OR you
may simply not have been able to formulate a retest request that MONKEYS
would accept. We can't do third-party retest requests with MONKEYS, for
example.

This does not seem to cause _us_ much trouble in practise (since we
whitelist), but if you're high volume like us and not actively
whitelisting like us, it may make you think twice about using it,
despite how good it is. I'd rather it followed the BOPM or ORDB model
here. Still and all, I think we've gotten 5 false positive reports for
Monkeys in 3 months.

[+] automated testing is triggered by at least 3 spam-in-hands in a day
hitting our spamtrap, one week minimum testing interval. 3 week no
repeat expiration. Ignored/not tested/listed if IP already blacklisted
elsewhere. Allows us to automatically detect "new" open
relays/proxies/socks hitting the spamtrap and publishing blacklist
entries to production servers. Experimental. May be decommissioned.

[%] 1000+ IPs spewing email at us from a /16, and 98%+ of them are
already listed as open relays/socks/proxies. The rest of them are
behaving as if they are. Sigh.
Ronald F. Guilmette
2003-03-19 21:31:32 UTC
Permalink
In message <***@americasm01.nt.com>,
"Chris Lewis" <***@nortelnetworks.com> wrote:

>Blacklist effectiveness on real email:
>BOPM 100635 5.34
>CONTENT 54802 2.91 (non-IP based filters, not used
> on spamtrap)
>Flonetwork 6096 0.32
>IP, NOT BL 34946 1.85
>MONKEYPROXY 135285 7.17
>NTblack 38608 2.05
>NTmanual 30370 1.61
>OBproxies 46420 2.46
>OBrelays 17419 0.92
>OK 5330 0.28
>OSinputs 31922 1.69
>OSproxy 2121 0.11
>OSsocks 54144 2.87
>SBL 51825 2.75
>TOTAL 1885655 100.00
>TOTAL BLOCK 316567 16.79 (total blocked)

And the winner is.... <<drum roll>>...

:-)

>As you can see, relays are quite low. Notice how monkeyproxy and BOPM
>both trap more than 50% of all inbound spam (to the spamtrap, which is
>by definition 100% spam - bounces and viruses are already stripped out).

For those of you who don't already know, what Chris calls `monkeyproxy'
is more formally known as the Monkeys.Com Unsecured Proxies List (UPL).
You can read all about it here:

http://www.monkeys.com/upl/

>We get less than 5 false positive reports on average per day.

I'll remember that you said that. (See below.)

>Spot checks show that at least 95% of all whitelist/retests we've issued
>have taken effect on the corresponding 3rd party blacklist. Except
>monkeys[*]

I'll address that below.

>1) machines that were open relays are more likely to have been intended
>to send email than a simple open proxy or socks server, so, "legit"
>users are more likely to hit a blacklist entry. Most open proxy or
>socks hits are _not_ mail servers and were never intended to be. So
>nobody notices. Nobody cares either (except the spammer, but they don't
>notice).

Yes! What Chris said.

My tests indicate that over 75% of all IPs listed in the UPL are _not_
even mail servers.

>[*] I have an issue with MONKEYSPROXY because the criteria for removal
>isn't "just fix the open socks or proxy and ask for retest" - because
>asking for the retest has other extraneous requirements.

The UPL re-testing/de-listing requirements are detailed here:

http://www.monkeys.com/upl/delisting-policy.html

They are reasonably trivial to satisfy... unless you are a complete
dumbshit and/or unless your ISP is totally worthless and totally
unresponsive, even to YOUR requests for assistance.

If either case applies, then I personally don't give a damn if you
_have_ fixed your proxy... I still don't want mail from you.

(I think the normative description for these cases is ``Too stupid
to live.'')

In a nutshell, to be re-tested and/or de-listed from the UPL... after
it has already been proven, beyond a shadow of a doubt, that you were
running a wide open proxy (and that thus, you qualified as being
``somewhere beyond utterly clueless'') you must (a) have functioning
reverse DNS attached to your IP address and (b) either the Postmaster@
or the abuse@ person for the ``master controlling domain'' of your
reverse DNS must approve your request to be re-tested/de-listed. (Note:
http://www.monkeys.com/upl/master-domain.html describes what I mean by
``master controlling domain''.)

I don't think that either of these things is too much to ask. Note
also that requirement (a)... must have reverse DNS... derives from
requirement (b) i.e. getting Postmaster/abuse of your reverse DNS domain
to ``approve'' your re-test/de-listing request.

Most blithering idiots can satisfy both requirements (and many
blithering idiots already have), and have gotten their IPs re-tested and
de-listed. A few thousand in fact. So obviously it is possible to
satisfy these simple requirements, and as far as I can tell, only a
few Forrest Gump types have been unable to do so.

I have many reasons for these requirements, and I think they are good
ones. They are mostly documented here:

http://www.monkeys.com/upl/delisting-rationale.html

But just to give you the simple version: although the primary goal of
the UPL is to stop spam, an important secondary goal is to get open proxies
closed. The current UPL re-testing/de-listing requirements assist in
achieving that goal by making various _ISPs_ more aware of the fact that
their own networks are often RIDDLED with very dangerous unsecured proxies.
Getting them into the loop is worth the minor additional hassle of the
UPL's special re-testing/de-listing requirements. (Most ISPs still don't
have the vaguest idea that they even have an open proxies problem on their
networks. Ignorance == spam.)

>In effect, a
>MONKEYSPROXY entry either means you have an open proxy/socks, OR, you
>may simply not have been able to formulate a retest request that MONKEYS
>would accept

NOT TRUE!

If a.b.c.d is listed on the UPL, then ANY IDIOT can ``formulate a re-test
request'' for that IP address via the appropriate web form on the Monkeys.Com
web site. But the request must be _approved_ by Postmaster@ or abuse@ of
the relevant domain. What's wrong with that?? Nothing... unless BOTH (a)
your domain is administered by morons who don't read the RFCs (e.g. 2821)
AND (b) your domain admins are so totally clue-impervious that they are
not able to catch a clue, even when YOU, one of their own local users,
tries to give them one.

> We can't do third-party retest requests with MONKEYS, for example.

Again, that's just NOT TRUE.

You _can_ put in the re-test request, and then Postmaster@/abuse@ of the
domain that actually owns the IP of the (formerly?) open proxy must
simply approve the request. (They are given a magically coded URL via
e-mail, with a detailed message telling them what this is all about
and what they have to do, and then all they gotta do is visit that
magic URL to ``approve'' the re-test request.)

Simple, no?
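The "magically coded URL" approval step described above can be sketched with an HMAC token: the list maintainer signs the IP and domain with a secret, mails the link to postmaster@/abuse@, and approval is just visiting it. The secret, hostname, and URL layout here are illustrative assumptions, not Monkeys.Com's actual implementation.

```python
import hashlib
import hmac

SECRET = b"list-maintainer-secret"  # hypothetical signing key

def approval_url(ip: str, domain: str) -> str:
    """Coded URL mailed to postmaster@/abuse@ of the master controlling
    domain; visiting it approves the re-test request."""
    token = hmac.new(SECRET, f"{ip}|{domain}".encode(), hashlib.sha256).hexdigest()
    return f"https://upl.example.com/approve?ip={ip}&token={token}"

def verify(ip: str, domain: str, token: str) -> bool:
    """Server-side check when the magic URL is visited."""
    expected = hmac.new(SECRET, f"{ip}|{domain}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

Because the token binds both the IP and the approving domain, anyone can file the request, but only the holder of the mailed link can approve it.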

>This does not seem to cause _us_ much trouble in practise (since we
>whitelist), but if you're high volume like us and not actively
>whitelisting like us, it may make you think twice about using it,
>despite how good it is. I'd rather it followed the BOPM or ORDB model
>here. Still and all, I think we've gotten 5 false positive reports for
>Monkeys in 3 months.

OK. Above you said that you get a total of about 5 whitelisting requests
at your site PER DAY. Now you say that you have only gotten about 5 due
to your use of the Monkeys.Com UPL list OVER A PERIOD OF THREE MONTHS.

Hummmm.... <<pulls out slide rule>>... So only about 1/90th of your
whitelist requests arise due to your use of the UPL, but the UPL is
stopping half, or more than half of your incoming spam.

Seems pretty admirable to me, even WITH the somewhat unusual de-listing
requirements.
Chris Lewis
2003-03-20 15:54:59 UTC
Permalink
Ronald F. Guilmette wrote:
> In message <***@americasm01.nt.com>,
> "Chris Lewis" <***@nortelnetworks.com> wrote:
> And the winner is.... <<drum roll>>...

> :-)

I was expecting you to chime in :-)

>>[*] I have an issue with MONKEYSPROXY because the criteria for removal
>>isn't "just fix the open socks or proxy and ask for retest" - because
>>asking for the retest has other extraneous requirements.

> The UPL re-testing/de-listing requirements are detailed here:

> http://www.monkeys.com/upl/delisting-policy.html

> They are reasonably trivial to satisfy... unless you are a complete
> dumbshit and/or unless your ISP is totally worthless and totally
> unresponsive, even to YOUR requests for assistance.

I'm half expecting/dreading this to turn into a long protracted
discussion. Ron and I have had this conversation before, and I don't
expect my comments here will change his mind. So, I'm going to say my
piece as it pertains to general principles of spam control and then shut
up on this subject.

In my view, the criteria for "delisting" an IP in a blacklist should be
the exact reverse of the criteria for "listing". This is true of most
blacklists. Not true for Monkeys - its saving grace _so_far_ is
simply that it _is_ very effective (partially due to us, as you'll
recall, Ron). But this will degrade over time given your delisting
criteria.
I agree with your remark about ISPs. But that's not the point. The
goal of any anti-spam "technique" is to stop spam, not to attempt to
enforce "best practises" which are unrelated to the technique, and are
at best only indirectly addressing spam. As such, for a list where being
listed means "you're an open proxy", being delisted should mean "you're
no longer an open proxy", not "you're no longer an open proxy, and your
provider isn't an idiot".

For our purposes in handling false positives, I need to be able to tell
the person who hit the block that "I've triggered a retest" subject only
to issues surrounding, say, OSIRUS's or ORDB's retesting mechanism
glitching and missing a request, not "I tried to, pray your ISP pays
attention, your WHOIS entry is sane, etc".

The UPL isn't strictly an "open proxy/socks" list; it's more of an awkward
combination of "open proxy/socks" plus "RFCIgnorant". Those who want to
use it need to be aware of that fact.

When we hit a MONKEYS block we provide the end-user with the appropriate
link for the sender to progress through your delisting criteria, but I'd
expect the majority of users not to be able to complete it successfully.

As I said, it's not a problem in practise yet, because we automatically
whitelist hits in "security lists" (open relay, proxy, socks) unless we
have reason to believe that the IP in question is actively spewing spam
_now_.

However, as the UPL gets older (it's only a few months old), and more
and more entries become out-of-date because of the delisting
requirements, we may have to rethink all of our interactions with it.

> Hummmm.... <<pulls out slide rule>>... So only about 1/90th of your
> whitelist requests arise due to your use of the UPL, but the UPL is
> stopping half, or more than half of your incoming spam.

As I mentioned, I expect this to degrade over time. BOPM by itself is
almost as effective as Monkeys, and it doesn't have this problem - it
can't degrade into a list of "stupid providers" instead of "open
proxies/socks".

As for BOPM - only two "false positive" reports over a period several
months longer than we've been using Monkeys.

They were really open at the time and spewing spam. We got the sites
fixed and delisted.

In other words, we've not seen a FP due to BOPM's entries being stale
yet. At least half of those with the UPL were stale and no longer valid.
Matt Sergeant
2003-03-19 22:16:40 UTC
Permalink
On Wed, 19 Mar 2003, Chris Lewis wrote:

> Blacklist effectiveness on real email:
> BOPM 100635 5.34
> CONTENT 54802 2.91 (non-IP based filters, not used
> on spamtrap)
> Flonetwork 6096 0.32
> IP, NOT BL 34946 1.85
> MONKEYPROXY 135285 7.17
> NTblack 38608 2.05
> NTmanual 30370 1.61
> OBproxies 46420 2.46
> OBrelays 17419 0.92
> OK 5330 0.28
> OSinputs 31922 1.69
> OSproxy 2121 0.11
> OSsocks 54144 2.87
> SBL 51825 2.75
> TOTAL 1885655 100.00
> TOTAL BLOCK 316567 16.79 (total blocked)

How come you're such a large entity yet this figure is so much lower than
everyone else is seeing? Is it because the companies (like ours) that work
in spam filtering are sought out by those with a spam problem, whereas
your user base covers everyone?

> 2) Lesser used blacklists have higher FP rates, because fewer legit
> senders hit them. OBrelays is only used by two sites: us, and its
> maintainer. Despite being _large_ (OB is > 30 million mail addresses),
> it's still small compared to the coverage of the other lists, hence the
> relatively higher FP percentage.

Why don't they make it publicly available then?

Matt.
Chris Lewis
2003-03-20 16:08:02 UTC
Permalink
Matt Sergeant wrote:
> On Wed, 19 Mar 2003, Chris Lewis wrote:
>>TOTAL 1885655 100.00
>>TOTAL BLOCK 316567 16.79 (total blocked)

> How come you're such a large entity yet this figure is so much lower than
> everyone else is seeing? Is it because the companies (like ours) that work
> in spam filtering are sought out by those with a spam problem, whereas
> your user base covers everyone?

The figure is much lower than some other people see simply because of an
historical accident - we changed our domain name (from the domains in
our spamtrap to nortelnetworks.com) a few years ago. We didn't change
it because of spam volumes directly, but the timing turned out to be
extremely fortuitous.

[My involvement with the old domain decommissioning was to get the
messaging group to stop routing the old domains, rather than simply not
using them in From: stamping anymore.]

If you had looked at our nortelnetworks.com spam volumes then, you'd
have seen < 1% spam. Meanwhile, on the other domains that are now in
the spamtrap, it was often hitting or exceeding 50%.

If you want to see the "real" picture as to what we would have looked
like now without the domain name change, you need to combine our
spamtrap and production figures. In other words, if we hadn't changed
our domain name, this is roughly what you would have seen per day:

300,000 ham
40,000 + 950,000 spams.

I.e., around 77% spam.
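That figure checks out against the per-day counts just quoted; nothing here beyond the arithmetic.

```python
# Per-day figures from the message above
ham = 300_000            # production ham
spam = 40_000 + 950_000  # production spam + spamtrap spam

share = spam / (ham + spam)
print(f"{share:.1%}")    # roughly 77% spam
```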

But it's just a matter of time before our production domains get back up
to that figure on their own.

>>2) Lesser used blacklists have higher FP rates, because fewer legit
>>senders hit them. OBrelays is only used by two sites: us, and its
>>maintainer. Despite being _large_ (OB is > 30 million mail addresses),
>>it's still small compared to the coverage of the other lists, hence the
>>relatively higher FP percentage.

> Why don't they make it publicly available then?

I can't speak for them. Sorry.
Matt Sergeant
2003-03-19 20:20:17 UTC
Permalink
On Wed, 19 Mar 2003, Kee Hinckley wrote:

> >overloaded, even the most recalcitrant owners eventually close them. In
> >the end (the "Nash equilibrium") many sites subscribe to a black hole
> >list, nearly all open relays are closed, and there is no need for
> >universal agreement to get to that end. It may take a while though.
>
> Why do you think it hasn't happened already? Those lists have been
> around for years. Have open-relays significantly decreased?

According to our stats, yes.

Matt.
m***@mail.SoftHome.net
2003-03-18 02:01:25 UTC
Permalink
>Damn, yes, it really should. Say, a network administrator at AOL makes a
>"little" mistake while configuring the main routers, so AOL will be
>completely down for ... 10 minutes. Even in this 10 minutes *lots* of mail
>*would* have been delivered to some of the millions(?) of AOL users. So
>your protocol would force thousands of mail clients to retry their
>delivery. If the clients were dial-in users, they would be forced to be
>online for each retry. With a protocol that allows relay servers (like
>SMTP) they can go offline and lay back without worrying what happens to
>their mail. And even when the relay server is unable to deliver the mail,
>it is not lost - the sender will receive a notification when the relay server
>gives up (after a reasonable period of time).

On the opposite side of that, with the current SMTP/POP protocols: say a
network administrator at AOL makes a "little" mistake while configuring the
main routers, so AOL will be completely down for ... 10 minutes. Even in
these 10 minutes *lots* of mail *would* be unable to be sent by some of the
millions(?) of AOL users, because their SMTP server is down. So the current
protocol would force thousands of mail clients to retry their delivery. If
the clients were dial-in users, they would be forced to be online for each
retry.


The current protocol is just as susceptible to server outages as the
proposed would be. With the proposed protocol the user would be at the
mercy of the receiving server instead of the sending server.

In a preemptive response to "If the SMTP server is down, use a different
SMTP server": *very few* E-Mail users would know how to re-configure their
SMTP server, or would bother to. The response to server outages should not
be "Re-configure your SMTP server", but rather "Why did the server go
down?" and "Why isn't there a backup?"
Chris Lewis
2003-03-18 16:17:19 UTC
Permalink
***@mail.SoftHome.net wrote:

> The current protocol is just as susceptible to server outages as the
> proposed would be.

What you have completely failed to realize is that there are far more
active players in delivering a piece of email than just the servers.
Your proposal forces the user to be "aware" of the state of every single
router-hop and piece of wire along the way. Router crashes in backbone?
Users can't even hit "SEND" for _any_ email that would traverse that
router, either side.

Certainly, that's at first glance "no worse than web". I fail to see
how decreasing the reliability of email to the same (not that good)
level as the web does anyone any good.

What you're also completely failing to see is that server administrators
don't _want_ to expose their mailbox servers to the Internet. That's
where all the email sits, right? All your email ripe for plucking by
hackers. That's why there are gateways and firewalls.

You couldn't email to sites behind gateways. You couldn't email to
sites with intermittent connectivity (either by design or accident).
Senders would have to take evasive action if the link went down in the
middle of transmission. Etc. Trying to send to two people whose
servers' connectivity was intermittent and exactly out of sync? Impossible.

We're trying to build things that isolate the user from the low-level
behaviour of the Internet, not the opposite.

Requiring end-to-end instant (especially reverse) connectivity is
totally impractical in a one-to-many transmission scheme - which is what
email is - unlike web (except in the rare and unusual case of webcasting).

And finally, as described before, trying to do any sort of centralized
logging or filtering becomes impossible, unless you want to make your
routers SMTP-aware - which is a gross violation of the layering
principle inherent in any well-designed and built networking stack.
They're flakey enough as it is :-(

Your "fix" would break many of the desirable (and in some circumstances
_critical_) characteristics of email that we rely on.

It really is a total non-starter.
m***@mail.SoftHome.net
2003-03-18 05:12:07 UTC
Permalink
>... after which the most astute portion of the spammer community figures
>out that they will increase their odds of delivery dramatically, simply
>by sending out their spams with the forged envelope sender address of
><***@cnn.com>.

No, you are wrong. Digital signatures are used to assert
identity. Digital signatures are verifiable and unforgeable.

>Except for the trusted ones... or anybody masquerading as trusted one.

Digital signatures... please review my
overview. http://meor.xwarzone.com/overview.htm
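For readers unfamiliar with the mechanics, here is a toy sketch of the sign/verify property being claimed. It uses textbook RSA with deliberately tiny parameters and no padding, so it is illustrative only, nothing like a production scheme; the message strings and values are made up for the example:

```python
import hashlib

# Textbook RSA with toy parameters -- illustrative only, NOT secure.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def digest(msg: bytes) -> int:
    # Hash the message, then reduce mod n so the toy modulus can handle it.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Only the holder of the private exponent d can produce this value.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Anyone holding the public key (n, e) can check it.
    return pow(sig, e, n) == digest(msg)

sig = sign(b"From: colin@example.org")
assert verify(b"From: colin@example.org", sig)        # genuine: passes
assert not verify(b"From: forger@example.org", sig)   # altered sender: fails
```

Note the sketch says nothing about *who* holds which key, which is where most of the objections in this thread land.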
Alan DeKok
2003-03-18 00:14:40 UTC
Permalink
***@mail.SoftHome.net wrote:
> No, you are wrong. Digital signatures are used to assert
> identity. Digital signatures are verifiable and unforgeable.

Digital signatures mean that software you didn't write performed a
mathematical operation you didn't understand, and produced output you
can't verify. (Yes, I would describe myself that way, too.)

Even if you believed them, you still have no idea what they mean.

Alan DeKok.
Ronald F. Guilmette
2003-03-18 08:33:25 UTC
Permalink
In message <***@mail.SoftHome.net>,
***@mail.SoftHome.net wrote:

>
>>... after which the most astute portion of the spammer community figures
>>out that they will increase their odds of delivery dramatically, simply
>>by sending out their spams with the forged envelope sender address of
>><***@cnn.com>.
>
>No, you are wrong. Digital signatures are used to assert
>identity. Digital signatures are verifiable and unforgeable.

Hint: CNN does not use digital signatures to identify any of its
outgoing mail at the present time.

Unless you have a plan to change that, your scheme isn't worth the
electrons it is written on.
Justin Mason
2003-03-18 17:30:09 UTC
Permalink
Ronald F. Guilmette said:
> >The mail host sees that ***@cnn.com has been added to the white list and
> >allows mail to be sent without computation time...
>
> ... after which the most astute portion of the spammer community figures
> out that they will increase their odds of delivery dramatically, simply
> by sending out their spams with the forged envelope sender address of
> <***@cnn.com>.

Yep. This is exactly what's happened with some default whitelists, such
as the Amazon.com one distributed in early versions of SpamAssassin; the
concept was that spammers would be unlikely to fake the sender as
<***@amazon.com>, since they're a big, well-funded, litigious
company, and there's a legal precedent for such a co suing a spammer (ie
flowers.com).

I was wrong, it turned out. Spammers had *no* problem faking their From
addresses that way, and we've seen lots of examples. I don't think
Amazon have taken any cases either :(

The other anti-whitelisting technique spammers use, is to fake your
address as both From and To, on the basis that you usually appear
in your own whitelist. This is quite reliable, it seems.
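A few lines make the failure mode concrete. This is a naive address-only whitelist check (addresses invented for the example), with nothing binding the From: header to the actual sender:

```python
# Naive whitelist keyed on the bare From: address -- the scheme being abused.
whitelist = {"victim@example.org", "ship-confirm@amazon.com"}

def naive_check(msg: dict) -> bool:
    return msg["From"] in whitelist

# SMTP does not verify headers, so a spammer can claim any From: they like.
forged = {
    "From": "victim@example.org",   # your own address, per the trick above
    "To": "victim@example.org",
    "Subject": "BUY NOW",
}

print(naive_check(forged))  # → True: the forgery sails straight through
```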

> >Keep in mind, as covered in the overview, white lists do not exclude people
> >from sending mail to an end user, but rather slows the ability to rapidly
> >send mail to unknown persons.
>
> Except for the trusted ones... or anybody masquerading as trusted one.

Yep. And solve the masquerading problem, as has been said before here, and
you're halfway there anyway.

--j.
m***@mail.SoftHome.net
2003-03-18 19:15:03 UTC
Permalink
>Hint: CNN does not use digital signatures to identify any of its
>outgoing mail at the present time.
>
>Unless you have a plan to change that, your scheme isn't worth the
>electrons it is written on.

Actually I did plan on changing that. In my first message to this list I
posted a URL to an overview I was proposing. Here is that URL again
http://meor.xwarzone.com/overview.htm
Ronald F. Guilmette
2003-03-18 20:16:32 UTC
Permalink
In message <***@mail.SoftHome.net>,
***@mail.SoftHome.net wrote:

>
>>Hint: CNN does not use digital signatures to identify any of its
>>outgoing mail at the present time.
>>
>>Unless you have a plan to change that, your scheme isn't worth the
>>electrons it is written on.
>
>Actually I did plan on changing that. In my first message to this list I
>posted a URL to an overview I was proposing. Here is that URL again
>http://meor.xwarzone.com/overview.htm

I don't see any incentives there for CNN to change its current practices.

Were you planning to just send them money? Or did you figure that
they would assign programmers to work on changing their current
mailing list practices and software to fit your scheme just because
it will seem like fun?
m***@mail.SoftHome.net
2003-03-18 21:44:36 UTC
Permalink
>I don't see any incentives there for CNN to change its current practices.
>
>Were you planning to just send them money? Or did you figure that
>they would assign programmers to work on changing their current
>mailing list practices and software to fit your scheme just because
>it will seem like fun?

Jesus Christ this list is filled with stupid. Do you guys effectively
argue anything or do you only skim over messages and point out your own
ignorance? I've completely outlined the method I was proposing and only
about half of the responses I received were intelligent. The other half of
the responses were either completely ignorant or a thread fracture debating
how public keys work.

Have you guys ever thought about what happens if you truly do find an
original idea that would eliminate SPAM? What if it wasn't 100% backwards
compatible with sendmail? Do you guys have any sort of plan on how to roll
out such a solution across the Internet? My guess as to why this group is
so ineffectual is because there are too many voices spouting opinions and
not enough organization. You repeatedly get people talking about building
a trust infrastructure, better ways to filter, a universal black list, a
new law to govern SPAM, rating systems, or re-implementing Finger. None
of these ideas work. You will never build a trust infrastructure. You
will never be able to filter 100% effectively. You will never be able to
create a universal black list. You will never be able to get an effectual
rating system working. Creating laws will never solve the problem because
it is too easy to anonymously send E-Mail, or send it from a different
country. Re-implementing finger? That's the stupidest thing I've ever heard.

The fact of the matter is: The current SMTP/POP/IMAP protocols were built
on the idea of a trusting network. When you allow people who abuse these
protocols to access them, you're lost. You will never get SMTP to work in
a way that prevents it from being abused. You will never get filters to
work 100% effectively because one man's SPAM is another man's free
vacation. The only way to get E-Mail to work in an effective and un-abused
manner is to implement a new protocol. The faster this is realized, the
faster a solution will be found.

You need a *new* protocol that does not have a central location to provide
proof of identity.
You need a way to prove identity peer to peer in a sense.
You need a way to either identify SPAMmers 100% of the time and stop them,
or identify them most of the time and slow them down. Identifying them
100% of the time is probably not feasible, so you can't try and stop
them. Identifying them most of the time with occasional false-positives
and slowing these down is probably a better way to do things.
A good way to identify SPAMmers most of the time would be through the use
of a white list of digital signatures. If someone is on your white list,
they are not a spammer. If they are not on your list, they're probably a
SPAMmer and need to be slowed down until they are placed on your white list.
You need a good way to slow down spammers in a way that does not imply
trust and does not bog down the network or server. You need a way to slow
down the sending client in a way that's scalable, voluntary, and
uncircumventable.
A good way to slow the client down is to make the client perform a task
that uses CPU time. This task needs some way of proving that the CPU time
was invested by the client before the mail is able to be sent. The only
way to circumvent this task would be to apply more CPU power to solving
this task, which would imply more money invested. If the task is scalable
to easily consume more CPU power, investing more money in CPU power would
become less economically feasible and SPAMming would no longer be a viable
business practice.
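The scheme in the last paragraph is essentially a hashcash-style proof-of-work stamp. A minimal sketch (function names and the difficulty parameter are illustrative, not taken from the proposal): the sender burns CPU searching for a nonce whose hash clears a difficulty target, and the receiver verifies with a single hash. Each extra bit of difficulty doubles the sender's expected work, which is the scaling knob described above.

```python
import hashlib
from itertools import count

def mint(message: str, bits: int = 20) -> int:
    """Burn CPU: find a nonce whose hash has `bits` leading zero bits."""
    target = 1 << (256 - bits)
    for nonce in count():
        h = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce            # proof that ~2**bits hashes were tried

def check(message: str, nonce: int, bits: int = 20) -> bool:
    """Verify a stamp with one hash -- cheap for the receiver."""
    h = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(h, "big") < (1 << (256 - bits))

# Low difficulty so the demo runs in well under a second.
stamp = mint("To: colin@example.org", bits=12)
print(check("To: colin@example.org", stamp, bits=12))
```

This is the same idea as Adam Back's hashcash; the standing objection raised elsewhere in the thread is that spammers can spend *other people's* CPU time (open proxies, hijacked Windows boxes) rather than their own.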
Frank de Lange
2003-03-18 22:20:25 UTC
Permalink
***@mail.SoftHome.net wrote:

>
>> I don't see any incentives there for CNN to change its current
>> practices.
>>
>> Were you planning to just send them money? Or did you figure that
>> they would assign programmers to work on changing their current
>> mailing list practices and software to fit your scheme just because
>> it will seem like fun?
>
>
> Jesus Christ this list is filled with stupid. Do you guys effectively
> argue anything or do you only skim over messages and point out your
> own ignorance? I've completely outlined the method I was proposing
> and only about half of the responses I received were intelligent. The
> other half of the responses were either completely ignorant or a
> thread fracture debating how public keys work.
>
> [snip]

Lighten up dude, comments like 'this list is filled with stupid' don't
help to get your point across. The opposite is true...

What we - IMnsHO - are looking for is a way to reduce the impact of spam
on email. Reduce, probably not totally eliminate, because that is not
really the point. Any service can be abused as long as the barrier of
entry to the service is low. Even your proposal will not completely
eliminate spam; it would probably just change the way spam would be sent
(by distributing the load of the verification process over a wide range
of computers, employing either well-known vulnerabilities in Windows or
parasitic ratware). And you mention something about patents and the need
to license when making for-profit implementations, which is a no-no for
something as essential and basic as the email standard.

Reduce spam while not throwing the baby out with the bathwater.
Evolution works; revolution often leaves a bloody mess.

Are you in San Francisco for the IETF? If so, come forward and speak
your mind.

Frank
Ronald F. Guilmette
2003-03-18 23:31:44 UTC
Permalink
In message <***@mail.SoftHome.net>,
***@mail.SoftHome.net wrote:

>rfg wrote:
>>I don't see any incentives there for CNN to change its current practices.
>>
>>Were you planning to just send them money? Or did you figure that
>>they would assign programmers to work on changing their current
>>mailing list practices and software to fit your scheme just because
>>it will seem like fun?
>
>Jesus Christ this list is filled with stupid. Do you guys effectively
>argue anything or do you only skim over messages and point out your own
>ignorance? I've completely outlined the method I was proposing and only
>about half of the responses I received were intelligent. The other half of
>the responses were either completely ignorant or a thread fracture debating
>how public keys work.

Thanks for sharing.

And please accept my apologies. I didn't realize until this moment that
I was in the presence of quite such a profound and infallible intellect.

Now that I know, I am suitably in awe.

>Have you guys ever thought about what happens if you truly do find an
>original idea that would eliminate SPAM? What if it wasn't 100% backwards
>compatible with sendmail?

Then that's a problem.

If you don't understand why it is a problem, then you may perhaps want
to look in your dictionary under the word ``inertia''.

>Do you guys have any sort of plan on how to roll
>out such a solution across the Internet?

I can't speak for anybody else, but I do.

Unfortunately, if I told you what my idea was, I'd have to kill you.

I'll be able to tell everybody later on.