Thursday, September 04, 2008

On Lauren Weinstein...

I was, for a long time, a participant on Lauren Weinstein's "NNSquad" mailing list. There are many important issues in traffic shaping, traffic management, and other related topics. But I had to conclude, reluctantly, that you can't deal with him. His views are those of a zealot: he is unwilling to compromise and seems intent on forcing his view of "neutrality" on everyone else.

The last two straws were his censorship policy and his belief that a high usage cap (250 GB, what Comcast is doing) is somehow significantly anticompetitive.

His reaction to Comcast's cap mystifies me. It's almost a total victory for his view: it's transparent, it's neutral, and it's not anticompetitive (250 GB/month is >7 hours/day of 720p HDTV video delivered over the net). And because the response is to first warn and then terminate customers over the cap, it CAN'T be used in an anticompetitive manner.
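
(A quick sanity check of that figure, as a Python sketch; the 2.5 Mbps rate for 720p is an assumption, matching the rate used later in this post:)

    # Sketch: how many hours/day of 720p video fit under a 250 GB cap?
    # Assumes 2.5 Mbps for 720p (the figure used later in the email).
    CAP_GB = 250.0
    RATE_MBPS = 2.5
    DAYS = 30

    megabits = CAP_GB * 8 * 1000        # decimal GB -> megabits
    hours_per_day = megabits / RATE_MBPS / 3600 / DAYS
    print(f"{hours_per_day:.1f} hours/day of 720p video")  # ~7.4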


And for a list which is supposed to be about "open exchange of ideas", he's an incredibly harsh censor.

This was my "goodbye to the list" message, which he did not print, of course.

He notes that his part of the conversation was for "private consumption". Yet nothing he said in the private discussion differs from what he has said publicly on multiple occasions.

It is his sandbox. He can say what he wants, and ignore dissenting views. But by the same token, everyone else should be aware that this is how he operates.

To respect any copyright he might possibly claim, I've excluded his sections, replacing them with paraphrases.




Lauren, folks.

I have to conclude the following: This project will be a failure.
Period. Because even if you "succeed", success will lead to
usage-based pricing. You have proven that you will accept nothing
else.

And, Lauren, you really have practiced heavy censorship, not of
personal attacks but of technical discussion.

You have proven unwilling to acknowledge, respond to, or publish the
following, which was an on-topic, technical discussion of the issues.

How does the following not perfectly mesh with your stated moderation
policies? Yet it seems to have gotten dropped down the memory hole!
You don't want cooperation. You don't want open discussion.

Thus this is "So long": This project will fail, and the mailing list,
due to Lauren's policy of squelching open discussion which doesn't
agree with his preconceived notions, has already failed.



On Tue, Sep 2, 2008 at 9:46 AM, Nick Weaver wrote:
> Replying on-list.
>
> On Sat, Aug 30, 2008 at 6:30 PM, Lauren Weinstein wrote:
>>
>>> Well, Google's low bandwidth, so it doesn't matter.
>>

{Here Lauren insists that Google is major bandwidth in the aggregate, for both user queries and spidering}

> Google's bandwidth in the AGGREGATE is trivial from the end-customer
> perspective compared with the HD video services which you argue about.
>
> Even Google Apps is light: start up Google Docs, start a new
> spreadsheet. That's less than 1 MB transferred (there is other activity
> in my mini-tracelet, so 1 MB is an upper bound). And most of that
> should get cached the second time around.
>
> Or 20 seconds of 400 kbps video. That's IT. Google (sans video) is
> NOT high bandwidth, even when dealing with a lot of customers,
> compared to video applications.
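
(To make that comparison concrete, a sketch using the 1 MB upper bound above; the 2.5 Mbps HD rate is the same assumption used elsewhere in this post:)

    # Sketch: a ~1 MB Google Docs session (upper bound from the trace
    # above) versus streaming video.
    session_bytes = 1_000_000
    print(session_bytes * 8 / 400_000, "seconds of 400 kbps video")  # 20.0
    hd_hour_bytes = 2.5e6 / 8 * 3600   # one hour of 2.5 Mbps HD, ~1.1 GB
    print(hd_hour_bytes / session_bytes, "Docs sessions per HD hour")  # 1125.0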
>
> And even from the webserver side, it's not THAT bad.
>
>
>>> But cuil is a
>>> counterexample, where it is not only going up against the ISP behavior
>>> that is the threat, but also a huge existing competitor: google!

{Lauren's comment: Cuil is not proven to be viable.}

> Yet you seem to insist that the network uncertainty should prevent it
> from even being funded?!?
>
> "Would Google be successfully funded if it were trying to get off the
> ground in today's Internet environment? A question to ponder."
>
> Answer: YES, because Google competitors, and much higher bandwidth
> services, are both being funded!
>
>>> But lets look at high-bandwidth new ventures, which do directly
>>> compete with ISP offered services. Like, oh, Youtube, Hulu, etc.
>>> Since Hulu wasn't even launched as a joint venture until 2007, well, I
>>> think this is another data point against your hypothesis.

{Lauren calls YouTube relatively low bandwidth}

> Huh? You say that "Google is high bandwidth" but "YouTube is not"?!?
>
> Yet your claim that a 250 GB cap is somehow anticompetitive relies on
> video services which are an order of magnitude larger than YouTube in
> bps.
>
> You can't have it both ways. Google, even Google Apps, is a
> lightweight in comparison.
>
> And I use Hulu as a DATA RATE example, and an example of a company
> today which is really stressing the limits of Internet video delivery,
> complete with full Akamization.
>
>>> As for 250 GB, it was not an arbitrary choice, even if it seems so to
>>> you. Rather, it's a round-number approximation of the 1% heavy tail
>>> today.

{Lauren asks me where I get this, and asks how I justify Time Warner's proposed 50GB cap}

> On Comcast: speculation based on communication with multiple
> individuals at various ISPs, and on actually believing Comcast's
> statement that <1% would be affected, based on experience with network
> operations. 250 GB is a lot of data.
>
> There is a reason why even I, a researcher with an underutilized
> 100 Mbps pipe to the Berkeley campus (which has a >1 Gbps pipe out to
> the rest of the Internet), would use FedEx if I had to transfer
> >250 GB in a research project.
>
>
> And have you been listening?
>
> I DON'T justify TW's 50GB cap. That is exactly the cap level you want
> if you want to be anticompetitive: it keeps all the websurfers and
> casuals happy, but it kills any attempt to do a lot of video over the
> net.
>
>
> That you treat the two the same is the heart of your problem: if you
> and your ilk are going to claim that any cap that could be potentially,
> possibly, maybe anticompetitive in the future is just as evil as one
> which is anticompetitive today, why should any ISP listen to you?!?
>
> The ISPs are your frenemies, not your enemies. They are delivering an
> incredible service at incredibly low cost. You should work with them.
> Yes, you need to watch them like a hawk, but they are also rational
> actors and can be worked with.
>
> But your reaction to a reasonable cap, by being effectively the same
> as your reaction to unreasonable caps, has made it clear that there is
> no satisfying your position.
>
>
>>> Be thankful the model isn't "Over the limit? Throttle all traffic to
>>> 100 Kbps" instead, because THAT model is far less cost for Comcast, so
>>> there would be an incentive to reduce the threshold to affect more
>>> users. But in the model of terminate if over limit, if ever more than
>>> a percent or so are affected, Comcast becomes the one with the serious
>>> problem, not Comcast's customers.

{Lauren says he prefers throttling vs cutoff models}

>
> Riddle me this, then. Which is more anticompetitive at preventing
> video over the net:
>
> a) A 250 GB/month cap where, if you go over, the first time you get
> called and the second time you get cut off?
>
> b) A 50 GB/month cap where, if you go over, you get throttled down to
> 400 Kbps?
>
>
> Let's see: the first is high cost to the customer if triggered, but
> also very high cost to the ISP, and it only affects a trivial number of
> users today and even tomorrow, assuming 2.5 Mbps 720p video.
>
> The second is low cost to the ISP, mid cost to the customer, but
> pretty much guarantees that video-over-the-net can't be used as a
> significant form of entertainment.
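
(The arithmetic behind that comparison, as a sketch; same assumed 2.5 Mbps 720p rate:)

    # Sketch: hours/day of 2.5 Mbps 720p video each cap model allows.
    RATE_MBPS = 2.5

    def hours_per_day(cap_gb, days=30):
        return cap_gb * 8 * 1000 / RATE_MBPS / 3600 / days

    print(f"250 GB hard cap: {hours_per_day(250):.1f} hr/day before cutoff")  # ~7.4
    print(f"50 GB throttle:  {hours_per_day(50):.1f} hr/day at full speed")   # ~1.5
    # Once throttled to 400 kbps, a 2.5 Mbps stream simply cannot play:
    # video-as-entertainment is dead for the rest of the month.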
>
> Cutting off users is a step that a business can only take if those
> users really are at levels that are abusive to the network. You WANT
> the reaction to be fully cutting off the user: it greatly increases
> the cost to the ISP.
>
> User-terminating caps are far more neutral than bandwidth-throttling
> caps, because the cost to the ISP means that a user-terminating cap
> can only be deployed in really extreme cases, especially when one
> considers the multi-service aspects.
>
>
>>> If Comcast stated that "We will increase the threshold as demand
>>> grows so that less than 1% of customers would ever be affected", would THAT
>>> be satisfactory?

{Lauren basically insists he won't believe it}

> What would it take? Auditors? If it was an audited statement, would
> you accept it then?
>
> Because I don't see how you can convince anyone that a cap affecting
> <1% of the users would have a significant anticompetitive effect. It
> EXACTLY meets your criteria below.
>
> Will you accept the following statement:
>
> IF a bandwidth cap affects fewer than 1% of the customers, it is not
> significantly anticompetitive.
>
> Yes or no.
>
>>> Can ANY cap be satisfactory to you?

{Lauren requires that any cap be justified, and complains that we don't have visibility into the networks in question.}

> A lot of the capabilities can be reverse-engineered directly from the
> technology. It's all GigE, with occasional 10 GigE, from the DOCSIS
> hub; it's all DOCSIS 2, with some DOCSIS 3 rolling out (but the DOCSIS
> 3 rollout only affects downstream, not upstream).
>
> The physics and all are well known, and if you wanted the details of
> just how many customers and how much frequency range is on a user's
> CMTS, look at the DEFCON work on sniffing cable modems.
>
> DOCSIS is a broadcast medium. I suspect that even with encryption
> turned on, you should be able to get all the information you want on
> the actual internals of the cable company's residential networks.
>
>
> Given a user at 250 GB/month with an 8 hr/day duty cycle, that user is
> running at ~2 Mbps while active. Since a DOCSIS channel is only ~40
> Mbps, that user is tying up 5% of an entire cable channel for the
> whole neighborhood. That's a big cost right there.
>
> Likewise, price out COMMITTED data: price out a T1. That's a good $100+/Mbps.
>
> Let's assume Comcast's committed rate is 1/5th of that, say $20/Mbps.
> A user at 250 GB is going to use ~1 Mbps continuous, which means at
> MINIMUM, assuming they were at a continual low rate, the user costs
> $20/month. Since in reality users are bursty, AND somewhat diurnally
> synchronized, a 250 GB/month user could easily cost the ISP $60, $80,
> or $100+ per month in transit cost alone.
>
>
> You don't need to trust the ISP's statements to know that a 250
> GB/month user is a severe money-losing proposition; you just need to do
> a little math.
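
(That math as a sketch; the $20/Mbps committed rate is the guess stated above:)

    # Sketch of the cost math above for a 250 GB/month user.
    GB = 250
    megabits = GB * 8 * 1000

    # 8 hr/day duty cycle -> share of a ~40 Mbps DOCSIS channel
    duty_mbps = megabits / (30 * 8 * 3600)
    print(f"{duty_mbps:.1f} Mbps active, {duty_mbps/40:.0%} of a channel")
    # ~2.3 Mbps / ~6%; the email rounds this to 2 Mbps / 5%

    # Transit floor at an assumed $20/Mbps committed rate
    avg_mbps = megabits / (30 * 24 * 3600)
    print(f"${avg_mbps * 20:.0f}+/month transit, before burstiness")
    # ~$15; rounding ~0.8 Mbps up to 1 Mbps gives the $20/month above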
>
>>> 1) No traffic shaping: best effort only and let the end-points fight it out.

{Lauren says "Voluntary" traffic shaping, well defined, and doesn't skew costs.}

>
> There is no such thing as "voluntary" traffic shaping between users.
>
> And I suspect there really is no satisfying you on traffic shaping either.
>
> E.g., a policy like this: "The network enforces fairness such that,
> viewed over a time average of X minutes, all users have an equal share
> of bandwidth when congestion occurs".
>
> Now if you talk to Comcast's engineers, and watch their presentations
> at IETF meetings, you'll understand that what they are doing with
> their fairness solution is trying to approximate that with simple
> measurement and two QoS bins, so they don't need to buy new equipment.
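
(A toy sketch of such a two-bin scheme; the 15-minute window and 70% threshold are illustrative assumptions, not Comcast's actual parameters:)

    # Toy two-QoS-bin fairness: during congestion, users whose recent
    # average usage exceeds a threshold get demoted to a low-priority bin.
    WINDOW_MIN = 15          # assumed measurement window
    THRESHOLD = 0.70         # assumed fraction of provisioned rate

    def qos_bin(window_megabits, provisioned_mbps, congested):
        avg_mbps = window_megabits / (WINDOW_MIN * 60)
        if congested and avg_mbps > THRESHOLD * provisioned_mbps:
            return "low-priority"
        return "normal"

    # A user who averaged 6 Mbps on an 8 Mbps tier during congestion:
    print(qos_bin(6 * WINDOW_MIN * 60, 8, congested=True))  # low-priority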
>
> Yet look at the reaction in this forum to it!
>
>>> 2) flat-rate billing, [1]

{Lauren says there may be cases where usage based pricing is OK}

>
> I am willing to bet that any usage-based pricing scheme that a company
> would deploy would kill your "HD over IP" dreams.
>
> Do you care to take that bet?
>
>>> 3) A significant committed information rate.

{Lauren asks what do I mean by significant}

> You and others seem to subscribe to the "bandwidth is a scarcity"
> arguments, and the "I bought a 16 Mbps download line, I should get a
> good fraction of that", which implies a huge committed information
> rate.
>
> For "significant", you probably mean at least 1 Mbps. Do you want
> your ISP service to cost $100/month more just to give you that?
>
>>> 4) All other services offered by the ISP should be treated as
>>> bandwidth-equivalent with the Internet service for 1, 2, and 3.

{Lauren notes this is case-by-case}

> If you can't at least approximate these costs with a
> back-of-the-envelope calculation, however, you are doing something wrong.
>
>>> [1] And of these, #2 is the greatest threat. Because if you accept
>>> usage-based pricing, that will kill off your future "true HD is 10
>>> Mbps encoding" services faster than you can say "$.20/GB becomes $1/hr
>>> for transport. Have you considered US Mail?"

{Lauren claims UPS is an inappropriate example when comparing data-delivery business models}

>
> It is EXACTLY appropriate, because USPS is the competition for ANY
> "data overnight" video service.
>
> The USPS can get you an incredible amount of data overnight. Let's
> see: a Blu-ray disc is 50 GB. That's 4.5 Mbps, at a cost of roughly
> $.02/GB.
>
> If you want "data now", even at just $.20/GB, that is $1/hr for the
> movie, period, with transcoding. Or $10 for a full BluRay disk. Have
> a nice day.
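
(The numbers behind that, as a sketch; assuming "overnight" means 24 hours and roughly $1 in postage:)

    # Sketch: effective bandwidth and cost/GB of mailing a 50 GB disc.
    disc_gb, hours, postage = 50, 24, 1.00   # postage is an assumption

    mbps = disc_gb * 8 * 1000 / (hours * 3600)
    print(f"{mbps:.1f} Mbps effective, ${postage/disc_gb:.2f}/GB by mail")

    # Versus metered transfer at $0.20/GB for a 10 Mbps "true HD" stream:
    gb_per_hour = 10e6 / 8 * 3600 / 1e9      # 4.5 GB/hr
    print(f"${0.20 * gb_per_hour:.2f}/hr streamed")  # ~$0.90/hr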
>
> With charges more likely to be on the order of $1/GB, what do you
> think that would do?
>
>
> Remember Tanenbaum's famous maxim: "Never underestimate the bandwidth
> of a station wagon full of mag-tape".
>
>
> This is also why I don't believe in P2P for video content delivery:
> For "data now", it adds bandwidth.
>
> For "data overnight", where it would be friendlier than TCP, then it
> competes with US Mail and US mail's incredibly low cost/bit.
>
>>> [2] And for all the talk about the ISP being an evil monopoly, it's
>>> really an evil DUopoly, where the ISP service is often used as a
>>> competitive lever across all services. If there was a huge profit to
>>> still be made in being an ISP, where are the metro-area WISPs? The
>>> third party DSL-based ISPs? They died in the marketplace due to
>>> competitive pressures: there is not much profit margin in being an
>>> ISP.

{Lauren states that the third party ISPs largely died because of the regulatory environment.}

> Start your own. Quit whining and start your own ISP.
>
> The legislative environment has almost no effect on point-to-point
> WISPs. All you need is a tall antenna someplace. A minor headache
> with the local zoning board.
>
> And you can still get DSL lines from the incumbent telco (I do for my
> home service) with layer 3 provided by a third party. That system
> still seems to be working just fine.
>
> I suspect that, for all the complaints about regulation keeping that
> duopoly intact, the bigger problem is just that the cable companies
> and telcos view ISP service as something of a loss-leader: voice and
> video (either through a new line or satellite if you are a telco) are
> far more profitable, but IP service can get people to switch.
>

Monday, May 19, 2008

HTTP is Hazardous to Your Health

The following is not original, but simply a summary of widely known information.

It has been known for decades that plaintext protocols, such as the HyperText Transfer Protocol (HTTP), are vulnerable to man-in-the-middle attacks. Yet we are now at the point where there are simply too many ways to man-in-the-middle the web browser, and too much lovely mayhem that can be constructed, for this to be tolerable. We MUST demand that websites shift to HTTPS for everything, and ship web browsers that disable HTTP altogether.

How to Man in the Middle: There are simply far too many ways to act as a man in the middle against a web browser. These include masquerading as any access point requested by a system (Karma), ARP cache poisoning (arpiframe), DNS cache poisoning, WiFi packet injection (airpwn), or simply an ISP attempting to monetize the network (advertisement injection, Phorm). If an adversary can eavesdrop on our HTTP sessions, they can act as a man-in-the-middle.

The problem arises from all the malicious fun that can be done by a man-in-the-middle. This can include:

  • Cookie Pillaging: By having the web browser transparently redirect through a long list of sites, an attacker can make the browser transmit EVERY non-secure (not SSL-only) cookie to the eavesdropper. Which means the eavesdropper can read! your! gmail!! and other such lovely mayhem, because many sites which allow SSL access don't actually set their cookies to mandate SSL access. From the viewpoint of an active attacker, SSL then does no good at protecting the site! (A sketch of how to check a site for this follows the list.)

  • Autocomplete Pillaging: Instead of just redirecting through a long list of sites, include hidden forms and javascript to capture all the autocomplete information present in the browser. A technique developed by H.D. Moore.

  • SMB Reflection: IE will happily open an SMB share when given the proper URL, which can be the attacker's share on the local network. The attacker can use this for the SMB reflection attack (at least on older systems), allowing the attacker on many systems to read and write the user's directory if file sharing is enabled, or to relay authorization credentials to a third-party file server. It's unclear how well this still works, but it's at least worth trying.

  • Worms: Take the IE 0-day exploit-du-jour and make a worm that uses packet injection/AP spoofing to spread to all other systems on the local wireless network. For extra credit, release such a worm at JFK airport and include a phone-home visit to the CDC website, giving the CDC a nice model for how an influenza-of-doom would spread. (Heathrow may be slightly better, but it is far cheaper to have your worm fly to Heathrow in your place. Also, the spread rates in the Usenix paper are probably conservative, because they don't model effects like an infected notebook carrier doing work in a taxi.)

  • Drive traffic to your blog: Gotta have a proof of concept to get people's attention! Note that it only took a couple of hours to hook up a fragile but nonetheless working demo. The attack would have been much more effective if I had actually played games with wireless transmission power.
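
(As promised above, a minimal sketch of checking a site for the cookie problem: it fetches a page over HTTPS and reports which cookies lack the Secure attribute, i.e. which ones a browser would also send over plain HTTP. "example.com" is a placeholder host, and the substring check is deliberately crude:)

    # Sketch: flag Set-Cookie headers missing the Secure attribute.
    import http.client

    conn = http.client.HTTPSConnection("example.com")  # placeholder host
    conn.request("GET", "/")
    resp = conn.getresponse()

    for name, value in resp.getheaders():
        if name.lower() == "set-cookie":
            ok = "secure" in value.lower()   # crude substring check
            print(("Secure   " if ok else "LEAKABLE ") + value.split(";")[0])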


So what is to be done? Simple: NO MORE HTTP! Everything, and I mean EVERYTHING, should go through HTTPS/SSL. The security community managed to kill off Telnet. We need to do the same to HTTP (oh, and non-secure DNS, too).

Tuesday, March 25, 2008

Japanese are going to do "graph takedown"...

George Ou reports that Japanese ISPs are going to start doing something similar to what I noted in January, albeit instead of just attacking the graphs of communication, simply warning and then disconnecting users.

(typo fixed, grr)

Sunday, January 27, 2008

A security thought: AT&T Copyright Fighting

The following is just my own opinion and speculation, in answer to a hypothetical question: if I were AT&T, why and how would I implement the AT&T plan to enforce copyright on user traffic? (Note: this post is an extension of my Slashdot comment on that thread, and basically describes a "DMCA takedown at the network layer" style of response.)

I also believe this would be a significant problem if implemented. I'm a believer that general network neutrality is a mostly good thing. But when a company seriously proposes filtering, I believe we should attempt to determine what shape such filtering would take, and how it could maximize the stated objectives while minimizing collateral damage. This also gives those opposed to filtering a leg up on attempting to counter it.

To begin with, AT&T probably has a huge incentive to block pirated traffic. Time-Warner cable supposedly has 50% of the bandwidth used by 5% of the users. Who wants to bet that of this bandwidth, it is almost all pirated material and/or pornography? As an ISP, wouldn't you want to remove 1/3rd of your traffic? Especially if its customers that can't really complain about it?

The strength of piracy on the Internet is the ease of getting the pirated material, and the ease of distribution. Thus pirated material must be easy to find if it is to be a substantial portion of traffic and to have a significant economic impact.

So all the MPAA has to do is find the easy-to-find content, and do something about it. Currently, they've tried playing Whack-A-Mole with the torrent tracking servers, but this has been a losing game, as these servers have already fled to "countries of convenience", where they are difficult for the MPAA to sue off the network.

But rather than playing Whack-A-Mole with torrent tracker servers (which are largely offshore), with ISP cooperation from AT&T it becomes possible to play Whack-A-Mole with the torrents themselves. Such a system would benefit both the content owners and the ISPs.

All that is necessary is for the MPAA or their contractor to automatically spider for torrents. When it finds torrents, it connects to each torrent with manipulated clients. The client would first transfer enough content to verify copyright, and then attempt to map the participants in the torrent.

Now the MPAA has a "map" of the participants: a graph of all clients of a particular stream. Simply send this as an automated message to the ISP saying "this current graph is bad, block it". All the ISP has to do is put in a set of short-lived (10 minute) router ACLs which block all pairs that cross its network, killing all traffic for that torrent on the ISP's network. By continuing to spider the torrent, the MPAA can find new users as they are added and dropped, updating the map for the ISP in near-real-time.
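
(A sketch of what that automation might look like; everything here is hypothetical: get_swarm_peers() stands in for the MPAA's spider, and the "ACL" is just an expiring set of blocked host pairs:)

    # Hypothetical sketch of the graph-takedown pipeline described above.
    import itertools, time

    ACL_TTL = 600  # the short-lived (10 minute) rules described above

    def get_swarm_peers(torrent):
        # Placeholder for the MPAA spider that maps a torrent's swarm.
        return ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

    def build_acls(torrent, on_our_network):
        """Block every pair of swarm members that crosses the ISP."""
        peers = [p for p in get_swarm_peers(torrent) if p in on_our_network]
        expires = time.time() + ACL_TTL
        return {pair: expires for pair in itertools.permutations(peers, 2)}

    # Re-spider periodically and re-issue the ACLs before they expire.
    print(build_acls("some.torrent", {"10.0.0.1", "10.0.0.3"}))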

This would be a powerful system, and the likely solution AT&T will use if they carry through on their plans to enforce copyright:

  • This requires no wiretapping. Instead, it relies solely on public information: the torrent servers and being able to contact participants in order to map those fetching an individual file. BitTorrent encryption would have no impact on this scheme.
  • It can be completely automated, both for the MPAA and for AT&T.
  • It also minimizes collateral damage, since only participants in an individual torrent are prevented from communicating with each other when a torrent is blocked. If the MPAA actually spiders the torrent (rather than trusting information from the trackers), there should be no false edges in the graph. The only collateral damage is if a pair of systems is also performing legitimate communication at the same time they are participating in the torrent, something the ISP probably considers acceptable.
  • Any real collateral damage (incorrectly blocking content) AT&T can say is the fault of the MPAA.
  • It should be robust in the arms race: if the pirated material is open and distributed in a P2P manner, the MPAA's spiders should be able to track it. (Remember, even if CAPTCHAs are used to protect trackers or other aspects of the system, solving a CAPTCHA only costs $.01.)
  • And it's inexpensive. All AT&T has to do is deploy a small program to set and release a bunch of router ACLs, and that's it. AT&T can even keep the number of ACLs reasonably low, because they expire quickly and only need to be partially effective. No new hardware is required, and everything can be fully automated. All the real costs (spidering the torrents, identifying the content, affirming that it is actually a copyright violation, and constructing the graphs) are placed on the MPAA or their contractor.

Likewise, (IANAL) AT&T can possibly avoid most liability. They aren't doing any wiretapping, nor even making a decision about which traffic to block.

Finally, AT&T has a huge number of reasons to deploy such a system:

  • It keeps the content providers happy for when they are negotiating their compete-with-iTunes/Netflix video on demand and cable TV services.
  • It keeps the content providers from pushing through very draconian legislation, or at least draconian legislation you aren't happy with. (It can F-up your competitors, but that's just a bonus.)
  • And it drops their bandwidth bills by 30-50%, by eliminating a large amount of deliberately non-cacheable (both politically and because of BitTorrent encryption) traffic.

This won't stop closed-world pirates, those with significant barriers to entry and real secrecy, but those are far less significant: closed-world piracy is much lower bandwidth for the ISP, because it's far more difficult for those pirates to get the content. But it should be able to shut down BitTorrent for open-world piracy, without blocking legitimate BitTorrent use. It also won't stop child porn, although AT&T would probably claim that it does.

This was speculation. I have no evidence that this is what AT&T is planning. But given the huge expense (deep packet inspection), legal implications (wiretapping, false positives), and limitations (cryptography), I find it doubtful that AT&T really wants to detect copyrighted material directly. Performing deep packet inspection at line rate, especially matching against a large database of copyrighted material, is hugely expensive, and would fail in the presence of encrypted torrents and SSL-equipped torrent search servers.

Thus I'm almost certain that if AT&T truly wishes to carry forward with its copyright-enforcement plans, the system will be similar to the one I've described.

Detecting this system, if deployed, would be possible: participate in torrents (to trigger the block) and then check how that affects connectivity. If AT&T blocks the torrent but other TCP connectivity in those port ranges remains between two hosts, they aren't using only the speculated system; instead, they would have to be directly inspecting the traffic between the hosts to determine that an individual flow is participating, information which can only be obtained by directly monitoring communication between the two hosts.
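
(A sketch of that test; the peer host is a placeholder, and 6881 is just the classic BitTorrent port:)

    # Sketch: after two cooperating hosts join a torrent (triggering any
    # graph-based block), test plain TCP reachability between them.
    import socket

    def tcp_reachable(host, port, timeout=5.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Torrent stalled but this succeeds => per-flow inspection, not a
    # host-pair block like the system speculated above.
    print(tcp_reachable("peer.example.net", 6881))  # placeholder peer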


EDIT/addition: Richard Bennett has also discussed this technique at the Network Neutrality forum on 1/26/2008 (slides at Richard Bennett's web site, on how easy it is to find pirated materials and participating peers in order to tell the ISP what to block).

He also brings up the important question: "Is there any reason that such an automated system should not be used, or does Net Neutrality now connote a license to steal?" This is a tough argument to counter.

The ongoing discussion can be viewed at The NNSquad Mailing List archive.

EDIT/addition #2: Delayed release of keys (distribute the data, then release the keys, as Richard Clayton pointed out) would slow down any spider, but it also slows down users getting the content. The spider could still block all users after the key is released, and since people couldn't tell what they were downloading BEFORE the key was released, the MPAA could produce a large number of poisoned (false-data) torrents during this window.