Net Neutrality – The “Build Out” Argument
[Update – Excellent executive summary via a friend I was just talking to on the phone who is not terribly interested/versed in technology: “I get it, it makes more sense to just throw more tubes on the pile than paying engineers to constantly crawl through each one trying to figure out what’s in there.”]
I’ll be heading to Ottawa at the end of next week to offer the CFTPA whatever help I can with their “Net Neutrality” presentation to the CRTC on the 8th (incidentally, it’s nice to see that the CFTPA’s position on throttling and neutrality is getting some appreciative notice from sectors that usually, and incorrectly, assume content producers are “the enemy”).
One of the major arguments in the CFTPA’s initial filing to the CRTC is that if solving network congestion is truly the ISPs’ primary concern, increasing network capacity is the only way to do so without stifling consumer choice and competition, and without tying an anchor to the creative sector. As I’ve said many times before – the moment ISPs get the green light to *evaluate* content (instead of just transporting it), they become the sole gatekeepers of how (and what) content is transported to their end-users. Even if they didn’t misuse that power (and given that both Rogers and Bell have significant digital content-delivery interests, I’m not sure how they could, in good faith to their shareholders, not push the envelope as far as possible), content creators, distributors, and the public would never again know where they stand, and the viability of an entire future of independent content distribution would be lost (or at the very least imperilled).
Aside from that gigantic point, I’m becoming increasingly aware of an equally compelling argument: that over-provisioning (increasing network capacity beyond immediate demand) is also the more cost-effective solution to network capacity issues. David Isenberg has written a very nice post on the “cost” of Net Neutrality that does all of the heavy lifting for this line of thought – I’ll just update it with a couple of numbers as an example.
If we take the Sandvine Internet Traffic Trends Report from October at face value (and I’d point out that, as a manufacturer of “traffic optimization” technology, they have an extremely large dog in this hunt), up to 22% of current global Internet traffic is due to P2P applications. (I’m ignoring their claim about “upstream” traffic, as that differentiation is a sticky wicket for a future day – especially when network traffic is so asymmetric. Given that upstream for end users (who are where Sandvine’s numbers come from) is usually ~1/5-1/20 of downstream, a weighted *total* composition of P2P traffic would still be, at maximum, ~20-25%.)
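For anyone who wants to poke at that weighting, here’s a rough back-of-envelope. The downstream share comes from the Sandvine figure above; the upstream P2P share and the asymmetry ratios are my own illustrative assumptions, not Sandvine’s exact numbers:

```python
# Back-of-envelope: weighted total P2P share (illustrative numbers, not Sandvine's exact figures).
p2p_down_share = 0.22   # Sandvine's figure for the share of downstream traffic that is P2P
p2p_up_share = 0.40     # assumed (higher) share of upstream traffic that is P2P

for up_ratio in (1 / 5, 1 / 20):       # upstream volume as a fraction of downstream
    total = 1 + up_ratio               # downstream normalized to 1 unit
    p2p_total = p2p_down_share + p2p_up_share * up_ratio
    print(f"upstream = {up_ratio:.2f}x downstream -> P2P = {p2p_total / total:.1%} of all traffic")
```

Even assuming P2P is a much bigger slice of upstream than downstream, the weighted total lands in the low-to-mid twenties – hence the ~20-25% ceiling.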
So let’s correlate the Sandvine report with Cisco’s 2008-2013 Networking Forecast, which projects that global IP traffic will quintuple over the next five years. This gives us an interesting picture.
Presuming that the ISPs are truly concerned and that their networks are at capacity, with P2P traffic threatening to “tip the balance” as it were, QoS/throttling/deep packet inspection would actually have no impact at all on the eventual outcome. Even if QoS technology could reduce the impact of P2P on the network to ZERO, you would still be facing at least 300-400% of current demand within the next five years (or an amount equal to 12-16x the entire current volume of P2P traffic). So increasing network capacity is inevitable, regardless.
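The arithmetic, for anyone who wants to check it, is just the ~20-25% P2P share from above run against Cisco’s “quintuple in five years” projection:

```python
# Cisco: global IP traffic quintuples over five years.
growth_factor = 5.0

for p2p_share in (0.20, 0.25):   # today's P2P share of total traffic (from above)
    # Demand in five years, in units of today's total traffic, if every byte
    # of P2P (today's and its projected growth) simply vanished:
    non_p2p_demand = growth_factor * (1 - p2p_share)
    print(f"P2P at {p2p_share:.0%} today -> non-P2P demand alone reaches "
          f"{non_p2p_demand:.2f}x current total traffic in five years")
```

Even with P2P conjured entirely out of existence, non-P2P demand alone climbs to roughly 3.75-4x today’s total traffic – comfortably above the “at least 300-400%” floor.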
Now go back to David Isenberg’s post and take into account his very clear argument for why increasing capacity is actually cheaper than QoS approaches: the broad strokes are that the cost of the engineering time needed to implement the latter (plus the inevitable errors, adjustments, monitoring, and upgrades) stays constant, while the cost of additional capacity decreases with volume.
So even if you could argue that QoS is a more cost-effective approach than increasing capacity at this frozen minute in time, ISPs are faced with the reality of having to add capacity equal to as much as four times today’s traffic over the next five years just to maintain current service levels anyway. The question then becomes: is it more logical to layer the more-expensive QoS monitoring on top of the capacity that is going to be required regardless, or to just tack on some additional over-provisioning?
It’s outside my area of expertise, but I’d be very curious to see a projection of how the costs of a QoS approach scale with throughput growth.
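Purely as a sketch of what I mean, here’s a toy model – every number in it is invented for illustration. It just assumes QoS/DPI costs scale roughly linearly with the traffic inspected, while the per-unit cost of raw capacity falls as volume grows (the dynamic Isenberg describes):

```python
# Toy cost-scaling model -- every figure here is invented purely for illustration.
# Assumption 1: QoS/DPI costs scale roughly linearly with the traffic inspected
#               (gear plus the ongoing engineering/monitoring overhead).
# Assumption 2: raw capacity enjoys a declining per-unit cost as volume grows,
#               modelled as a simple power-law "volume discount".

def qos_cost(traffic, unit_cost=1.0):
    """Cost of inspecting/shaping `traffic` units, linear in traffic."""
    return unit_cost * traffic

def capacity_cost(traffic, base_unit_cost=1.0, discount_exponent=0.8):
    """Cost of provisioning `traffic` units, with per-unit cost falling as volume grows."""
    return base_unit_cost * traffic ** discount_exponent

for growth in (1, 2, 3, 4, 5):   # traffic as a multiple of today's volume
    t = 100 * growth             # arbitrary traffic units
    print(f"{growth}x traffic: QoS-style cost ~{qos_cost(t):6.0f}, "
          f"over-provisioning cost ~{capacity_cost(t):6.0f}")
```

Under those (invented) assumptions the gap only widens as traffic grows, which is essentially Isenberg’s point – but the real exponents and unit costs are exactly what I’d love to see someone publish.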
So if the effect of P2P traffic on the short-term reality of the Internet is, at best, marginal relative to the broader issue of global traffic growth (and the Cisco report has some great projections about the volume of video content set to start using the ’net as a transport mechanism, projections that dwarf the current impact of, say, BitTorrent), then what benefit does throttling give ISPs? Well, other than a very expensive “foot in the door” for when the next “threat to network capacity” comes along. Say, iTunes. Or Skype.