
Posts tagged ‘CRTC’

Usage Based Billing – The Elephant in the Muddy Waters in the Middle Ground

Oh Wikimedia Commons, is there any topic you don't have the perfect image for?

I got an e-mail yesterday from a friend (on his way to a Usage Based Billing consultation) curious as to what my thoughts were on the whole thing. I haven’t written anything about it at length (other than the odd tweet), mostly because my position generally falls outside both of the established “camps”, and when I have talked about it I’ve found the discussion quickly deteriorated into me being asked to defend “the other side” and tenets I didn’t actually agree with.

Actually, my biggest problem with the “debate” so far is that the two sides usually distill down to “UBB is necessary” vs. “UBB is bad”, and given that those aren’t actually mutually exclusive positions, it’s frustrating to try to even define what the core issues are.

But in yesterday’s exchange, I realized that I do have some thoughts that offer a different viewpoint from most of what I’ve been seeing written – so if nothing else it might provide a different angle for people to contextualize their own positions, whatever they may be.

CRTC Issues Net Neutrality Decision

Submitted without comment.

The CRTC has issued their net neutrality decision. Personally, I’m a little disappointed in the ruling. Michael Geist points out a couple of areas of the decision to feel good about, but I tend to agree with this quote given to the CBC by Public Interest Advocacy Centre legal counsel John Lawford:

“It approves all of the throttling practices that ISPs currently engage in. It requires consumers to prove something funny is going on and consumers don’t have the means to figure out what ISPs are doing and they don’t have the resources to bring that to the commission’s attention,”


CRTC Lobbies for Expanded Consumer Copyright Protection

Interesting story, c/o Michael Geist: the CRTC’s filing to the copyright consultation asks for pretty much the same private copying concessions I think are a good idea for producers, primarily:

  • time shifting
  • format shifting
  • personal backup

Interesting read.

Mark Goldberg Raises Questions about the Couchathon – I Attempt to Answer Them

Mark Goldberg has asked some fair questions of me over at his blog regarding last year’s couchathon and the throttling difficulties we had. As I posted yesterday, I am convinced those issues were due to misapplied BitTorrent throttling.

I’ve responded directly on Mark’s blog – but he moderates his comments so I’m not sure when they’ll show up there. In the meantime I thought it would be worthwhile to cross-post my response here – especially as I see some traffic coming through from his site.

Incidentally, Mark’s post flagged that I never updated the Couchathon website with the final totals from the event. With late donations, and some very kind post-event sponsor contributions, we were able to raise over $10,000 for the Sick Kids Foundation and Child’s Play – not the $5,500 posted on the couchathon site. I must go amend that at once!

My response to Mark below the cut.

Five Arguments in Favour of Throttling – And Why They’re Wrong

Tell me folks are you sufferin' from the congestion?

Yes I know this is turning quickly into a net-neutrality blog – but since net-neutrality traffic is up at the moment, I figured I should strike while the iron is hot.

While I thought the CRTC presentation was quite strong, you’re always left with regrets about the questions that didn’t come up. There were a couple of points I was really hoping would be raised, since they are popular talking points of the major ISPs and it would have been nice to offer a counter-point. So while they’re still fresh, here are five ISP arguments in favour of traffic throttling that I just don’t think hold much water:

1. Increasing capacity is prohibitively expensive.

Setting aside my prior post on why building additional capacity is likely far more fiscally responsible than throttling BitTorrent, total smarty-pants Jason Roks made a compelling calculation on Tuesday at the CRTC hearing that a certain national network could likely more than double its capacity at the most likely congestion spots for less than $2 per user per month. Of course, it’s hard to offer more concrete suggestions when we have no idea what the profit margins of the major ISP corporate units are (or what portion of their network is devoted to functions other than the Internet – like television, phone, and video-on-demand).

Net Neutrality for Content Creators – Am I a “Media Personality” Yet?

photo by dalboz17

I now retreat safely to my hole!

So I’m back from my two-day sojourn into the heart of darkness of government.

The CRTC net neutrality public hearings have a couple of (big) days to go yet. I think the joint CFTPA / IFTA team did a tremendous job in preparation, and all we can do now is hope that we at least planted the seeds of our message so that the ISPs don’t get an easy ride when they’re up on Friday.

I had some very positive discussions with media (and other gallery observers) following our presentation – which at least made me feel that our main points got across and we got a lot of nice write-ups today:

Thanks to everyone who sent me links to articles, or kind words following the presentation. Special hat-tip to Erin – for the lengthy consultation on what tie I should bring to Ottawa.

I didn’t speak up when they came for Napster…

Graphic Concept 3

Very interesting day in Ottawa yesterday preparing for the CFTPA presentation to the CRTC today. Lots of involved conversation with extremely intelligent individuals… I’m coming to the startling realization that this “government” of ours actually entails a lot of hard work. Who knew?

Having probably read, spoken, and thought more about the myriad aspects of this net neutrality hearing in the last week than I ever have in my life (and likely more than is healthy), I thought it would be an interesting time to do a little follow-up to the series of posts I’ve written following this issue, primarily on why the average end-user, with little interest in public policy, should care.

The problem that the Net Neutrality “movement” has is somewhat similar to the issue faced by the ubiquitous WTO protesters – everyone’s in it for a different reason and for completely different politics. For every libertarian who supports Net Neutrality to guarantee their freedom of net access, another decries any non-market intervention in industry. For every network engineer desperate to keep blanket traffic shaping off their protocols, there’s another who could argue legislation would limit their ability to improve end-user service quality.

I think my viewpoint boils down to this: the majority of these hearings globally (and the Canadian proceedings specifically) have centred around traffic management of BitTorrent. As a content producer I have a mixed relationship with BitTorrent. I have used it as a legal, valuable distribution tool – and I have seen it used to pirate works that cost me money (that’s not an abstract “piracy costs the industry billions of dollars”, which I still believe is mostly distracting nonsense; it’s a concrete comment on a torrent tracker that was essentially “thanks, I was just about to go buy this on-line”). But BitTorrent is nothing if not a giant red herring. Gopher, Usenet, zero-day websites, Kazaa, Napster, LimeWire, WinNY, Tor… all are, essentially, placeholders for “any technology”.

BitTorrent is only particularly interesting in this instance because it has two distinct characteristics:

  • It has certain “P2P” tendencies that make it difficult to manage on a network
  • It is popular

Everything else (for the purpose of Net Neutrality) is distracting chaff.

Well, guess what? Pretty much any technology that gets introduced from this point forward will have “P2P tendencies that (will) make it difficult to manage on a network”. World of Warcraft has P2P tendencies now. New VoIP applications have P2P tendencies. Flash (one of the most widely deployed technologies in the world) is starting to adopt P2P tendencies. So really the only thing that makes BitTorrent unique at this point in time is that it’s popular. And is that the precedent we are willing to set? When a technology is widely adopted at a level not conceived of in the original network design, the optimum management technique is to strangle it? I heard a great line today (and I haven’t asked for permission so I won’t attribute it): if we had judged YouTube’s potential on what it was in 2005 (crotch kicks and cat videos) it never would have become such a platform for independent content and political discourse (and, of course, high-def crotch kicks, and cats playing piano).

Maybe I, personally, wouldn’t be entirely heartbroken if BitTorrent were throttled out of usefulness… but what about when the next “popular” but “difficult” application is YouTube, or iTunes, or Skype, or my independent video distribution service? How technologies are used changes. Which technologies we use changes. If how we respond to those technologies is to be consistent, we need to make sure our responses consistently foster a future we feel is worth working for – not kill that goose before it lays any egg, let alone a golden one.

I can’t see a future of exciting new development opportunities fostered on a network where ISPs with their own content interests are allowed to make content judgements of any stripe. That’s not a slight on their character, nor a suggestion of impropriety; rather, it would be improper if they didn’t use that leverage to prioritize their own vision of the future. That’s how the future is built – battling self-interests. But I do think (or hope) that there are more people self-interested in a future with an even playing field that they can build on.

When people ask me why I get so revved up about technology, I generally talk about how I am now able to do things that I couldn’t have imagined when I first logged on to the “Internet” fifteen years ago. Not only can I do things that would have seemed like magic – I can tell different stories, to different people, in ways that would have, quite literally, seemed like science fiction. I would like to live in a world where the next fifteen years will be as vibrant, creative, and revolutionary to how we – as humanity – tell stories to each other.

Net Neutrality – The “Build Out” Argument

The Internet: a series of tubes

[Update – Excellent executive summary via a friend I was just talking to on the phone, who is not terribly interested in (or versed in) technology: “I get it, it makes more sense to just throw more tubes on the pile than paying engineers to constantly crawl through each one trying to figure out what’s in there.”]

I’ll be going off to Ottawa at the end of next week to offer the CFTPA whatever help I can for their “Net Neutrality” presentation to the CRTC on the 8th (incidentally it’s nice to see that the CFTPA’s position on throttling and neutrality is actually getting some appreciative notice from sectors that, incorrectly, seem to automatically assume that content producers are “the enemy”).

One of the major arguments of the CFTPA’s initial filing to the CRTC is that, if solving network congestion is truly the primary concern of ISPs, increasing network capacity is the only way to do so without stifling consumer choice and competition, or tying an anchor to the creative sector. As I’ve said many times before, the moment ISPs get the green light to *evaluate* content (instead of just transporting it), you make them the sole gatekeepers of how (and what) content will be transported to their end-users. Even if they didn’t misuse that power (and given that both Rogers and Bell have significant digital content-delivery interests, I’m not sure how they could, in good faith to their shareholders, not push the envelope as far as possible), content creators, distributors, and the public would never again know where they stand, and the viability of an entire future of independent content distribution would be lost (or at the very least imperilled).

Aside from that gigantic point, I’m becoming increasingly aware of an equally compelling argument that over-provisioning (increasing network capacity beyond immediate demand) is the more cost-effective solution to network capacity issues as well. David Isenberg has written a very nice post on the “cost” of Net Neutrality which does all of the heavy lifting for this line of thought – I’ll just update it with a couple of numbers for an example.

If we take the Sandvine Internet Traffic Trends Report from October at face value (and I’d point out that as a manufacturer of “traffic optimization” technology they have an extremely large dog in the hunt), up to 22% of current global Internet traffic is due to P2P applications. (I’m ignoring their claim about “upstream” traffic, as the differentiation is a sticky wicket for a future day, especially when network traffic is so asymmetric. Given that upstream for end users (who are where Sandvine’s numbers come from) is usually ~1/5 to ~1/20 that of downstream, a weighted *total* composition of P2P traffic would still be, at maximum, ~20-25%.)
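
To make that weighting explicit, here is a minimal sketch of the arithmetic. The upstream-to-downstream ratios come from the paragraph above, but the upstream P2P share is a made-up illustrative number (I’m deliberately not reproducing Sandvine’s upstream claim here):

```python
# Back-of-envelope: the "weighted total" P2P share when upstream and
# downstream traffic are counted together. The upstream P2P share below is
# an illustrative assumption, not Sandvine's published figure.

downstream = 1.0                   # normalize downstream volume to 1
upstream_ratios = [1 / 5, 1 / 20]  # upstream volume relative to downstream

p2p_share_down = 0.22              # the "up to 22%" figure, applied to downstream
p2p_share_up = 0.40                # hypothetical (higher) P2P share of upstream

for up in upstream_ratios:
    total = downstream + up
    weighted = (downstream * p2p_share_down + up * p2p_share_up) / total
    print(f"upstream = {up:.2f}x downstream -> weighted P2P share ~ {weighted:.0%}")
```

With those assumptions, the weighted share lands in the low-to-mid twenties either way, consistent with the ~20-25% ceiling above.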

So let’s combine the Sandvine report with Cisco’s 2008-2013 Networking Forecast, which projects that global IP traffic will quintuple in the next five years. This gives us an interesting projection.

Presuming that the ISPs are truly concerned and that their networks are at capacity, with P2P traffic threatening to “tip the balance” as it were, QoS/throttling/deep packet inspection actually would have no impact at all on the eventual outcome. Even if QoS technology could reduce the impact of P2P on the network to ZERO, you would still have at least 300-400% of current demand in the next five years (or an amount equal to 12-16x the entire current amount of P2P traffic). So increasing network capacity is inevitable, regardless.
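
For anyone who wants that arithmetic spelled out, here is a rough sketch using the two figures cited above (Sandvine’s ~22% P2P share and Cisco’s ~5x growth forecast). Treat it as a back-of-envelope estimate; with the 22% figure it actually lands at the top of the 300-400% range (the 12-16x comparison above corresponds to a slightly higher assumed P2P share):

```python
# Back-of-envelope: even if throttling removed P2P entirely, the growth of
# everything else forces a large build-out. Figures are the approximations
# cited in the post, not precise forecasts.

current_total = 1.0   # normalize today's total traffic to 1
p2p_share = 0.22      # Sandvine: up to ~22% of traffic is P2P
growth_factor = 5     # Cisco: global IP traffic roughly quintuples in five years

non_p2p_today = current_total * (1 - p2p_share)

# Best case for throttling: P2P is held at zero while everything else grows.
future_demand = non_p2p_today * growth_factor

print(f"Demand in five years with P2P at zero: ~{future_demand:.1f}x today's total traffic")
print(f"That growth alone equals ~{future_demand / p2p_share:.0f}x today's entire P2P volume")
```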

Now go back to David Isenberg’s post and take into account his very clear arguments on why increasing capacity is actually cheaper than QoS approaches (the broad strokes: the cost of engineer time to implement the latter (as well as the inevitable error, adjustment, monitoring, and upgrades) is constant, while additional capacity costs decrease with volume).

So even if you could make an argument that QoS is a more cost-effective approach than increasing capacity at this frozen minute in time, ISPs are faced with the reality of having to increase capacity by as much as a factor of four over the next five years just to maintain current service levels anyway. The question then becomes: is it more logical to mix the more-expensive QoS monitoring with the capacity that is going to be required regardless, or to just tack on some additional over-provisioning?

It’s outside of my area of expertise, but I’d be very curious for a projection of how QoS approach costs scale with throughput growth.
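
Purely as a thought experiment, here is one way such a projection could be sketched. Every number below is a made-up placeholder chosen only to illustrate the shape of Isenberg’s argument (per-unit capacity costs falling with volume; QoS costs tracking traffic because of ongoing engineering, monitoring, and upgrades); none of it reflects real ISP economics:

```python
# Toy model: how capacity-only vs. QoS-heavy costs might scale as traffic grows.
# All coefficients are hypothetical placeholders, not measured ISP costs.

def capacity_cost(traffic):
    # Assumption: per-unit capacity cost falls as volume grows (economies of scale).
    unit_cost = 100 * traffic ** -0.3
    return traffic * unit_cost

def qos_cost(traffic):
    # Assumption: QoS/DPI carries a fixed engineering overhead plus a per-unit
    # cost (monitoring, tuning, upgrades) that does not fall much with volume.
    fixed_overhead = 50
    return fixed_overhead + traffic * 80

for growth in (1, 2, 3, 4, 5):   # 1x today's traffic through the forecast 5x
    print(f"{growth}x traffic: capacity-only ~{capacity_cost(growth):.0f}, "
          f"QoS-heavy ~{qos_cost(growth):.0f} (arbitrary cost units)")
```

Under those (entirely invented) assumptions the gap widens as traffic grows, which is the intuition behind the build-out argument; the real question is what the actual coefficients look like on a production network.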

So if the effect of P2P traffic on the reality of the short-term Internet is, at best, nominal compared to the broader issue of global traffic growth (and the Cisco report has some great projections about the volume of video content set to start using the ’net as a transport mechanism, which dwarfs the current impact of, say, BitTorrent), then what benefit does throttling give ISPs? Well, other than a very expensive “foot in the door” for when the next “threat to network capacity” comes along. Say, iTunes. Or Skype.