Author Archive

Netnod’s Exemplary Response to the EU’s Public Consultation on “Traffic Management”

on Oct.18, 2012, under Uncategorized


Patrik Fältström and Netnod have provided a sterling example of how to correct the misframing of issues related to “managed services” or “specialized services.”  Read the submission itself for a perfect illustration of how drawing the distinction correctly leads the way to policy insight.

(from Netnod’s blog.)

Netnod has filed a response to the Public Consultation on specific aspects of transparency, traffic management and switching in an Open Internet, which DG Communications Networks, Content and Technology uses to gather information for the Commission’s planned recommendations that Commissioner Kroes announced on May 29, 2012.

In summary, Netnod believes further work is required to clarify the use of the term ‘Internet Access’. Netnod nevertheless urges caution, as it disagrees with some of the consultation’s conclusions: in Netnod’s view, some of the concepts described, relating to congestion control and Quality of Service, are not applicable in a packet-based network.

You can read the full statement here.

Contact person at Netnod: Patrik Fältström, (Head of Research and Development).


Susan Crawford: We Can’t All Be in Google’s Kansas

on Oct.18, 2012, under Uncategorized

(Original at Wired)

[. . .]

[I]ncumbent internet access providers such as Comcast and Time Warner (for wired access) and AT&T and Verizon (for complementary wireless access) are in “harvesting” mode. They’re raising average revenue per user through special pricing for planned “specialized services” and usage-based billing, which allows the incumbents to constrain demand. The ecosystem these companies have built is never under stress, because consumers do their best to avoid heavy charges for using more data than they’re supposed to. Where users have no expectation of abundance, there’s no need to build fiber on the wired side of the business or build small cells fed by fiber on the wireless side.

If the current internet access providers that dominate the American telecommunications landscape could get away with it, they’d sell nothing but specialized services and turn internet access into a dirt road.

But the key barrier to competition – the incumbents’ not-so-secret weapon – is the high up-front costs of building fiber networks. That’s why the new 1-gigabit-per-second network planned by Google for residences in Kansas City was cited as an example of a “positive recent development” in the FCC chairman’s speech. Google was welcomed with open arms by Kansas City because the company offered a wildly better product than anything the cable distributors can provide: gigabit symmetric fiber access. The company has the commercial strength to finance this build itself, and it has driven down costs in every part of its product to make its Kansas City experiment commercially viable.

While the Google Fiber plan provides a valuable model, other communities that want to ensure their residents get fiber to the home shouldn’t have to wait.

We need policies that lower the barriers to entry for competitors. Otherwise, we’ll be stuck with the second-best cable networks now in place around the country, with their cramped upload capacity, bundled nature, deep affection for usage-based billing, and successful political resistance to any form of oversight.

[. . .]


Meeting of Open Internet Advisory Board: October 9

on Oct.04, 2012, under Uncategorized

(from Berkman Center for Internet and Society and FCC)

At its October 9, 2012 meeting, the Committee will consider issues relating to the subject areas of its four working groups—Mobile Broadband, Economic Impacts of Open Internet Frameworks, Specialized Services, and Transparency—as well as other open Internet related issues.

Tuesday, October 9, 10am-12pm
Harvard Law School, Wasserstein Hall, Milstein West A Room
This event will be webcast live

By this Public Notice, the Federal Communications Commission (“Commission”) announces the date, time, and agenda of the next meeting of the Open Internet Advisory Committee (“Committee”).

The next meeting of the Committee will take place on October 9, 2012, from 10:00 A.M. to 12:00 P.M. in Milstein West A at the Wasserstein Hall/Caspersen Student Center, Harvard Law School, 1585 Massachusetts Avenue, Cambridge, MA 02138.

At its October 9, 2012 meeting, the Committee will consider issues relating to the subject areas of its four working groups—Mobile Broadband, Economic Impacts of Open Internet Frameworks, Specialized Services, and Transparency—as well as other open Internet related issues.  A limited amount of time will be available on the agenda for comments from the public.  Alternatively, members of the public may send written comments to Daniel Kirschner, Designated Federal Officer of the Committee, or Deborah Broderson, Deputy Designated Federal Officer, at the addresses provided below.

The meeting is open to the public and the site is fully accessible to people using wheelchairs or other mobility aids.  Other reasonable accommodations for people with disabilities are available upon request.  The request should include a detailed description of the accommodation needed and contact information.  Please provide as much advance notice as possible; last minute requests will be accepted, but may not be possible to fill.  To request an accommodation, send an email to [protected] or call the Consumer and Governmental Affairs Bureau at  202-418-0530  (voice),  202-418-0432  (TTY).

The meeting of the Committee will also be broadcast live with open captioning over the Internet at

For further information about the Committee, contact:  Daniel Kirschner, Designated Federal Officer, Office of General Counsel, Federal Communications Commission, Room 8-C830, 445 12th Street, S.W. Washington, DC 20554; phone:  202-418-1735 ; email: [protected]; or Deborah Broderson, Deputy Designated Federal Officer, Consumer and Governmental Affairs Bureau, Federal Communications Commission, Room 5-C736, 445 12th Street, S.W. Washington, DC 20554; phone:  202-418-0652 ; email: [protected].


FCC Announces Open Internet Advisory Committee Members

on May.29, 2012, under Uncategorized

(Original at Azi Ronen’s Broadband Traffic Management blog)

Sunday, May 27, 2012

While Comcast challenges the Net Neutrality rules (“We do not prioritize our video … we just provision separate, additional bandwidth for it” – here), OTT providers protest (Sony, Netflix), and the FCC chairman says that “Business model innovation is very important” (here), it seems that the US needs to re-examine its Open Internet laws (see “FCC’s Net Neutrality Rules Made Official; Start on Nov. 20” – here).

The FCC announced the “composition of the Open Internet Advisory Committee (OIAC). The OIAC’s members include representatives from community organizations, equipment manufacturers, content and application providers, venture capital firms, startups, Internet service providers, and Internet governance organizations, as well as professors in law, computer science, and economics.”

Jonathan Zittrain, Professor of Law at Harvard Law School and Harvard Kennedy School, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, and Co-Founder of the Berkman Center for Internet and Society, will serve as Chair of the OIAC.

Among the members are Neil Hunt, Chief Product Officer, Netflix; Kevin McElearney, Senior Vice President for Network Engineering, Comcast; Chip Sharp, Director, Technology Policy and Internet Governance, Cisco Systems; Marcus Weldon, Chief Technology Officer, Alcatel-Lucent; and Charles Kalmanek, Vice President of Research, AT&T.

[. . .]


Eben Moglen at F2C: Innovation Under Austerity

on May.23, 2012, under Uncategorized

Eben Moglen describes at Freedom 2 Connect exactly how innovation proceeds on the basis of the whole assemblage of components that enable us to “[let] the kids play and [get] out of the way” (including, of course, the general purpose Internet platform):



Freedom to Connect 2012

on May.21, 2012, under Uncategorized

Today and tomorrow, May 21-22, at AFI Silver Theatre in Silver Spring, Maryland.


About F2C: Freedom to Connect

F2C: Freedom to Connect is a conference devoted to preserving and celebrating the essential properties of the Internet. The Internet is a success today because it is stupid, abundant and simple. In other words, its neutrality, its openness to rapidly developing technologies and its layered architecture are the reasons it has succeeded where others (e.g., ISDN, Interactive TV) failed.

The Internet’s issues are under-represented in Washington DC policy circles. F2C: Freedom to Connect is designed to advocate for innovation, for creativity, for expression, for little-d democracy. The Freedom to Connect is about an Internet that supports human freedoms and personal security. These values, held by many of us whose consciousness has been shaped by the Internet, are not common on K Street or Capitol Hill or at the FCC.

F2C: Freedom to Connect is about having access to the Internet as infrastructure. Infrastructures belong to — and enrich — the whole society in which they exist. They gain value — in a wide variety of ways, some of which are difficult to anticipate — when more members of society have access to them. F2C: Freedom to Connect especially honors those who build communications infrastructure for the Internet in their own communities, often overcoming resistance from incumbent cable and telephone companies to do so.

The phrase Freedom to Connect is now official US foreign policy, thanks to Secretary of State Clinton’s Remarks on Internet Freedom in 2010. She said that Freedom to Connect is, “the idea that governments should not prevent people from connecting to the internet, to websites, or to each other. The freedom to connect is like the freedom of assembly, only in cyberspace.” Her speech presaged the Internet-fueled assemblies from Alexandria, Egypt to Zuccotti Park.

The Agenda is now quite stable.

Confirmed keynote speakers include Vint Cerf, Michael Copps, Susan Crawford, Cory Doctorow (via telecon), Benoît Felten, Lawrence Lessig, Barry C. Lynn, Rebecca MacKinnon, Eben Moglen, Mike Marcus and Aaron Swartz.

Panels include:

  • Big Enough to Succeed
  • BTOP, Gig-U and other big pipe experiments
  • Freedom & Connectivity from Alexandria, Egypt to Zuccotti Park
  • Internet Freedom is Local
  • The Fight for Community Broadband

F2C: Freedom to Connect Agenda

Monday 5/21

8:00 to 9:00 AM Registration, Continental Breakfast

9:00 to 10:30 AM
Vint Cerf keynote  (45 min)
Rebecca MacKinnon keynote (Ian Schuler, US State Dept., discussant) (45 min)

11:00 to 12:30
Big Enough to Succeed: small carriers at the leading edge — entrepreneurial (non-Municipal) carriers show a fourth way (after Telco, Cable and Muni) to the future of connectivity. (60 min)

Susan Crawford keynote (30 min)

12:30 to 1:30 Lunch

1:30 to 3:00

BTOP, Gig-U, and other big pipe experiments (60 min)

Mike Marcus keynote (30 min) Dewayne Hendricks (brief intro)

3:30 to 5:00
Benoît Felten, keynote (30 min)
Aaron Swartz, “How we stopped SOPA” keynote (30 min)
Michael Copps keynote (30 min)
– Jim Baller introduces Commissioner Copps

RECEPTION, location tbd.

Tuesday, 5/22

8:00 to 9:00 AM Registration, Continental Breakfast

9:00 to 10:30 AM

Cory Doctorow remote (skype) keynote (30 min)

Freedom & Connectivity from Alexandria, Egypt to Zuccotti Park (60 min)

11:00AM to 12:30

Eben Moglen keynote, Innovation under Austerity (60 min)
Doc Searls and others, Discussion of Moglen’s talk (30 min)

12:30 to 1:30 Lunch

1:30PM to 3:00

Barry C. Lynn, keynote, American Liberties in the New Age of Monopoly (30 min)

Internet Freedom is Local (30 min)

A Word from Our Sponsors – (30 min) – each sponsor of F2C has a stake in Internet Freedom

  • Helen Brunner, Media Democracy Fund
  • Rick Whitt, Google
  • John Wonderlich, Sunlight Foundation
  • Will Barkis, Mozilla Foundation
  • Elliot Noss, Ting

3:30 to 5:00

Larry Lessig keynote, “The War Against Community Broadband” (30 min)

Panel, the Fight for Community Broadband: (60 min)



Vint Cerf in Wired: We Knew What We Were Unleashing on the World

on Apr.23, 2012, under Uncategorized

(Original at Wired Magazine)


[. . .]

Wired: So how did you come to be the author of the TCP/IP protocol?

Vinton Cerf: Bob Kahn and I had worked together on the Arpanet project that was funded by ARPA; it was an attempt at doing a national-scale packet-switching experiment to see whether computers could usefully be interconnected through this packet-switching medium. In 1970, there was a single telephone company in the United States called AT&T, its technology was called circuit switching, and that was all any telecom engineer worried about.

We had a different idea, and I can’t claim any responsibility for having suggested the use of packet switching. That was really three other people working independently who suggested that idea simultaneously in the 1960s. By the time I got involved in all of this, I was a graduate student at UCLA, working with my colleague and very close friend Steve Crocker, who now is the chairman of ICANN, a position I held for about a year.

Part of our job was to figure out what the software should look like for computers connecting to each other through this Arpanet. It was very successful — there was a big public demonstration in October of 1972, which was organized by Kahn. After the October demo was done, Bob went to ARPA and I went to Stanford.

So in early 1973, Bob appeared in my lab at Stanford and said, ‘I have a problem.’ My first question was, ‘What’s the problem?’ He said, ‘We now have the Arpanet working, and we are now thinking: how do we use computers in command and control?’

If we wanted to use a computer to organize our resources, a smaller group might defeat a larger one because it is managing its resources better with the help of computers. The problem is that if you are serious about using computers, you better be able to put them in mobile vehicles, ships at sea, and aircraft, as well as at fixed installations.

At that point, the only experience we had was with fixed installations of the Arpanet. So he had already begun thinking about what he called open networking and believed you might optimize radio network differently than a satellite network for ships at sea, which might be different from what you do with dedicated telephone lines.

So we had multiple networks, in his formulation, all of them packet-switched, but with different characteristics. Some were larger, some went faster, some had packets that got lost, some didn’t. So the question is how can you make all the computers on each of those various networks think they are part of one common network — despite all these variations and diversity.

That was the internet problem.

In September 1973 I presented a paper to a group that I chaired called the International Network Working Group. We refined the paper and published it formally in May of 1974 as a description of how the internet would work.

Wired: Did you have any idea back then what the internet would develop into?

Cerf: People often ask, ‘How could you possibly have imagined what’s happening today?’ And of course, you know, we didn’t. But it’s also not honest to roll that answer off as saying we didn’t have any idea what we had done, or what the opportunity was.

You need to appreciate that by that time, mid-July ’73, we had two years of experience with e-mail. We had a substantial amount of experience with Doug Engelbart‘s system at SRI called The Online System. That system, for all practical purposes, was a one-computer world wide web. It had documents that pointed to each other using hyperlinks. Engelbart invented the mouse that pointed to things on the screen. […] So we had those experiences, plus remote access through the net to the time-sharing machines, which is the Telnet protocol. So we had all that experience as we were thinking our way through the internet design.

The big deal about the internet design was that you could have an arbitrarily large number of networks and they would all work together. And the theory we had was that if we just specified what the protocols would look like and what software you needed to write, anybody who wanted to build a piece of internet would do that and find somebody who would be willing to connect to them. Then the system would grow organically because it didn’t have any central control.

And that’s exactly what happened.

The network has grown mostly organically. The closest thing to central control is the Internet Corporation for Assigned Names and Numbers (ICANN), whose job is to allocate internet address space and oversee the domain name system, which was not invented until 1984.

So, we were in this early stage. We were struggling to make sure that the protocols were as robust as possible. We went through several implementations of them until finally we started implementing them on as many different operating systems as we could. And on January 1, 1983, we launched the internet.

That’s the date it is considered operational, and that’s nearly 30 years ago, which is pretty incredible.

[. . .]

Wired: So from the beginning, people, including yourself, had a vision of where the internet was going to go. Are you surprised, though, that at this point the IP protocol seems to beat almost anything it comes up against?

Cerf: I’m not surprised at all because we designed it to do that.

This was very conscious. Something we did right at the very beginning, when we were writing the specifications, we wanted to make this a future-proof protocol. And so the tactic that we used to achieve that was to say that the protocol did not know how — the packets of the internet protocol layer didn’t know how they were being carried. And they didn’t care whether it was a satellite link or mobile radio link or an optical fiber or something else.

We were very, very careful to isolate that protocol layer from any detailed knowledge of how it was being carried. Plainly, the software had to know how to inject it into a radio link, or inject it into an optical fiber, or inject it into a satellite connection. But the basic protocol didn’t know how that worked.

And the other thing that we did was to make sure that the network didn’t know what the packets had in them. We didn’t encrypt them to prevent it from knowing — we just didn’t make it have to know anything. It’s just a bag of bits as far as the net was concerned.

We were very successful in these two design features, because every time a new kind of communications technology came along, like frame relay or asynchronous transfer mode or passive optical networking or mobile radio‚ all of these different ways of communicating could carry internet packets.
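The two design features Cerf describes can be sketched in a few lines of toy code (an illustration only, not real IP; all names here are invented): a forwarding function that reads only the header and hands the payload, as opaque bytes, to whichever link layer happens to be underneath.

```python
# Toy sketch of layering: the "network" forwards a datagram without ever
# parsing its payload, and different link layers carry the same bits.

def forward(datagram: dict, link_send) -> None:
    """A toy router: reads only the header; the payload is a bag of bits."""
    assert isinstance(datagram["payload"], bytes)  # never interpreted
    link_send(datagram["dst"], datagram["payload"])

delivered = {}

def radio_link(dst, payload):      # stand-in for a mobile radio link
    delivered[("radio", dst)] = payload

def fiber_link(dst, payload):      # stand-in for an optical fiber
    delivered[("fiber", dst)] = payload

packet = {"dst": "host-b", "payload": b"\x17any bits at all\x00"}
forward(packet, radio_link)
forward(packet, fiber_link)

# The same datagram arrives unchanged over either medium.
assert delivered[("radio", "host-b")] == delivered[("fiber", "host-b")]
```

The point of the sketch is that `forward` compiles and runs without knowing anything about either link function beyond its call signature, which is the isolation Cerf describes.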

We would hear people saying, ‘The internet will be replaced by X.25,’ or ‘The internet will be replaced by frame relay,’ or ‘The internet will be replaced by ATM,’ or ‘The internet will be replaced by add-and-drop multiplexers.’

Of course, the answer is, ‘No, it won’t.’ It just runs on top of everything. And that was by design. I’m actually very proud of the fact that we thought of that and carefully designed that capability into the system.

Wired: Right. You mentioned TCP/IP not knowing what’s within the packets. Are you concerned with the growth of things like Deep Packet Inspection and telecoms interested in having more control over their networks?

Cerf: Yes, I am. I’ve been very noisy about that.

First of all, the DPI thing is easy to defeat. All you have to do is use end-to-end encryption. HTTPS is your friend in that case, or IPSEC is your friend. I don’t object to DPI when you’re trying to figure out what’s wrong with a network.
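The claim that end-to-end encryption defeats DPI can be illustrated with a toy sketch (a one-time pad stands in for TLS/IPsec; everything here is hypothetical): a middlebox-style keyword scan succeeds against plaintext on the wire but finds nothing in ciphertext.

```python
import secrets

def dpi_scan(wire_bytes: bytes, keyword: bytes) -> bool:
    """A middlebox-style scan: look for a keyword in the bytes on the wire."""
    return keyword in wire_bytes

message = b"GET /private/report HTTP/1.1"

# Without end-to-end encryption, the scan sees everything.
assert dpi_scan(message, b"/private/")

# With end-to-end encryption (a one-time pad as a stand-in for TLS),
# the wire carries only ciphertext and the same scan finds nothing.
key = secrets.token_bytes(len(message))
ciphertext = bytes(m ^ k for m, k in zip(message, key))
assert not dpi_scan(ciphertext, b"/private/")  # with overwhelming probability
```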

I am worried about two things: one is the network neutrality issue. That’s a business issue. The issue has to do with the lack of competition in broadband access and therefore, the lack of discipline in the market to competition. There is no discipline in the American market right now because there isn’t enough facilities-based competition for broadband service.

And although the FCC has tried to introduce net neutrality rules to avoid abusive practices like favoring your own services over others, they have struggled because there has been more than one court case in which it was asserted the FCC didn’t have the authority to punish ISPs for abusing their control over the broadband channel. So, I think that’s a serious problem.

The other thing I worry about is the introduction of IPv6, because technically we have run out of internet addresses — even though the original design called for a 32-bit address, which would have allowed for about 4.3 billion terminations if it had been efficiently used.

And we are clearly over-subscribed at this point. But it was only last year that we ran out. So one thing that I am anticipating is that on June 6 this year, all of those who can are going to turn on IPv6 capability.
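For scale, the address arithmetic is a one-liner (a quick sanity check, nothing more): a 32-bit address space works out to roughly 4.3 billion addresses, while IPv6’s 128-bit space works out to roughly 3.4 × 10³⁸.

```python
# IPv4's 32-bit address space vs. IPv6's 128-bit space.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

assert ipv4_addresses == 4_294_967_296       # ~4.3 billion
print(f"IPv4: {ipv4_addresses:,}")           # prints: IPv4: 4,294,967,296
print(f"IPv6: {ipv6_addresses:.3e}")         # prints: IPv6: 3.403e+38
```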

[. . .]


Bob Frankston: From DIY to the Internet

on Mar.12, 2012, under Uncategorized

(Original at Bob’s blog)


This talk is aimed at an audience that wants “more Internet”. But what does that mean?

To many people it means more high-speed connections to services like the Web and YouTube, because they are the public face of today’s Internet.

The Internet is limited by the policy choices we make, not by the technology. We don’t see connected devices such as medical monitors because they don’t work well on today’s infrastructure: those who provide the infrastructure have no incentive to support such applications, even if, literally, our lives depend on them.

We don’t see these applications because they are at odds with the business model we call telecommunications. It’s a business that assumes the network transports valuable content called “information”. But, as we’ll see, today we exchange bits, and bits in themselves are not information in the sense that humans understand it. Far more important for a business, the number of bits doesn’t correspond to the value of the information. It’s as if there were no difference in value between a beautiful poem and pages of doggerel.

Bits are nothing more than letters of the alphabet and it’s hard to make a business based on the ability to control the supply of the letter “e”.

To understand how the Internet is different we have to step back to the days when those of us working with computers wanted to interconnect them. It was a Do-It-Yourself (DIY) effort. We were already using modems to repurpose the telephone network as a data transport.

To oversimplify history, when we had computers in the same room we simply connected a wire between them and then wrote software to send bits between the two. We could extend the range using radios. If we lost a message we could simply retransmit it.

Later we could extend the range by using modems that ran over telephone lines. It was all new and exciting and we were happy to simply be able to connect to a distant system even if we could do it at a far lower speed than our local networks and at a higher cost. That cost was typically borne by companies as overhead.

This approach works fine as long as we stay within the confines of that model. Innovations that require availability elsewhere are simply too difficult. The Internet has given us a sense of what is possible but before we can realize those possibilities we need to understand the motivations of those who own the paths that carry the bits and understand why they can’t extend their business model.

We can’t take a top down approach, as in expecting Congress and the FCC to make major policy changes. Fortunately, thanks to the very nature of the Internet we can still apply a DIY approach for local connectivity. This is the real Internet – today we can indeed reach across the globe but we have difficulty interconnecting with devices in the next apartment.

As we come to appreciate the value of peer connectivity we can extend the model beyond our homes and simply obviate the need for a telecommunications industry as it is presently constituted.

[Slide Commentary . . .]


Doc Searls: Edging Toward the Fully Licensed World

on Mar.01, 2012, under Uncategorized

(Original at Doc Searls’ Weblog)

February 29, 2012

[. . .]

Nothing you watch on your cable or satellite systems is yours. In most cases the gear isn’t yours either. It’s a subscription service you rent and pay for monthly. Companies in the cable and telephone business would very much like the Internet to work the same way. Everything becomes billable, regularly, continuously. All digital pipes turn into metered spigots for “content” and services on the telephony model, where you pay for easily billable data forms such as minutes and texts. (If AT&T or Verizon ran email you’d pay by the message, or agree to a “deal” for X number of emails per month.)

Free public wi-fi is getting crowded out by cellular companies looking to move some of the data carrying load over to their own billable wi-fi systems. Some operators are looking to bill the sources of content for bandwidth while others experiment with usage-based pricing, helping turn the Net into a multi-tier commercial system. (Never mind that “data hogs” mostly aren’t.)

[. . .]

What’s hard for [“BigCo walled gardeners such as Apple and Amazon”] to grok — and for us as well, as their users and customers — is that the free and open worlds created by generative systems such as PCs and the Internet have boundaries sufficiently wide to allow creation of what Umair Haque calls “thick value” in abundance. To Apple, Amazon, AT&T and Verizon, building private worlds for captive customers might look like thick value, but ultimately captive-customer husbandry closes more opportunities across the marketplace than it opens. Companies and governments do compete, but the market and civilization are games that support positive-sum outcomes for multiple players.

[. . .]


Dave Winer on Apple, Twitter and Tumblr: The Un-Internet

on Jan.01, 2012, under Uncategorized

(Original at Dave’s Scripting News Blog)

The Un-Internet

By Dave Winer on Saturday, December 31, 2011 at 11:00 AM.

[. . .]

This time around, Apple has been the leader in the push to control users. They say they’re protecting users, and to some extent that is true. I can download software onto my iPad feeling fairly sure that it’s not going to harm the computer. I wouldn’t mind what Apple was doing if that’s all they did, keep the nasty bits off my computer. But of course, that’s not all they do. Nor could it be all they do. Once they took the power to decide what software could be distributed on their platform, it was inevitable that speech would be restricted too. I think of the iPad platform as Disneyfied. You wouldn’t see anything there that you wouldn’t see in a Disney theme park or in a Pixar movie.

The sad thing is that Apple is providing a bad example for younger, smaller companies like Twitter and Tumblr, who apparently want to control the “user experience” of their platforms in much the same way as Apple does.

[. . .]

My first experience with the Internet came as a grad student in the late 70s, but it wasn’t called the Internet then. I loved it because of its simplicity and the lack of controls. There was no one to say you could or couldn’t ship something. No gatekeeper. In the world it was growing up alongside, the mainframe world, the barriers were huge. An individual person couldn’t own a computer. To get access you had to go to work for a corporation, or study at a university.

Every time around the loop, since then, the Internet has served as the antidote to the controls that the tech industry would place on users. Every time, the tech industry has a rationale, with some validity, that wide-open access would be a nightmare. But eventually we overcome their barriers, and another layer comes on. And the upstarts become the installed-base, and they make the same mistakes all over again.

[. . .]


DSL Reports: The “Bandwidth Hog” is a Myth

on Dec.05, 2011, under Uncategorized

(Original at DSL Reports)

. . . And Caps Don’t Really Address Truly Disruptive Users

by Karl Bode Wednesday 30-Nov-2011

You might recall that back in 2009, we mentioned a piece claiming that the “bandwidth hog,” a term used ceaselessly by industry executives to justify rate hikes, net neutrality infractions, and pretty much everything else — was a myth. The piece was penned by Yankee analyst Benoit Felten and Herman Wagter, who knows a little something about consumption, as he’s the man largely responsible for Amsterdam’s FTTH efforts. The problem wasn’t bandwidth hogs, argued Wagter; the problem was poorly designed networks built by people more interested in cutting corners than offering a quality product.

[. . .]

In a blog post, Felten notes that the pair took real user data for all customers connected to a single aggregation link and analyzed the network statistics on data consumption — in five minute time increments — over a whole day. What they found is that capping ISPs often don’t really understand customer usage patterns, and are confusing data consumption (how much data was downloaded over a whole period) and bandwidth usage (how much bandwidth capacity was used at any given point in time).
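The confusion the pair describe can be made concrete with a small sketch using made-up per-five-minute samples (all numbers here are hypothetical, not from the study): a steady background downloader consumes far more data over a day than a user with one large burst, yet contributes far less to peak-time load.

```python
# Hypothetical per-user traffic sampled in five-minute increments (Mbps),
# illustrating data consumption (total downloaded over a period) vs.
# bandwidth usage (capacity used at a given moment).

SAMPLES_PER_DAY = 24 * 12            # 288 five-minute intervals

steady = [2.0] * SAMPLES_PER_DAY     # e.g. continuous backup traffic
bursty = [0.0] * SAMPLES_PER_DAY
bursty[100] = 95.0                   # one five-minute burst

def total_gb(samples):
    """Consumption: Mbps * 300 s per sample, summed, in gigabytes."""
    return sum(mbps * 300 / 8 / 1000 for mbps in samples)

def peak_mbps(samples):
    """Bandwidth usage: the single busiest five-minute interval."""
    return max(samples)

# The steady user downloads far more data overall...
assert total_gb(steady) > total_gb(bursty)
# ...but the bursty user stresses the link far more at any one moment.
assert peak_mbps(bursty) > peak_mbps(steady)
```

A cap keyed to `total_gb` would hit the steady user; congestion is driven by `peak_mbps`, which is exactly the distinction the researchers say capping ISPs miss.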

[. . .]

To simplify, one of our readers puts the dreaded highway metaphor, often used by ISPs to justify caps, to work in the opposite direction:

  • 1% of vehicle drivers on the road travel a disproportionate number of miles compared to the average driver, but they are on the road all the time, and most of the time they are on the road there is no rush-hour congestion.
  • The heavy drivers are likely to be involved in rush-hour traffic jams, but represent only a small, not terribly relevant, fraction of the total drivers in the traffic jam.
  • Limiting the number of miles a driver can drive does nothing to widen the roads and little to keep people off the roads during traffic jams, and thus does not help with congestion.

The researchers themselves note that blunt caps simply may not work, and they punish those that aren’t really causing any network problems:

Assuming that disruptive users exist (which, as mentioned above, we could not prove), they would be amongst those who populate the top 1% of bandwidth users during peak periods. To test this theory, we crossed that population with users who are over cap (simulating AT&T’s established data caps) and found that only 78% of customers over cap are amongst the top 1%, which means that roughly one fifth of the customers being punished by the data cap policy cannot possibly be considered disruptive (even assuming that the remaining four fifths are).

Data caps, therefore, are a very crude and unfair tool when it comes to targeting potentially disruptive users. The correlation between real-time bandwidth usage and data downloaded over time is weak and the net cast by data caps captures users that cannot possibly be responsible for congestion.
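The cross-check the researchers describe is simple to reproduce in outline. The sketch below is a hypothetical simulation of my own, with an invented population and an invented 150 GB cap, not the study's data: flag users over the cap, rank everyone by peak-period rate, and ask what fraction of the capped users actually fall in the top 1%.

```python
# Hypothetical illustration of the cross-check (invented numbers, not the
# study's data): how many over-cap users are also top-1% peak users?
import random

random.seed(1)
users = []
for _ in range(10_000):
    monthly_gb = random.expovariate(1 / 60)      # skewed volume distribution
    peak_rate = random.expovariate(1 / 2)        # loosely related to volume
    # give heavy downloaders a mild, non-deterministic peak-rate boost
    peak_rate += 0.01 * monthly_gb * random.random()
    users.append((monthly_gb, peak_rate))

# Users over a (hypothetical) 150 GB monthly cap.
over_cap = {i for i, (gb, _) in enumerate(users) if gb > 150}

# Top 1% of users ranked by peak-period bandwidth usage.
ranked = sorted(range(len(users)), key=lambda i: users[i][1], reverse=True)
top_one_percent = set(ranked[: len(users) // 100])

overlap = len(over_cap & top_one_percent) / len(over_cap)
print(f"{overlap:.0%} of over-cap users are in the top 1% of peak usage")
```

Whatever the exact overlap for a given dataset, any shortfall from 100% is the population the researchers point to: users punished by the cap who cannot plausibly be blamed for peak-hour congestion.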

[. . .]


Comments Off on DSL Reports: The “Bandwidth Hog” is a Myth more...

Susan Crawford: The Communications Crisis in America

by on Nov.02, 2011, under Uncategorized

(Original at Harvard Law and Policy Review)

The cable system providers will have both the motive (maintaining the money flow) and the ability (physical control of the Internet Protocol pipe to the home) to ensure that competing pure Internet businesses dependent on high-capacity connections will not be a meaningful part of the media landscape unless they pay tribute to the cable operators.

[. . .]

Let’s assume that all communications across the cable-provided pipe are “just like” Internet transmissions, in that they take advantage of the efficiencies of the Internet Protocol. Let’s further assume that all of the “channels” conveyed via that pipe are digital and thus virtual—making the capacity of cable’s DOCSIS 3.0 pipe almost unlimited. Let’s further assume that cable systems have adopted a services overlay that puts IP services into a common provisioning and management system, complete with elaborate digital rights management control; in other words, the cable industry will be able to perfectly charge for each thing you do “online,” invisibly, just like the wireless carriers do. Let’s further assume that that services overlay will allow for the personal targeting of advertisements across that pipe based on (and inserted into) your use of voice, video, Internet access, social networking, gaming, and location-based services. Let’s assume, finally, that the device wars are lost and that only devices sold by the cable network provider are allowed to access all of this information and present it to consumers.

[. . .]

Avoiding disruption [of the cable Pay-TV model] depends on making over-the-top services of all kinds—not just entertainment, but any interactive engagement that depends on reliably real-time high-bandwidth communication, like videoconferencing, news, and certainly sports—less attractive to consumers. Unless, of course, those over-the-top services are willing to do a deal with the cable companies on their terms by giving them a piece of their money flow, in which case the companies have every interest in prioritizing them and calling them “specialized services,” which are not subject to any net neutrality rules. The cable distribution industry is interested in having more people sign up for its high-speed Internet access services, because that’s where future growth lies. The industry just wants to make sure that the services being accessed by consumers are in the right kind of commercial relationship with the cable distributors: providing a piece of equity, or paying for carriage. Given all of these assumptions and predictions, the existence of a single, powerful pipe to many homes in America raises a number of troubling policy questions. We will be discussing these problems for years.

The cable system providers will have both the motive (maintaining the money flow) and the ability (physical control of the Internet Protocol pipe to the home) to ensure that competing pure Internet businesses dependent on high-capacity connections will not be a meaningful part of the media landscape unless they pay tribute to the cable operators. Because the cable operators will be providing both pay-TV distribution and high-speed Internet access distribution, they are well positioned to prevent the outbreak of competition and new business models made possible by the higher-speed Internet.

[. . .]

The emergence of a de facto cable monopoly in high-speed wired Internet access in most of the country cannot stay a secret. At the least, affordability concerns will become salient at some point. Despite the best efforts of the National Cable & Telecommunications Association (NCTA) and the cable companies’ lobbyists, legislators may begin to care about telecommunications policy because the American people may begin to care.

What tools are available to confront the looming cable monopoly? At some point, the Telecommunications Act of 1996, which required basic “telecommunications” providers to be subject to regulation but has been effectively avoided through litigation and regulatory legerdemain, will need to be re-written. A mosh pit of stakeholders will do their worst.

[. . .]

It will take time, and hard work, but surely we are capable of taking on the overall question of data access without assuming that the current market structure is the right one for all of us.


The looming cable monopoly is prompting a crisis in American communications. As the big squeeze continues, the genuine economic and cultural problems created by this monopoly may become more obvious to all Americans. We could tell this story by comparing the market power of the major cable companies in this country to the worst days of the railroad and oil trusts of the early 20th century; we could do it by comparing our country’s policies on high-speed Internet access—policies pushed relentlessly forward by the incumbent network operators—to the plans of our developed-country brethren; we could do it by gathering anonymous anecdotes from people who have tried to do transactions with the cable companies and are now afraid of retribution from them. Finally, we could take a deep breath and examine our country’s approach to “culture”—once we had the courage to say the word—and the effect of these singular giant pipes on our shared future. However we decide to proceed, we should pay attention to these pipes.

Comments Off on Susan Crawford: The Communications Crisis in America more...

Bob Frankston: Thinking Outside the Pipe

by on Oct.18, 2011, under Uncategorized

(Original at Bob’s Blog)

Monday, October 17, 2011

We’ve unnecessarily restricted the benefits that we and our economy can enjoy from [the Internet’s] abundance because of the artificial limitations of the telecommunication industry’s limited palette of services.


A picky eater can be undernourished amidst abundance. The Internet has given us a taste of the abundance all around us. But we’ve unnecessarily restricted the benefits that we and our economy can enjoy from that abundance because of the artificial limitations of the telecommunication industry’s limited palette of services.

Connecting a mobile pacemaker to a physician’s office is simple using Internet protocols, but it becomes difficult when the telecommunications providers control the path and need to make sure they profit from each message. It’s similar to the problem of asking a railroad to serve a small town that doesn’t buy many tickets. Fortunately we have an alternative – roads serve the communities without having to be profitable because they benefit the community.

Cities provide roads everywhere because they don’t need every inch of pavement to be a profit center. When New York City’s private transit companies faltered, the city took them over rather than letting them fail.

The wires that run along our streets cost very little compared to roads, so why are we investing so much effort in preventing ourselves from communicating unless we pay a provider?

[. . .]

We need to free ourselves from the past and recognize that the Internet is based on a very different concept.

To understand this, we can compare Internet packets to the containers we use to ship goods across the oceans. Containers can be loaded on a ship without the shipowner knowing what is inside. They can take any path across the ocean – they aren’t restricted to fixed channels, and you can even use airplanes.

If you are shipping an entire factory you split up the components and place them in containers. When they get to the destination you reassemble them in order and if some get lost you ship replacements.

One might not be so casual about delays and replacements for expensive gear, but with Internet packets all of this happens within a thousandth of a second.
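Frankston's container analogy maps directly onto how packet networks behave. The toy rendering below is my own, not Frankston's: number the pieces, let them arrive out of order or go missing entirely, reassemble by sequence number, and re-request anything lost.

```python
# A toy rendering of the container analogy (my own sketch, not Frankston's):
# split, lose, re-request, reassemble.
import random

def split_into_packets(payload, size):
    """Break the payload into numbered chunks, like goods into containers."""
    return {seq: payload[i:i + size]
            for seq, i in enumerate(range(0, len(payload), size))}

def lossy_delivery(packets, loss_rate, rng):
    """The 'ocean crossing': packets may take any path; some are lost."""
    return {seq: data for seq, data in packets.items()
            if rng.random() > loss_rate}

def reassemble(sent, loss_rate=0.3, rng=None):
    """Keep re-requesting missing sequence numbers until everything arrives,
    then put the chunks back in order."""
    rng = rng or random.Random(7)
    received = {}
    while len(received) < len(sent):
        missing = {s: d for s, d in sent.items() if s not in received}
        received.update(lossy_delivery(missing, loss_rate, rng))
    return "".join(received[s] for s in sorted(received))

message = "connect a pacemaker to a physician's office"
packets = split_into_packets(message, 5)
assert reassemble(packets) == message  # intact despite 30% loss per round
```

Even with nearly a third of the packets dropped on each attempt, the loop converges almost immediately, which is why, at packet timescales, loss and retransmission are invisible to the user.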

[. . .]


Comments Off on Bob Frankston: Thinking Outside the Pipe more...

Verizon Asks Federal Appeals Court to Halt FCC Open Internet Order

by on Oct.01, 2011, under Uncategorized

(Original at Reuters)

By Jonathan Stempel

Fri Sep 30, 2011 6:19pm EDT

(Reuters) – Verizon Communications Inc on Friday asked a federal appeals court to block the Federal Communications Commission from imposing new rules on how Internet service providers manage their networks.

The FCC last Friday said its so-called net neutrality rules were scheduled to take effect on November 20.

[. . .]

In a filing with the federal appeals court in Washington, D.C., Verizon said the FCC was “arbitrary” and “capricious” and acted beyond its statutory authority in imposing the rules.

The rules “impose potentially sweeping and unneeded regulations on broadband networks and services and on the Internet itself,” Michael Glover, deputy general counsel at Verizon, said in a statement.

[. . .]

Some public interest groups have also criticized the FCC rules, saying they are weak and favor some phone and cable companies with large Internet presences, such as AT&T Inc and Comcast Corp.

The D.C. Circuit in April threw out a challenge by Verizon and MetroPCS Communications Inc to the rules, calling it premature.

[. . .]

The case is Verizon v. FCC et al, D.C. Circuit Court of Appeals, No. 11-1359.

[. . .]

Comments Off on Verizon Asks Federal Appeals Court to Halt FCC Open Internet Order more...

Video: Dynamic Coalition on Core Internet Values

by on Sep.28, 2011, under Uncategorized

Meeting of the Dynamic Coalition on Core Internet Values at Internet Governance Forum 11 in Nairobi, Kenya on Sep 28 2011.

Panel: Alejandro Pisanty, Vint Cerf, Scott Bradner, Sivasubramanian Muthusamy

(Click for video)

Comments Off on Video: Dynamic Coalition on Core Internet Values more...

Free Press Petitions for Review of FCC Open Internet Order

by on Sep.28, 2011, under Uncategorized

(Original at PC World)

By Grant Gross, IDG News

[. . .]

Free Press filed the lawsuit Wednesday in the U.S. Court of Appeals for the First Circuit in Boston, just days after the FCC published the rules in the Federal Register, the last step before they go into effect.

The regulations, sometimes called open Internet rules, bar wireline broadband providers from “unreasonable discrimination” against Web traffic, but don’t have the same prohibition for mobile broadband providers. The rules prohibit mobile providers from blocking voice and other applications that compete with their services, but don’t prohibit them from blocking other applications.

[. . .]

The FCC will likely see more challenges from companies on the other side of the net neutrality debate from Free Press.

Earlier this year, Verizon Communications and MetroPCS filed challenges to the rules, but the U.S. Court of Appeals for the District of Columbia rejected the lawsuits because the companies filed before the rules were published in the Federal Register. Verizon, which has said it plans to refile a lawsuit, has argued that the FCC does not have the authority to regulate broadband.

[. . .]

Comments Off on Free Press Petitions for Review of FCC Open Internet Order more...

FCC Files Open Internet Final Rule

by on Sep.22, 2011, under Uncategorized

Selections from the FCC’s Final Rule in the Open Internet Proceeding, filed with the Federal Register today:

SUMMARY: This Report and Order establishes protections for broadband service to preserve and reinforce Internet freedom and openness. The Commission adopts three basic protections that are grounded in broadly accepted Internet norms, as well as our own prior decisions. First, transparency: fixed and mobile broadband providers must disclose the network management practices, performance characteristics, and commercial terms of their broadband services. Second, no blocking: fixed broadband providers may not block lawful content, applications, services, or non-harmful devices; mobile broadband providers may not block lawful websites, or block applications that compete with their voice or video telephony services. Third, no unreasonable discrimination: fixed broadband providers may not unreasonably discriminate in transmitting lawful network traffic. These rules, applied with the complementary principle of reasonable network management, ensure that the freedom and openness that have enabled the Internet to flourish as an engine for creativity and commerce will continue. This framework thus provides greater certainty and predictability to consumers, innovators, investors, and broadband providers, as well as the flexibility providers need to effectively manage their networks. The framework promotes a virtuous circle of innovation and investment in which new uses of the network—including new content, applications, services, and devices—lead to increased end-user demand for broadband, which drives network improvements that in turn lead to further innovative network uses.

DATES: Effective Date: These rules are effective November 20, 2011.

[. . .]

Synopsis of the Order 


In this Order the Commission takes an important step to preserve the Internet as an open platform for innovation, investment, job creation, economic growth, competition, and free expression. To provide greater clarity and certainty regarding the continued freedom and openness of the Internet, we adopt three basic rules that are grounded in broadly accepted Internet norms, as well as our own prior decisions:

i. Transparency. Fixed and mobile broadband providers must disclose the network management practices, performance characteristics, and terms and conditions of their broadband services;

ii. No blocking. Fixed broadband providers may not block lawful content, applications, services, or non-harmful devices; mobile broadband providers may not block lawful websites, or block applications that compete with their voice or video telephony services; and

iii. No unreasonable discrimination. Fixed broadband providers may not unreasonably discriminate in transmitting lawful network traffic.

We believe these rules, applied with the complementary principle of reasonable network management, will empower and protect consumers and innovators while helping ensure that the Internet continues to flourish, with robust private investment and rapid innovation at both the core and the edge of the network. This is consistent with the National Broadband Plan goal of broadband access that is ubiquitous and fast, promoting the global competitiveness of the United States.

[. . .]

We recognize that broadband providers may offer other services over the same last-mile connections used to provide broadband service. These “specialized services” can benefit end users and spur investment, but they may also present risks to the open Internet. We will closely monitor specialized services and their effects on broadband service to ensure, through all available mechanisms, that they supplement but do not supplant the open Internet.

[. . .]

A. The Internet’s Openness Promotes Innovation, Investment, Competition, Free Expression, and Other National Broadband Goals

Like electricity and the computer, the Internet is a “general purpose technology” that enables new methods of production that have a major impact on the entire economy. The Internet’s founders intentionally built a network that is open, in the sense that it has no gatekeepers limiting innovation and communication through the network.3 Accordingly, the Internet enables an end user to access the content and applications of her choice, without requiring permission from broadband providers. This architecture enables innovators to create and offer new applications and services without needing approval from any controlling entity, be it a network provider, equipment manufacturer, industry body, or government agency. End users benefit because the Internet’s openness allows new technologies to be developed and distributed by a broad range of sources, not just by the companies that operate the network. For example, Sir Tim Berners-Lee was able to invent the World Wide Web nearly two decades after engineers developed the Internet’s original protocols, without needing changes to those protocols or any approval from network operators. Startups and small businesses benefit because the Internet’s openness enables anyone connected to the network to reach and do business with anyone else, allowing even the smallest and most remotely located businesses to access national and global markets, and contribute to the economy through e-commerce4 and online advertising.5 Because Internet openness enables widespread innovation and allows all end users and edge providers (rather than just the significantly smaller number of broadband providers) to create and determine the success or failure of content, applications, services, and devices, it maximizes commercial and non-commercial innovations that address key national challenges—including improvements in health care, education, and energy efficiency that benefit our economy and civic life.

The Internet’s openness is critical to these outcomes, because it enables a virtuous circle of innovation in which new uses of the network—including new content, applications, services, and devices—lead to increased end-user demand for broadband, which drives network improvements, which in turn lead to further innovative network uses. Novel, improved, or lower-cost offerings introduced by content, application, service, and device providers spur end-user demand and encourage broadband providers to expand their networks and invest in new broadband technologies.6 Streaming video and e-commerce applications, for instance, have led to major network improvements such as fiber to the premises, VDSL, and DOCSIS 3.0. These network improvements generate new opportunities for edge providers, spurring them to innovate further.7 Each round of innovation increases the value of the Internet for broadband providers, edge providers, online businesses, and consumers. Continued operation of this virtuous circle, however, depends upon low barriers to innovation and entry by edge providers, which drive end-user demand. Restricting edge providers’ ability to reach end users, and limiting end users’ ability to choose which edge providers to patronize, would reduce the rate of innovation at the edge and, in turn, the likely rate of improvements to network infrastructure. Similarly, restricting the ability of broadband providers to put the network to innovative uses may reduce the rate of improvements to network infrastructure.

[. . .]

B. Broadband Providers Have the Incentive and Ability to Limit Internet Openness

[. . .]

The record in this proceeding reveals that broadband providers potentially face at least three types of incentives to reduce the current openness of the Internet. First, broadband providers may have economic incentives to block or otherwise disadvantage specific edge providers or classes of edge providers, for example by controlling the transmission of network traffic over a broadband connection, including the price and quality of access to end users. A broadband provider might use this power to benefit its own or affiliated offerings at the expense of unaffiliated offerings.

[. . .]

Second, broadband providers may have incentives to increase revenues by charging edge providers, who already pay for their own connections to the Internet, for access or prioritized access to end users. Although broadband providers have not historically imposed such fees, they have argued they should be permitted to do so. A broadband provider could force edge providers to pay inefficiently high fees because that broadband provider is typically an edge provider’s only option for reaching a particular end user.17 Thus broadband providers have the ability to act as gatekeepers.18

[. . .]

Third, if broadband providers can profitably charge edge providers for prioritized access to end users, they will have an incentive to degrade or decline to increase the quality of the service they provide to non-prioritized traffic. This would increase the gap in quality (such as latency in transmission) between prioritized access and non-prioritized access, induce more edge providers to pay for prioritized access, and allow broadband providers to charge higher prices for prioritized access. Even more damaging, broadband providers might withhold or decline to expand capacity in order to “squeeze” non-prioritized traffic, a strategy that would increase the likelihood of network congestion and confront edge providers with a choice between accepting low-quality transmission or paying fees for prioritized access to end users.

Moreover, if broadband providers could block specific content, applications, services, or devices, end users and edge providers would lose the control they currently have over whether other end users and edge providers can communicate with them through the Internet. Content, application, service, and device providers (and their investors) could no longer assume that the market for their offerings included all U.S. end users. And broadband providers might choose to implement undocumented practices for traffic differentiation that undermine the ability of developers to create generally usable applications without having to design to particular broadband providers’ unique practices or business arrangements.25

[. . .]

C. Broadband Providers Have Acted to Limit Openness

These dangers to Internet openness are not speculative or merely theoretical. Conduct of this type has already come before the Commission in enforcement proceedings.

[. . .]

These practices have occurred notwithstanding the Commission’s adoption of open Internet principles in the Internet Policy Statement; enforcement proceedings against Madison River Communications and Comcast for their interference with VoIP and P2P traffic, respectively; Commission orders that required certain broadband providers to adhere to open Internet obligations; longstanding norms of Internet openness; and statements by major broadband providers that they support and are abiding by open Internet principles.

[. . .]

D. The Benefits of Protecting the Internet’s Openness Exceed the Costs

Widespread interference with the Internet’s openness would likely slow or even break the virtuous cycle of innovation that the Internet enables, and would likely cause harms that may be irreversible or very costly to undo. For example, edge providers could make investments in reliance upon exclusive preferential arrangements with broadband providers, and network management technologies may not be easy to change.38 If the next revolutionary technology or business is not developed because broadband provider practices chill entry and innovation by edge providers, the missed opportunity may be significant, and lost innovation, investment, and competition may be impossible to restore after the fact. Moreover, because of the Internet’s role as a general purpose technology, erosion of Internet openness threatens to harm innovation, investment in the core and at the edge of the network, and competition in many sectors, with a disproportionate effect on small, entering, and non-commercial edge providers that drive much of the innovation on the Internet.39

[. . .]

There is no evidence that prior open Internet obligations have discouraged investment;41 and numerous commenters explain that, by preserving the virtuous circle of innovation, open Internet rules will increase incentives to invest in broadband infrastructure. Moreover, if permitted to deny access, or charge edge providers for prioritized access to end users, broadband providers may have incentives to allow congestion rather than invest in expanding network capacity. And as described in Part III, below, our rules allow broadband providers sufficient flexibility to address legitimate congestion concerns and other network management considerations.

[. . .]

Finally, we note that there is currently significant uncertainty regarding the future enforcement of open Internet principles and what constitutes appropriate network management, particularly in the wake of the court of appeals’ vacatur of the Comcast Network Management Practices Order.

[. . .]

Comments Off on FCC Files Open Internet Final Rule more...

Bob Frankston at OneWebDay: Infrastructure Commons – The Future of Connectivity

by on Sep.15, 2011, under Uncategorized

(Announcement at ISOC New York)

ISOC-NY OneWebDay Event:

Bob Frankston – “Infrastructure Commons – the Future of Connectivity”


The 6th annual global OneWebDay celebration will be Thursday September 22 2011. ISOC-NY’s contribution will be to host respected computer scientist and Internet iconoclast Bob Frankston who will present on the theme “Infrastructure commons – the future of connectivity”.

The subways, roads and sidewalks are vital infrastructure. The Internet should be no different – our economy, health and safety depend on our ability to communicate. Yet its provision and economy are based on outdated, inequitable, and inefficient telecom models. How do we move to a connected future?

What: Bob Frankston “Infrastructure commons – the future of connectivity”
When: OneWebDay, Thu Sep 22 2011 – 7.15pm – 9pm
Where: Rm. 202, Courant Institute NYU, 251 Mercer St NYC
Who: Public welcome. In person or by webcast.
Twitter:@isocny, #onewebday, @bobfrankston


We are happy to also announce that Dave Burstein of DSL Prime has agreed to moderate the session. Dave will also talk about the practicalities of establishing community networks.

About Bob Frankston

Bob Frankston is a native Brooklynite who first started working with computers in 1963, when he was just 13. He later graduated from MIT. He is best known as the co-author of VisiCalc, the spreadsheet program that was the original killer app and sold a million Apple IIs, work that has earned him many awards. Working for Microsoft in the 90s, Frankston was largely responsible for the integration of Internet functionality into the Windows operating system, thus jumpstarting popular adoption of the network. In recent years, Frankston has been an outspoken advocate for reducing the role of telecommunications companies in the evolution of the Internet. He has coined the term “Regulatorium” to describe what he considers collusion between telecommunication companies and their regulators that prevents change. (Bio)

About Dave Burstein

As the editor and publisher of industry newsletter DSL Prime since 1999 Dave Burstein probably knows more about the state of the U.S. broadband industry than anyone else alive. He is an author and an award-winning broadcaster.


Comments Off on Bob Frankston at OneWebDay: Infrastructure Commons – The Future of Connectivity more...

A Choice of Futures: Dan York on Moving to a New Role at ISOC

by on Sep.14, 2011, under Uncategorized

(Original at Disruptive Telephony)

[. . .]

We have before us a choice of futures.

One choice leads to a future where innovative companies like Voxeo can emerge, thrive, disrupt and succeed.

Another choice leads to a future where what little “innovation” there is exists only at the will of the gatekeepers to the network after appropriate requirements and/or payments are met. Other choices lead to outcomes somewhere in between those polarities.

How will we choose?

[. . .]

[N]ow we see services like Facebook, Google+, Twitter and more that seek to provide a nice pretty space in which you can exchange messages, photos and more… without ever leaving the confines of the service… they are walled gardens, with many ways to access the garden and to look over the walls.

Everyone wants to own your eyeballs… to host your content… to provide your identity…

And we see companies like Apple, Google and Microsoft seeking to control a large degree of how we connect to and use the mobile Internet…

And we see a change from “permissionless innovation” where anyone can set up a new service… to a model where you have ask permission or agree to certain “terms of service” in order to connect your new service to other services or to have your app available on some platforms…

And we see countries that want to throw up a wall around their citizens… sometimes to keep information from coming in… and sometimes to keep information from going out… and sometimes to be able to shut down all access…

And we see players who did control our communications systems always looking for opportunities where they could maybe, just maybe, stuff the proverbial genie back in the bottle and regain that control they lost…

[. . .]

[T]his coming Monday, September 19th, I will join the Internet Society as a staff member.

The Missing Link

The particular project I will join within ISOC is a brand new initiative targeted at helping bridge the gap between the standards created within the IETF and the network operators and enterprises who are actually deploying networks and technologies based on those standards. To help translate those standards into operational guidance… to help people understand how to deploy those standards and why they should, what benefit they will see, etc.

The initiative is currently called the “Deployment and Operationalization Hub”, or “DO Hub”, and while that may or may not be its final name, the idea is to find/curate content that is already out there created by others, create content where there are gaps, make it easy to distribute information about these resources… and promote the heck out of it so that people get connected to the resources that they need. The initial focus will be, somewhat predictably, on IPv6, but also DNSSEC and possibly another technology. It is a new project and the focus is being very deliberately kept tight to see how effective this can be.

[. . .]

Comments Off on A Choice of Futures: Dan York on Moving to a New Role at ISOC more...

(Europe/UK) Robert Kenny Rebuts AT Kearney’s “Viable Future Model for the Internet”

by on Aug.26, 2011, under Uncategorized

(From Benoit Felten’s blog and Communications Chambers)

Developments in Europe . . . Benoit Felten, A Slap in the Face of Net Discrimination Lobbyists:

Under the title Are Traffic Charges Needed to Avert a Coming Capex Catastrophe?, economist Robert Kenny builds a systematic refutation of the AT Kearney paper. Kenny dissects each of the arguments that form the AT Kearney reasoning and breaks each one down with clinical precision.

The starting point of Kenny’s piece is potentially the most important one: that the need for a change in traffic management is taken as a postulate by AT Kearney and in no way demonstrated. This, to me, is the most important message for policy makers and regulators: before meddling with Internet traffic management, make sure you understand exactly what is happening; don’t take anyone’s word for it.

From the Introduction to Robert Kenny’s rebuttal:

The net neutrality debate is now gathering steam in Europe, both at the Commission level and in member states. Against this background, four European telcos commissioned a report from AT Kearney [ATK], to support their opposition to net neutrality regulation. This report, A Viable Future Model for the Internet, claims that carriers are facing ballooning capex requirements to fund the growth of internet traffic and that the best way to address this structural problem is via traffic charges to online service providers [OSPs].

If massive capex is required, and this needs to be recovered from OSPs, that would be a significant argument against net neutrality regulation, since it would necessarily end the principle that consumers could access any (legal) site they wished – ISPs would block access to sites that had not paid the charges the ISPs had chosen to impose.

Broadly, the logic of ATK’s report is as follows:

  • Telco investors are already seeing lower returns than investors in other players in the internet value chain
  • Telcos face ballooning capex
  • This capex is unsustainable
  • OSPs are not contributing to the costs of traffic
  • In a two-sided market, both sides pay
  • Traffic charges are necessary because otherwise OSPs have no incentive to constrain traffic costs
  • OSPs can easily afford increased charges
  • Increasing retail prices will be challenging
  • It is practical to implement traffic charges to OSPs
  • Enhanced quality services can be introduced without degrading the basic internet

However I believe both its starting assumptions and its logic are open to significant challenge. This paper reviews the ATK report, from technical, economic and regulatory perspectives, and makes the case that ATK’s conclusion (that the best way forward is traffic charges to OSPs) is not at all well-founded. I consider in turn each of the logical steps above.

Note that the focus of the economic analysis in this paper is primarily on fixed networks, though the qualitative arguments apply equally to both fixed and mobile networks.


Comments Off on (Europe/UK) Robert Kenny Rebuts AT Kearney’s “Viable Future Model for the Internet” more...

Ford Pondering Mesh Networking

by on Aug.16, 2011, under Uncategorized

(Original at Connected Planet)

What if the car could be used to create a network? What if it could connect to other cars to form a constantly morphing mobile mesh network that helped drivers avoid accidents, identify traffic jams miles before they encounter them and even act as a relay point for Internet access?

By Kevin Fitchard

August 10, 2011

[. . .]

Ford believes the key is Wi-Fi, but not the ordinary access point and receiving device setup. What Ford envisions, [Chief Technology Officer and Vice President of Research Paul Mascarenas] said, is a high-powered, heavily encrypted Wi-Fi that establishes point-to-point connections between cars within a half-mile radius. Those connections could be used to communicate vital information between vehicles, either triggering alerts to the driver or interpreted by the vehicle’s computer. An intelligent car slamming on its brakes could communicate to all of the vehicles behind it that it’s coming to a rapid halt, giving the driver that much more warning that he too needs to hit the brakes.

But because these cars are networked—the car in front of yours is connected to the car in front of it and so forth—in a distributed mesh, an intelligent vehicle can know if cars miles down the road are slamming on their brakes, alerting the driver to potential traffic jams. Given enough vehicles with the technology, individual cars become nodes in a constantly changing, self-aware network that can not only monitor what’s going on in the immediate vicinity, but across a citywide traffic grid, Mascarenas said.

[. . .]

But Mascarenas said Ford and other automakers can build other applications into the intelligent vehicle network. For instance, not cars but the roads and structures cars use can be embedded with Wi-Fi radios allowing drivers to connect with parking garages, tollbooths or even rest areas through the ad hoc network.

[. . .]

The key, Mascarenas said, is drawing a sharp line between the vehicles as nodes on the network and the vehicles as receivers of information. In order for the system to work, every car acts as a node on the network, occasionally receiving information and services pertinent to the driver but most often acting as a mere relay passing that data down the line of cars until it reaches its destination. When paying a toll, no driver wants to share his credit card data with the 20 cars between his and the toll booth. Ford, however, believes it can put the security and encryption in place that allows such relays to work without compromising the privacy of individual customers.

The collaborative mesh network could even be used as a mobile broadband alternative to the wide area cellular network. Offload points on the roadside would be used to backhaul traffic to the Internet, but the cars themselves—so long as they all remained within a half mile of one another—could pass a Netflix movie stream or a video call down the highway to the vehicle requesting it.

[. . .]
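The relay mechanism Mascarenas describes — each vehicle forwarding an alert a bounded number of hops down the chain — amounts to a hop-limited flood. The sketch below is purely illustrative (this is not Ford’s protocol, and every name in it is invented): a brake warning propagates through a mesh of vehicles until its hop budget runs out.

```python
# Illustrative sketch only: a brake-warning message flooding through a mesh
# of vehicles, hop by hop, with a TTL so the alert dies out beyond a chosen
# range. All identifiers here are invented for the example.

from collections import deque

def relay_alert(neighbors, origin, ttl):
    """Flood an alert from `origin` through the mesh, up to `ttl` hops.

    neighbors: dict mapping each vehicle id to the ids within radio range.
    Returns the set of vehicle ids that received the alert.
    """
    reached = {origin}
    frontier = deque([(origin, ttl)])
    while frontier:
        node, hops_left = frontier.popleft()
        if hops_left == 0:
            continue  # hop budget exhausted; stop forwarding here
        for nxt in neighbors[node]:
            if nxt not in reached:
                reached.add(nxt)
                frontier.append((nxt, hops_left - 1))
    return reached

if __name__ == "__main__":
    # Five cars in a line; each sees only its immediate neighbors.
    chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(sorted(relay_alert(chain, origin=0, ttl=2)))
```

With a TTL of 2, only the two cars behind the braking vehicle hear the alert; raising the TTL widens the warning radius, which is the trade-off such a network would have to tune.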

Comments Off on Ford Pondering Mesh Networking more...

Public Knowledge: Data Caps Are Screwing Things Up

by on Jul.15, 2011, under Uncategorized

(Original at Public Knowledge)

It is unclear why excessive data use that does not cause network congestion matters to Comcast. It is further unclear how Comcast determined that 250 GB was “excessive” in 2008, and why it has not revised that level in the years since.

By Michael Weinberg on July 14, 2011

[. . .]

While Comcast is not alone in imposing data caps, its data cap is problematic for at least two reasons.  First, the punishment for going over the cap is draconian.  Two violations in six months can result in one year of internet exile.  For many customers, losing access to Comcast will be losing access to their best option for a fast internet connection.  (Take a look at this chart of ISP performance from Netflix if you are not convinced.  Notice that ISPs end up clustering by underlying technology type, with cable providers leading the pack followed by DSL and eventually wireless.)

Second, Comcast does not even claim that the caps serve a legitimate purpose.  In 2008, Comcast drew an explicit distinction between throttling designed to ease network congestion and data caps designed to punish “excessive” users.  It is unclear why excessive data use that does not cause network congestion matters to Comcast.  It is further unclear how Comcast determined that 250 GB was “excessive” in 2008, and why it has not revised that level in the years since.

In fact, Comcast appears to now be contradicting statements it made to the FCC in the past about its data cap.  In 2008, Comcast went to some pains to draw a distinction between congestion management practices such as peak time throttling and “excessive use” policies like data caps:

“These congestion management practices [such as throttling] are independent of, and should not be confused with, our recent announcement that we will amend the ‘excessive use’ portion of our Acceptable Use Policy, effective October 1, 2008, to establish a specific monthly data usage threshold of 250 GB per account for all residential HIS customers.  … That cap does not address the issue of network congestion, which results from traffic levels that vary from minute to minute.”

[. . .]

Ultimately these caps punish consumers for trying to adopt new internet services, especially services based in the cloud.  As the FCC noted in the National Broadband Plan, cloud-based services can bring huge benefits to the public.  However, many cloud-based services involve transferring significant amounts of data back and forth between a user and a remote server.  As a result, data caps allow ISPs to discourage people from using cloud-based services simply because they can.

[. . .]

Comments Off on Public Knowledge: Data Caps Are Screwing Things Up more...

Matt Lasar/Ars Technica: Metering: Lack of Competition, Not Congestion

by on Jul.12, 2011, under Uncategorized

(Original article at Ars Technica)

[. . .]

Newspaper reports suggest that at least some [Canadian Radio-television and Telecommunications Commission (CRTC)] commissioners aren’t buying arguments that the telcos need [usage-based billing] to “discipline” consumers so that they won’t congest networks with excessive downloading.

“No single user or wholesale customer is the cause of congestion,” Bell vice-president Mirko Bibic explained to the CRTC at the event. “But clearly, wholesale users contribute a disproportionate share of total traffic, and by extension, congestion.”

This did not impress CRTC Vice Chair Len Katz. “When I took a look at your forecast over the next five years, (Internet traffic) growth seems to have curtailed,” he challenged the telco. “Am I missing something here?”

[. . .]

Canadian telecommunications advocate Michael Geist has been following the hearings as well. “By the time lunch rolled around, it was clear that claims that usage based billing practices are a response to network congestion is a myth,” Geist wrote about Monday’s discussions.

[. . .]

This event follows a long and controversial debate about UBB. The fight went into high gear when the CRTC acceded to telco requests last September, granting them the right to charge indie ISPs on a metered wholesale basis (plus a 15 percent discount).

But in January, one of the indies published its new rate schedule just before the policy was about to go into effect. It included data caps that dropped from 200GB to 25GB, while monthly rates jumped by about CAN$10.

The yogurt hit the fan. Over a third of a million people (around one percent of Canada’s population) signed an online petition against wholesale metered billing. With all of Canada’s top parties raising hay about the matter, and elections approaching, Canada’s conservative Prime Minister Stephen Harper got the memo. The government told the CRTC to suspend the decision—or have it done for them from above.

[. . .]

Comments Off on Matt Lasar/Ars Technica: Metering: Lack of Competition, Not Congestion more...

Video: M-Lab Broadband Measurement Tool

by on Jul.01, 2011, under Uncategorized

RT @saschameinrath: “Everything you wanted to know about the M-Lab broadband measurement platform (in an awesome 2-minute video):

Comments Off on Video: M-Lab Broadband Measurement Tool more...

Barbara van Schewick Calls on FCC to Open Public Comments on Verizon/Android Tethering

by on Jul.01, 2011, under Uncategorized

(Original at Barbara’s blog, Net Architecture)

The questions raised by the complaint are too important to be decided without public participation: The C Block of the 700 MHz band is currently the only spectrum that is subject to mobile network neutrality rules.

June 11, 2011

According to recent news reports, Verizon Wireless has asked Google to disable tethering applications in Google’s mobile application store, the Android Market. Tethering applications allow users to use laptops or other devices over their mobile Internet connection by attaching them to their smart phones.

In early June, Free Press filed a complaint with the FCC alleging that this behavior violates the openness conditions that govern the use of the part of the 700 MHz spectrum over which Verizon Wireless’s LTE network operates. The FCC seems to have designated the proceeding as a restricted proceeding under its ex parte rules, which means that the public will not be invited to comment on the issues raised by Free Press’s complaint.

Today, I asked the FCC to open up the proceeding for public comment. (The full text of the letter is here (pdf) and copied [on Barbara’s blog].) The questions raised by the complaint are too important to be decided without public participation: The C Block of the 700 MHz band is currently the only spectrum that is subject to mobile network neutrality rules.[1] Knowing that there is at least some part of the mobile spectrum that is protected by basic network neutrality principles is important for users, innovators and investors. Whether the openness conditions indeed afford protection depends, however, on how they are interpreted and enforced. Thus, the proceeding has important implications for many businesses, innovators and users in the Internet ecosystem, so they should have a chance to have their voice heard, too. In addition, as I explain in the letter, the proceeding raises important issues regarding openness in mobile networks in general.  Here is the text of the letter.

[. . .]

Comments Off on Barbara van Schewick Calls on FCC to Open Public Comments on Verizon/Android Tethering more...

NY Times: Dutch Adopt Net Neutrality Law

by on Jun.22, 2011, under Uncategorized

“I could also see some countries following the Dutch example,” said Jacques de Greling, an analyst at Natixis, a French bank. “I believe there will be pressure from consumers to make it clear what they are buying, whether it is the full Internet or Internet-light.”

(Original at The New York Times)


June 22, 2011


BERLIN — The Netherlands on Wednesday became the first country in Europe, and only the second in the world, to enshrine the concept of network neutrality into national law by banning its mobile telephone operators from blocking or charging consumers extra for using Internet-based communications services like Skype or WhatsApp, a free text service.

[. . .]

Operators could still offer a range of mobile data tariffs with different download speeds and levels of service, but they would not be able to tie specific rates to the use of specific free Internet services.

Under the law, Dutch operators could be fined by the national telecommunications regulator, OPTA, up to 10 percent of their annual sales for violations.

Patrick Nickolson, a spokesman for KPN, said that the measure could lead to higher broadband prices in the Netherlands because operators would be limited in their ability to structure differentiated data packages based on consumption.

“We regret that the Dutch Parliament didn’t take more time to consider this,” Mr. Nickolson said. “This will limit our ability to develop a new portfolio of tariffs and there is at least the risk of higher prices, because our options to differentiate will now be more limited.”

[. . .]

The Dutch restrictions on operators are the first in the 27-nation European Union. The European Commission and European Parliament have endorsed network neutrality guidelines but as yet have taken no legal action against operators that block or impose extra fees on consumers using services like Skype, the voice and video Internet service being acquired by Microsoft, and WhatsApp, a mobile software maker based in Santa Clara, California.

[. . .]

Maxime Verhagen, the Dutch deputy prime minister who supported the net neutrality restrictions, said that the new rules would ensure that Internet services were never threatened.

“The blocking of services or the imposition of a levy is a brake on innovation,” Mr. Verhagen said. “That’s not good for the economy. This measure guarantees a completely free Internet which both citizens and the providers of the online services can then rely on.”

Besides the Netherlands, only one country, Chile, has written network neutrality requirements into its telecommunications law. The Chilean law, which was approved in July 2010, only took effect in May.

[. . .]

The debate over net neutrality in the Netherlands erupted in May when Eelco Blok, the new chief executive of KPN, the former phone monopoly, announced plans to create a new set of mobile data tariffs that included charges on services like WhatsApp that allow smartphone users to avoid operator charges for sending text messages.

Use of the free text service has spread rapidly, eroding operator text revenues.

According to KPN, 85 percent of the company’s customers who use a Google Android phone downloaded WhatsApp onto their handsets from last August through April. As a result, KPN’s revenue from text messaging, which had risen 8 percent in the first quarter of 2010 from a year earlier, declined 13 percent in the first quarter of this year.

At a presentation to investors in London on May 10, analysts questioned where KPN had obtained the rapid adoption figures for WhatsApp. A midlevel KPN executive explained that the operator had deployed analytical software which uses a technology called deep packet inspection to scrutinize the communication habits of individual users.

The disclosure, widely reported in the Dutch news media, set off an uproar that fueled the legislative drive, which in less than two months culminated in lawmakers adopting the Continent’s first net neutrality measures with real teeth.

Comments Off on NY Times: Dutch Adopt Net Neutrality Law more...

Rob Powell, Nov. 2010: Definitions, Dialogue, and the FCC

by on Jun.06, 2011, under Uncategorized

(Original at Network Ramblings)

On Friday [November 5, 2010], a wide-ranging group of thinkers filed a statement with the FCC in response to an otherwise unassuming NPRM entitled “Further Inquiry into Two Under-developed Issues in the Open Internet Proceeding”.  What they had to say was not for or against the NPRM itself.  Rather, they simply praised how it separated the concept of the internet from that of specialized services.

[. . .]

I think that this differentiation gets directly to what I once on this site referred to as the sloppiness of language surrounding the Network Neutrality shouting matches.  We all view network neutrality from our own perspective.  Viewers see it in terms of pricing and choice.  Corporations see it in terms of services they provide and get paid for.  Institutions see it as a transformative societal phenomenon, and so on.

But when the FCC goes to regulate it, it has run into a lack of vocabulary common to each point of view.  After all the discussions, proposals, arguments, and lobbying, one still cannot state network neutrality unambiguously today without it being crippled by unintended consequences or containing loopholes the QE2 could pass through at low tide.  This is because the internet is a living thing.  It’s not a service but a platform that can provide virtually any combination of old, current, and future services, and it can and will morph in response to regulations much more quickly than the rules and regulations ever could.

[. . .]

Comments Off on Rob Powell, Nov. 2010: Definitions, Dialogue, and the FCC more...

Hungry Beast on Net Neutrality: “The Internet Does Not Judge”

by on May.19, 2011, under Uncategorized

(Original video on Marc’s blog)

Marc Fennell of Hungry Beast gets a lot right here.  This video features Barbara van Schewick, Douglas Rushkoff and John Perry Barlow. The key to it is where he starts: with Barbara van Schewick explaining “Network Neutrality’s” origin in the Internet’s design as a general purpose platform for end user innovation. A proper discussion of the nature of the issue can be laid out if we start there — because that characteristic is what’s at stake in all the discussions of such concerns as “next generation networks,” “quality of service,” “specialized services” or “reasonable network management.”

Comments Off on Hungry Beast on Net Neutrality: “The Internet Does Not Judge” more...

ISOC-NY, June 14: INET Regional Conference NY

by on May.13, 2011, under Uncategorized

(from Internet Society-New York)

INET NY announced for June 14 2011

Vint Cerf, Tim Berners Lee, Larry Strickling to speak

The Internet Society (ISOC) will present an INET Regional Conference on June 14 2011 at the Sentry Center in NYC. The theme is It’s Your Call. What Kind of Internet Do You Want? The distinguished lineup of speakers will include ‘Father of the Internet’ Vint Cerf, World Wide Web inventor Sir Tim Berners-Lee, and Assistant Secretary for Communications and Information at the U.S. Department of Commerce Lawrence Strickling.

What: INET New York
When: Tuesday June 14, 2011: 9am-5.30pm EDT
Where: Sentry Center, 730 Third Avenue, NY NY 10017
Who: ISOC Members $25, Others $50
Hashtag: #inetny

With almost two billion people online, the Internet is a catalyst for boundless creativity and growth. But the decisions we make in the coming months and years will determine whether it remains a global platform for innovation and expression for people everywhere. Join us on June 14 as we set the agenda for the future of an open Internet. We’ll identify and examine the critical decisions that will shape the future of the Internet:

  • Who will help define the Internet’s evolution?
  • What role should government and private industry play?
  • How do we provide greater bandwidth and access?
  • What does online privacy mean in the age of Facebook and Wikileaks?

This is a unique opportunity to network with the thought leaders and policy makers who are designing the global networks of tomorrow and help develop the policies that will drive future Internet innovation.

Comments Off on ISOC-NY, June 14: INET Regional Conference NY more...

Wired, Jan 2009: Comcast’s Dark Lord Tries to Fix Image

by on May.11, 2011, under Uncategorized

(Original at Wired)

[. . .]

It took him six weeks of short-burst sleuthing to reach his conclusion. In a detailed post on DSL Reports — a site for broadband enthusiasts — under his online name, funchords, [Robb] Topolski laid out a case against his Internet service provider. Comcast appeared to be blocking file-sharing applications by creating fake data packets that interfered with trading sessions. The packets were cleverly disguised to look as if they were coming from the user, not the ISP. It was as if, in the middle of a phone call to a friend, Comcast got on the line and in the caller’s own voice told the friend he was hanging up, while the caller simultaneously heard the same message in the friend’s voice.

[. . .]

By the end of 2007, 22 cents of every dollar spent on broadband in the US went directly to Comcast. And that figure looks like it’s only going to increase; the number of ways to connect to the Internet reliably and at high speed is shrinking, not growing. “There’s this magical thinking, both in the tech community and the regulatory community, that competition will solve all problems,” says Craig Moffett, an analyst at Sanford C. Bernstein. “Well, get over it. The evidence says we’re not going from two pipes to three but from two pipes to one.”

[. . .]

[Brian] Roberts truly believed Comcast was ready for tech stardom as the Facebook or Google of 2008. Instead, he got Topolskied. On October 19, 2007, the AP story broke with the headline “Comcast Actively Hinders Subscribers’ File-Sharing Traffic, AP Testing Shows.” Bloggers called for protests and boycotts; the Electronic Frontier Foundation said Comcast was using tricks formerly used by “malicious hackers.” A coalition of Internet law scholars and consumer groups petitioned the FCC to step in. Instead of basking in glory, Roberts found himself at the center of the fight over network neutrality—the attempt to keep ISPs from discriminating between different kinds of traffic and, say, favoring their own video or VoIP services over another company’s.

[. . .]

The Topolski affair, as far as Roberts is concerned, is all based on a misunderstanding. Every company “manages” its network by restricting and opening access to maintain speeds. [. . .] “You’ve always had Ma Bell managing its network for things like how you handle voice traffic on Mother’s Day. You get a busy signal occasionally.”

[. . .]

In August, the FCC issued a 67-page report that read as if Comcast was the worst company the FCC had ever regulated. Comcast lied about its actions, schemed to prevent oversight, confused customers, and put the future of Net-based innovation at risk. The commissioners doubted Comcast’s contention that blocking BitTorrent helped its network. [. . .] The final verdict was devastating: “In laymen’s terms, Comcast opens its customers’ mail because it wants to deliver mail not based on the address or type of stamp on the envelope but on the type of letter contained therein,” the FCC wrote. “This practice is not ‘minimally intrusive’ but invasive and outright discriminatory.”

The FCC didn’t levy a fine. In fact, it’s still not even clear whether the commission has the regulatory right to punish such behavior.

[. . .]

Comments Off on Wired, Jan 2009: Comcast’s Dark Lord Tries to Fix Image more...



hosted by ibiblio