
Netnod’s Exemplary Response to the EU’s Public Consultation on “Traffic Management”

by admin on Oct.18, 2012, under Uncategorized

Patrik Fältström and Netnod have provided a sterling example of how to correct the misframing of issues related to “managed services” or “specialized services.”  Read the submission itself for a perfect illustration of how drawing the distinction correctly leads the way to policy insight.

(from Netnod’s blog.)

Netnod has filed a response to the Public Consultation on specific aspects of transparency, traffic management and switching in an Open Internet that DG Communications Networks, Content and Technology use for information gathering for the Commission’s planned recommendations that commissioner Kroes announced on May 29 2012.

In summary, Netnod believes that further work is required to clarify the use of the term ‘Internet Access’. Netnod nevertheless urges caution, since it does not agree with some of the conclusions: in Netnod’s view, some of the concepts described in relation to congestion control and Quality of Service are not applicable in a packet-based network.

You can read the full statement here.

Contact person at Netnod: Patrik Fältström, (Head of Research and Development).


Susan Crawford: We Can’t All Be in Google’s Kansas

by admin on Oct.18, 2012, under Uncategorized

(Original at Wired)

[. . .]

[I]ncumbent internet access providers such as Comcast and Time Warner (for wired access) and AT&T and Verizon (for complementary wireless access) are in “harvesting” mode. They’re raising average revenue per user through special pricing for planned “specialized services” and usage-based billing, which allows the incumbents to constrain demand. The ecosystem these companies have built is never under stress, because consumers do their best to avoid heavy charges for using more data than they’re supposed to. Where users have no expectation of abundance, there’s no need to build fiber on the wired side of the business or build small cells fed by fiber on the wireless side.

If the current internet access providers that dominate the American telecommunications landscape could get away with it, they’d sell nothing but specialized services and turn internet access into a dirt road.

But the key barrier to competition – the incumbents’ not-so-secret weapon – is the high up-front costs of building fiber networks. That’s why the new 1-gigabit-per-second network planned by Google for residences in Kansas City was cited as an example of a “positive recent development” in the FCC chairman’s speech. Google was welcomed with open arms by Kansas City because the company offered a wildly better product than anything the cable distributors can provide: gigabit symmetric fiber access. The company has the commercial strength to finance this build itself, and it has driven down costs in every part of its product to make its Kansas City experiment commercially viable.

While the Google Fiber plan provides a valuable model, other communities that want to ensure their residents get fiber to the home shouldn’t have to wait.

We need policies that lower the barriers to entry for competitors. Otherwise, we’ll be stuck with the second-best cable networks now in place around the country, with their cramped upload capacity, bundled nature, deep affection for usage-based billing, and successful political resistance to any form of oversight.

[. . .]


Meeting of Open Internet Advisory Board: October 9

by admin on Oct.04, 2012, under Uncategorized

(from Berkman Center for Internet and Society and FCC)

At its October 9, 2012 meeting, the Committee will consider issues relating to the subject areas of its four working groups—Mobile Broadband, Economic Impacts of Open Internet Frameworks, Specialized Services, and Transparency—as well as other open Internet related issues.

Tuesday, October 9, 10am-12pm
Harvard Law School, Wasserstein Hall, Milstein West A Room
This event will be webcast live

By this Public Notice, the Federal Communications Commission (“Commission”) announces the date, time, and agenda of the next meeting of the Open Internet Advisory Committee (“Committee”).

The next meeting of the Committee will take place on October 9, 2012, from 10:00 A.M. to 12:00 P.M. in Milstein West A at the Wasserstein Hall/Caspersen Student Center, Harvard Law School, 1585 Massachusetts Avenue, Cambridge, MA 02138.

At its October 9, 2012 meeting, the Committee will consider issues relating to the subject areas of its four working groups—Mobile Broadband, Economic Impacts of Open Internet Frameworks, Specialized Services, and Transparency—as well as other open Internet related issues.  A limited amount of time will be available on the agenda for comments from the public.  Alternatively, members of the public may send written comments to Daniel Kirschner, Designated Federal Officer of the Committee, or Deborah Broderson, Deputy Designated Federal Officer, at the addresses provided below.

The meeting is open to the public and the site is fully accessible to people using wheelchairs or other mobility aids.  Other reasonable accommodations for people with disabilities are available upon request.  The request should include a detailed description of the accommodation needed and contact information.  Please provide as much advance notice as possible; last minute requests will be accepted, but may not be possible to fill.  To request an accommodation, send an email to fcc504@fcc.gov or call the Consumer and Governmental Affairs Bureau at  202-418-0530  (voice),  202-418-0432  (TTY).

The meeting of the Committee will also be broadcast live with open captioning over the Internet at http://cyber.law.harvard.edu/events/2012/10/oiac.

For further information about the Committee, contact:  Daniel Kirschner, Designated Federal Officer, Office of General Counsel, Federal Communications Commission, Room 8-C830, 445 12th Street, S.W. Washington, DC 20554; phone:  202-418-1735 ; email: daniel.kirschner@fcc.gov; or Deborah Broderson, Deputy Designated Federal Officer, Consumer and Governmental Affairs Bureau, Federal Communications Commission, Room 5-C736, 445 12th Street, S.W. Washington, DC 20554; phone:  202-418-0652 ; email: deborah.broderson@fcc.gov.


FCC Announces Open Internet Advisory Committee Members

by admin on May.29, 2012, under Uncategorized

(Original at Azi Ronen’s Broadband Traffic Management blog)

Sunday, May 27, 2012

While Comcast challenges the Net Neutrality rules (“We do not prioritize our video … we just provision separate, additional bandwidth for it” – here), OTT providers protest (Sony, Netflix), and the FCC chairman says that “Business model innovation is very important” (here), it seems that the US needs to re-examine its Open Internet laws (see “FCC’s Net Neutrality Rules Made Official; Start on Nov. 20” – here).

The FCC announced the “composition of the Open Internet Advisory Committee (OIAC). The OIAC’s members include representatives from community organizations, equipment manufacturers, content and application providers, venture capital firms, startups, Internet service providers, and Internet governance organizations, as well as professors in law, computer science, and economics”.

Jonathan Zittrain, Professor of Law at Harvard Law School and Harvard Kennedy School, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, and Co-Founder of the Berkman Center for Internet and Society, will serve as Chair of the OIAC.

Among the members are Neil Hunt, Chief Product Officer, Netflix; Kevin McElearney, Senior Vice President for Network Engineering, Comcast; Chip Sharp, Director, Technology Policy and Internet Governance, Cisco Systems; Marcus Weldon, Chief Technology Officer, Alcatel-Lucent; and Charles Kalmanek, Vice President of Research, AT&T.

[. . .]


Eben Moglen at F2C: Innovation Under Austerity

by admin on May.23, 2012, under Uncategorized

Eben Moglen describes at Freedom 2 Connect exactly how innovation proceeds on the basis of the whole assemblage of components that enable us to “[let] the kids play and [get] out of the way” (including, of course, the general purpose Internet platform):

[Embedded video of the talk]


Freedom to Connect 2012

by admin on May.21, 2012, under Uncategorized

Today and tomorrow, May 21-22, at AFI Silver Theatre in Silver Spring, Maryland.

(See http://freedom-to-connect.net/ and http://freedom-to-connect.net/agenda-2/)

About F2C: Freedom to Connect

F2C: Freedom to Connect is a conference devoted to preserving and celebrating the essential properties of the Internet. The Internet is a success today because it is stupid, abundant and simple. In other words, its neutrality, its openness to rapidly developing technologies and its layered architecture are the reasons it has succeeded where others (e.g., ISDN, Interactive TV) failed.

The Internet’s issues are under-represented in Washington DC policy circles. F2C: Freedom to Connect is designed to advocate for innovation, for creativity, for expression, for little-d democracy. The Freedom to Connect is about an Internet that supports human freedoms and personal security. These values, held by many of us whose consciousness has been shaped by the Internet, are not common on K Street or Capitol Hill or at the FCC.

F2C: Freedom to Connect is about having access to the Internet as infrastructure. Infrastructures belong to — and enrich — the whole society in which they exist. They gain value — in a wide variety of ways, some of which are difficult to anticipate — when more members of society have access to them. F2C: Freedom to Connect especially honors those who build communications infrastructure for the Internet in their own communities, often overcoming resistance from incumbent cable and telephone companies to do so.

The phrase Freedom to Connect is now official US foreign policy, thanks to Secretary of State Clinton’s Remarks on Internet Freedom in 2010. She said that Freedom to Connect is, “the idea that governments should not prevent people from connecting to the internet, to websites, or to each other. The freedom to connect is like the freedom of assembly, only in cyberspace.” Her speech presaged the Internet-fueled assemblies from Alexandria, Egypt to Zuccotti Park.

The Agenda is now quite stable.

Confirmed keynote speakers include Vint Cerf, Michael Copps, Susan Crawford, Cory Doctorow (via telecon), Benoît Felten, Lawrence Lessig, Barry C. Lynn, Rebecca MacKinnon, Eben Moglen, Mike Marcus and Aaron Swartz.

Panels include:

  • Big Enough to Succeed
  • BTOP, Gig-U and other big pipe experiments
  • Freedom & Connectivity from Alexandria, Egypt to Zuccotti Park
  • Internet Freedom is Local
  • The Fight for Community Broadband

F2C: Freedom to Connect Agenda 0.99.9.1

Monday 5/21

8:00 to 9:00 AM Registration, Continental Breakfast

9:00 to 10:30 AM
Vint Cerf keynote  (45 min)
Rebecca MacKinnon keynote (Ian Schuler, US State Dept., discussant) (45 min)

11:00 to 12:30
Big Enough to Succeed: small carriers at the leading edge — entrepreneurial (non-Municipal) carriers show a fourth way (after Telco, Cable and Muni) to the future of connectivity. (60 min)

Susan Crawford keynote (30 min)

12:30 to 1:30 Lunch

1:30 to 3:00

BTOP, Gig-U, and other big pipe experiments (60 min)

Mike Marcus keynote (30 min) Dewayne Hendricks (brief intro)

3:30 to 5:00
Benoît Felten keynote (30 min)
Aaron Swartz, “How we stopped SOPA” keynote (30 min)
Michael Copps keynote (30 min)
– Jim Baller introduces Commissioner Copps

RECEPTION, location tbd.

Tuesday, 5/22

8:00 to 9:00 AM Registration, Continental Breakfast

9:00 to 10:30 AM

Cory Doctorow remote (skype) keynote (30 min)

Freedom & Connectivity from Alexandria, Egypt to Zuccotti Park (60 min)

11:00AM to 12:30

Eben Moglen keynote, Innovation under Austerity (60 min)
Doc Searls and others, Discussion of Moglen’s talk (30 min)

12:30 to 1:30 Lunch

1:30PM to 3:00

Barry C. Lynn, keynote, American Liberties in the New Age of Monopoly (30 min)

Internet Freedom is Local (30 min)

A Word from Our Sponsors – (30 min) – each sponsor of F2C has a stake in Internet Freedom

  • Helen Brunner, Media Democracy Fund
  • Rick Whitt, Google
  • John Wonderlich, Sunlight Foundation
  • Will Barkis, Mozilla Foundation
  • Elliot Noss, Ting

3:30 to 5:00

Larry Lessig keynote, “The War Against Community Broadband” (30 min)

Panel, the Fight for Community Broadband: (60 min)

 


Vint Cerf in Wired: We Knew What We Were Unleashing on the World

by admin on Apr.23, 2012, under Uncategorized

(Original at Wired Magazine)

Wired: So how did you come to be the author of the TCP/IP protocol?

Vinton Cerf: Bob Kahn and I had worked together on the Arpanet project that was funded by ARPA; it was an attempt at doing a national-scale packet-switching experiment to see whether computers could usefully be interconnected through this packet-switching medium. In 1970, there was a single telephone company in the United States called AT&T, its technology was called circuit switching, and that was all any telecom engineer worried about.

We had a different idea, and I can’t claim any responsibility for having suggested the use of packet switching. That was really three other people working independently who suggested that idea simultaneously in the 1960s. So by the time I got involved in all of this, I was a graduate student at UCLA, working with my colleague and very close friend Steve Crocker, who now is the chairman of ICANN, a position I held for about a year.

A part of our job was to figure out what the software should look like for computers connecting to each other through this Arpanet. It was very successful — there was a big public demonstration in October of 1972, which was organized by Kahn. After the October demo was done, Bob went to ARPA and I went to Stanford.

So in early 1973, Bob appears in my lab at Stanford and says, ‘I have a problem.’ My first question is, ‘What’s the problem?’ He said we now have the Arpanet working, and we are now thinking, ‘How do we use computers in command and control?’

If we wanted to use a computer to organize our resources, a smaller group might defeat a larger one because it is managing its resources better with the help of computers. The problem is that if you are serious about using computers, you better be able to put them in mobile vehicles, ships at sea, and aircraft, as well as at fixed installations.

At that point, the only experience we had was with fixed installations of the Arpanet. So he had already begun thinking about what he called open networking, and believed you might optimize a radio network differently than a satellite network for ships at sea, which might be different from what you do with dedicated telephone lines.

So we had multiple networks, in his formulation, all of them packet-switched, but with different characteristics. Some were larger, some went faster, some had packets that got lost, some didn’t. So the question is: how can you make all the computers on each of those various networks think they are part of one common network — despite all these variations and diversity?

That was the internet problem.

In September 1973 I presented a paper to a group that I chaired called the International Network Working Group. We refined the paper and published it formally in May of 1974: a description of how the internet would work.

Wired: Did you have any idea back then what the internet would develop into?

Cerf: People often ask, ‘How could you possibly have imagined what’s happening today?’ And of course, you know, we didn’t. But it’s also not honest to roll that answer off as saying we didn’t have any idea what we had done, or what the opportunity was.

You need to appreciate that by that time, mid-July ’73, we had two years of experience with e-mail. We had a substantial amount of experience with Doug Engelbart’s system at SRI called The Online System. That system, for all practical purposes, was a one-computer world wide web. It had documents that pointed to each other using hyperlinks. Engelbart invented the mouse that pointed to things on the screen. [...] So we had those experiences, plus remote access through the net to the time-sharing machines, which is the Telnet protocol …. So we had all that experience as we were thinking our way through the internet design.

The big deal about the internet design was that you could have an arbitrarily large number of networks and they would all work together. And the theory we had was that if we just specified what the protocols would look like and what software you needed to write, anybody who wanted to build a piece of internet would do that and find somebody who would be willing to connect to them. Then the system would grow organically because it didn’t have any central control.

And that’s exactly what happened.

The network has grown mostly organically. The closest thing that was in any way close to central control is the Internet Corporation for Assigned Names and Numbers (ICANN), and its job was to allocate internet address space and oversee the domain name system, which was not invented until 1984.

So, we were in this early stage. We were struggling to make sure that the protocols were as robust as possible. We went through several implementations of them until finally we started implementing them on as many different operating systems as we could. And by January 1st, 1983, we launched the internet.

That’s where it is dated as operational and that’s nearly 30 years ago, which is pretty incredible.

[. . .]

Wired: So from the beginning, people, including yourself, had a vision of where the internet was going to go. Are you surprised, though, that at this point the IP protocol seems to beat almost anything it comes up against?

Cerf: I’m not surprised at all because we designed it to do that.

This was very conscious. Something we did right at the very beginning, when we were writing the specifications, we wanted to make this a future-proof protocol. And so the tactic that we used to achieve that was to say that the protocol did not know how — the packets of the internet protocol layer didn’t know how they were being carried. And they didn’t care whether it was a satellite link or mobile radio link or an optical fiber or something else.

We were very, very careful to isolate that protocol layer from any detailed knowledge of how it was being carried. Plainly, the software had to know how to inject it into a radio link, or inject it into an optical fiber, or inject it into a satellite connection. But the basic protocol didn’t know how that worked.

And the other thing that we did was to make sure that the network didn’t know what the packets had in them. We didn’t encrypt them to prevent it from knowing — we just didn’t make it have to know anything. It’s just a bag of bits as far as the net was concerned.

We were very successful in these two design features, because every time a new kind of communications technology came along, like frame relay or asynchronous transfer mode or passive optical networking or mobile radio, all of these different ways of communicating could carry internet packets.

We would hear people saying, ‘The internet will be replaced by X.25,’ or ‘The internet will be replaced by frame relay,’ or ‘The internet will be replaced by ATM,’ or ‘The internet will be replaced by add-and-drop multiplexers.’

Of course, the answer is, ‘No, it won’t.’ It just runs on top of everything. And that was by design. I’m actually very proud of the fact that we thought of that and carefully designed that capability into the system.
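
As a concrete illustration of that carrier-agnostic design, here is a small Python sketch. The two toy "link layers" below are invented purely for this example (they are not real radio or optical framing formats), but they show the point Cerf is making: the packet is an opaque bag of bits that each carrier can wrap however it likes, without either layer looking inside it.

def frame_for_radio(ip_packet: bytes) -> bytes:
    """Toy 'radio link': prepend a sync preamble and append a one-byte checksum."""
    preamble = b"\x55" * 4
    checksum = (sum(ip_packet) & 0xFF).to_bytes(1, "big")
    return preamble + ip_packet + checksum

def frame_for_fiber(ip_packet: bytes) -> bytes:
    """Toy 'optical link': prepend a two-byte length header instead."""
    return len(ip_packet).to_bytes(2, "big") + ip_packet

# The "IP layer" hands over an opaque payload; it neither knows nor cares
# which link will carry it, and the links never interpret its contents.
packet = b"just a bag of bits as far as the net is concerned"

for name, framer in [("radio", frame_for_radio), ("fiber", frame_for_fiber)]:
    frame = framer(packet)
    print(f"{name}: {len(frame)} bytes on the wire; payload carried intact: {packet in frame}")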

Wired: Right. You mentioned TCP/IP not knowing what’s within the packets. Are you concerned with the growth of things like Deep Packet Inspection and telecoms interested in having more control over their networks?

Cerf: Yes, I am. I’ve been very noisy about that.

First of all, the DPI thing is easy to defeat. All you have to do is use end-to-end encryption. HTTPS is your friend in that case, or IPSEC is your friend. I don’t object to DPI when you’re trying to figure out what’s wrong with a network.
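
A minimal sketch of what "HTTPS is your friend" means in practice, using only the Python standard library (it assumes network access and uses example.com purely as a stand-in host): an on-path middlebox doing deep packet inspection can still see the TCP/IP headers and the TLS handshake, including the server name, but the HTTP request line, headers, and body below travel as ciphertext.

import socket
import ssl

host = "example.com"  # stand-in host for illustration
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        # Everything sent from here on is encrypted before it reaches the wire;
        # an observer between the endpoints sees ciphertext, not this request.
        tls_sock.sendall(
            f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode()
        )
        print("negotiated:", tls_sock.version(), tls_sock.cipher()[0])
        print("response:", tls_sock.recv(64).split(b"\r\n", 1)[0].decode())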

I am worried about two things: one is the network neutrality issue. That’s a business issue. The issue has to do with the lack of competition in broadband access and, therefore, the lack of discipline that competition would impose on the market. There is no discipline in the American market right now because there isn’t enough facilities-based competition for broadband service.

And although the FCC has tried to introduce net neutrality rules to avoid abusive practices like favoring your own services over others, they have struggled because there has been more than one court case in which it was asserted the FCC didn’t have the authority to punish ISPs for abusing their control over the broadband channel. So, I think that’s a serious problem.

The other thing I worry about is the introduction of IPv6, because technically we have run out of internet addresses — even though the original design called for a 32-bit address, which would have allowed for about 4.3 billion terminations if it had been efficiently used.

And we are clearly over-subscribed at this point. But it was only last year that we ran out. So one thing that I am anticipating is that on June 6 this year, all of those who can are going to turn on IPv6 capability.
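
A quick back-of-the-envelope check on those numbers, using Python's standard ipaddress module: a 32-bit address space holds about 4.3 billion addresses, while IPv6's 128-bit space is roughly 29 orders of magnitude larger, which is why the June 6, 2012 switch-on matters.

import ipaddress

ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2 ** 32
ipv6_total = ipaddress.ip_network("::/0").num_addresses        # 2 ** 128

print(f"IPv4: {ipv4_total:,} addresses (about 4.3 billion)")
print(f"IPv6: {ipv6_total:.2e} addresses")
print(f"IPv6 is 2**{(ipv6_total // ipv4_total).bit_length() - 1} times larger")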

[. . .]


Bob Frankston: From DIY to the Internet

by admin on Mar.12, 2012, under Uncategorized

(Original at Bob’s blog)

 

This talk is aimed at an audience that wants “more Internet”. But what does that mean?

To many people it means more high-speed connections to services like the Web and YouTube, because they are the public face of today’s Internet.

The Internet is limited by the policy choices we make, not by the technology. We don’t see connected devices such as medical monitors because they don’t work well over today’s infrastructure, and those who provide that infrastructure have no incentive to support such applications even if, literally, our lives depend on them.

We don’t see these applications because they are at odds with the business model we call telecommunications. It’s a business that assumes the network transports valuable content called “information”. But, as we’ll see, today we exchange bits, and bits in themselves are not information in the sense that humans understand it. Far more important for a business, the number of bits doesn’t correspond to the value of the information. It’s as if there were no difference in value between beautiful poems and doggerel that fills pages.

Bits are nothing more than letters of the alphabet and it’s hard to make a business based on the ability to control the supply of the letter “e”.

To understand how the Internet is different we have to step back to the days when those of us working with computers wanted to interconnect them. It was a Do-It-Yourself (DIY) effort. We were already using modems to repurpose the telephone network as a data transport.

To oversimplify history, when we had computers in the same room we simply connected a wire between them and then wrote software to send bits between the two. We could extend the range using radios. If we lost a message we could simply retransmit it.
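
Purely as an illustration of that "lose it, retransmit it" discipline (this is not Frankston's code or any real protocol, just a toy stop-and-wait sketch in Python), a sender keeps retrying over an unreliable channel until it hears an acknowledgment:

import random

def unreliable_send(message, loss_rate=0.5):
    """Toy channel: returns "ACK" only if the message happens to get through."""
    return None if random.random() < loss_rate else "ACK"

def send_with_retransmit(message, max_tries=10):
    """Stop-and-wait: keep retransmitting until the message is acknowledged."""
    for attempt in range(1, max_tries + 1):
        if unreliable_send(message) == "ACK":
            print(f"delivered {message!r} after {attempt} attempt(s)")
            return True
    return False

send_with_retransmit("hello from the machine across the room")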

Later we could extend the range by using modems that ran over telephone lines. It was all new and exciting and we were happy to simply be able to connect to a distant system even if we could do it at a far lower speed than our local networks and at a higher cost. That cost was typically borne by companies as overhead.

This approach works fine as long as we stay within the confines of that model. Innovations that require availability elsewhere are simply too difficult. The Internet has given us a sense of what is possible but before we can realize those possibilities we need to understand the motivations of those who own the paths that carry the bits and understand why they can’t extend their business model.

We can’t take a top down approach, as in expecting Congress and the FCC to make major policy changes. Fortunately, thanks to the very nature of the Internet we can still apply a DIY approach for local connectivity. This is the real Internet – today we can indeed reach across the globe but we have difficulty interconnecting with devices in the next apartment.

As we come to appreciate the value of peer connectivity we can extend the model beyond our homes and simply obviate the need for a telecommunications industry as it is presently constituted.

[Slide Commentary . . .]


Doc Searls: Edging Toward the Fully Licensed World

by admin on Mar.01, 2012, under Uncategorized

(Original at Doc Searls’ Weblog)

February 29, 2012

[. . .]

Nothing you watch on your cable or satellite systems is yours. In most cases the gear isn’t yours either. It’s a subscription service you rent and pay for monthly. Companies in the cable and telephone business would very much like the Internet to work the same way. Everything becomes billable, regularly, continuously. All digital pipes turn into metered spigots for “content” and services on the telephony model, where you pay for easily billable data forms such as minutes and texts. (If AT&T or Verizon ran email you’d pay by the message, or agree to a “deal” for X number of emails per month.)

Free public wi-fi is getting crowded out by cellular companies looking to move some of the data carrying load over to their own billable wi-fi systems. Some operators are looking to bill the sources of content for bandwidth while others experiment with usage-based pricing, helping turn the Net into a multi-tier commercial system. (Never mind that “data hogs” mostly aren’t.)

[. . .]

What’s hard for ["BigCo walled gardeners such as Apple and Amazon"] to grok — and for us as well as their users and customers — is that the free and open worlds created by generative systems such as PCs and the Internet have boundaries sufficiently wide to allow creation of what Umair Haque calls “thick value” in abundance. To Apple, Amazon, AT&T and Verizon, building private worlds for captive customers might look like thick value, but ultimately captive-customer husbandry closes more opportunities across the marketplace than it opens. Companies and governments do compete, but the market and civilization are games that support positive-sum outcomes for multiple players.

[. . .]


Dave Winer on Apple, Twitter and Tumblr: The Un-Internet

by admin on Jan.01, 2012, under Uncategorized

(Original at Dave’s Scripting News Blog)

The Un-Internet

By Dave Winer on Saturday, December 31, 2011 at 11:00 AM.

[. . .]

This time around, Apple has been the leader in the push to control users. They say they’re protecting users, and to some extent that is true. I can download software onto my iPad feeling fairly sure that it’s not going to harm the computer. I wouldn’t mind what Apple was doing if that’s all they did, keep the nasty bits off my computer. But of course, that’s not all they do. Nor could it be all they do. Once they took the power to decide what software could be distributed on their platform, it was inevitable that speech would be restricted too. I think of the iPad platform as Disneyfied. You wouldn’t see anything there that you wouldn’t see in a Disney theme park or in a Pixar movie.

The sad thing is that Apple is providing a bad example for younger, smaller companies like Twitter and Tumblr, who apparently want to control the “user experience” of their platforms in much the same way as Apple does.

[. . .]

My first experience with the Internet came as a grad student in the late 70s, but it wasn’t called the Internet then. I loved it because of its simplicity and the lack of controls. There was no one to say you could or couldn’t ship something. No gatekeeper. In the world it was growing up alongside, the mainframe world, the barriers were huge. An individual person couldn’t own a computer. To get access you had to go to work for a corporation, or study at a university.

Every time around the loop, since then, the Internet has served as the antidote to the controls that the tech industry would place on users. Every time, the tech industry has a rationale, with some validity, that wide-open access would be a nightmare. But eventually we overcome their barriers, and another layer comes on. And the upstarts become the installed-base, and they make the same mistakes all over again.

[. . .]

