Archive for April, 2012
Vint Cerf in Wired: We Knew What We Were Unleashing on the World
by The Internet Distinction on Apr.23, 2012, under Uncategorized
(Original at Wired Magazine)
Wired: So how did you come to be the author of the TCP/IP protocol?
Vinton Cerf: Bob Kahn and I had worked together on the Arpanet project, which was funded by ARPA. It was an attempt at doing a national-scale packet-switching experiment to see whether computers could usefully be interconnected through this packet-switching medium. In 1970, there was a single telephone company in the United States called AT&T, and its technology was called circuit switching, and that was all any telecom engineer worried about.
We had a different idea, and I can’t claim any responsibility for suggesting the use of packet switching. That was really three other people, working independently, who suggested that idea simultaneously in the 1960s. So by the time I got involved in all of this, I was a graduate student at UCLA, working with my colleague and very close friend Steve Crocker, who is now the chairman of ICANN, a position I held for about a year.
Part of our job was to figure out what the software should look like for computers connecting to each other through this Arpanet. It was very successful; there was a big public demonstration in October of 1972, which was organized by Kahn. After the October demo was done, Bob went to ARPA and I went to Stanford.
So in early 1973, Bob appears in my lab at Stanford and says, ‘I have a problem.’ My first question is, ‘What’s the problem?’ He said, ‘We now have the Arpanet working, and we are now thinking: how do we use computers in command and control?’
If we wanted to use a computer to organize our resources, a smaller group might defeat a larger one because it is managing its resources better with the help of computers. The problem is that if you are serious about using computers, you better be able to put them in mobile vehicles, ships at sea, and aircraft, as well as at fixed installations.
At that point, the only experience we had was with fixed installations of the Arpanet. So he had already begun thinking about what he called open networking, and he believed you might optimize a radio network differently than a satellite network for ships at sea, which in turn might be different from what you do with dedicated telephone lines.
So we had multiple networks, in his formulation, all of them packet-switched, but with different characteristics. Some were larger, some went faster, some had packets that got lost, some didn’t. So the question was: how can you make all the computers on each of those various networks think they are part of one common network, despite all this variation and diversity?
That was the internet problem.
In September 1973, I presented a paper to a group that I chaired called the International Network Working Group. We refined the paper and published it formally in May of 1974 as a description of how the internet would work.
Wired: Did you have any idea back then what the internet would develop into?
Cerf: People often ask, ‘How could you possibly have imagined what’s happening today?’ And of course, you know, we didn’t. But it’s also not honest to roll that answer off as saying we didn’t have any idea what we had done, or what the opportunity was.
You need to appreciate that by that time, mid-July ’73, we had two years of experience with e-mail. We had a substantial amount of experience with Doug Engelbart’s system at SRI called The Online System. That system, for all practical purposes, was a one-computer world wide web. It had documents that pointed to each other using hyperlinks. Engelbart invented the mouse that pointed to things on the screen. […] So we had those experiences, plus remote access through the net to the time-sharing machines, which was the Telnet protocol…. So we had all that experience as we were thinking our way through the internet design.
The big deal about the internet design was that you could have an arbitrarily large number of networks and they would all work together. And the theory we had was that if we just specified what the protocols would look like and what software you needed to write, anybody who wanted to build a piece of the internet could do that and find somebody who would be willing to connect to them. Then the system would grow organically because it didn’t have any central control.
And that’s exactly what happened.
The network has grown mostly organically. The only thing that comes anywhere close to central control is the Internet Corporation for Assigned Names and Numbers (ICANN), and its job is to allocate internet address space and oversee the domain name system, which was not invented until 1984.
So, we were in this early stage. We were struggling to make sure that the protocols were as robust as possible. We went through several implementations of them until finally we started implementing them on as many different operating systems as we could. And on January 1st, 1983, we launched the internet.
That’s when it is dated as becoming operational, and that’s nearly 30 years ago, which is pretty incredible.
[. . .]
Wired: So from the beginning, people, including yourself, had a vision of where the internet was going to go. Are you surprised, though, that at this point the IP protocol seems to beat almost anything it comes up against?
Cerf: I’m not surprised at all because we designed it to do that.
This was very conscious. Something we did right at the very beginning, when we were writing the specifications, we wanted to make this a future-proof protocol. And so the tactic that we used to achieve that was to say that the protocol did not know how — the packets of the internet protocol layer didn’t know how they were being carried. And they didn’t care whether it was a satellite link or mobile radio link or an optical fiber or something else.
We were very, very careful to isolate that protocol layer from any detailed knowledge of how it was being carried. Plainly, the software had to know how to inject it into a radio link, or inject it into an optical fiber, or inject it into a satellite connection. But the basic protocol didn’t know how that worked.
And the other thing that we did was to make sure that the network didn’t know what the packets had in them. We didn’t encrypt them to prevent it from knowing — we just didn’t make it have to know anything. It’s just a bag of bits as far as the net was concerned.
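[To make those two design decisions concrete, here is a minimal Python sketch of my own, not something from the interview: an IPv4 header laid out per RFC 791 records nothing about the link technology underneath it, and the payload it carries is just opaque bytes.]

```python
# A minimal sketch (not from the interview) of the layering Cerf describes:
# the IPv4 header says nothing about the link it will travel over, and the
# payload is an opaque "bag of bits" as far as the network layer is concerned.
import struct

def build_ipv4_packet(src: bytes, dst: bytes, payload: bytes, proto: int = 17) -> bytes:
    """Assemble a bare IPv4 packet (header checksum left at zero for brevity)."""
    version_ihl = (4 << 4) | 5           # IPv4, header length of 5 x 32-bit words
    total_length = 20 + len(payload)     # header plus payload, in bytes
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,    # version/IHL, type of service, total length
        0, 0,                            # identification, flags/fragment offset
        64, proto, 0,                    # TTL, protocol (17 = UDP), checksum placeholder
        src, dst,                        # 4-byte source and destination addresses
    )
    # Nothing above says whether this packet will ride a satellite link, a
    # mobile radio, or an optical fiber, and nothing inspects the payload.
    return header + payload

pkt = build_ipv4_packet(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", b"any bytes at all")
```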
We were very successful in these two design features, because every time a new kind of communications technology came along, like frame relay or asynchronous transfer mode or passive optical networking or mobile radio, all of these different ways of communicating could carry internet packets.
We would hear people saying, ‘The internet will be replaced by X.25,’ or ‘The internet will be replaced by frame relay,’ or ‘The internet will be replaced by ATM,’ or ‘The internet will be replaced by add-and-drop multiplexers.’
Of course, the answer is, ‘No, it won’t.’ It just runs on top of everything. And that was by design. I’m actually very proud of the fact that we thought of that and carefully designed that capability into the system.
Wired: Right. You mentioned TCP/IP not knowing what’s within the packets. Are you concerned about the growth of things like deep packet inspection and telecoms being interested in having more control over their networks?
Cerf: Yes, I am. I’ve been very noisy about that.
First of all, the DPI thing is easy to defeat. All you have to do is use end-to-end encryption. HTTPS is your friend in that case, or IPSEC is your friend. I don’t object to DPI when you’re trying to figure out what’s wrong with a network.
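[A small illustration of the ‘HTTPS is your friend’ point, using Python’s standard library rather than anything named in the interview: once the TLS session is established, equipment on the path between the two endpoints sees only encrypted records.]

```python
# End-to-end encryption in practice: the request and response below travel
# inside a TLS session, so a deep-packet inspector on the path sees TCP/IP
# headers and ciphertext, not the request path, headers, or body.
import http.client

conn = http.client.HTTPSConnection("example.com")   # placeholder host
conn.request("GET", "/")
response = conn.getresponse()
body = response.read()       # plaintext exists only at the two endpoints
conn.close()

# IPsec does the same job one layer down, wrapping even the transport
# headers inside an encrypted tunnel between two hosts or networks.
```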
I am worried about two things: one is the network neutrality issue. That’s a business issue. The issue has to do with the lack of competition in broadband access and, therefore, the lack of discipline that competition would bring to the market. There is no discipline in the American market right now because there isn’t enough facilities-based competition for broadband service.
And although the FCC has tried to introduce net neutrality rules to avoid abusive practices like favoring your own services over others, they have struggled because there has been more than one court case in which it was asserted the FCC didn’t have the authority to punish ISPs for abusing their control over the broadband channel. So, I think that’s a serious problem.
The other thing I worry about is the introduction of IPv6, because technically we have run out of internet addresses, even though the original design called for a 32-bit address, which would have allowed for about 4.3 billion terminations if it had been used efficiently.
And we are clearly over-subscribed at this point. But it was only last year that we ran out. So one thing that I am anticipating is that on June 6 this year, all of those who can are going to turn on IPv6 capability.
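[The arithmetic behind the address shortage, as a quick sketch of my own: a 32-bit address space tops out at about 4.3 billion addresses, which is why IPv6 moves to 128 bits.]

```python
# Size of the IPv4 and IPv6 address spaces: the 32-bit IPv4 space runs out
# at roughly 4.3 billion addresses; the 128-bit IPv6 space is about 3.4e38.
ipv4_space = 2 ** 32      # 4,294,967,296 addresses
ipv6_space = 2 ** 128

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: about {ipv6_space:.2e}")
print(f"Expansion factor: 2**96 = {2 ** 96:.2e}")
```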