Section 23.5
The History of Protocols

Originally, networking was confined to very small setups, usually within one building. In fact, most networks started out as ways for terminals and other peripheral equipment to transmit data to the CPU, rather than for several different CPUs to share files or mail. This was in the mid-1960s, when large timesharing mainframes were being established and there was a need to connect hundreds of terminals to a central CPU. Then someone got the idea of sending data over telephone lines, so modems were invented to convert the computer's digital signals into analog signals that the telephone lines could carry.

Eventually, someone got the idea that CPUs might want to communicate directly with each other, especially long distance. The airline reservation system SABRE is an early example of a large distributed application where the "program" was never-ending and actually spread over many computers and terminal controllers over a wide geographic area.

As corporations began to employ tens, then hundreds, and finally thousands of CPUs in their various offices, they wanted the machines to communicate in some faster fashion than "sneakernet," a facetious name for carrying tapes or diskettes across machine rooms or sending them through the mail. Building on the modem, these corporations asked vendors to develop complete networking solutions for them, and many of the large vendors did, such as IBM with its SNA (Systems Network Architecture) and DEC with its DNA (Digital Network Architecture).

All of these proprietary systems were flawed in that they didn't allow communication between computers of different vendors. But back in the 1960s, nobody thought that someday almost all computers would be connected. Nobody but the military, that is.

In 1969, the United States DoD (Department of Defense) initiated a research project under ARPA (the Advanced Research Projects Agency) to see whether a robust computer network could be built. This project became the highly successful ARPANET, which spawned the TCP/IP protocols and became the core of today's Internet.

TCP/IP is actually a set (or suite) of protocols. In 1973, the DoD threw away the original protocols used on the ARPANET because they weren't scaling well. That is, those protocols no longer functioned efficiently as the number of computers on the ARPANET mushroomed from 4 in 1969 to around 60. So Vint Cerf and Robert Kahn headed the team that wrote their replacements, and those protocols, the so-called TCP/IP suite, are still in constant use today and will probably remain important for at least another decade. They have scaled admirably: today there are millions of hosts worldwide.

TCP stands for Transmission Control Protocol and is used to establish a reliable two-way connection over which files and email can be sent and remote login sessions can be conducted. IP stands for Internet Protocol and is the main way that packets cross the many individual networks through gateways in order to get to their destinations. Here are a few of the other protocols in the suite:

UDP      User Datagram Protocol, used for short messages that do not require
         reliable delivery
SMTP     Simple Mail Transfer Protocol, used for email over the Internet
HTTP     Hypertext Transfer Protocol, used to transmit web pages and HTML
         documents
FTP      File Transfer Protocol, used to reliably send files
Telnet   Remote login and terminal emulation, used to turn a terminal hooked
         to a computer into a terminal of a remote computer
SNMP     Simple Network Management Protocol, used to control gateways and
         hosts and keep statistics on usage and problems
ICMP     Internet Control Message Protocol, used by gateways to report
         conditions on the networks and to share routing information

Some of these protocols are accompanied by user programs of the same name, such as ftp and telnet. Others, such as SMTP and HTTP, are used by programs that do not share their names (mail programs and web browsers, for example). Still others, like UDP, SNMP, and ICMP, work hidden behind the scenes, but without their functionality the Internet would not work at all.
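
To make the division of labor concrete, here is a minimal sketch in Python (not from the text) that opens a TCP connection and sends a bare-bones HTTP request over it. TCP supplies the reliable two-way byte stream, IP routes the packets underneath, and HTTP is just text carried on top; the host name example.com is only a placeholder for any reachable web server.

    # A minimal sketch, assuming Python 3; example.com is a placeholder host.
    import socket

    HOST = "example.com"   # hypothetical web server
    PORT = 80              # the standard HTTP port

    # Open a TCP connection; underneath, IP routes the packets through
    # whatever gateways lie between this machine and the server.
    with socket.create_connection((HOST, PORT)) as conn:
        # HTTP is simply text sent over the reliable TCP byte stream.
        request = "GET / HTTP/1.0\r\nHost: " + HOST + "\r\n\r\n"
        conn.sendall(request.encode("ascii"))

        # Read the reply until the server closes its end of the connection.
        reply = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            reply += chunk

    print(reply[:200])     # the beginning of the server's HTTP response

The same pattern, a host name and port number on top of a TCP stream, underlies ftp, telnet, and SMTP clients as well; only the text they exchange differs.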

The individual vendors still have their own proprietary systems, but they often provide TCP/IP software so their computers can be hooked to the Internet. It is not uncommon for one computer to have several different sets of networking software and to use all of them simultaneously.

Another set of protocols, developed in the 1970s as a rational approach to multi-vendor interoperability, was the ISO/OSI model. ISO is the International Organization for Standardization, and OSI stands for Open Systems Interconnection. This set of protocols never quite got off the ground because the programs to implement it were not in place by the late 1980s, when the Internet exploded and TCP/IP became the dominant interoperable networking model. However, the OSI model was incredibly important theoretically and as an ideal to aim for. Many believe that TCP/IP's successor will be a cleaner, updated form of OSI.