Wednesday, May 27, 2015

Brief History of the Internet

http://www.internetsociety.org/internet/what-internet/history-internet/brief-history-internet


Introduction

The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "bleiner@computer.org" and "http://www.acm.org" trip lightly off the tongue of the random person on the street. 1
This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet. 2
In this paper,3 several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher-level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.
The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects - technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.

Origins of the Internet

The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA,4 starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.
Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Mass. to the Q-32 in California with a low speed dial-up telephone line creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.
In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps. 5
In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP's). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP's with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA. 6
Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC's.
One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.
Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.
In October 1972, Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.

The Initial Internetting Concepts

The original ARPANET grew into the Internet. Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture". Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.
In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.
The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called "Internetting". Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.
However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard). NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts. Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.
Four ground rules were critical to Kahn's early thinking:
  • Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
  • Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.
  • Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
  • There would be no global control at the operations level.
Other key issues that needed to be addressed were:
  • Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.
  • Providing for host-to-host "pipelining" so that multiple packets could be enroute from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
  • Gateway functions to allow it to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
  • The need for end-end checksums, reassembly of packets from fragments, and detection of duplicates, if any (a small checksum sketch follows this list).
  • The need for global addressing
  • Techniques for host-to-host flow control.
  • Interfacing with the various operating systems
  • There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.
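As referenced in the checksum item above, the following is a minimal sketch of the kind of 16-bit ones'-complement end-end checksum that the Internet protocols eventually standardized (RFC 1071); the function name and the sample payload are illustrative only, and this is not the historical implementation.

```python
# Illustrative 16-bit ones'-complement checksum (RFC 1071 style), shown only
# to make the "end-end checksum" idea concrete; not the historical code.

def internet_checksum(data: bytes) -> int:
    """Return the 16-bit ones'-complement checksum of `data`."""
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        word = (data[i] << 8) | data[i + 1]        # combine two bytes into a 16-bit word
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the low 16 bits
    return (~total) & 0xFFFF                       # ones' complement of the folded sum

if __name__ == "__main__":
    segment = b"example payload"                   # arbitrary illustrative data
    print(hex(internet_checksum(segment)))
```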
Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled "Communications Principles for Operating Systems". At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn's architectural approach to the communications side and with Cerf's NCP experience, they teamed up to spell out the details of what became TCP/IP.
The give and take was highly productive and the first written version7 of the resulting approach was distributed at a special meeting of the International Network Working Group (INWG) which had been set up at a conference at Sussex University in September 1973. Cerf had been invited to chair this group and used the occasion to hold a meeting of INWG members who were heavily represented at the Sussex Conference.
Some basic approaches emerged from this collaboration between Kahn and Cerf:
  • Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.
  • Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge and each ack returned would be cumulative for all packets received to that point (a toy sketch of cumulative acknowledgment follows this list).
  • It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.
  • Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national level networks like ARPANET, of which only a relatively small number were expected to exist. Thus a 32-bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network. This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.
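As referenced in the flow-control item above, here is a toy sketch of cumulative acknowledgments over a byte stream: the receiver acknowledges the next byte position it expects, which implicitly covers everything received before it. The class name and buffering details are illustrative assumptions, not the historical TCP design.

```python
# Toy illustration of cumulative acknowledgments over a byte stream: the
# receiver ACKs the next in-order byte offset it expects, so one ACK covers
# every earlier byte. Purely illustrative; not TCP itself.

class CumulativeReceiver:
    def __init__(self):
        self.expected = 0          # next in-order byte offset we want
        self.out_of_order = {}     # offset -> data received ahead of order

    def receive(self, offset: int, data: bytes) -> int:
        """Accept a segment and return the cumulative ACK (next expected offset)."""
        if offset <= self.expected:
            # In-order (or overlapping) data advances the expected pointer.
            self.expected = max(self.expected, offset + len(data))
            # Drain any buffered segments that are now contiguous.
            while self.expected in self.out_of_order:
                seg = self.out_of_order.pop(self.expected)
                self.expected += len(seg)
        else:
            self.out_of_order[offset] = data       # hold until the gap is filled
        return self.expected                       # one ACK covers all bytes before it

recv = CumulativeReceiver()
print(recv.receive(0, b"hello"))    # -> 5
print(recv.receive(10, b"world"))   # gap: ACK stays at 5
print(recv.receive(5, b"-----"))    # gap filled: ACK jumps to 15
```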
The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets. However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.
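The split into TCP and UDP is still visible in today's socket interfaces. The following self-contained sketch, using Python's standard socket module with arbitrary local addresses and payloads, simply contrasts a UDP datagram exchange with a TCP stream connection; it illustrates the distinction described above, not any historical implementation.

```python
# Minimal contrast between UDP (independent datagrams, no delivery guarantees)
# and TCP (a reliable, ordered byte stream). Ports and payloads are arbitrary.
import socket

# --- UDP: each send is an independent datagram; losses are the app's problem ---
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))                    # let the OS pick a free port
udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"datagram", udp_rx.getsockname())
print("UDP got:", udp_rx.recvfrom(2048)[0])

# --- TCP: a connection carrying an ordered, reliable byte stream ---
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.create_connection(listener.getsockname())
server_side, _ = listener.accept()
client.sendall(b"stream of bytes")
print("TCP got:", server_side.recv(2048))

for s in (udp_rx, udp_tx, client, server_side, listener):
    s.close()
```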
A major initial motivation for both the ARPANET and the Internet was resource sharing - for example allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.
There were other applications proposed in the early days of the Internet, including packet based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early "worm" programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is the general purpose nature of the service provided by TCP and IP that makes this possible.

Proving the Ideas

DARPA let three contracts to Stanford (Cerf), BBN (Ray Tomlinson) and UCL (Peter Kirstein) to implement TCP/IP (it was simply called TCP in the Cerf/Kahn paper but contained both components). The Stanford team, led by Cerf, produced the detailed specification and within about a year there were three independent implementations of TCP that could interoperate.
This was the beginning of long term experimentation and development to evolve and mature the Internet concepts and technology. Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment has grown to incorporate essentially every form of network and a very broad-based research and development community. [REK78] With each expansion has come new challenges.
The early implementations of TCP were done for large time sharing systems such as Tenex and TOPS 20. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible. They produced an implementation, first for the Xerox Alto (the early personal workstation developed at Xerox PARC) and then for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time-sharing systems, could be a part of the Internet. In 1976, Kleinrock published the first book on the ARPANET. It included an emphasis on the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very wide community.
Widespread development of LANs, PCs and workstations in the 1980s allowed the nascent Internet to flourish. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet and PCs and workstations the dominant computers. This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks has resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large national scale networks (small number of networks with large numbers of hosts); Class B represented regional scale networks; and Class C represented local area networks (large number of networks with relatively few hosts).
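A minimal sketch of how the class of an address was determined from the value of its first octet under this classful scheme, and hence how many bits named the network versus the host; the helper name and example addresses are illustrative assumptions, not part of the original article.

```python
# Sketch of classful address interpretation as described above: the leading
# bits of the first octet determined the class, and therefore how many bits
# named the network versus the host. Example addresses are arbitrary.

def classify(addr: str):
    first = int(addr.split(".")[0])
    if first < 128:                 # leading bit 0
        return "A", 8               # 8 network bits, 24 host bits
    if first < 192:                 # leading bits 10
        return "B", 16              # 16 network bits, 16 host bits
    if first < 224:                 # leading bits 110
        return "C", 24              # 24 network bits, 8 host bits
    return "D/E", None              # multicast / experimental

for a in ["10.1.2.3", "172.16.5.9", "192.0.2.1"]:
    print(a, "-> class", classify(a))
```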
A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses. The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable distributed mechanism for resolving hierarchical host names (e.g., www.acm.org) into an Internet address.
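Today the distributed lookup the DNS introduced sits behind a single resolver call; the minimal illustration below uses the www.acm.org example from the text and assumes network access and that the name is still resolvable.

```python
# Resolving a hierarchical host name to an IP address through the DNS, using
# the standard library resolver. Any resolvable name would work.
import socket

print(socket.gethostbyname("www.acm.org"))
```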
The increase in the size of the Internet also challenged the capabilities of the routers. Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. This design permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness and scale could be accommodated. Not only the routing algorithm, but the size of the addressing tables, stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.
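As a small illustration of the address aggregation that CIDR makes possible, the sketch below collapses two adjacent /25 prefixes into a single /24 using Python's standard ipaddress module; the prefixes are arbitrary documentation addresses, not real routes.

```python
# Illustration of CIDR route aggregation: two adjacent /25 prefixes can be
# advertised as a single /24, shrinking the routing table.
import ipaddress

routes = [ipaddress.ip_network("198.51.100.0/25"),
          ipaddress.ip_network("198.51.100.128/25")]
print(list(ipaddress.collapse_addresses(routes)))   # -> [IPv4Network('198.51.100.0/24')]
```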
As the Internet evolved, one of the major challenges was how to propagate the changes to the software, particularly the host software. DARPA supported UC Berkeley to investigate modifications to the Unix operating system, including incorporating TCP/IP developed at BBN. Although Berkeley later rewrote the BBN code to more efficiently fit into the Unix system and kernel, the incorporation of TCP/IP into the Unix BSD system releases proved to be a critical element in dispersion of the protocols to the research community. Much of the CS research community began to use Unix BSD for their day-to-day computing environment. Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the successful widespread adoption of the Internet.
One of the more interesting challenges was the transition of the ARPANET host protocol from NCP to TCP/IP as of January 1, 1983. This was a "flag-day" style transition, requiring all hosts to convert simultaneously or be left having to communicate via rather ad-hoc mechanisms. This transition was carefully planned within the community over several years before it actually took place and went surprisingly smoothly (but resulted in a distribution of buttons saying "I survived the TCP/IP transition").
TCP/IP was adopted as a defense standard three years earlier in 1980. This enabled defense to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, ARPANET was being used by a significant number of defense R&D and operational organizations. The transition of ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs.
Thus, by 1985, Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications. Electronic mail was being used broadly across several communities, often with different systems, but interconnection between different mail systems was demonstrating the utility of broad based electronic communications between people.

Transition to Widespread Infrastructure

At the same time that the Internet technology was being experimentally validated and widely used amongst a subset of computer science researchers, other networks and networking technologies were being pursued. The usefulness of computer networking - especially electronic mail - demonstrated by DARPA and Department of Defense contractors on the ARPANET was not lost on other communities and disciplines, so that by the mid-1970s computer networks had begun to spring up wherever funding could be found for the purpose. The U.S. Department of Energy (DoE) established MFENet for its researchers in Magnetic Fusion Energy, whereupon DoE's High Energy Physicists responded by building HEPNet. NASA Space Physicists followed with SPAN, and Rick Adrion, David Farber, and Larry Landweber established CSNET for the (academic and industrial) Computer Science community with an initial grant from the U.S. National Science Foundation (NSF). AT&T's free-wheeling dissemination of the UNIX computer operating system spawned USENET, based on UNIX' built-in UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman devised BITNET, which linked academic mainframe computers in an "email as card images" paradigm.
With the exception of BITNET and USENET, these early networks (including ARPANET) were purpose-built - i.e., they were intended for, and largely restricted to, closed communities of scholars; there was hence little pressure for the individual networks to be compatible and, indeed, they largely were not. In addition, alternate technologies were being pursued in the commercial sector, including XNS from Xerox, DECNet, and IBM's SNA.8 It remained for the British JANET (1984) and U.S. NSFNET (1985) programs to explicitly announce their intent to serve the entire higher education community, regardless of discipline. Indeed, a condition for a U.S. university to receive NSF funding for an Internet connection was that "... the connection must be made available to ALL qualified users on campus."
In 1985, Dennis Jennings came from Ireland to spend a year at NSF leading the NSFNET program. He worked with the community to help NSF make a critical decision - that TCP/IP would be mandatory for the NSFNET program. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide area networking infrastructure to support the general academic and research community, along with the need to develop a strategy for establishing such infrastructure on a basis ultimately independent of direct federal funding. Policies and strategies were adopted (see below) to achieve that end.
NSF also elected to support DARPA's existing Internet organizational infrastructure, hierarchically arranged under the (then) Internet Activities Board (IAB). The public declaration of this choice was the joint authorship by the IAB's Internet Engineering and Architecture Task Forces and by NSF's Network Technical Advisory Group of RFC 985 (Requirements for Internet Gateways), which formally ensured interoperability of DARPA's and NSF's pieces of the Internet.
In addition to the selection of TCP/IP for the NSFNET program, Federal agencies made and implemented several other policy decisions which shaped the Internet of today.
  • Federal agencies shared the cost of common infrastructure, such as trans-oceanic circuits. They also jointly supported "managed interconnection points" for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) built for this purpose served as models for the Network Access Points and "*IX" facilities that are prominent features of today's Internet architecture.
  • To coordinate this sharing, the Federal Networking Council9 was formed. The FNC also cooperated with other international organizations, such as RARE in Europe, through the Coordinating Committee on Intercontinental Research Networking, CCIRN, to coordinate Internet support of the research community worldwide.
  • This sharing and cooperation between agencies on Internet-related issues had a long history. An unprecedented 1981 agreement between Farber, acting for CSNET and the NSF, and DARPA's Kahn, permitted CSNET traffic to share ARPANET infrastructure on a statistical and no-metered-settlements basis.
  • Subsequently, in a similar mode, the NSF encouraged its regional (initially academic) networks of the NSFNET to seek commercial, non-academic customers, expand their facilities to serve them, and exploit the resulting economies of scale to lower subscription costs for all.
  • On the NSFNET Backbone - the national-scale segment of the NSFNET - NSF enforced an "Acceptable Use Policy" (AUP) which prohibited Backbone usage for purposes "not in support of Research and Education." The predictable (and intended) result of encouraging commercial network traffic at the local and regional level, while denying its access to national-scale transport, was to stimulate the emergence and/or growth of "private", competitive, long-haul networks such as PSI, UUNET, ANS CO+RE, and (later) others. This process of privately-financed augmentation for commercial uses was thrashed out starting in 1988 in a series of NSF-initiated conferences at Harvard's Kennedy School of Government on "The Commercialization and Privatization of the Internet" - and on the "com-priv" list on the net itself.
  • In 1988, a National Research Council committee, chaired by Kleinrock and with Kahn and Clark as members, produced a report commissioned by NSF titled "Towards a National Research Network". This report was influential on then Senator Al Gore, and ushered in high speed networks that laid the networking foundation for the future information superhighway.
  • In 1994, a National Research Council report, again chaired by Kleinrock (and with Kahn and Clark as members again), entitled "Realizing The Information Future: The Internet and Beyond" was released. This report, commissioned by NSF, was the document in which a blueprint for the evolution of the information superhighway was articulated and which has had a lasting effect on the way to think about its evolution. It anticipated the critical issues of intellectual property rights, ethics, pricing, education, architecture and regulation for the Internet.
  • NSF's privatization policy culminated in April, 1995, with the defunding of the NSFNET Backbone. The funds thereby recovered were (competitively) redistributed to regional networks to buy national-scale Internet connectivity from the now numerous, private, long-haul networks.
The backbone had made the transition from a network built from routers out of the research community (the "Fuzzball" routers from David Mills) to commercial equipment. In its 8 1/2 year lifetime, the Backbone had grown from six nodes with 56 kbps links to 21 nodes with multiple 45 Mbps links. It had seen the Internet grow to over 50,000 networks on all seven continents and outer space, with approximately 29,000 networks in the United States.
Such was the weight of the NSFNET program's ecumenism and funding ($200 million from 1986 to 1995) - and the quality of the protocols themselves - that by 1990 when the ARPANET itself was finally decommissioned10, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide, and IP was well on its way to becoming THE bearer service for the Global Information Infrastructure.

The Role of Documentation

A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols.
The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks.
In 1969 a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (or RFC) series of notes. These memos were intended to be an informal fast distribution way to share ideas with other network researchers. At first the RFCs were printed on paper and distributed via snail mail. As the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Now, of course, the RFCs are easily accessed via the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continued to play until his death, October 16, 1998.
The effect of the RFCs was to create a positive feedback loop, with ideas or proposals presented in one RFC triggering another RFC with additional ideas, and so on. When some consensus (or at least a consistent set of ideas) had come together a specification document would be prepared. Such a specification would then be used as the base for implementations by the various research teams.
Over time, the RFCs have become more focused on protocol standards (the "official" specifications), though there are still informational RFCs that describe alternate approaches, or provide background information on protocols and engineering issues. The RFCs are now viewed as the "documents of record" in the Internet engineering and standards community.
The open access to the RFCs (for free, if you have any kind of a connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used for examples in college classes and by entrepreneurs developing new systems.
Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering. The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed - RFCs were presented by joint authors with a common view, independent of their locations.
Specialized email mailing lists have long been used in the development of protocol specifications, and they continue to be an important tool. The IETF now has in excess of 75 working groups, each working on a different aspect of Internet engineering. Each of these working groups has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document it may be distributed as an RFC.
As the current rapid expansion of the Internet is fueled by the realization of its capability to promote information sharing, we should understand that the network's first role in information sharing was sharing the information about its own design and operation through the RFC documents. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet.

Formation of the Broad Community

The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs as well as utilizing the community in an effective way to push the infrastructure forward. This community spirit has a long history beginning with the early ARPANET. The early ARPANET researchers worked as a close-knit community to accomplish the initial demonstrations of packet switching technology described earlier. Likewise, the Packet Satellite, Packet Radio and several other DARPA computer science research programs were multi-contractor collaborative activities that heavily used whatever available mechanisms there were to coordinate their efforts, starting with electronic mail and adding file sharing, remote access, and eventually World Wide Web capabilities. Each of these programs formed a working group, starting with the ARPANET Network Working Group. Because of the unique role that ARPANET played as an infrastructure supporting the various research programs, as the Internet started to evolve, the Network Working Group evolved into Internet Working Group.
In the late 1970s, recognizing that the growth of the Internet was accompanied by a growth in the size of the interested research community and therefore an increased need for coordination mechanisms, Vint Cerf, then manager of the Internet Program at DARPA, formed several coordination bodies - an International Cooperation Board (ICB), chaired by Peter Kirstein of UCL, to coordinate activities with some cooperating European countries centered on Packet Satellite research, an Internet Research Group which was an inclusive group providing an environment for general exchange of information, and an Internet Configuration Control Board (ICCB), chaired by Clark. The ICCB was an invitational body to assist Cerf in managing the burgeoning Internet activity.
In 1983, when Barry Leiner took over management of the Internet research program at DARPA, he and Clark recognized that the continuing growth of the Internet community demanded a restructuring of the coordination mechanisms. The ICCB was disbanded and in its place a structure of Task Forces was formed, each focused on a particular area of the technology (e.g. routers, end-to-end protocols, etc.). The Internet Activities Board (IAB) was formed from the chairs of the Task Forces.
It of course was only a coincidence that the chairs of the Task Forces were the same people as the members of the old ICCB, and Dave Clark continued to act as chair. After some changing membership on the IAB, Phill Gross became chair of a revitalized Internet Engineering Task Force (IETF), at the time merely one of the IAB Task Forces. As we saw above, by 1985 there was a tremendous growth in the more practical/engineering side of the Internet. This growth resulted in an explosion in the attendance at the IETF meetings, and Gross was compelled to create substructure to the IETF in the form of working groups.
This growth was complemented by a major expansion in the community. No longer was DARPA the only major player in the funding of the Internet. In addition to NSFNet and the various US and international government-funded activities, interest in the commercial sector was beginning to grow. Also in 1985, both Kahn and Leiner left DARPA and there was a significant decrease in Internet activity at DARPA. As a result, the IAB was left without a primary sponsor and increasingly assumed the mantle of leadership.
The growth continued, resulting in even further substructure within both the IAB and IETF. The IETF combined Working Groups into Areas, and designated Area Directors. An Internet Engineering Steering Group (IESG) was formed of the Area Directors. The IAB recognized the increasing importance of the IETF, and restructured the standards process to explicitly recognize the IESG as the major review body for standards. The IAB also restructured so that the rest of the Task Forces (other than the IETF) were combined into an Internet Research Task Force (IRTF) chaired by Postel, with the old task forces renamed as research groups.
The growth in the commercial sector brought with it increased concern regarding the standards process itself. Starting in the early 1980's and continuing to this day, the Internet grew beyond its primarily research roots to include both a broad user community and increased commercial activity. Increased attention was paid to making the process open and fair. This coupled with a recognized need for community support of the Internet eventually led to the formation of the Internet Society in 1991, under the auspices of Kahn's Corporation for National Research Initiatives (CNRI) and the leadership of Cerf, then with CNRI.
In 1992, yet another reorganization took place: the Internet Activities Board was re-organized and re-named the Internet Architecture Board, operating under the auspices of the Internet Society. A more "peer" relationship was defined between the new IAB and IESG, with the IETF and IESG taking a larger responsibility for the approval of standards. Ultimately, a cooperative and mutually supportive relationship was formed between the IAB, IETF, and Internet Society, with the Internet Society taking on as a goal the provision of service and other measures which would facilitate the work of the IETF.
The recent development and widespread deployment of the World Wide Web has brought with it a new community, as many of the people working on the WWW have not thought of themselves as primarily network researchers and developers. A new coordination organization was formed, the World Wide Web Consortium (W3C). Initially led from MIT's Laboratory for Computer Science by Tim Berners-Lee (the inventor of the WWW) and Al Vezza, W3C has taken on the responsibility for evolving the various protocols and standards associated with the Web.
Thus, through the over two decades of Internet activity, we have seen a steady evolution of organizational structures designed to support and facilitate an ever-increasing community working collaboratively on Internet issues.

Commercialization of the Technology

Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking. Unfortunately they lacked both real information about how the technology was supposed to work and how the customers planned on using this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions: SNA, DECNet, Netware, NetBios. The DoD had mandated the use of TCP/IP in many of its purchases but gave little help to the vendors regarding how to build useful TCP/IP products.
In 1985, recognizing this lack of information availability and appropriate training, Dan Lynch in cooperation with the IAB arranged to hold a three day workshop for ALL vendors to come learn about how TCP/IP worked and what it still could not do well. The speakers came mostly from the DARPA research community who had both developed these protocols and used them in day-to-day work. About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprises on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work) and the inventors were pleased to listen to new problems they had not considered, but were being discovered by the vendors in the field. Thus a two-way discussion was formed that has lasted for over a decade.
After two years of conferences, tutorials, design meetings and workshops, a special event was organized that invited those vendors whose products ran TCP/IP well enough to come together in one room for three days to show off how well they all worked together and also ran over the Internet. In September of 1988 the first Interop trade show was born. 50 companies made the cut. 5,000 engineers from potential customer organizations came to see if it all did work as was promised. It did. Why? Because the vendors worked extremely hard to ensure that everyone's products interoperated with all of the other products - even with those of their competitors. The Interop trade show has grown immensely since then and today it is held in 7 locations around the world each year to an audience of over 250,000 people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology.
In parallel with the commercialization efforts that were highlighted by the Interop activities, the vendors began to attend the IETF meetings that were held 3 or 4 times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. This self-selected group evolves the TCP/IP suite in a mutually cooperative manner. The reason it is so useful is that it is composed of all stakeholders: researchers, end users and vendors.
Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation.
As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would not scale. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols for this purpose were proposed, including the Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community). A series of meetings led to the decision that HEMS would be withdrawn as a candidate for standardization, in order to help resolve the contention, but that work on both SNMP and CMIP would go forward, with the idea that SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.
In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products, and service providers offering the connectivity and basic Internet services. The Internet has now become almost a "commodity" service, and much of the latest attention has been on the use of this global information infrastructure for support of other commercial services. This has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.

History of the Future

On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the internet and intellectual property rights communities. RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that -- (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.
The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.
One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide new services such as real time transport, in order to support, for example, audio and video streams.
The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones), is making possible a new paradigm of nomadic computing and communications. This evolution will bring us new applications - Internet telephone and, slightly further out, Internet television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, e.g. broadband residential access and satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.
The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders - stakeholders now with an economic as well as an intellectual investment in the network.
We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for the future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.

Footnotes

1 Perhaps this is an exaggeration based on the lead author's residence in Silicon Valley.
2 On a recent trip to a Tokyo bookstore, one of the authors counted 14 English language magazines devoted to the Internet.
3 An abbreviated version of this article appears in the 50th anniversary issue of the CACM, Feb. 97. The authors would like to express their appreciation to Andy Rosenbloom, CACM Senior Editor, for both instigating the writing of this article and his invaluable assistance in editing both this and the abbreviated version.
4 The Advanced Research Projects Agency (ARPA) changed its name to Defense Advanced Research Projects Agency (DARPA) in 1971, then back to ARPA in 1993, and back to DARPA in 1996. We refer throughout to DARPA, the current name.
5 It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.
6 Including amongst others Vint Cerf, Steve Crocker, and Jon Postel. Joining them later were David Crocker who was to play an important role in documentation of electronic mail protocols, and Robert Braden, who developed the first NCP and then TCP for IBM mainframes and also was to play a long term role in the ICCB and IAB.
7 This was subsequently published as V. G. Cerf and R. E. Kahn, "A protocol for packet network interconnection" IEEE Trans. Comm. Tech., vol. COM-22, V 5, pp. 627-641, May 1974.
8 The desirability of email interchange, however, led to one of the first "Internet books": !%@:: A Directory of Electronic Mail Addressing and Networks, by Frey and Adams, on email address translation and forwarding.
9 Originally named Federal Research Internet Coordinating Committee, FRICC. The FRICC was originally formed to coordinate U.S. research network activities in support of the international coordination provided by the CCIRN.
10 The decommissioning of the ARPANET was commemorated on its 20th anniversary by a UCLA symposium in 1989.

References

P. Baran, "On Distributed Communications Networks", IEEE Trans. Comm. Systems, March 1964.
V. G. Cerf and R. E. Kahn, "A protocol for packet network interconnection", IEEE Trans. Comm. Tech., vol. COM-22, V 5, pp. 627-641, May 1974.
S. Crocker, RFC001 Host software, Apr-07-1969.
R. Kahn, Communications Principles for Operating Systems. Internal BBN memorandum, Jan. 1972.
Proceedings of the IEEE, Special Issue on Packet Communication Networks, Volume 66, No. 11, November 1978. (Guest editor: Robert Kahn, associate guest editors: Keith Uncapher and Harry van Trees)
L. Kleinrock, "Information Flow in Large Communication Nets", RLE Quarterly Progress Report, July 1961.
L. Kleinrock, Communication Nets: Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964.
L. Kleinrock, Queueing Systems: Vol II, Computer Applications, John Wiley and Sons (New York), 1976.
J.C.R. Licklider & W. Clark, "On-Line Man Computer Communication", August 1962.
L. Roberts & T. Merrill, "Toward a Cooperative Network of Time-Shared Computers", Fall AFIPS Conf., Oct. 1966.
L. Roberts, "Multiple Computer Networks and Intercomputer Communication", ACM Gatlinburg Conf., October 1967.

Authors

Barry M. Leiner was Director of the Research Institute for Advanced Computer Science. He passed away in April 2003.
Vinton G. Cerf is Vice President and Chief Internet Evangelist at Google.
David D. Clark is Senior Research Scientist at the MIT Laboratory for Computer Science.
Leonard Kleinrock is Distinguished Professor of Computer Science at the University of California, Los Angeles, and is a Founder of Linkabit Corp., TTI/Vanguard, Nomadix Inc., and Platformation Inc.
Jon Postel served as Director of the Computer Networks Division of the Information Sciences Institute of the University of Southern California until his untimely death on October 16, 1998.
Dr. Lawrence G. Roberts is CEO, President, and Chairman of Anagran, Inc.

Monday, January 5, 2015

A History of Almost Everything About the Internet




The Development of the Internet

2. The Development of the Internet
(1) The Origins of the Internet
The development of the Internet can be seen as a major outcome of the "spin-off" policy that forms one strand of the U.S. system of science and technology support. "Spin-off" refers to the idea that when the government funds technology development that is needed for military purposes but is beyond what private firms can undertake, the results of that R&D spread outward and private-sector technology advances along with it. The Internet began to be developed for military purposes in the United States more than thirty years ago, but today it is used by hundreds of millions of people around the world for academic research and commercial purposes.
The forerunner of today's Internet is the network called ARPANET, built in 1969 with funding from ARPA (the Advanced Research Projects Agency) of the U.S. Department of Defense. ARPANET was designed to connect the computers of major American universities and defense contractors so that researchers at these institutions could exchange data. As it evolved, a number of similar networks were formed during the 1970s and early 1980s, mostly centered on universities. Although these networks consisted of many different kinds of computers, they could interoperate because they adopted a standard communication protocol called TCP/IP.
In the mid-1980s, NSFNET was built under the sponsorship of the U.S. National Science Foundation (NSF) as the successor to ARPANET. At first a handful of supercomputers were connected at a data rate of 56 kbps, but additional research networks were added and transmission speeds improved, and by 1988 the so-called "NSFNET backbone," connected to 13 regional networks, was complete. The overall network forms a hierarchy in which individual institutions such as universities connect to regional networks, which in turn connect to the backbone.

By the 1990s the use of the Internet had expanded beyond universities and research institutes into business and the general consumer domain. Accordingly, NSF reduced federal support for the Internet backbone and encouraged private operators to build backbones of their own. Federal support for the NSFNET backbone ended in April 1995.
The Internet has continued to grow even after federal support ended. The large private backbone providers conclude multilateral agreements, as well as bilateral agreements between pairs of operators, to keep data flowing smoothly at interconnection points such as NAPs (Network Access Points), and new companies have appeared and are building nationwide backbones.

The International Implications and Issues of Internet Globalization - Seoul National University S-Space

[Internet Governance] NSI vs ICANN

| Statement
2000/03/30
※ Material prepared for the Korea Network Information Center ICANN Forum, March 24, 2000

NSI vs ICANN


1. Origins of the Dispute between NSI and ICANN

In September 1999, an agreement was concluded among DoC, NSI, and ICANN, bringing to a close the dispute over domain name registration that had dragged on for years. The agreement is significant in that it introduced genuine competition into domain name registration, which had long been under discussion, and laid the groundwork for resolving the controversy over new gTLDs (generic Top Level Domains). By briefly reviewing the process that led up to this agreement, we can examine the issues involved and what the agreement means.

As is well known, in the early days Internet domain names and addresses were managed by Jon Postel at the Information Sciences Institute of the University of Southern California (USC-ISI) with support from DARPA. This activity was the function known as the Internet Assigned Numbers Authority (IANA).
In 1987, the duties that Jon Postel had performed for fifteen years were delegated from USC-ISI to SRI-NIC (the Defense Data Network Information Center at SRI International / Stanford Research Institute Network Information Center).
In 1991 the authority for domain name registration was transferred again, from SRI-NIC to Government Systems Inc. (GSI). From January 1, 1993, this registration work (operation of the DNS servers and registration of TLDs) was transferred from GSI to NSI (Network Solutions Inc.) under a cooperative agreement between the NSF (National Science Foundation) and NSI.
In other words, NSI's registration business was originally part of the IANA function; it passed to SRI-NIC, then to GSI, and then in 1993 to NSI. The problem here is that NSI's work was not limited to domain name registration but also encompassed the Internet Registry function, because at the outset domain name registration and the IR function were not separated but were performed together. This is the historical origin of the monopoly position NSI has maintained to the present as the sole registry.
In 1994, domain name registration was taken over by Network Solutions (NSI), a private for-profit company, under a five-year cooperative agreement with the National Science Foundation (NSF). NSI was originally to perform this work for a fixed payment of one million dollars a year, but domain names grew explosively and allocating federal funds to a commercial registration business was considered inappropriate, so NSI asked to be allowed to charge fees for domain name registration. The federal government accepted the request, and beginning in July 1995 an annual fee of 50 dollars was charged to users of .com, .net, and .org domain names.
Thanks to these fees, NSI, which held a monopoly on the domain name business in North America, came to possess a commercially very attractive business generating millions of dollars in revenue. Domain name registration was (in North America) successfully commercialized by NSI.


2. Attempts to Transfer Internet Administration to the Private Sector

In July 1994, Postel prepared a charter proposing that the IANA function be moved out of the contract between USC-ISI and the government and handed over to the Internet Society. This was the first attempt to transfer Internet governance to the private sector. Postel's attempt to privatize IANA took concrete shape in October 1996 through the International Ad Hoc Committee (IAHC).
The IAHC was a coalition of individuals drawn from ISOC; the ITU, which until then had been an opponent of ISOC; WIPO and INTA (the International Trademark Association), stakeholders in intellectual property; NSF; and the IETF/ISOC. In its final report, released in November, the IAHC made proposals for introducing competition into domain names.
While introducing full competition into domain name registration, the proposal was grounded in the recognition that the domain name space is a "public resource." The operation of new domain name registries would be entrusted to private for-profit companies; several companies like NSI would come into being and compete, that is, domain name registration would be left entirely to market competition. At the same time, the registry database was treated as a non-profit monopoly, separating the "wholesale" function of operating the registry database (the registry) from the "retail" function of registration (name registration, billing, and maintenance of contact information: the registrars).
In other words, competition would be introduced among registries and among registrars, but each registry would be operated on a non-profit basis by the several competing registrars and would share information about its TLDs.
Trademark protection was actively built into domain names. This was a measure to quiet trademark holders' opposition to new TLDs and to give them leverage in domain name registration. The main elements were a 60-day waiting period for domain names, review and dispute resolution by WIPO, and the exclusion of "famous" trademarks from registration by others.
The IAHC also re-proposed only seven of the new "descriptive" TLDs Postel had suggested (the seven were .web, .info, .nom, .firm, .rec, .arts, and .store). This too was a compromise that accommodated trademark holders, who feared that too many new TLDs would make trademark protection difficult.
The IAHC went further and proposed a new governance structure called the Generic Top Level Domain Memorandum of Understanding (gTLD-MoU). Registrars would join the Council of Registrars (CORE), a non-profit organization managing the master database servers, paying a membership fee of 20,000 dollars, 2,000 dollars per month, and a set fee per domain name.
NSI, for its part, regarded the IAHC initiative, with its shared registry model under the gTLD-MoU, as aimed at NSI's control of the .com domain.
The U.S. government, which had begun to take an interest in the debate, now intervened formally. In July 1997, the NTIA (National Telecommunications and Information Administration) under the Department of Commerce, which had inherited responsibility for the private-sector transition from NSF, issued a Notice of Inquiry on domain name policy. The U.S. government asserted final authority over the root servers and signaled its intention to hand that authority over in a way that involved stakeholders worldwide. The IAHC's proposals were no longer accepted, and the transition to the private sector led by the Department of Commerce began in earnest.

3. The Establishment of ICANN and Its Conflict with NSI

In February 1998, the U.S. Department of Commerce published "A Proposal to Improve Technical Management of Internet Names and Addresses" (the "Green Paper"). It envisioned an international private non-profit corporation, headquartered in the United States, to manage the DNS and IP addresses. The U.S. government also recommended that new registrars be created without restriction in order to promote competition, and took on the role of overseeing the transition to the new system until September 30, 2000.
In June 1998, the Department of Commerce released its final report, the White Paper (Management of Internet Names and Addresses). The government's final plan confirmed the intent to turn domain name registration into a competitive, market-driven business. Direct intervention by the U.S. government, such as immediately adding new TLDs or accrediting new registries, was ruled out; those tasks were handed over to a new private-sector organization. The World Intellectual Property Organization (WIPO) was asked to recommend a mechanism for resolving domain name trademark disputes, a move intended to allay trademark holders' concerns. What the U.S. government appears to have had in mind to provide leadership for the new private-sector organization ("NewCo") was Postel's IANA and the ISOC/gTLD-MoU camp.
The International Forum on the White Paper (IFWP), meanwhile, was an international gathering formed spontaneously, encouraged by the White Paper's announcement of a "transfer to the private sector." But even the IFWP failed to bring together all of the parties with conflicting interests, such as IANA and NSI, and therefore also failed to produce a broad consensus. In the end ICANN, the NewCo, was built around IANA/ISOC, and its interim board was in effect filled with people nominated by Postel. (In particular, its CEO/President, Roberts, was a member of ISOC.)
NSI maintained its position as a powerful opponent. NSI held control over domain names and the root server system. As noted above, NSI administered the DNS under its "cooperative agreement" with the NSF (National Science Foundation). NSI simultaneously played the role of the registry with exclusive control over the .com, .net, and .org databases and the role of a "registrar" accepting domain name registrations from users. It was therefore able to control prices and contract terms.
The White Paper, however, did not clearly address NSI's status. It merely stated vaguely that the DNS should be operated by the private sector on the basis of voluntary agreements among stakeholders. The private-sector body assuming that responsibility and role was the NewCo, which in practice meant ICANN. Management of the DNS would thus shift from a contractual relationship with the government to a contractual relationship with the NewCo that became ICANN. In this transition NSI was an "obstacle," but the concrete process for resolving it remained vague.
In June 1999, the MoU between ICANN and DoC gave concrete form to the transfer of DNS functions to the private sector (that is, to ICANN). In the MoU the parties agreed to "jointly develop and test the mechanisms, methods, and procedures necessary to transfer management responsibility for specific DNS functions to the private sector."
In addition, the Department of Commerce and NSI renewed their agreement, under which NSI was to build a Shared Registration System (SRS) for the DNS. Through the shared registration system, multiple competing companies could perform registrations on an equal footing. NSI, however, continued to act as both registry and registrar.
In March 1999, ICANN announced criteria for the "accreditation" of registrars. The registrar accreditation guidelines spelled out the financial and business qualifications of registrars. A registrar must pay ICANN a fee of 5,000 dollars plus one dollar per domain name. In accordance with WIPO's recommendations, the accreditation agreement also included various measures to protect trademarks, among them prepaid registration and ICANN's right to reserve certain domain names. In April 1999, ICANN accredited five registrars for the test-bed period of shared registration, bringing them into full competition.
In the same month, an agreement was reached between DoC and NSI regulating the relationship between accredited registrars and NSI. The Department of Commerce capped the "wholesale" price of registration at 9 dollars per domain name. Registrars were to pay NSI a fee of 10,000 dollars for installation of the SRS software. NSI still held the gTLD registry and continued to provide registry and registrar services at the same time.
The result is that ICANN holds control over the domain name root servers, every registry including NSI receives a license from ICANN, and every registrar is accredited directly by ICANN.
In other words, for NSI to accept ICANN's accreditation agreement meant giving up its exclusive control over gTLD registration. When to accept it was also an issue, because once NSI signed, its bargaining power would be gone. The conflict among ICANN, DoC, and NSI therefore intensified.
NSI, unsurprisingly, refused to contract with ICANN. The original cooperative agreement was ambiguous: when the agreement ended, NSI was to hand a copy of the database to the Department of Commerce/NTIA, but it was not clear whether this meant the end of NSI's registration business. From the standpoint of the Department of Commerce and ICANN the registration function was to be reassigned, but from NSI's standpoint it could, as holder of the zone files, continue to operate the gTLDs without government supervision.
ICANN issued accreditations to more than fifty registrars, but its negotiations with NSI ran into difficulty; with NSI's efforts to evade its obligations and technical problems on top of that, the test-bed period that was to end in June kept being extended, delaying the introduction of competition. NSI had originally been due to complete the Shared Registration System (SRS), the foundation for market competition in domain registration, by June 1999, but the deadline was pushed back to the end of September 1999, so the introduction of competition into domain registration was in practice delayed.
There was also an important trademark-related consequence: because the process by which NSI would accept ICANN's registrar accreditation agreement and become one registrar among many was delayed, the UDRP provisions contained in the accreditation scheme also failed to take effect.
NSI's lobbying led to hearings before the House Commerce Committee, and ICANN's leadership was repeatedly called into question. ICANN's critics and NSI charged that the unelected interim board was trying to make too many important decisions secretly and hastily. In the end ICANN opened its board meetings and, engulfed in accusations that its one-dollar-per-name fee amounted to a "tax," deferred the fee to a decision of an elected board, effectively withdrawing it.

4. The Agreement among DoC, NSI, and ICANN

In August 1999, NSI and DoC agreed, among other things, to remove restrictions on access to and use of the so-called "DotComDirectory"; the agreement among DoC, ICANN, and NSI reached in September 1999 through this negotiating process was a compromise that departed considerably from the earlier design.
The agreement is significant as an important step toward ending NSI's monopoly. NSI agreed to lower the "wholesale price" of the main directory that competitors must obtain in order to conduct registration business, and agreed that domain name users would face no penalty when moving their registrations.
However, the contract between DoC and NSI was extended yet again, and NSI kept its status as both registry and registrar, allowing it to continue enjoying a dominant position. Under the agreement, NSI retains its contract as registry for the .com, .net, and .org domains for four years, and if it gives up the registry business and functions only as a registrar, the contract is extended for another four years.
It was also stipulated that when ICANN imposes new policies on NSI, the consent of at least two-thirds of the ICANN constituent body dealing with the matter is required.
For ICANN, too, the compromise can be seen as unavoidable: in order to begin tackling issues such as the ccTLDs and the introduction of new gTLDs in earnest, it had to settle matters one way or another with NSI, which held some six million domain names. Once NSI recognized ICANN and full competition was introduced, ICANN's position could become considerably more stable and stronger.
As the qualifier "interim" attached to it indicates, the agreement is not final. In the end, the U.S. government's plan to hand over domain name administration to the private sector was not realized, because the contract between DoC and NSI was extended. The heart of the compromise was that extension, and ICANN's role in working out what NSI's obligations would be was relatively weak. The agreement was a paradoxical demonstration that the authority of ICANN, a private corporation, could be established only through the U.S. government.

The main terms of the agreement are as follows.
= NSI recognizes ICANN and agrees to operate the gTLD registry (.com, .net, .org) under a "registry agreement" with ICANN. ICANN agrees to license NSI as the gTLD registry for four years. If NSI fully separates the registry business from its registrar function within 18 months, the registry agreement is extended for four years.
= NSI agrees to accept domain name registrations only from ICANN-accredited registrars.
= NSI agrees not to create an alternative DNS root server system.
= NSI's wholesale registry price is reduced from 9 dollars to 6 dollars per name-year, effective January 15, 2000.
= Regulation of NSI's "retail" registrar price is lifted (it had been fixed at 35 dollars per name-year under the Cooperative Agreement).
= NSI prepays 1.25 million dollars in registry/registrar fees to ICANN.
= NSI continues to operate the authoritative root server system under the direction of the U.S. Department of Commerce.

ICANN's obligations under the new agreement are as follows.
= ICANN must follow specified procedural constraints in exercising its new authority; many decisions require at least two-thirds of the Supporting Organization councils.
= ICANN's policy authority over the gTLD registry may be terminated if the other registries are brought into the centralized contractual regime so that NSI is not placed at a competitive disadvantage. This presumably applies to ccTLD registries as well.
= The fees ICANN charges registrars must be "equitably apportioned" and must be approved by registrars paying two-thirds of the fees. This gives NSI considerable influence over ICANN's "taxation" policy.
= The registrar fees NSI pays to ICANN may not exceed 2 million dollars.

Here the U.S. government (the Department of Commerce) still retained ultimate control ("policy authority") over the root servers. The original plan for a transfer to the private sector thus remains unrealized, and the terms of the contract raised serious doubts about "self-regulation." The contract could be extended for up to eight years, during which the Department of Commerce could exercise controls such as directly setting registration fees. This was the product of the U.S. government's inconsistent policy decisions: talk of stepping back from regulation and handing things over to the private sector plainly contradicts retaining control over property rights in the core data.
The issue, in the end, is who owns the master database: is it NSI's property, does DoC hold ultimate control over it, or is it something newly handed over to ICANN? The U.S. government and DoC, as a last resort to secure the "stability" of the Internet, in effect prolonged their direct involvement in the DNS and struck a compromise agreement with NSI and ICANN.
  
Category: Internet Governance | General Materials
Source: http://acton.jinbo.net/zine/view.php?board=policy&id=26&page=163

The History of the Internet

[ 2006.12.16 03:55:47 ]
Author: 표창우
The successful launch of the Soviet Sputnik satellite in 1957 was a stark challenge to the United States. In the Cold War system of the time, the event seems to have been received less in technological terms than in military terms, no doubt because a nation's technological capability is directly tied to its defense capability. Amid a pervasive sense of crisis, President Eisenhower created ARPA (the Advanced Research Projects Agency), an organization for planning and managing research.

ARPA was later expanded and reorganized as DARPA (the Defense Advanced Research Projects Agency), and it likewise oversaw civilian-sector research conducted for defense purposes. ARPA showed its mettle by successfully launching a satellite within eighteen months, and it took a strong interest in the military use of computer and communications technology. The early Internet was conceived under the leadership of people from MIT. Licklider of MIT, who became head of ARPA's computer research program in 1962, was a prophetic figure with an unshakable conviction about what would become the Internet.

He had a vision in which all the computers in the United States would be connected, so that the data and programs of distant computers could be used and exchanged. Licklider passed his dream on to his successors and convinced them of it. To stimulate the use of computers, he led a major change that shifted research investment in the defense-related civilian sector toward universities.

In 1961, just before Licklider took the helm at ARPA, Kleinrock of MIT published a paper on packet switching, the technical cornerstone of the Internet. Packet switching is a method of connecting computers for the exchange of information. Circuit switching, used by the telephone network, sets up a fixed connection between the two endpoints of a call, whereas packet switching passes information to neighboring computers in units of packets, much the way mail is delivered through the post office.
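To make the contrast concrete, here is a minimal illustrative sketch in Python (mine, not from the original article; the host names and packet format are hypothetical): a message is cut into independently addressed packets that may arrive out of order, and the receiver rebuilds it from the sequence numbers, which is the essence of packet switching as opposed to holding a dedicated circuit open.

```python
import random

def to_packets(message, size=8):
    # Cut the message into small, independently addressed packets.
    return [{"src": "host-A", "dst": "host-B", "seq": i, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    # The receiver puts the packets back in order by sequence number.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("Packets share the line; circuits reserve it.")
random.shuffle(packets)      # packets may take different paths and arrive out of order
print(reassemble(packets))   # the destination still recovers the original message
```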

In 1965 an experiment connected computers in Massachusetts and California over a long-distance telephone line; it confirmed both the feasibility of long-distance computer communication and the need for packet switching.

Roberts of MIT, who from 1966 led the networking effort at DARPA, drew up the plan for ARPANET, the forerunner of the Internet, and its design was carried forward in concrete form by Bob Kahn. Licklider's vision bore its first fruit in 1969: the Network Measurement Center established by Kleinrock, who had moved to UCLA, was chosen as ARPANET's first node.

This was recognition of Kleinrock's contributions in the field of packet switching. After UCLA came the Stanford Research Institute (SRI), followed by the University of Utah and UCSB (the University of California at Santa Barbara), completing ARPANET's first connections.

At a computer communication conference held in 1972, ARPANET was demonstrated publicly for the first time under Kahn's direction: from the basement of the Washington Hilton hotel, application programs were run on computers across the United States. 1972 was also the year electronic mail was developed; the history of e-mail has accompanied the history of the Internet ever since.

The years from 1972 to 1983 were the period in which TCP/IP, the communication protocol that forms the basic framework of today's Internet, was completed. A communication protocol is an agreed-upon procedure for exchanging information between two computers. In 1973, Kahn, who had moved to DARPA in 1972, proposed to Vint Cerf of Stanford University that they form a team to study an open communication architecture for interconnecting separate computer networks.

The ultimate goal was to connect into one the computer networks that had formed separately and were not linked to one another. They sought an architecture that would connect computer networks scattered across the United States, a kind of network of networks, without changing anything in the internal structure of each local network. The result was the completion of TCP/IP, the open networking protocol that is the technical cornerstone of the Internet.

When you send a letter to a friend overseas through the post office, you prepare the contents, put them in an envelope, write the address, and mail it at the local post office; it travels through the sorting center and the central post office, is carried through the other country's postal network, and is delivered to your friend's home; the friend opens it, reads it, and sends back a thank-you reply. This process is almost exactly the kind of procedure the TCP/IP protocols govern. The word "internet" itself means "between networks," and it dates from that era, when networks had to be connected to networks.
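As a rough sketch of what this means for a program today (my example, not the author's; the host name and port are placeholders), a few lines of Python are enough to hand a message to TCP/IP and read back a reply; addressing, routing across the intervening networks, retransmission, and ordering are all handled below the application, much as the postal system handles everything between mailbox and mailbox.

```python
import socket

# Minimal sketch: the application only names the destination and hands over bytes.
# IP carries the packets across the networks in between, and TCP makes sure they
# arrive complete and in order. "example.org" and port 80 are illustrative only.
with socket.create_connection(("example.org", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
    reply = conn.recv(1024)
    print(reply.decode(errors="replace"))  # typically begins with an HTTP status line
```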

On January 1, 1983, ARPANET's protocols were switched entirely to TCP/IP: the protocol software on every computer connected to ARPANET was changed at once.

While TCP/IP was maturing, the Internet also went through a major change in its composition. PCs and other small computers of that class, which began to spread widely in the early 1980s, started forming local area networks (LANs) with machines located nearby. The development of PC and LAN technology shifted the Internet's makeup from a system centered on a small number of large computers to one in which large numbers of distributed PC-class machines predominate, and it has continued to develop along those lines to this day.

Local networks spread computer networking not only to universities and research institutes but also to ordinary office environments, acting as a catalyst for office automation such as e-mail, electronic payment, and telephone exchange, and for network-based system integration (SI) and MIS (Management Information Systems). LANs in turn connect to the Internet through devices called routers, and interconnecting the many kinds of LANs built with different forms and technologies absolutely required an open communication protocol like TCP/IP, which in turn accelerated the spread of the Internet.

The period from 1986 to 1995 was when the formation of the Internet was completed. During this period the U.S. National Science Foundation (NSF) poured in roughly 200 million dollars, leading research on computer networking and promoting its use. In 1985 the NSFNET program got under way, aiming to build an information infrastructure connecting educational and research institutions that could survive on its own without federal support. The NSFNET program adopted TCP/IP as its standard, spreading it further, and pursued a strategy of commercializing the Internet so that it could stand on its own.

In 1995, NSF ended support for the NSFNET backbone and wrapped up the commercialization effort by steering users onto networks built by private companies.

Since people today encounter the Internet mostly through the World Wide Web (WWW), a brief word about the WWW is in order. Many people equate the Internet with "home pages," but this is a misconception: a home page is simply the representative hypertext file of an individual or an organization on the WWW.

The WWW is an information infrastructure based on the Hypertext Transfer Protocol (HTTP), built on top of the Internet. Unlike an ordinary file, hypertext has structure. Having structure means that related parts are linked to one another, and to other files as well, so that one can follow a link to related information at will. An ordinary file, by contrast, can be seen as a single string of characters strung out like a thread. Hypertext was originally intended for use within a single computer; the WWW is what carried it beyond that local limitation.
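A minimal sketch of that difference (my illustration; the document and link names are made up): an ordinary file is one linear string, while hypertext is a set of documents whose parts name links to other documents, so the reader can jump straight to related material.

```python
# A tiny hypertext: each document is some text plus named links to other documents.
docs = {
    "history.html": {"text": "Packet switching led to the ARPANET.",
                     "links": {"arpanet": "arpanet.html"}},
    "arpanet.html": {"text": "The ARPANET connected its first four sites in 1969.",
                     "links": {}},
}

def follow(doc, link):
    # Following a named link takes the reader to the related document's text.
    return docs[docs[doc]["links"][link]]["text"]

print(follow("history.html", "arpanet"))
```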

That is, the WWW makes hypertext links possible no matter where on the Internet a related file may be. Netscape and Explorer are tools for browsing this hypertext across the Internet. The WWW was developed and advanced mainly at CERN in Europe (the nuclear physics laboratory, where elementary particle research is active).

As TCP/IP-based Internet technology from the United States made its way into Europe, the WWW was begun in 1989 at the proposal and under the leadership of Tim Berners-Lee, to link scientists around the world in collaborative research and to smooth the exchange of research documents. It is no exaggeration to say that today the Web covers the entire Internet.

On October 24, 1995, the U.S. Federal Networking Council (FNC) passed a resolution defining the term "Internet." In literal terms the change was from internet with a lowercase i to Internet with a capital I, but the word was reborn: what engineers had used to denote an open, higher-level way of connecting networks to networks now named the reality of a vast communications network.

The Internet can be thought of as operated by ISOC (the Internet Society), which took its present form in 1992. ISOC has two main bodies, the IAB (Internet Architecture Board) and the IETF (Internet Engineering Task Force); the IAB corresponds to a church's board of elders and the IETF to its board of deacons. When the IETF studies and decides technical matters such as protocol standardization, the IAB ratifies them and sees that they come into wide use.

What will the Internet of the future look like? The technological protagonists of future society can be summed up in four words: Computer, Communication, Mobility, Multimedia. Remember them as CCMM and you will not forget. Computer and communications technology merged long ago and will keep advancing. Users want to compute and communicate while on the move, and over these advanced networks they want to exchange not only text but multimedia: voice, music, images, and video.

Today's mobile phones already contain all of these elements; a mobile phone is a kind of computer with wireless communication capability. A CCMM environment in which any desired service can be received anytime, anywhere is taking shape, with the Internet as its underlying infrastructure. Beyond that, the day will come when every home appliance, TVs, refrigerators, kitchen appliances, washing machines, and equipment for water, gas, and electricity, is connected to the Internet. From outside the house you will be able to switch on the living-room air conditioner so the temperature is right when you arrive, start the cooking appliances, and record the TV program you want.

If you open the refrigerator and there is no milk, you will be able to place a delivery order with the neighborhood store over the Internet through a screen on the refrigerator door. Meter readers for water, electricity, and gas will be found only in the history books.

Much of what happens in cyberspace will proceed hand in hand with affairs in the real world. Things that until now required people to meet face to face are being handled simply on the Internet. Internet banking is a representative example and is already in wide use. Taken to its extreme, Internet banking implies an important change in the concept of money: cyber money has no physical form, yet it is a genuine form of money with purchasing power.

Services delivered over the Internet, electronic commerce, and payment with cyber money undoubtedly have enormous potential to change the world, transforming wholesale the form and flow of money, the flower of capitalism. Done properly, this could prevent the games the wealthy play with money and the evasion of taxes. But since nothing people do is perfect, we shall have to wait and see what loopholes turn up.

The advance of CCMM technology and the spread of the Internet do not promise only a bright future. Security, protection of personal information and privacy, and ethical problems are already at issue. In the past you could sleep soundly as long as you locked the front door; now you must worry about whether someone has broken into your computer, obtained your bank account number, and drained your money, whether your important private information has been stolen, or whether the report you wrote yesterday has been taken by a rival company. You must worry whether your children have become addicted to games, fallen into unwholesome chat, or become absorbed in pornography.

The Internet began with military motives and purposes, was developed by universities and research institutes, and was opened wide to the general public through commercialization. At its center lies the precious value of collaboration. That was true at each stage of its development, and it is true of the purposes for which it is used today. It is a useful medium for working together toward good, and it provides a very useful space for those who strive to gather.

Yet the Internet may be a Tower of Babel, or it may be a Canaan not yet conquered. One wonders whether, when Jesus told us to carry the gospel to the ends of the earth, he meant cyberspace as well. Even without hearing the answer, we cannot sit still. We must acknowledge that God has given it to us as territory to enter, fight for, and occupy, a place where God's children and Christ's disciples must have it out with the forces of immorality and the worshippers of idols.

We must use it as a powerful medium for spreading the gospel. The world's leaders know all too well that a nation without an information infrastructure loses competitiveness and falls behind, so the leaders of every country, whatever their views, do not disagree about embracing the information infrastructure the Internet provides. I do not know what Islamic fundamentalists think, but for political and economic reasons even they cannot ignore the Internet. Hallelujah! No visa is needed to cross borders on the Internet.

At a time when the virtual world being built on the Internet and the real world are becoming ever harder to tell apart, the time is ripe for us to press boldly into cyberspace for the sake of our missionary calling.