Introduction

The X Window System is a network-based graphics window system that was developed at MIT in 1984. With X you can work with multiple programs simultaneously, each in a separate "window". One of the strengths of a window system such as X is that you can have several processes going on at once in several different windows (perhaps even on different machines). These windows are controlled by a resident "window manager".

Most window systems are closely tied to the machine's operating system and can only run on that system. The X Window System, however, is not part of any operating system; it consists entirely of user-level programs.

The architecture of the X Window System is based on a "client-server" model. The system is divided into two distinct parts: "display servers" and "client programs". Display servers act as intermediaries between client application programs and the local display hardware: they draw on the screen and keep track of user input from the keyboard and pointer. Client programs are the application programs that perform specific tasks; they send requests to the display server, which carries them out on the hardware display and passes user input back to the client.

This division within the X architecture enables the client programs and display servers to work together on the same machine or to reside on different machines connected by a network...
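Because client and server communicate over an ordinary network protocol, an X client on one machine can drive a display on another. As a purely illustrative sketch (the host name is made up, and a real client would use Xlib or XCB rather than a raw socket), the fact that an X server for display :N traditionally listens on TCP port 6000 plus the display number means the client's connection is, at bottom, just this:

    import socket

    # An X server for display :N traditionally listens on TCP port 6000 + N.
    # A real X client (via Xlib/XCB) would carry out the X protocol handshake
    # over this connection; here we only show that the transport is a plain
    # TCP socket, which is why client and server can live on different hosts.
    display_host, display_number = "workstation.example.com", 0   # hypothetical
    sock = socket.create_connection((display_host, 6000 + display_number))
    print("connected to X server at", sock.getpeername())
    sock.close()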

Some (very little) Background Information

Transmission Control Protocol/Internet Protocol (TCP/IP) is truly one of the unsung heroes of the Internet Age. Vaguely understood by the masses, TCP/IP simply goes about its business of enabling computers and networks to communicate with each other, as it has for decades.

Computer networks existed before TCP/IP, but each used proprietary techniques for communicating between machines, and different networks couldn't talk to each other. Networks of IBM machines, for example, used IBM's communication methods, and Unisys networks used their own methods, but neither could communicate with the other.

Recognizing this problem, the Department of Defense (DoD) wanted to build a network to connect a number of military sites, no matter what vendor's technology each used. And since the DoD's research arm, the Advanced Research Projects Agency (ARPA), undertook this project with the Cold War mindset of the 50s, 60s and 70s, a key requirement was that the network remain operational in the event that a large portion (7/8 in the original spec) of the sites or machines were not functional.

As a result of the DoD's design requirements, TCP/IP enabled robust, completely decentralized networks with multiple, redundant data paths, and the built-in intelligence to reroute data if a path became a dead end. In addition, ARPA had the foresight to design TCP/IP with the flexibility to accommodate applications that wouldn't be thought up for decades (read: World Wide Web), and to make it an open standard available for anyone to use and build upon. User Datagram Protocol (UDP) is the other principal means of sending information between computers.

The original "network of networks," the ARPANET, became operational in 1969 and adopted TCP/IP as its standard protocol in 1983. The rest is Internet history.

How it works

Before we can talk about how the X server and client work, we must first understand a little about networking. History is not strictly required to understand the technology, but knowing where something started often helps explain why it is the way it is. As with many components of networking, Digital Equipment Corporation (DEC) provided many of the starting points, if not the final product itself (IS-IS, for example).

The term Ethernet takes its origin from "ether," a medium of great elasticity and extreme tenuity, once supposed to pervade all space, the interior of solid bodies not excepted, and to be the medium of transmission of light and heat, and from "network," any system of lines or channels interlacing or crossing so as to interconnect with each other or others. Ethernet has taken many forms over time. Xerox started with a need to connect a remarkable innovation called the "Personal Computer" within offices; the story is told at http://inventors.about.com/library/weekly/aa111598.htm. The Institute of Electrical and Electronics Engineers (IEEE) started standardizing versions of Ethernet and network connectivity in February 1980, building on the work of Digital, Intel and Xerox (DIX Ethernet) and producing standards referred to by that date (802, for 1980 and the second month) and the particular chapter. So when some of this seems confusing, just remember: it has been run by committee for some time and will be in the future.

The outcome of all this Ethernet work was a two-part system for delivering electronic signals between computers. This method of delivery does not care whether the recipient is remote or local; the controlling protocol simply needs a valid address for the electronic signal, hence Transmission Control Protocol/Internet Protocol.

At its most basic level, TCP/IP works like the postal system here in the United States. When you want to send a letter to someone, you address it, put a stamp on it, and place it in the mailbox, either at home or at work, then wait for it to be picked up and processed by the postal system and delivered as addressed. Network addresses work in much the same way. When you connect your computer to a network, the computer sends out "letters" through its "mailbox" to any "postmaster" that may be serving the network, as well as letters to all the neighbors announcing its arrival. Just like its human counterpart, some reply and some do not; but most importantly, if all is working correctly, the computer finds out where it can send mail and what its own address is on this network. That is Dynamic Host Configuration Protocol (DHCP), the most common method used today.

So you have moved into your new location and you are sending letters just fine. You even send letters to everyone at your old location, but they are all ordinary letters, so they all fit in the mailbox in front of your house. Up to this point you have not used the significant part of TCP: guaranteed delivery. Imagine that each of those letters were sent Return Receipt Requested. It is a little extra work for you, but you know whether each one was delivered. Now suppose you ordered a complete set of cooking recipe cards, and rather than sending the full set at once, the publisher mails envelopes of ten cards each, individually, by registered mail. The envelopes still fit in your mailbox, but now you have to worry about receiving all of them, and if the instructions are sequential you need to be able to put them back in order when they arrive. Using the nine-digit zip code and your house number, the post office is able to deliver each envelope quickly to your door.
Once you sign for an envelope, the publisher knows it arrived at the right place in one piece. If an envelope gets lost or is damaged, the publisher sends another copy. Because the publisher numbered each envelope as the recipe cards were divided up, you know the correct order in which to reassemble them. This process repeats until you have the complete set. This difference between guaranteed and non-guaranteed delivery is the difference between TCP and UDP. Many things are sent using UDP because the program calling for the UDP transfer can handle any resending itself rather than leaving it to the network protocol, much as you can simply call your aunt to make sure she got the pictures you sent rather than sending them Return Receipt Requested. Besides, your aunt likes to hear your voice.
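To make the analogy concrete, here is a minimal sketch in Python (the host name and port numbers are made up, and no error handling is shown) of the two styles of delivery: a TCP socket establishes a connection over which the protocol itself acknowledges, retransmits, and orders the data, while a UDP socket simply hands a datagram to the network and leaves any checking to the application.

    import socket

    # TCP: "registered mail" -- connect first, then send a byte stream that
    # the protocol acknowledges, retransmits if lost, and keeps in order.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("server.example.com", 5000))    # hypothetical host and port
    tcp.sendall(b"recipe cards 1-10")
    tcp.close()

    # UDP: "a plain letter" -- no connection, no acknowledgement, no ordering;
    # if confirmation matters, the application must arrange it itself.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"vacation pictures", ("server.example.com", 5001))
    udp.close()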

Similarly, TCP disassembles a larger message (Web page, e-mail, etc.) into small pieces of data called packets, which are easier to send over networks. Attached to each packet of data is the unique IP address for the host (recipient). Network routers read this IP address and send the packets to another router closer to the host machine. This continues until it reaches a router connected to the host's home network. (Different packets may very well travel different paths to get from sender to host.)

Once received by the host, TCP sends an acknowledgement back to the sender. If a packet is lost or damaged in transit, the sender notices the missing acknowledgement and retransmits it; a related protocol, the Internet Control Message Protocol (ICMP), reports delivery errors such as unreachable destinations. On the host's end, TCP reassembles the packets into a copy of the complete file that can then be used by an application.
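As a toy illustration of segmentation and reassembly (pure Python, no real network involved; the message and chunk size are made up), the sketch below splits a message into numbered chunks, lets them arrive out of order, and shows how sequence numbers allow the receiver to rebuild the original and notice anything missing:

    import random

    def segment(message, size):
        # Split a message into (sequence_number, chunk) pairs.
        return [(i, message[i:i + size]) for i in range(0, len(message), size)]

    message = b"This web page is too big to fit into a single packet."
    packets = segment(message, size=8)

    random.shuffle(packets)      # packets may take different paths and arrive out of order

    received = dict(packets)     # the receiver files each chunk by its sequence number
    missing = [o for o in range(0, len(message), 8) if o not in received]
    if missing:
        print("ask the sender to retransmit offsets:", missing)
    else:
        reassembled = b"".join(received[o] for o in sorted(received))
        assert reassembled == message
        print(reassembled.decode())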

IP and TCP

Internet Protocol (IP) is responsible for addressing and routing packets of data between networks. Each unique IP address contains three parts: the network ID, the subnet ID, and the host ID. To revisit the postal metaphor, the first five digits of your zip code would be like the network ID, the subnet ID would be the last four zip code digits (with which the post office can pinpoint your block), and your house number would be like the host ID, the identifier for a particular machine. Internet routers reference databases of IP addresses like the post office references zip codes to send packets where they need to go.
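To see the network/host split in code, here is a small sketch using Python's standard ipaddress module (the address and the /24 prefix are made up for illustration):

    import ipaddress

    # A hypothetical host address with a /24 prefix: the first 24 bits name the
    # network (the "zip code"), the remaining 8 bits name the host (the "house number").
    iface = ipaddress.ip_interface("192.168.10.42/24")

    print(iface.network)             # 192.168.10.0/24  -- the network portion
    print(iface.network.netmask)     # 255.255.255.0
    print(iface.ip)                  # 192.168.10.42    -- the full host address
    print(int(iface.ip) & 0xFF)      # 42               -- the host portion within this /24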

Currently, the most widely used standard is IPv4, which uses 32-bit addresses written as four decimal numbers (one per byte) separated by periods. Usable addresses range from 1.0.0.1 to 223.255.255.255. In the late 1990s, the Internet Engineering Task Force (IETF) established a newer 128-bit IP standard (IPv6), which allows for a vastly greater number of IP addresses. But adoption of IPv6 has so far been slow.

To ensure that numbers don't overlap, the Internet Assigned Numbers Authority (IANA) assigns higher-level IP addresses to organizations such as universities, large businesses, and ISPs. They in turn assign subnet IDs and host IDs to the networks and computers that connect to the Internet through their gateway.

TCP is responsible for disassembling files into packets on the sender's end and for acknowledging receipt and reassembly at the host end. For streaming media and other applications, where speed of delivery is more important than receiving every packet, a different protocol, User Datagram Protocol (UDP), is often used. Unlike TCP, there are no acknowledgements or packet-resending mechanisms built into UDP.

Solid foundation

TCP/IP is more than simply TCP and IP; it refers to a suite of protocols that over the years have grown up around them. Because TCP/IP is an open standard, many changes, improvements, and additions have been grafted onto it over time. The Internet Activities Board (IAB) ratifies proposed standards through a drafting process called a "Request for Comments" (RFC).

The Domain Name System (DNS) is one example of a standard adopted through the RFC process that improves upon TCP/IP. DNS is the protocol that established the practice of using domain names (e.g., www.hp.com) rather than requiring users to enter the long numeric strings of actual IP addresses.
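As a quick illustration of what DNS does, Python's standard socket module will ask the system resolver (and ultimately DNS) to translate a name into an address; the name below is simply the example from the paragraph above, and the address returned depends on whatever DNS serves up at the time:

    import socket

    # Translate a domain name into the numeric IP address it currently maps to.
    address = socket.gethostbyname("www.hp.com")
    print(address)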

In addition, TCP/IP provides the foundation for many of the Internet-age standards we typically take for granted. Hypertext Transfer Protocol (HTTP) for Web pages, Simple Mail Transfer Protocol (SMTP) for e-mail, and File Transfer Protocol (FTP) for file transfer are only three of the widely used higher-level protocols built upon TCP/IP.
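To show how a higher-level protocol simply rides on top of TCP, here is a rough sketch that speaks a minimal bit of HTTP/1.0 over an ordinary TCP socket (the host name is only an example, and a real program would normally use an HTTP library instead):

    import socket

    # HTTP is just structured text carried over a TCP connection to port 80.
    host = "www.example.com"
    request = ("GET / HTTP/1.0\r\nHost: " + host + "\r\n\r\n").encode()

    with socket.create_connection((host, 80)) as conn:
        conn.sendall(request)
        response = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            response += chunk

    print(response.split(b"\r\n")[0].decode())   # status line, e.g. "HTTP/1.0 200 OK"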

Want to learn more? This Tech-Pro tutorial offers a more technical introduction to TCP/IP.

And to explore this subject in even greater detail, check out Professor Gary Kessler's comprehensive TCP/IP overview on his website.