Computer Networking: Networks and the Internet


Computer Networks and the Internet: An Overview

Chapter 1, "Computer Networks and the Internet," provides a broad overview of the fundamental concepts and components of computer networking, using the Internet as the primary example. The chapter aims to establish a foundation for understanding the more detailed topics covered in subsequent chapters.

Problems Raised and Their Solutions:

  • Understanding the complexity of the Internet: The chapter acknowledges that the Internet is a massive and complex engineered system. It proposes that by focusing on guiding principles and structure, it is possible to understand how it works. The book adopts a top-down approach and focuses on the Internet's architecture and protocols to simplify this complexity.
  • Need for communication among diverse components: The chapter implicitly raises the problem of enabling communication between heterogeneous end systems, communication links, and switches. The solution lies in the use of protocols. The Transmission Control Protocol (TCP) and the Internet Protocol (IP) are highlighted as key protocols. The chapter emphasizes that two or more communicating entities must run the same protocol to accomplish a task.
  • Data delivery in a packet-switched network without resource reservation: The chapter contrasts packet switching with circuit switching. A problem in packet switching is the potential for congestion and delay because network resources are not reserved. While the chapter describes the occurrence of queuing delay and packet loss as a consequence, it doesn't offer specific solutions at this stage, indicating that these are complex issues that will be explored later.
  • Security vulnerabilities in the Internet: The chapter introduces the problem of network security, noting that the Internet faces attacks from "bad guys" who aim to damage systems, violate privacy, and disrupt services. It mentions prevalent security-related problems such as malware, denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, sniffing, source masquerading, and message modification. The chapter states that the Internet was originally designed based on mutual trust, which explains its inherent insecurities. Solutions and defenses against these attacks will be a central topic later in the book, particularly in Chapter 8.
  • Managing the interconnection of numerous networks: The chapter describes the Internet as a "network of networks" and raises the challenge of how access ISPs interconnect so that all end systems can communicate. It explains that a naive mesh design is too costly, and the actual Internet has evolved a more complex structure driven by economics and national policy, involving regional and Tier 1 ISPs and Internet Exchange Points (IXPs).

Aspects Covered:

  • Basic terminology and concepts: The chapter introduces fundamental terms such as hosts, end systems, communication links, packet switches, packets, protocols, and transmission rate.
  • Nuts and bolts of the Internet: This includes the hardware and software components like end systems (hosts), communication links (coaxial cable, copper wire, optical fiber, radio spectrum), and packet switches (routers and link-layer switches). It also describes how data is segmented into packets for transmission.
  • Services provided by the Internet: The chapter explains that the Internet can be viewed as an infrastructure that provides services to distributed applications. It uses the analogy of the postal service providing various delivery options. The specific services will be detailed in Chapter 2.
  • The Network Edge: This part focuses on end systems (like desktops, smartphones, servers) and access networks (like DSL, cable internet, FTTH, dial-up, wireless) that connect end systems to the first router. It also discusses the physical media used in these access technologies.
  • The Network Core: This section delves into the mesh of packet switches (routers and link-layer switches) and links that interconnect end systems. It contrasts packet switching (data sent in packets without resource reservation) with circuit switching (dedicated circuit established for the duration of communication). The chapter also describes the Internet as a hierarchy of networks, including access ISPs, regional ISPs, Tier 1 ISPs, and content provider networks connected through IXPs.
  • Delay, Loss, and Throughput in Packet-Switched Networks: This aspect examines the performance characteristics of computer networks. It covers different types of delay (nodal processing delay, queuing delay, transmission delay, propagation delay), packet loss due to buffer overflow, and the concept of end-to-end throughput as the rate at which data can be transferred between end systems. Bottleneck links and the impact of intervening traffic on throughput are also discussed.
  • Protocol Layers and Their Service Models: The chapter introduces the concept of layered architecture as a way to organize the complexity of network protocols. It describes the Internet's five-layer protocol stack: application, transport, network, link, and physical layers. The role of each layer and the concept of encapsulation (headers being added at each layer) are explained. The placement of complexity at the edges of the network (end systems implementing all five layers, while routers and switches implement lower layers) is highlighted.
  • Networks Under Attack: This section surveys common security threats facing computer networks, including malware (viruses, worms, botnets), denial-of-service (DoS and DDoS) attacks, sniffing, IP spoofing (source masquerading), and the interception and modification/deletion of messages. The lack of inherent security in the original Internet design is discussed.
  • History of Computer Networking and the Internet: The chapter provides a brief historical overview, starting with the development of packet switching in the 1960s, the emergence of proprietary networks and internetworking in the 1970s, the proliferation of networks in the 1980s, the Internet explosion in the 1990s, and developments in the new millennium, including the rise of cloud computing.

Key Points to Remember:

  • The Internet is a complex system but can be understood through its underlying principles and structure.
  • Communication in computer networks relies on protocols that define the format, order of messages, and actions taken upon their transmission or receipt.
  • The Internet consists of the network edge (end systems and access networks) and the network core (packet switches and links).
  • Packet switching and circuit switching are two fundamental approaches for transporting data. The Internet uses packet switching.
  • Performance in packet-switched networks is characterized by delay, loss, and throughput, which are affected by link capacities and network congestion.
  • The Internet architecture is organized into five protocol layers: application, transport, network, link, and physical, each providing specific services.
  • Computer networks face various security threats due to the Internet's original design based on mutual trust.
  • The Internet evolved from early packet-switched networks and has undergone significant transformations over the decades.
  • A top-down approach, starting with the application layer, will be used throughout the book to understand networking concepts.
  • The Internet's architecture has a "narrow waist" at the IP layer, which has been crucial for its growth by allowing diverse underlying technologies to interoperate.

Chapter 1 serves as a crucial introduction, laying the groundwork for the more in-depth discussions of each layer and various networking topics in the subsequent chapters. It encourages the reader to develop an intuition for network components and vocabulary.

Understanding the Internet: Nuts, Bolts, and Services

Section 1.1 of "Computer Networks and the Internet," titled "What Is the Internet?", aims to provide an initial understanding of this complex system by presenting it from two perspectives: a "nuts-and-bolts description" and a "services description".

Problems Raised and Their Solutions:

  • Understanding the complexity of the Internet: The chapter acknowledges the sheer scale and complexity of the Internet, referring to it as "arguably the largest engineered system ever created by mankind". It highlights the "hundreds of millions of connected computers, communication links, and switches" and "billions of users" connecting through diverse devices. Given this vastness and variety of components and uses, the question arises: "is there any hope of understanding how it works?". The proposed solution is to focus on "guiding principles and structure that can provide a foundation for understanding such an amazingly large and complex system". The book aims to provide a "modern introduction to the dynamic field of computer networking, giving you the principles and practical insights you’ll need to understand not only today’s networks, but tomorrow’s as well". Section 1.1 sets the stage by offering two fundamental descriptions to begin demystifying this complexity.
  • Enabling communication between diverse computing devices: The section implicitly raises the challenge of how a multitude of different computing devices, now including not just traditional computers and servers but also smartphones, tablets, and various "Internet 'things'" like TVs, game consoles, and cars, can communicate with each other. The solution, introduced in this section and elaborated throughout the book, lies in the use of computer networks, specifically the Internet, as the infrastructure that interconnects these billions of devices. These devices, collectively called "hosts" or "end systems," are connected by a "network of communication links and packet switches".
  • Managing the flow of data across interconnected networks: Section 1.1 introduces the idea that the Internet is not a single network but rather "a computer network that interconnects billions of computing devices throughout the world" and also "a network of networks". This raises the problem of how these individual networks, particularly the "ISPs that provide access to end systems," are interconnected to enable global communication. The section briefly explains that lower-tier ISPs connect through "national and international upper-tier ISPs," which are themselves directly connected. This interconnection forms a complex structure driven by "economics and national policy".
  • Establishing rules for communication: A fundamental problem in enabling communication between diverse and distributed entities is the need for agreed-upon rules. Section 1.1 introduces the concept of a "protocol" as the solution to this problem. It uses a human analogy of asking for the time to illustrate how protocols define the sequence of actions and messages in communication. In the context of computer networks, protocols are essential for controlling the "sending and receiving of information within the Internet". The Transmission Control Protocol (TCP) and the Internet Protocol (IP) are highlighted as two of the "most important protocols in the Internet," with IP specifying the format of packets. The collection of these principal protocols is known as TCP/IP.
  • Ensuring interoperability in a globally distributed system: Given the reliance on protocols, a problem arises: how can one ensure that different systems and products can communicate effectively? Section 1.1 introduces the concept of "standards" as the solution. It explains that Internet standards are developed by the Internet Engineering Task Force (IETF). These standards are documented in "requests for comments (RFCs)," which define protocols like TCP, IP, HTTP, and SMTP. Other standards bodies, such as the IEEE 802 LAN Standards Committee, also specify standards for network links. Adherence to these standards ensures that different implementations can interoperate.
  • Providing a structured way for applications to utilize the network: The section also presents the Internet as an "infrastructure that provides services to applications". This implies a need for a defined way for application programs running on end systems to request and receive these network services. The solution is the "socket interface," which provides a "set of rules that the sending program must follow so that the Internet can deliver the data to the destination program". An analogy with the postal service is used to explain this interface.

Aspects Covered:

  • Two fundamental descriptions of the Internet: Section 1.1 introduces two complementary ways of viewing the Internet. The first is a "nuts-and-bolts description," focusing on the physical and logical components that make up the network. The second is a "services description," which views the Internet as an infrastructure that provides services to distributed applications.
  • Nuts-and-bolts components: This description details the basic hardware and software elements of the Internet. These include:
    • End systems (hosts): Computing devices connected to the Internet, ranging from traditional desktops and servers to smartphones, tablets, and various "Internet 'things'".
    • Communication links: The physical media that connect end systems and packet switches, such as coaxial cable, copper wire, optical fiber, and radio spectrum. These links have different "transmission rates" measured in bits per second.
    • Packet switches: Network devices, primarily routers and link-layer switches, that forward data in the form of "packets". When sending data, end systems "segment the data and add header bytes to each segment," creating these packets.
    • Internet Service Providers (ISPs): Networks that provide access to end systems. These ISPs are interconnected in a hierarchical manner, involving lower-tier (access) ISPs and upper-tier (national and global) ISPs. Content provider networks (e.g., Google) are also part of this interconnected structure. Internet Exchange Points (IXPs) facilitate the interconnection of ISPs.
    • Protocols: Rules that govern the sending and receiving of information. TCP and IP are highlighted as crucial protocols and, together with the Internet's other principal protocols, are collectively known as TCP/IP. The IP protocol specifies the format of packets.
  • Services provided by the Internet: This perspective describes the Internet as an "infrastructure that provides services to distributed applications". These "distributed applications" involve multiple end systems exchanging data and run on the end systems themselves, not within the packet switches. Examples of such applications include email, web surfing, mobile apps, streaming media, and online social media.
  • The concept of a protocol: Section 1.1 introduces the fundamental idea of a protocol using a human analogy and provides a formal definition: "A protocol defines the format and the order of messages exchanged between two or more communicating entities, as well as the actions taken on the transmission and/or receipt of a message or other event". The importance of protocols in accomplishing different communication tasks in computer networks is emphasized.
  • Standards: The role of standards in ensuring interoperability among Internet components is discussed, with the IETF and its RFCs being primary examples. The IEEE 802 LAN Standards Committee is also mentioned for specifying standards for network links.
  • The socket interface: This interface is presented as the means by which an application program on one end system instructs the Internet infrastructure to deliver data to a specific program on another end system. It is described as a "set of rules that the sending program must follow", drawing an analogy with the postal service interface. The existence of multiple services provided through this interface is noted.
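
To make the socket interface concrete, here is a minimal sketch in Python. The host, port, and message are hypothetical placeholders, not details from the text; the point is only that the application follows the interface's rules and hands bytes to the infrastructure, which handles delivery.

```python
import socket

# Minimal sketch of the socket interface (hypothetical host and port).
# The application opens a connection, hands bytes to the Internet
# infrastructure, and reads whatever the remote program sends back.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(4096)  # delivery across the network is not the app's job
    print(reply[:80])
```

As in the postal-service analogy, the program only needs to address the envelope correctly; how the letter travels is the network's concern.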

Key Points to Remember:

  • The Internet can be understood as a network connecting billions of diverse computing devices globally.
  • Communication on the Internet relies on a hierarchy of interconnected networks managed by different ISPs.
  • Data is transmitted over the Internet in the form of packets, which are segments of larger messages with added header information.
  • Protocols are essential for controlling the exchange of information between communicating entities, defining the format, order, and actions related to messages. TCP and IP are fundamental protocols.
  • Internet standards, developed by organizations like the IETF, ensure that different network components and applications can interoperate.
  • The Internet provides services to distributed applications running on end systems through a socket interface, which acts as a set of rules for how applications can utilize the network infrastructure.
  • Understanding both the "nuts and bolts" (hardware and software components) and the "services" (infrastructure for applications) is crucial to grasping the nature of the Internet.
  • The book adopts a top-down approach, starting with the application layer, to simplify the study of computer networks. Section 1.1 lays the groundwork for this approach by introducing fundamental concepts and terminology.
  • The Internet is a complex engineered system, but by understanding its structure and principles, it becomes possible to comprehend its operation.
  • The initial design of the Internet was based on a model of mutual trust.

The Network Edge: End Systems and Access Networks

Section 1.2, "The Network Edge," delves deeper into the components of the Internet that are most familiar to users, namely the end systems and the access networks that connect these end systems to the network core. While Section 1.1 gives an overview of the Internet through its "nuts-and-bolts" and "services" descriptions, Section 1.2 elaborates on the "nuts and bolts" at the periphery of the network.

Problems Raised and Their Solutions (Implicit):

Section 1.2 does not frame explicit problems the way Section 1.1 does with the inherent complexity of the Internet, but it implicitly addresses the challenges of connecting diverse end systems to the vast Internet infrastructure. The solutions are presented through descriptions of the different types of access networks and their characteristics.

  • Connecting end systems to the first router: The section implicitly raises the challenge of how a wide variety of end systems (computers, smartphones, etc.) connect to the initial router on their path to the rest of the Internet. The solution is the existence of various "access networks". These access networks physically link an end system to the "edge router" (the first router on the path to a distant end system). Different access technologies cater to different environments and user needs.
  • Providing varying levels of bandwidth and sharing characteristics: The section implicitly addresses the problem that different users have different bandwidth requirements and access environments (home, enterprise, mobile). The solution lies in the diversity of access technologies available, each offering different transmission rates and varying models of shared or dedicated access. For example, DSL, cable modem (HFC), and fiber-to-the-home (FTTH) are mentioned as residential access technologies with different transmission rate ranges and sharing characteristics. Wireless technologies also provide connectivity with their own characteristics.
  • Managing the physical connection and transmission: The section implicitly touches upon the physical layer aspects of connecting end systems. While not a "problem" in the sense of a failure, the variety of physical media and their transmission characteristics need to be understood to appreciate how end systems gain access. The solution is the underlying physical layer technologies that support the different access networks, using media like copper wire, coaxial cable, optical fiber, and radio spectrum.

Aspects Covered:

Section 1.2 primarily covers the components at the "edge" of the network, focusing on end systems and the access networks that connect them to the network core.

  • End Systems (Hosts): The section reiterates that the computers and other devices connected to the Internet are often referred to as "end systems". This terminology emphasizes their location at the periphery of the Internet. Figure 1.3 illustrates the interaction of end systems with other parts of the network, including content provider networks and national or global ISPs. The section highlights the familiarity of these devices to everyday users (computers, smartphones, etc.). Review Question R1 in Chapter 1 further clarifies the definition of a host and an end system and asks for examples. A web server is also identified as an end system.
  • Access Networks: This is the central topic of Section 1.2. An access network is the network that physically connects an end system to the first router (edge router) on the path to other distant end systems. Figure 1.4 illustrates various types of access networks and their relationship with content provider networks and national/global ISPs. The section then describes the major access technologies, with details of HFC, DSL, FTTH, and various wireless technologies spread across other parts of Chapter 1. Review Question R4 asks for a list of four access technologies and their classification (home, enterprise, etc.). Question R9 asks specifically for the transmission rates and sharing characteristics of HFC, DSL, and FTTH, and Question R10 prompts a description of the wireless technologies in use and their characteristics.
  • Physical Connections: While not explicitly a subsection in the provided table of contents, the discussion of access networks inherently involves the physical links that enable these connections. Different access technologies utilize different physical media (e.g., copper for DSL, coaxial cable for HFC, fiber for FTTH, radio waves for wireless). The transmission rates of these links are also a key characteristic discussed. The physical layer, responsible for moving individual bits across these links, is briefly mentioned in relation to the link layer.

Key Points to Remember:

  • The "network edge" comprises the end systems (hosts) that users directly interact with and the "access networks" that connect these end systems to the broader Internet.
  • End systems sit at the edge of the Internet and include a wide range of devices like computers, smartphones, tablets, and IoT devices. Web servers are also considered end systems.
  • Access networks are crucial for providing the physical connection from an end system to the first router on its Internet path (the edge router).
  • There are various types of access technologies (e.g., DSL, cable modem, FTTH, wireless) that cater to different environments (home, enterprise, mobile) and offer different transmission rates and sharing models.
  • Understanding the characteristics of access networks is fundamental to comprehending how users connect to and interact with the Internet.
  • The transmission rate of the access network can be a bottleneck affecting the end-to-end throughput experienced by users.

In essence, Section 1.2 bridges the initial high-level description of the Internet in Section 1.1 with the more technical details of how individual users and devices gain access to this global network through various access technologies. It sets the stage for understanding the subsequent sections that delve into the network core and other aspects of computer networking.

The Internet Network Core: Packet and Circuit Switching

Section 1.3, "The Network Core", delves into the fundamental mechanisms of how data is transported across the vast Internet after it leaves the network edge. This section builds upon the overview provided in Section 1.2 about end systems and access networks by focusing on the intermediary infrastructure responsible for connecting these edges.

Problems Raised and Their Solutions:

Section 1.3 implicitly addresses the core problem of efficiently and effectively moving data between the numerous end systems connected to the Internet. Given the scale and heterogeneity of the network, several inherent challenges arise:

  • How to transport data between distant end systems? The sheer number of interconnected computers, communication links, and switches necessitates a systematic approach to guide data from a source to a destination. Section 1.3 introduces two fundamental solutions to this problem:
    • Packet Switching: This approach addresses the challenge of sharing network resources and handling variable data traffic. In packet switching, long messages are broken down into smaller units called packets. These packets then travel through communication links and packet switches (routers and link-layer switches) independently. Each packet is transmitted over a link at the full transmission rate of that link, so transmitting a packet of $L$ bits onto a link with transmission rate $R$ bits/sec takes $L/R$ seconds. Packet switching allows for statistical multiplexing, where resources are used on demand by different data streams, offering flexibility and efficiency for bursty data traffic (a worked sketch of this sharing argument follows this list).
    • Circuit Switching: This approach provides a dedicated path for communication. In circuit-switched networks, the necessary resources along a path (buffers, link transmission rate) are reserved for the entire duration of the communication session between the end systems. This is analogous to a phone call where a dedicated circuit is established. Once the circuit is established, data can flow without the need for addressing information to be included in each unit of data. Circuit switching is well-suited for applications requiring guaranteed bandwidth and low delay variation.
  • How to manage the complexity of a massive, interconnected network? The Internet is not a single monolithic entity but rather a "network of networks". This interconnected structure involves numerous independently managed networks, such as access ISPs, regional ISPs, and Tier 1 ISPs, as well as content provider networks and enterprise networks. Section 1.3.3 implicitly addresses the problem of interconnecting these heterogeneous networks so that end systems in one network can communicate with end systems in another. The solution is the hierarchical and interconnected nature of the Internet Service Providers (ISPs). These different tiers of ISPs connect at Internet Exchange Points (IXPs), allowing traffic to transit between networks. This "network of networks" model enables global connectivity and scalability.
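
The sharing argument behind statistical multiplexing can be made numeric. The sketch below uses illustrative figures of our own choosing (a 1 Mbps link, users who need 100 kbps when active and are active 10% of the time), not numbers from this summary: circuit switching can reserve capacity for only 10 such users, while packet switching can admit 35 because the probability that more than 10 are active simultaneously is vanishingly small.

```python
from math import comb

LINK_RATE = 1_000_000   # link capacity, bits/sec (illustrative)
USER_RATE = 100_000     # rate each active user needs, bits/sec
P_ACTIVE = 0.10         # fraction of time a user is active
N_USERS = 35            # users admitted under packet switching

# Circuit switching must reserve USER_RATE per user, active or not.
circuit_users = LINK_RATE // USER_RATE          # -> 10

# Packet switching overloads the link only when more than
# LINK_RATE / USER_RATE users happen to be active at once.
threshold = LINK_RATE // USER_RATE
p_overload = sum(
    comb(N_USERS, k) * P_ACTIVE**k * (1 - P_ACTIVE)**(N_USERS - k)
    for k in range(threshold + 1, N_USERS + 1)
)
print(f"circuit switching supports {circuit_users} users")
print(f"P(more than {threshold} of {N_USERS} active) ≈ {p_overload:.4f}")  # ≈ 0.0004
```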

Aspects Covered:

Section 1.3 primarily covers the internal structure and operational principles of the network core, the part of the Internet responsible for forwarding data between the edges.

  • Packet Switching (1.3.1): This subsection details the core concept of packet switching. It explains how end systems exchange messages, and how these messages are segmented into packets for transmission. It introduces the role of packet switches (routers and link-layer switches) in forwarding these packets through communication links. The concept of transmission rate and the time it takes to transmit a packet ($L/R$) are also covered.
  • Circuit Switching (1.3.2): This subsection presents an alternative approach to data transfer: circuit switching. It contrasts circuit switching with packet switching, highlighting the reservation of resources and the dedicated path established between end systems for the duration of communication. The analogy of reservation-based versus non-reservation-based restaurants is used to illustrate the key differences.
  • A Network of Networks (1.3.3): This subsection describes the Internet's architecture as an interconnection of numerous networks managed by different Internet Service Providers (ISPs). It explains the hierarchy of ISPs (access, regional, Tier 1) and how they connect at Internet Exchange Points (IXPs) to facilitate global communication. Figure 1.10 and Figure 1.15 (described in relation to other sections) visually represent this interconnected structure of the network core. Content provider networks are also discussed in the context of this interconnected landscape.

Key Points to Remember:

  • The network core is the central part of the Internet, a mesh of packet switches and links that interconnects the Internet's end systems. It is responsible for transporting data across the network.
  • There are two fundamental ways to move data through a network: packet switching and circuit switching. The Internet primarily uses packet switching.
  • In packet switching, data is divided into packets that are routed independently through the network, allowing for efficient sharing of resources.
  • In circuit switching, a dedicated communication path is established and resources are reserved for the duration of the session.
  • The Internet is a network of networks, meaning it's an interconnected collection of various independently operated networks (ISPs, content providers, enterprises) that work together to provide global connectivity.
  • Understanding the network core is crucial for comprehending how data travels from one end of the Internet to another, building upon the concepts of the network edge.

Section 1.3 lays the groundwork for understanding the more detailed aspects of network functionality discussed in subsequent chapters, such as routing algorithms within the network core (covered in Chapter 5) and the delay, loss, and throughput characteristics of packet-switched networks (covered in Section 1.4). It sets the stage for appreciating the complexity and the underlying principles that enable the Internet to function as a global communication infrastructure.

Packet Network Delay, Loss, and Throughput

Section 1.4, "Delay, Loss, and Throughput in Packet-Switched Networks", addresses the inherent limitations and performance characteristics of packet-switched networks. It raises the fundamental problem that the ideal of instantaneous, lossless data transfer with unlimited throughput is unachievable in reality. Instead, computer networks introduce delays, can lose packets, and constrain the rate at which data can be transferred. This section aims to examine and quantify these aspects.

Problems Raised and Their Solutions (or Lack Thereof in this Section):

This section primarily focuses on identifying and explaining the sources of performance limitations rather than providing direct solutions. The problems raised are the unavoidable realities of data transmission in packet-switched networks:

  • How does delay occur in packet-switched networks? The movement of a packet from source to destination involves traversing multiple nodes (hosts and routers) and links, each contributing to the overall delay. Section 1.4.1 breaks down this problem into its constituent parts, identifying four main types of delay at each node:
    • Nodal Processing Delay ($d_{proc}$): This is the time taken by a node (router) to process the packet header, determine the next hop, and perform error checking. Processing delays are typically on the order of microseconds or less in high-speed routers.
    • Queuing Delay ($d_{queue}$): This is the time a packet spends waiting in the output buffer of a router before it can be transmitted onto the link. Queuing delay is highly variable and depends on the level of congestion and the arrival rate of packets compared to the link's transmission rate. If multiple packets arrive at a buffer, they will experience different queuing delays depending on their position in the queue. Statistical measures like average queuing delay are often used to characterize this delay.
    • Transmission Delay ($d_{trans}$): This is the time required to push all the bits of the packet onto the communication link. It is calculated as the packet length (L) in bits divided by the link's transmission rate (R) in bits per second ($d_{trans} = L/R$). Transmission delays can range from microseconds to milliseconds depending on packet size and link rate.
    • Propagation Delay ($d_{prop}$): This is the time it takes for a bit to travel from one end of the link to the other. It depends on the physical distance $m$ between the two routers and the propagation speed $s$ of the medium ($d_{prop} = m/s$). Propagation delay is governed by the speed of light in the transmission medium and is independent of packet length and transmission rate.
    The total nodal delay is the sum of these four components: $d_{nodal} = d_{proc} + d_{queue} + d_{trans} + d_{prop}$. Section 1.4.3 discusses how these nodal delays accumulate along the end-to-end path to produce the end-to-end delay (a worked numeric sketch follows this list).
  • Why and how does packet loss occur? Packet loss happens when a router's buffer becomes full and cannot accommodate arriving packets. This typically occurs during periods of congestion when the arrival rate of packets to a link temporarily exceeds the link's transmission rate. When the buffer is full, an arriving packet (or sometimes an already-queued packet, depending on the buffer management policy) will be dropped. From an end-system perspective, a lost packet appears as if it was sent into the network core but never reached the destination. The fraction of lost packets increases with increasing traffic intensity.
  • What limits the rate of data transfer (throughput) between end systems? Section 1.4.4 addresses the concept of throughput, which is the rate (in bits/sec) at which the receiving host receives the data. The instantaneous throughput varies over time, while the average throughput represents the average rate over a longer period. Several factors can limit end-to-end throughput:
    • Bottleneck Link: The link in the end-to-end path with the lowest transmission rate often acts as a bottleneck, limiting the overall throughput to the rate of this link. If there is no other traffic, the throughput can be approximated by the minimum transmission rate along the path.
    • Intervening Traffic: Even if a link has a high transmission rate, the throughput for a particular flow can be reduced if many other data flows are also using that link, leading to congestion and queuing. The available bandwidth is shared among the competing flows.
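
The formulas above are easy to exercise numerically. The sketch below assumes illustrative values (a 1,500-byte packet, a 1 Mbps link, 100 km of fiber with propagation speed about $2 \times 10^8$ m/s, and an empty queue); only the formulas come from the section.

```python
L = 1500 * 8      # packet length, bits (assumed 1,500-byte packet)
R = 1_000_000     # link transmission rate, bits/sec
m = 100_000       # link length, meters
s = 2e8           # propagation speed in the medium, m/s (about 2/3 c)
d_proc = 2e-6     # assumed processing delay, sec
d_queue = 0.0     # assume an empty queue for this sketch

d_trans = L / R   # time to push all bits onto the link -> 12 ms
d_prop = m / s    # time for a bit to cross the link    -> 0.5 ms
d_nodal = d_proc + d_queue + d_trans + d_prop

# End-to-end throughput is capped by the slowest (bottleneck) link.
path_rates = [10e6, 1e6, 100e6]   # assumed link rates along the path
bottleneck = min(path_rates)      # -> 1 Mbps

print(f"d_trans = {d_trans*1e3:.1f} ms, d_prop = {d_prop*1e3:.2f} ms, "
      f"d_nodal = {d_nodal*1e3:.2f} ms")
print(f"bottleneck throughput = {bottleneck/1e6:.1f} Mbps")
```

Note how transmission delay dominates propagation delay at this packet size and link rate; on a much faster link the balance reverses.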

Aspects Covered:

Section 1.4 covers the following key aspects of performance in packet-switched networks:

  • Detailed Breakdown of Delay Components: It provides a structured explanation of the four main types of delay experienced by packets at each node in the network path: processing delay, queuing delay, transmission delay, and propagation delay. Figure 1.16 visually illustrates these nodal delays.
  • Queuing Dynamics and Packet Loss: It explains how queuing occurs when packet arrival rates exceed link capacities, leading to potential buffer overflow and packet loss. It highlights the variability of queuing delay based on network conditions.
  • End-to-End Delay Calculation: It implicitly addresses how the total delay experienced by a packet traveling from source to destination is the sum of the delays encountered at each hop (node and link) along the path.
  • Throughput Definition and Limiting Factors: It defines instantaneous and average throughput and identifies the transmission rates of links along the path, particularly the bottleneck link, and the impact of competing traffic as key factors that determine the end-to-end throughput. Figures 1.19 and 1.20 illustrate how link capacities and shared links affect throughput.

Key Points to Remember:

  • Delay in packet-switched networks is composed of processing delay, queuing delay, transmission delay, and propagation delay. The queuing delay is the most variable and can be significant during congestion.
  • Packet loss occurs due to buffer overflow at routers, typically when the arrival rate of packets exceeds the departure rate.
  • End-to-end delay is the cumulative delay along the path from the source to the destination.
  • Throughput is the rate of data transfer and is limited by the bottleneck link (the link with the minimum capacity) and the intervening traffic sharing the network resources.
  • The performance of Internet applications is significantly affected by these network delays, losses, and throughput limitations.
  • Understanding these fundamental performance characteristics is essential for comprehending how packet-switched networks operate and for designing protocols and applications that can function effectively despite these limitations.

Section 1.4 provides a foundational understanding of the performance trade-offs inherent in packet-switched networks, setting the stage for more in-depth discussions in later chapters about how protocols and network designs attempt to mitigate the negative effects of delay, loss, and limited throughput.

Network Protocol Layering and Service Models

Section 1.5, "Protocol Layers and Their Service Models," addresses the inherent complexity of the Internet and proposes a structured approach to understanding its architecture through the concept of protocol layering.

Problems Raised:

The primary problem raised in the context of network architecture is the sheer complexity of the Internet. The Internet comprises numerous applications, protocols, diverse end systems, packet switches, and various types of link-level media. This immense complexity makes it challenging to organize, discuss, and manage such a system. The section implicitly asks: How can we bring order and understanding to this intricate web of components and interactions?

Solutions:

The fundamental solution presented to manage the complexity of network protocols and hardware/software is protocol layering. Network designers organize protocols into distinct layers, with each protocol belonging to one layer. This layering provides a structured way to discuss system components. Furthermore, modularity, a key advantage of layering, makes it easier to update system components without affecting other parts of the system. The airline system is used as a helpful human analogy to illustrate this point: changing the implementation of gate functions does not necessitate changes in baggage handling or airplane routing as long as the gate layer continues to provide the same core service of loading and unloading passengers. This ability to change the implementation of a service without affecting other components is crucial for large, constantly evolving systems.

Aspects Covered:

Section 1.5 delves into several critical aspects of protocol layering:

  • Layered Architecture: The core idea is to organize network functionalities into a hierarchy of layers. The Internet protocol stack is presented as a five-layer model: the physical, link, network, transport, and application layers. The textbook itself is largely organized around these layers, following a top-down approach, starting with the application layer and moving downwards. This approach is motivated by the idea that understanding applications first provides context for the necessary network services and their implementation.
  • Service Models: Each layer offers specific services to the layer directly above it, defining the service model of that layer. A layer provides its service by performing actions within that layer and by utilizing the services offered by the layer below it. For example, layer n might offer reliable message delivery by using the unreliable message delivery service of layer n-1 and adding its own mechanisms for error detection and retransmission. The network layer's service model defines the characteristics of end-to-end delivery of packets between sending and receiving hosts.
  • Actions Within Layers and Use of Lower-Layer Services: As mentioned above, a layer's functionality is realized through its own internal processes and by leveraging the capabilities of the layer beneath it. This creates a dependency chain where each layer builds upon the foundation provided by the lower layers.
  • Encapsulation: The section explains the process of encapsulation, where a network-layer datagram is placed within a link-layer frame for transmission over a single link. Figure 1.24 illustrates the physical path of data down the sending host's protocol stack, through intermediate devices like link-layer switches and routers, and up the receiving host's stack. As data moves down, each layer adds its own header (and sometimes a trailer) to the data; at the receiving end, the process is reversed through de-encapsulation (a toy sketch follows this list).
  • Protocol Stacks: The collection of protocols implemented at each layer in an end system or a network device is referred to as the protocol stack. Different devices implement different sets of layers. End systems (hosts) typically implement all five layers of the Internet protocol stack. Routers, being network-layer devices, commonly implement the physical, link, and network layers (layers 1-3) to forward packets. Link-layer switches, which operate at the link layer, usually implement only the physical and link layers (layers 1-2) to forward frames within a local area network.
  • Conceptual and Structural Advantages: Layering provides a structured way to discuss system components. The modularity inherent in layering simplifies system updates. This conceptual clarity is a significant advantage in managing the complexity of networking.
  • Potential Drawbacks: While layering offers numerous benefits, the section also mentions potential drawbacks. One is the possibility of duplication of functionality across layers, such as error recovery being implemented at both the link and transport layers. Another potential issue is that a layer may need information (e.g., a timestamp) that is available only in another layer, which can violate the separation between layers. Some researchers and engineers are strongly opposed to layering because of these potential inefficiencies.
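
Encapsulation is easy to see in miniature. The toy sketch below uses made-up fixed-width header tags; real headers carry structured fields such as addresses, ports, and checksums. Each layer wraps the payload from the layer above on the way down the stack, and each layer strips its own header on the way back up.

```python
# Toy model of encapsulation and de-encapsulation
# (hypothetical 4-byte header tags stand in for real headers).
def send(app_message: bytes) -> bytes:
    segment = b"TRNS" + app_message   # transport layer adds its header
    datagram = b"NETW" + segment      # network layer wraps the segment
    frame = b"LINK" + datagram        # link layer wraps the datagram
    return frame                      # the physical layer moves these bits

def receive(frame: bytes) -> bytes:
    assert frame.startswith(b"LINK")
    datagram = frame[4:]              # link layer strips its header
    assert datagram.startswith(b"NETW")
    segment = datagram[4:]            # network layer strips its header
    assert segment.startswith(b"TRNS")
    return segment[4:]                # transport layer hands data up

assert receive(send(b"hello")) == b"hello"
```

A router in this model would unwrap only down to the network layer to read the datagram's destination before re-framing it, which is why routers implement layers 1-3 while hosts implement all five.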

Key Points to Remember:

  • The Internet's architecture is organized into a series of protocol layers to manage its inherent complexity.
  • Each layer provides a specific service model to the layer above it.
  • Layers achieve their functionality by performing actions within the layer and by using the services of the layer below.
  • Encapsulation is the process of adding headers (and sometimes trailers) at each layer as data moves down the protocol stack.
  • Different network devices (hosts, routers, link-layer switches) implement different subsets of the protocol stack based on their functionality. Hosts typically implement all five layers, routers implement the bottom three, and link-layer switches the bottom two.
  • The collection of protocols at each layer is called the protocol stack.
  • While layering offers significant advantages like structure and modularity, it can also have potential drawbacks such as duplication of functionality and information dependencies between layers.

Understanding protocol layering and service models is fundamental to comprehending how data is transmitted and managed across the Internet. This layered approach allows for the development and evolution of network technologies in a more manageable and organized fashion.

Networks Under Attack: An Introduction to Security Threats

Section 1.6 of the textbook, titled "Networks Under Attack", introduces the critical issue of security in computer networks, highlighting the problems that exist and hinting at future solutions that will be explored in more detail later in the book, particularly in Chapter 8.

Problems Raised:

This section raises the fundamental problem of the inherent insecurity of the Internet. The Internet has become essential for many institutions and individuals, yet it faces a "dark side" where malicious actors ("bad guys") attempt to disrupt services, damage computers, and violate privacy. The core question posed is: How are computer networks vulnerable? Furthermore, the section implicitly asks how these vulnerabilities can be addressed through defensive measures or by designing inherently more secure network architectures. It also reflects on the historical context, questioning why the Internet was originally designed in a way that lacks robust security.

Solutions:

While Section 1.6 serves primarily as an introduction to the problems, it does touch upon the concept of network security as the field dedicated to understanding attacks and defending against them, or ideally, designing systems that are immune to such attacks. It briefly mentions firewalls and intrusion detection systems (IDSs) as "popular defense mechanisms to malicious packet attacks". The text encourages the reader to think about potential defenses against specific attacks, such as DoS attacks and the need for end-point authentication. It explicitly states that mechanisms for end-point authentication will be explored in Chapter 8, and that Chapter 8 will delve into secure communication and defending against attacks. Therefore, Section 1.6 sets the stage by identifying the problems, with the promise of exploring solutions in greater detail in subsequent chapters.

Aspects Covered:

Section 1.6 provides a survey of some of the more prevalent security-related problems in today's Internet. These aspects include:

  • Malware Attacks: The section discusses how malicious software (malware) can be introduced into a user's host via the Internet. This can happen alongside legitimate data like social media posts or streaming media. Once installed, malware can perform various harmful actions, such as deleting files and installing spyware to collect private information (e.g., social security numbers, passwords, keystrokes) and send it back to the attackers.
  • Botnets: Compromised hosts can be enrolled in networks of similarly infected devices, collectively known as botnets. These botnets are controlled by attackers and can be leveraged for malicious activities like distributing spam e-mail or launching distributed denial-of-service (DDoS) attacks. Figure 1.25 illustrates the concept of a DDoS attack using a botnet.
  • Denial-of-Service (DoS) Attacks: These attacks aim to make targeted hosts or services inoperable. The section outlines three common types of DoS attacks:
    • Vulnerability Attack: This involves sending specifically crafted messages to exploit vulnerabilities in applications or operating systems running on a target host, potentially causing the service to stop or the host to crash.
    • Bandwidth Flooding: Attackers send an overwhelming volume of packets to the target host, saturating its access link and preventing legitimate packets from reaching the server.
    • Connection Flooding: Attackers establish a large number of half-open or fully open TCP connections at the target host. The host becomes overwhelmed by managing these bogus connections and may stop accepting legitimate connections. The section encourages readers to consider how network designers can defend against these types of DoS attacks.
  • Packet Sniffing: The section highlights the vulnerability created by wireless Internet access (via WiFi or cellular). By placing a passive receiver within the transmission range, an attacker can intercept and obtain a copy of every transmitted packet. These packets can contain sensitive information like passwords, social security numbers, and private messages. Such a passive receiver is called a packet sniffer. The section also notes that sniffing can occur in wired environments, particularly in broadcast environments like many Ethernet LANs or cable access technologies. Attackers who gain access to an institution's router or Internet access link can also plant sniffers.
  • The Internet's Insecure Origins: The section concludes by reflecting on why the Internet was initially so insecure. The fundamental reason is that it was designed based on a model of "a group of mutually trusting users attached to a transparent network". In this model, security was not a primary concern, and this is reflected in the default functionalities, such as the ability for any user to send a packet to any other user without explicit permission or authentication of identity.

Key Points to Remember:

  • The Internet faces significant security threats that can disrupt services, compromise systems, and violate privacy.
  • Common attack vectors include malware, denial-of-service attacks, and packet sniffing.
  • Botnets amplify the impact of attacks like DDoS.
  • Wireless communication introduces particular vulnerabilities to sniffing.
  • The Internet's original design lacked strong security considerations due to an assumption of mutual trust.
  • Network security is a crucial field dedicated to understanding and mitigating these threats.
  • Section 1.6 provides an initial overview of these problems, with the promise of more detailed discussions on security principles and defense mechanisms in later chapters.

History of Computer Networking and the Internet

Section 1.7, titled "History of Computer Networking and the Internet", primarily outlines the evolution of computer networking from its early stages to the modern Internet. While it doesn't explicitly raise "problems" in the same way as the vulnerabilities discussed in Section 1.6, it implicitly addresses the limitations of previous communication methods and the need for new approaches to support computer-to-computer communication.

Problems Raised (Implicit):

  • Inefficiency of Circuit Switching for Bursty Data: The section begins by noting that in the early 1960s, the telephone network, which used circuit switching, was the dominant communication network. Circuit switching is appropriate for voice transmission, which occurs at a constant rate. However, the emerging traffic from timeshared computers was "bursty," characterized by intervals of activity followed by inactivity. This implies that circuit switching, which establishes a dedicated end-to-end connection for the duration of a call, would be inefficient for this type of intermittent data traffic, leading to underutilization of resources during idle periods.
  • Need for Computer-to-Computer Communication: The increasing importance of computers in the early 1960s and the advent of timeshared systems created a need to connect computers so they could be shared among geographically distributed users. The lack of an "effective way" for computers to communicate with each other at the time is implicitly presented as a problem that needed a solution.
  • Interoperability of Different Networks: As various packet-switching networks began to emerge (ARPAnet, Cyclades, Tymnet, GE Information Services network, IBM’s SNA), the need for a way to connect these disparate networks together became apparent. The proliferation of independent networks created a problem of isolation, preventing seamless communication between users and resources on different networks.

Solutions (Key Developments):

  • Packet Switching: The primary solution to the inefficiency of circuit switching for bursty data was the invention of packet switching. Three research groups (Leonard Kleinrock, Paul Baran, and Donald Davies/Roger Scantlebury) independently began developing this concept in the early 1960s. Kleinrock's work demonstrated its effectiveness for bursty traffic using queuing theory. Packet switching breaks long messages into smaller "packets" that are then transmitted through communication links and packet switches (routers and link-layer switches). This allows for more efficient sharing of network resources.
  • Internetworking and TCP/IP: To address the problem of connecting different networks, Vinton Cerf and Robert Kahn pioneered work on "internetting," creating "a network of networks". This led to the development of the Transmission Control Protocol (TCP) and the Internet Protocol (IP). The early versions of TCP initially combined reliable delivery and forwarding, but later IP was separated out to handle packet forwarding, while TCP provided reliable, in-sequence delivery. UDP (User Datagram Protocol) was also developed as an unreliable, non-flow-controlled transport service. The TCP/IP suite became the foundational set of protocols for the Internet.
  • Early Packet-Based Networks: The development and experimentation with early packet-switching networks like the ARPAnet, the first multiple-access radio network ALOHAnet, and the wire-based shared broadcast network Ethernet provided practical experience and furthered the understanding of networking principles.
  • Development of Network Architectures: The 1980s saw the growth of various networks linking universities, such as BITNET, CSNET, and NSFNET. NSFNET, in particular, grew to become a primary backbone linking regional networks. The official deployment of TCP/IP as the standard on ARPAnet in 1983 was a crucial step towards a unified Internet architecture.
  • Key Internet Technologies: The late 1980s and 1990s witnessed the development of essential Internet technologies like host-based congestion control in TCP and the Domain Name System (DNS) for mapping human-readable names to IP addresses. The 1990s also saw the emergence of the World Wide Web and related technologies like HTTP and browsers, which fueled the "Internet Explosion".
  • Evolution to a Hierarchical Structure: The Internet evolved into a complex "network of networks" with a hierarchical structure involving access ISPs, regional ISPs, and Tier 1 ISPs. This structure, driven by economics and national policy, allowed the Internet to scale globally.

Aspects Covered:

  • Early History (1961-1972): This section covers the initial research and development of packet switching as a response to the limitations of circuit switching for computer communication. It highlights the independent work of Kleinrock, Baran, and Davies. Figure 1.26 shows an early packet switch.
  • Proprietary Networks and Internetworking (1972-1980): This period saw the emergence of various proprietary packet-switching networks alongside the ARPAnet. The key aspect covered is the groundbreaking work by Cerf and Kahn on "internetting" and the development of TCP/IP, which aimed to create a way for these heterogeneous networks to interoperate. The separation of IP and TCP and the development of UDP are also discussed.
  • Proliferation of Networks (1980-1990): This section details the significant growth in the number of networks, particularly those linking universities, such as BITNET, CSNET, and NSFNET. The adoption of TCP/IP as the standard protocol for ARPAnet in 1983 is a key event. Extensions to TCP for congestion control and the development of DNS also occurred during this period.
  • The Internet Explosion (The 1990s): This part describes the rapid expansion of the Internet, driven by the development of the World Wide Web, browsers, and e-commerce. The transformation of the Internet to support a wide range of applications is highlighted.
  • The New Millennium: This briefly touches on the continued evolution of the Internet into the new millennium, including the rise of high-speed wireless internet access and the trend of running applications in the "cloud".

Key Points to Remember:

  • The development of packet switching was a fundamental shift from circuit switching, enabling more efficient communication for the bursty nature of computer data.
  • The concept of internetworking, pioneered by Cerf and Kahn, and the development of the TCP/IP protocol suite were crucial for connecting disparate networks to form the Internet.
  • The ARPAnet served as a vital experimental network that contributed significantly to the development of Internet technologies.
  • The 1980s saw a significant proliferation of networks and the crucial standardization on TCP/IP.
  • The 1990s marked the "Internet Explosion" driven by the World Wide Web and the increasing number of users and applications.
  • The initial design of the Internet prioritized connectivity and mutual trust, which explains some of the inherent security vulnerabilities discussed in Section 1.6.
  • The Internet has evolved from a small research network to a complex, global "network of networks" due to technological advancements and economic and policy factors.

In essence, Section 1.7 provides the historical context for understanding the technologies and architectures that underpin the modern Internet discussed throughout the rest of the book. It illustrates how the solutions to early challenges in computer communication laid the foundation for the vast and complex network we use today.