How HTTP and the OSI Model Work
HTTP (Hypertext Transfer Protocol) is the underlying communication protocol of the World Wide Web, invented by Tim Berners-Lee. HTTP functions as a request-response protocol in a client-server architecture model. The Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) are responsible for establishing HTTP standards, which are published as Requests for Comments (RFCs). HTTP has four versions: HTTP 0.9, HTTP 1.0, HTTP 1.1, and HTTP 2.0.
HTTP / 0.9
- The initial version of HTTP.
- Telnet-friendly protocol.
- Request nature: Single-Line (method + path for requested document).
- Only supported the GET method.
- Only accepted hypertext responses.
- Connection nature: Terminated after receiving the response.
- No HTTP headers, No status or error codes, No URLs.
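The single-line request format described above can be sketched by building the raw bytes an HTTP/0.9 client would put on the wire (a sketch for illustration; modern servers no longer accept 0.9 requests):

```python
# An HTTP/0.9 request is one line: the GET method and a document path,
# terminated by CRLF. No headers, no version string, no status codes.
def build_http09_request(path: str) -> bytes:
    return f"GET {path}\r\n".encode("ascii")

request = build_http09_request("/index.html")
print(request)  # b'GET /index.html\r\n'
```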
HTTP / 1.0
- Browser-Friendly protocol.
- Metadata about each request and response was embedded in the HTTP headers (version number, status code, content type, accept).
- Responses were no longer limited to hypertext.
- Supported the GET, HEAD, and POST methods.
- Still, the connection was terminated immediately after the response.
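Compared to the single-line 0.9 request, an HTTP/1.0 request carries a version string and headers. A minimal hand-built sketch (the host `example.com` is illustrative):

```python
# HTTP/1.0 adds a version to the request line and metadata in headers.
def build_http10_request(host: str, path: str) -> bytes:
    lines = [
        f"GET {path} HTTP/1.0",  # request line now carries the version
        f"Host: {host}",         # metadata travels in headers
        "Accept: text/html",     # responses need not be hypertext anymore
        "",                      # blank line ends the header section
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_http10_request("example.com", "/index.html")
print(request)
```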
🔸 Both HTTP 0.9 and HTTP 1.0 open a new connection for each request and close it immediately after sending the response. A TCP three-way handshake occurs every time a connection is established, and some time is wasted on connection termination as well. As a solution to this, HTTP 1.1 was introduced with persistent connections.
HTTP / 1.1
- The current version of HTTP commonly in use.
- Introduced important features and enhancements like persistent and pipelined connections, chunked transfers, compression/decompression, content negotiation, virtual hosting, faster responses, and great bandwidth savings through cache support.
- Provides support for the GET, HEAD, POST, PUT, DELETE, TRACE, and OPTIONS methods.
- Long-lived connections: the connection is not terminated after each request and response.
The persistent connection is also known as HTTP keep-alive or HTTP connection reuse. It enables the use of a single TCP connection to send and receive multiple HTTP requests/responses.
HTTP pipelining allows multiple HTTP requests to be sent on a single TCP connection without waiting for the corresponding responses.
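Keep-alive can be demonstrated end to end with Python's standard library. This self-contained sketch starts a throwaway local server (the handler and `hello` body are illustrative) and sends two requests over one TCP connection:

```python
import http.client
import http.server
import threading

# A minimal HTTP/1.1 handler; Content-Length lets the connection be reused.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"        # enable keep-alive on the server side

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):        # keep the example output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)  # port 0: OS picks
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, two request/response cycles: the connection is not
# torn down between them (HTTP keep-alive / connection reuse).
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
results = []
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    results.append((resp.status, resp.read()))
conn.close()
server.shutdown()

print(results)  # [(200, b'hello'), (200, b'hello')]
```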
🔸 Although we can send multiple requests without waiting for responses, HTTP 1.1 is not asynchronous. HTTP 1.1 made these modifications on top of HTTP 1.0 without breaking changes, so even though several successive requests are sent to the server, it replies to them in the order they were received.
Apart from the above, we have a secure version of HTTP 1.1 called HyperText Transfer Protocol Secure (HTTPS). It uses SSL/TLS for secure, encrypted communication and is a topic for another time. HTTP 2.0 is a breaking-change enhancement of HTTP 1.1 that introduces binary framing, multiplexing, header compression, and server push, changing how traditional HTTP works. We will talk about HTTP 2.0 in a future article. HTTP 3.0 is currently in the draft state; it will use HTTP over QUIC.
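HTTPS is ordinary HTTP carried over a TLS-encrypted socket. A small sketch of the client side using Python's `ssl` module, showing that the default context verifies server certificates (the `example.com` endpoint in the comment is illustrative):

```python
import ssl

# A default client-side TLS context: certificate verification and
# hostname checking are both on by default.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# To use it, a client would wrap a plain TCP socket before speaking HTTP:
#   with socket.create_connection(("example.com", 443)) as raw:
#       with context.wrap_socket(raw, server_hostname="example.com") as tls:
#           tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```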
The Open Systems Interconnection (OSI) model describes seven layers that computer systems use to communicate over a network. It was the first standard model for network communications, adopted by all major computer and telecommunication companies in the early 1980s. OSI was introduced in 1983 by representatives of the major computer and telecom companies and was adopted by ISO as an international standard in 1984.
1. Application Layer
At the top of the OSI model, we find the Application Layer, implemented by network applications like Google Chrome, Firefox, Skype, and Gmail. These applications produce the data that has to be transferred over the network. The application layer is used by end-user software such as web browsers and email clients. It provides protocols to send and receive information and display data to the user. Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), and Domain Name System (DNS) are some of the protocols available at the application layer.
2. Presentation Layer
The Presentation Layer receives data from the application layer in the form of characters and numbers and converts it into a machine-understandable binary format, for example by converting ASCII to EBCDIC. This process is called Translation. Before the data is transmitted, the presentation layer reduces the number of bits used to represent the original data. This process is called Data Compression. Apart from that, the presentation layer is responsible for the encryption and decryption of messages, for example using SSL/TLS. This is known as Encryption.
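Two of these ideas can be shown in miniature with the standard library: translating character data into bytes, then shrinking the byte stream (real TLS encryption is omitted here; the sample message is illustrative):

```python
import zlib

# Translation: character data -> machine-understandable binary format.
message = "OSI presentation layer " * 20
encoded = message.encode("ascii")

# Data Compression: fewer bits represent the same original data.
compressed = zlib.compress(encoded)
print(len(encoded), len(compressed))   # the compressed form is smaller

# Decompression on the receiving side is lossless.
restored = zlib.decompress(compressed).decode("ascii")
print(restored == message)             # True
```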
3. Session Layer
The session layer is responsible for creating sessions between devices so that they can communicate. It is responsible for the establishment of connections, maintenance of sessions, authentication, and security. Some of the main functionalities of the session layer are:
- Session establishment, maintenance, and termination: This allows the two processes to establish, use and terminate a connection.
- Synchronization: This layer allows a process to add checkpoints (synchronization points) into the data. These synchronization points help to identify errors so that the data can be re-synchronized properly, avoiding data loss.
- Dialog Controller: The session layer allows two systems to start communication with each other in half-duplex or full-duplex.
4. Transport Layer
The transport layer takes data from the session layer and breaks it into segments for transmission purposes. It is also responsible for reassembling the segments on the receiving end. The transport layer carries out the flow control and error control mechanisms too. The main responsibility of the transport layer is the End-to-End Delivery of the complete message.
- Segmentation and Reassembly: Most networks have a limitation on the amount of data that can be included in a single PDU. The transport layer divides application data into blocks of data that are of an appropriate size. At the destination, the transport layer reassembles the data before sending it to the destination application or service. Ports are used to make sure the message is delivered to the correct process. Each segment is assigned a sequence number that uniquely identifies the segments and their order.
- Flow control: Network hosts have limited resources, such as memory or bandwidth. Some protocols can request that the sending application reduce the rate of data flow. This is done at the Transport layer by regulating the amount of data the source transmits as a group. Flow control can prevent the loss of segments on the network and avoid the need for retransmission.
- Error control: The TCP receiver uses checksum bits for error detection. If there are no errors, it sends an acknowledgment to the sender. If errors are found, the receiver does not send an acknowledgment, so the sender retransmits the unacknowledged segment after some time.
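Segmentation and reassembly with sequence numbers can be sketched in a few lines (a toy model, not a real TCP implementation):

```python
# Split application data into fixed-size segments, tag each with a
# sequence number, deliver them out of order, and reassemble by sequence.
def segment(data: bytes, size: int) -> list[tuple[int, bytes]]:
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(segments: list[tuple[int, bytes]]) -> bytes:
    return b"".join(chunk for _, chunk in sorted(segments))

payload = b"the quick brown fox jumps over the lazy dog"
segments = segment(payload, 8)
segments.reverse()                      # simulate out-of-order arrival
print(reassemble(segments) == payload)  # True
```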
The two most common protocols in the transport layer are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols manage the communication of multiple applications. The differences between the two are the specific functions that each protocol implements.
User Datagram Protocol (UDP)
UDP is a simple, connectionless protocol, described in RFC 768. It has the advantage of low-overhead data delivery. The pieces of communication in UDP are called datagrams, which this transport layer protocol sends as best-effort. Applications that use UDP include video streaming and Voice over IP (VoIP).
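The connectionless nature of UDP is easy to see over the loopback interface: no handshake, each `sendto()` is one self-contained datagram (the message text is illustrative):

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender: no connection is established before sending.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"best-effort datagram", addr)

data, peer = receiver.recvfrom(1024)
print(data)  # b'best-effort datagram'
sender.close()
receiver.close()
```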
Transmission Control Protocol (TCP)
TCP is a connection-oriented protocol, described in RFC 793. TCP incurs additional overhead to gain functions: in-order delivery, reliable delivery, and flow control. Each TCP segment has 20 bytes of overhead in the header encapsulating the application layer data, whereas each UDP segment has only 8 bytes of overhead. Applications that use TCP include web browsers, e-mail, and file transfers.
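The header sizes quoted above can be checked by packing the fixed header fields with `struct` (field values below are placeholders for illustration, not a real packet capture):

```python
import struct

# Minimal fixed TCP header: 9 fields in network byte order, 20 bytes total.
tcp_header = struct.pack(
    "!HHIIBBHHH",
    12345, 80,        # source port, destination port
    0, 0,             # sequence number, acknowledgment number
    5 << 4, 0x02,     # data offset (5 x 32-bit words), flags (SYN)
    65535, 0, 0,      # window, checksum, urgent pointer
)

# Complete UDP header: 4 fields, 8 bytes total.
udp_header = struct.pack(
    "!HHHH",
    12345, 53,        # source port, destination port
    8, 0,             # length, checksum
)

print(len(tcp_header), len(udp_header))  # 20 8
```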
5. Network Layer
The network layer is responsible for the transmission of data from a source to a destination located in the same or a different network. It takes the segment from the transport layer and adds the network layer header, which contains the source and destination IPv4 addresses and some other details, thereby converting it into a Packet. The network layer can then select the shortest of the available routes to transmit the packets. Some main functionalities of the network layer are:
- Routing: The network layer protocols determine which route is suitable from source to destination.
- Logical Addressing: Identifies each device on the internetwork uniquely by defining an addressing scheme. The sender's and receiver's IP addresses are placed in the header by the network layer. An IP address distinguishes each device uniquely and universally.
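Logical addressing in miniature with Python's `ipaddress` module: an address identifies a host, and the network prefix tells a router whether the destination is local or must be forwarded (the private network and addresses below are illustrative):

```python
import ipaddress

# A /24 network and two destinations: one inside it, one outside.
network = ipaddress.ip_network("192.168.1.0/24")
local = ipaddress.ip_address("192.168.1.42")
remote = ipaddress.ip_address("8.8.8.8")

print(local in network)    # True: deliver on the local network
print(remote in network)   # False: forward toward another network (routing)
```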
6. Data Link Layer
For Network layer packets to be transported from source host to destination host, they must traverse different physical networks. These physical networks can consist of different types of physical media such as copper wires, microwaves, optical fibers, and satellite links. Network layer packets do not have a way to directly access these different media. It is the role of the OSI Data Link layer to prepare Network layer packets for transmission and to control access to the physical media. Therefore, the data link layer is responsible for the node-to-node delivery of the message in a given physical medium. The Data Link layer prepares a packet for transport across the local media by encapsulating it with a header and a trailer to create a Frame. It consists of Data, Header, and Trailer.
- Data: The packet from the network layer.
- Header: Contains the source and the destination addresses of the frame and the control bytes.
- Trailer: Contains the error detection and error correction bits. It is also called a Frame Check Sequence (FCS).
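The trailer's error-detection role can be sketched with a CRC-32 checksum standing in for the Frame Check Sequence (the payload bytes are illustrative):

```python
import zlib

# Sender: compute a checksum over the frame contents and append it.
payload = b"network layer packet"
fcs = zlib.crc32(payload)

# Receiver: recompute and compare; a mismatch means the frame is discarded.
print(zlib.crc32(payload) == fcs)      # True: frame accepted
corrupted = b"network layer racket"    # a single flipped character
print(zlib.crc32(corrupted) == fcs)    # False: error detected
```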
The main function of this layer is to make sure data transfer is error-free from one node to another over the physical layer. When a packet arrives in a network, it is the responsibility of the data link layer to transmit it to the host using its MAC address. Data Link Layer is divided into two sub-layers;
- Logical Link Control (LLC): This places information in the frame that identifies which network layer protocol is being used for the frame.
- Media Access Control (MAC): This provides the data link layer addressing of data according to the physical signaling requirements of the medium and the type of Data Link layer protocol in use.
7. Physical Layer
Upper OSI layer protocols prepare data from the human network to be transmitted to its destination. The Physical layer controls how data is transmitted on the communication media. The role of the OSI Physical layer is to encode the binary digits that represent Data Link layer frames into signals and transmit and receive these signals across the physical media such as copper wires, optical fiber, and wireless, that connect network devices.
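The encoding step can be sketched as a toy line coder: a frame's bytes become the bit sequence the physical layer would signal (here as 0/1 integers; a real transceiver maps these to voltages, light pulses, or radio symbols):

```python
# Expand each byte into its 8 bits, most significant bit first.
def to_bits(frame: bytes) -> list[int]:
    return [(byte >> shift) & 1
            for byte in frame
            for shift in range(7, -1, -1)]

bits = to_bits(b"Hi")  # 'H' = 0x48, 'i' = 0x69
print(bits)  # [0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1]
```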
For further clarification, check these resources: