HTTP/1.1
HTTP/1.1 is the first standardized version of the HyperText Transfer Protocol. First published in January 1997 as RFC 2068, only eight months after HTTP/1.0 (RFC 1945), the protocol introduced persistent connections, a mandatory Host header enabling virtual hosting, chunked transfer encoding, and a comprehensive caching model.
A version change was necessary because many applications labeled themselves "HTTP/1.0" without fully implementing the specification. Declaring a new version allowed clients and servers to accurately communicate their capabilities.
RFC history
HTTP/1.1 has been revised multiple times. Each revision clarified ambiguities and improved precision without changing the core protocol.
| RFC | Date | Notes |
|---|---|---|
| RFC 2068 | January 1997 | First HTTP/1.1 spec |
| RFC 2616 | June 1999 | Major revision, widely referenced |
| RFC 7230–7235 | June 2014 | Six-part decomposition replacing RFC 2616 |
| RFC 9110–9112 | June 2022 | Current specs (Internet Standard) |
The June 2022 restructuring separated HTTP semantics (RFC 9110, shared across all versions) from HTTP/1.1 message syntax (RFC 9112) and caching (RFC 9111). Roy T. Fielding, who co-authored every HTTP/1.1 RFC, co-edited the final revision with Mark Nottingham and Julian Reschke.
Key features
Persistent connections
HTTP/1.0 closed the TCP connection after every response. HTTP/1.1 reverses this: connections stay open by default. Sending Connection: close opts out.
Persistent connections eliminate TCP handshake overhead for subsequent requests and allow the TCP congestion window to grow, improving throughput on high-latency links. Browsers typically open up to six parallel TCP connections per origin.
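The keep-alive defaults can be sketched as a small decision function. This is a simplified illustration, not any particular server's logic, and the `should_keep_alive` helper name is invented:

```python
def should_keep_alive(version: str, headers: dict) -> bool:
    """Decide whether to reuse the TCP connection after this exchange.

    HTTP/1.1 defaults to persistent; "Connection: close" opts out.
    HTTP/1.0 defaults to closing; "Connection: keep-alive" opts in.
    """
    tokens = {t.strip().lower() for t in headers.get("Connection", "").split(",")}
    if version == "HTTP/1.1":
        return "close" not in tokens
    if version == "HTTP/1.0":
        return "keep-alive" in tokens
    return False  # unknown or older versions: close to be safe

# A bare HTTP/1.1 exchange keeps the connection open; an explicit close does not.
print(should_keep_alive("HTTP/1.1", {}))                       # True
print(should_keep_alive("HTTP/1.1", {"Connection": "close"}))  # False
print(should_keep_alive("HTTP/1.0", {}))                       # False
```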
Mandatory Host header
Every HTTP/1.1 request must include a Host header. A server must respond with 400 Bad Request if the header is missing.
Virtual hosting
Before HTTP/1.1, each website needed its own IP address. The mandatory Host header enabled name-based virtual hosting, multiple domains sharing a single IP address and port. This reduced IPv4 address consumption and made shared hosting practical.
Chunked transfer encoding
The Transfer-Encoding: chunked mechanism allows a server to stream a response without knowing the total size upfront. Each chunk includes its size in hexadecimal followed by the data. A zero-length chunk signals the end of the body.
4\r\n
Wiki\r\n
6\r\n
pedia \r\n
0\r\n
\r\n
Optional trailer headers follow the final chunk, carrying values computed after generating the body (such as checksums). Chunked encoding is critical for dynamic content (CGI output, database queries, and server-generated pages) where buffering the entire response before sending is impractical.
HTTP/1.1 only
Chunked transfer encoding is specific to HTTP/1.1. HTTP/2 and HTTP/3 use DATA frames with built-in stream framing and do not use Transfer-Encoding.
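The framing above is simple enough to sketch in a few lines of Python. This is an illustrative round-trip codec that ignores trailers and chunk extensions; the function names are invented:

```python
def encode_chunked(chunks):
    """Frame an iterable of byte chunks as Transfer-Encoding: chunked."""
    out = b""
    for chunk in chunks:
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    return out + b"0\r\n\r\n"  # zero-length chunk ends the body

def decode_chunked(data):
    """Parse a chunked body back into the original payload (no trailers)."""
    payload, pos = b"", 0
    while True:
        line_end = data.index(b"\r\n", pos)
        size = int(data[pos:line_end], 16)  # chunk size is hexadecimal
        if size == 0:
            return payload
        start = line_end + 2
        payload += data[start:start + size]
        pos = start + size + 2              # skip the chunk's trailing CRLF

body = encode_chunked([b"Wiki", b"pedia "])
print(body)                  # b'4\r\nWiki\r\n6\r\npedia \r\n0\r\n\r\n'
print(decode_chunked(body))  # b'Wikipedia '
```

The round trip reproduces the "Wiki"/"pedia " framing shown above exactly.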
Pipelining
HTTP/1.1 introduced pipelining: sending multiple requests on a single connection without waiting for each response. Responses must arrive in the same order as requests.
In practice, pipelining failed. Head-of-line blocking meant a slow response stalled all subsequent responses on the same connection, many proxies and servers handled pipelined requests incorrectly, and only idempotent methods (GET, HEAD) qualified. No major browser except Opera ever shipped pipelining enabled by default, and Firefox eventually removed its pipelining code entirely. HTTP/2 multiplexing replaced pipelining with true concurrent streams.
100 Continue
The Expect: 100-continue mechanism lets a client check whether the server will accept a request body before sending the body. The server responds with 100 Continue (proceed) or an error status (abort). This saves bandwidth when the server plans to reject a large upload based on headers alone.
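Server-side, the decision can be sketched as follows. This is a hypothetical handler with an assumed `MAX_BODY` limit; real servers apply their own policies:

```python
MAX_BODY = 10 * 1024 * 1024  # hypothetical upload limit (10 MB), for illustration

def interim_response(headers):
    """Decide what to send when a request carries Expect: 100-continue.

    The status is chosen from the headers alone, before any body is read,
    so the client can abort a doomed upload without wasting bandwidth.
    """
    if headers.get("Expect", "").lower() != "100-continue":
        return None  # nothing to acknowledge; read the body normally
    length = int(headers.get("Content-Length", 0))
    if length > MAX_BODY:
        return "HTTP/1.1 413 Content Too Large"
    return "HTTP/1.1 100 Continue"

print(interim_response({"Expect": "100-continue", "Content-Length": "5000"}))
# HTTP/1.1 100 Continue
print(interim_response({"Expect": "100-continue", "Content-Length": "99999999"}))
# HTTP/1.1 413 Content Too Large
```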
Range requests
The Range header enables partial content retrieval, with the server responding with 206 Partial Content. This mechanism supports resumable downloads, video seeking, and parallel chunk downloads. Servers advertise support through Accept-Ranges: bytes.
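Handling a single-range request can be sketched like this. A simplified illustration only: real servers also handle suffix ranges, multi-range requests, and 416 Range Not Satisfiable:

```python
import re

def serve_range(resource: bytes, range_header: str):
    """Answer a single-range request like "bytes=0-499" with a 206 slice.

    Simplified sketch: no suffix ranges ("bytes=-500"), no multiple
    ranges, no 416 handling.
    """
    m = re.fullmatch(r"bytes=(\d+)-(\d*)", range_header)
    start = int(m.group(1))
    # An open-ended range ("bytes=8-") runs to the last byte.
    end = int(m.group(2)) if m.group(2) else len(resource) - 1
    body = resource[start:end + 1]
    headers = {
        "Content-Range": f"bytes {start}-{end}/{len(resource)}",
        "Content-Length": str(len(body)),
    }
    return 206, headers, body

status, headers, body = serve_range(b"abcdefghij", "bytes=2-5")
print(status, headers["Content-Range"], body)  # 206 bytes 2-5/10 b'cdef'
```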
New methods
HTTP/1.0 formally defined GET, HEAD, and POST. HTTP/1.1 standardized additional HTTP methods:
- PUT: create or replace a resource
- DELETE: remove a resource
- OPTIONS: query supported methods
- TRACE: diagnostic loop-back
- CONNECT: establish a tunnel (used for HTTPS through proxies)
Improved caching
HTTP/1.1 replaced the HTTP/1.0 date-based caching model with Cache-Control directives (max-age, no-cache, no-store, must-revalidate, public, private). ETag headers and conditional requests (If-None-Match, If-Match) provided precise revalidation beyond date-based comparison. The Vary header enabled cache differentiation by request characteristics.
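The freshness and revalidation rules can be illustrated with two toy functions. This is a deliberately simplified sketch of max-age and ETag handling, not a complete cache (for instance, no-cache really means "revalidate before use", which here collapses to "not fresh"):

```python
def is_fresh(cache_control: str, age_seconds: int) -> bool:
    """Freshness check against Cache-Control: max-age (simplified sketch)."""
    for directive in cache_control.split(","):
        name, _, value = directive.strip().partition("=")
        if name in ("no-store", "no-cache"):
            return False  # simplification: both force a trip to the server
        if name == "max-age":
            return age_seconds < int(value)
    return False  # no explicit lifetime: treat as stale

def revalidate(etag_on_server: str, if_none_match: str) -> int:
    """Conditional GET: 304 Not Modified when the client's ETag still matches."""
    return 304 if if_none_match == etag_on_server else 200

print(is_fresh("public, max-age=3600", 120))   # True  (2 minutes old)
print(is_fresh("public, max-age=3600", 7200))  # False (2 hours old)
print(revalidate('"abc123"', '"abc123"'))      # 304
```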
Content negotiation
HTTP/1.1 formalized content negotiation through Accept, Accept-Language, Accept-Encoding, and Accept-Charset headers. Quality factors (q-values) from 0 to 1 express client preferences for media types, languages, and encodings.
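Q-value selection can be sketched as follows. This is an illustrative parser for a single Accept-* header; real negotiation also handles wildcards and language-tag prefix matching:

```python
def best_match(accept_language: str, available: list) -> str:
    """Pick the available language the client prefers most via q-values."""
    prefs = {}
    for part in accept_language.split(","):
        tag, _, params = part.strip().partition(";")
        q = 1.0  # a tag without parameters defaults to quality 1
        if params.strip().startswith("q="):
            q = float(params.strip()[2:])
        prefs[tag.strip().lower()] = q
    # The highest-q available language wins; unlisted languages score 0.
    return max(available, key=lambda lang: prefs.get(lang.lower(), 0.0))

print(best_match("en-US, en;q=0.5, fr;q=0.8", ["fr", "en-US"]))  # en-US
```

Here en-US carries an implicit q=1, so it beats fr at q=0.8 even though fr outranks plain en.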
Head-of-line blocking
HTTP/1.1 operates strictly in sequence on each connection: one request, one response, then the next. A slow response blocks all subsequent requests on the same connection, even with pipelining.
At the TCP level, a lost packet stalls the entire byte stream until retransmission completes. Subsequent packets already received wait in the kernel buffer, unusable by the application. This transport-level head-of-line blocking affects all TCP-based protocols and is independent of HTTP.
HTTP/2 solves the HTTP-level problem through multiplexed streams but remains vulnerable to TCP-level blocking. HTTP/3 solves both by running on QUIC, where packet loss affects only the individual stream.
Performance workarounds
Developers adopted several techniques to work around HTTP/1.1 connection limits and per-request overhead:
- Domain sharding: splitting resources across subdomains (img1.example.re, img2.example.re) to open more parallel connections
- CSS sprites: combining multiple images into one and using CSS background-position
- Script and CSS bundling: concatenating files to reduce request count
- Data URI inlining: embedding small images as Base64 strings directly in HTML or CSS
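The data-URI technique, for instance, is little more than Base64 encoding. The `data_uri` helper name is invented, and the GIF bytes below are placeholder data, not a real image:

```python
import base64

def data_uri(image_bytes: bytes, mime: str = "image/gif") -> str:
    """Inline a small image as a Base64 data URI, saving one HTTP request."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# The resulting string is embedded straight into HTML or CSS, e.g.
# <img src="data:image/gif;base64,..."> -- no separate request needed.
print(data_uri(b"GIF89a..."))  # data:image/gif;base64,R0lGODlhLi4u
```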
HTTP/2 anti-patterns
These workarounds become counterproductive with HTTP/2. Domain sharding prevents the single-connection advantage. Bundling defeats granular caching. The techniques are specific to HTTP/1.1 connection constraints.
Security
HTTP/1.1 is a plaintext protocol with no built-in encryption. TLS is added separately as HTTPS.
Request smuggling
HTTP/1.1 provides two mechanisms for indicating message body length: Content-Length and Transfer-Encoding: chunked. When both are present, intermediaries and servers sometimes disagree on which takes precedence. This ambiguity enables request smuggling attacks: an attacker crafts a request that the front-end proxy and back-end server parse differently, allowing the attacker to inject requests.
RFC 9112 Section 6.1 resolves the ambiguity: Transfer-Encoding takes precedence, and an intermediary must remove the Content-Length field before forwarding a message that contains both.
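The parsing disagreement can be demonstrated with two toy parsers applied to the same bytes. This is a deliberately naive illustration of the desync, not production parsing code:

```python
def body_by_content_length(raw: bytes) -> bytes:
    """A naive front end that trusts Content-Length."""
    head, _, rest = raw.partition(b"\r\n\r\n")
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            return rest[: int(line.split(b":")[1])]
    return b""

def body_by_chunked(raw: bytes) -> bytes:
    """A back end that (correctly, per RFC 9112) honors Transfer-Encoding."""
    _, _, rest = raw.partition(b"\r\n\r\n")
    body, pos = b"", 0
    while True:
        end = rest.index(b"\r\n", pos)
        size = int(rest[pos:end], 16)
        if size == 0:
            return body
        body += rest[end + 2 : end + 2 + size]
        pos = end + 2 + size + 2

# Same bytes, two different bodies: the disagreement smuggling exploits.
raw = (b"POST / HTTP/1.1\r\nHost: a\r\nContent-Length: 4\r\n"
       b"Transfer-Encoding: chunked\r\n\r\n0\r\n\r\nGET /admin HTTP/1.1...")
print(body_by_content_length(raw))  # b'0\r\n\r' -- 4 bytes of body
print(body_by_chunked(raw))         # b'' -- the leftover bytes look like a new request
```

The front end believes the body is 4 bytes, while the back end sees an empty chunked body and treats the trailing "GET /admin ..." bytes as the start of a second, attacker-controlled request.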
Other concerns
The TRACE method was exploitable for Cross-Site Tracing (XST) attacks and is now disabled on most servers. Slowloris attacks exploit persistent connections by sending partial headers slowly, exhausting server connection limits.
Current status
HTTP/1.1 remains the universal fallback protocol. Every HTTP/2 and HTTP/3 implementation supports HTTP/1.1 as a baseline. The text-based format makes the protocol straightforward to debug with tools like curl and netcat.
Modern CDN traffic is predominantly HTTP/2 and HTTP/3, but legacy infrastructure, corporate proxies, internal services, and IoT devices continue to rely on HTTP/1.1.
Example
A complete HTTP/1.1 exchange over a persistent connection. The client requests an HTML page and then an image on the same TCP connection.
Initial request
GET /index.html HTTP/1.1
Host: www.example.re
User-Agent: Mozilla/5.0 (Windows NT 5.0; rv:1.1)
Accept: text/html
Accept-Language: en-US, en; q=0.5
Accept-Encoding: gzip, deflate
Response
HTTP/1.1 200 OK
Server: Apache
Date: Thu, 01 Jan 1998 12:01:00 GMT
Connection: Keep-Alive
Keep-Alive: timeout=5, max=500
Content-Encoding: gzip
Content-Type: text/html; charset=UTF-8
Last-Modified: Mon, 29 Dec 1997 12:15:00 GMT
Transfer-Encoding: chunked
<html>
Welcome to the <img src="/logo.gif"> example.re
homepage!
</html>
(Gzip compression and chunk framing are omitted from the body above for readability.)
Second request (same connection)
GET /logo.gif HTTP/1.1
Host: www.example.re
User-Agent: Mozilla/5.0 (Windows NT 5.0; rv:1.1)
Accept: image/gif
Accept-Language: en-US, en; q=0.5
Accept-Encoding: gzip, deflate
Response
HTTP/1.1 200 OK
Age: 8450220
Cache-Control: public, max-age=315360000
Connection: Keep-Alive
Content-Type: image/gif
Content-Length: 5000
Date: Thu, 01 Jan 1998 12:01:01 GMT
Last-Modified: Sun, 01 Jan 1995 12:01:00 GMT
Server: Apache
<Binary data for a 5K GIF image>
Both requests share the same TCP connection. The server keeps the connection open until the client sends Connection: close or the idle timeout expires.
Takeaway
HTTP/1.1 turned HTTP into a production-grade protocol. Persistent connections, the mandatory Host header, chunked transfer encoding, and fine-grained caching made the web scalable. The text-based format and universal support keep HTTP/1.1 relevant as the baseline protocol for every HTTP implementation.
See also
- RFC 9110: HTTP Semantics
- RFC 9111: HTTP Caching
- RFC 9112: HTTP/1.1
- HTTP/0.9
- HTTP/1.0
- HTTP/2
- HTTP/3
- HTTP Explained