A brief history of HTTP and current pitfalls of HTTP/3

The history of HTTP, the current issues with HTTP/3, and why I'm discontinuing support for HTTP/3 for now

First off, I'd like to say that I don't think HTTP/3 or QUIC is a waste of time, or that it has no real, meaningful benefits. I do plan to support it again in the future, but right now it's too much work.

So, what's wrong with it? Well, a few things. In short, and in no particular order:

  • No native support in OpenSSL
  • No native support in Nginx
  • UDP is inefficient
  • Not much support elsewhere yet
  • Not a lot of documentation

Now, let me explain...

First, a brief overview of the history of HTTP. HTTP 1.0 was formalised way back in 1996 (RFC 1945) and is basically just plain text over a TCP connection to port 80. Want security? Establish an SSL connection on port 443 and run HTTP over that.
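
To make that concrete, here's a minimal sketch in Python of what "just a TCP connection" means: open a socket to port 80 and write the request text by hand. The host name is a placeholder.

```python
import socket

HOST = "example.com"  # placeholder host

# HTTP/1.0 is plain text over TCP: connect to port 80 and write the request.
with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode())

    # In HTTP/1.0 the server replies and then closes the connection,
    # so "read until EOF" marks the end of the response.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.0 200 OK"
```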

*Diagram of the SSL/TLS layer*

A year later, in 1997, HTTP 1.1 came out with improvements to caching control, plus compression, persistent connections, chunked transfer encoding, virtual hosts, byte-range requests, and a lot more. Nothing completely different, which is why it was a point release. And we still use that bad boi to this very day.
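
As a quick illustration of two of those features, here's a sketch using Python's standard library (the host is a placeholder). The `Host` header, which `http.client` sends automatically, is what makes virtual hosts work, and the `Range` header asks for just a slice of the file.

```python
import http.client

conn = http.client.HTTPConnection("example.com")  # placeholder host

# http.client adds the Host header for us -- that's what lets one IP
# serve many sites (virtual hosts). Range asks for only the first 100 bytes.
conn.request("GET", "/", headers={"Range": "bytes=0-99"})
resp = conn.getresponse()

print(resp.status)                      # 206 Partial Content if ranges are supported
print(resp.getheader("Content-Range"))  # e.g. "bytes 0-99/1256"
resp.read()
conn.close()
```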

However, in 2015 a new kid arrived on the block: HTTP/2. This version of HTTP was based on Google's SPDY protocol. It didn't change any of the high-level semantics; what it did change was how data was transported between the server and the client.

Let me back up a bit. With HTTP 1.0, every request was a new TCP connection. So, you open a web page and it loads a logo, a midi file, and an image of goatse. Each of those requests requires a completely new HTTP connection, which in turn requires first establishing a TCP connection to the server. This is very inefficient and consumes a lot of resources for both the user and the server. Throw SSL on top of that and it becomes even worse.

This was one of the things HTTP 1.1 improved upon. With persistent connections you can have more than one request/response on the same HTTP connection. So, you can establish a single HTTP connection and download the HTML, logo, midi file, and goatse all over that one connection, increasing speed and lowering resource requirements. You can also keep idle connections open so you don't have to redo the SSL/TLS handshake when the user clicks a link that takes them from the goatse midi page of your site to the lemonparty mp3 page. You just reuse the existing HTTP connection to transfer all of the files.
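
Here's roughly what persistent connections look like in practice, as a Python sketch with made-up paths. Note that each request still has to finish before the next one starts, which sets up the problem below.

```python
import http.client

conn = http.client.HTTPSConnection("example.com")  # placeholder site

# Made-up paths standing in for the HTML, logo, and midi file.
for path in ("/", "/logo.png", "/song.mid"):
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()  # the body must be drained before the socket can be reused
    print(path, resp.status, len(body))

conn.close()  # one TCP+TLS connection served all three requests, one at a time
```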

However, there is a downside to this: you can only transfer one file at a time. So, you'd first download the HTML, then the logo, then the midi file, and finally the goatse. Not too big an issue for a simple site like that, but the web has evolved a lot since then. Now you have sites like Facebook, YouTube, etc. where you have to download several JavaScript files, several CSS files, and a lot of JSON before you can even start to render the page, and then a lot of images and whatnot. You can put your JS and CSS directly in the HTML, cutting down on the total number of requests, but that can lead to security issues. The other option is to establish multiple HTTP connections and download files in parallel, but now we're back to the same inefficiencies as before, to a lesser extent but still not ideal. Additionally, doing that, at least back in the day, could get your IP banned from sites, because the server would see a bunch of HTTP connections coming from the same IP and think it was a DoS attack. Computers got faster, and we eventually settled on multiple HTTP connections at once, because with the added computing power it was still faster overall even if it used more resources.
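
A sketch of that parallel-connections workaround, with a hypothetical host and made-up asset paths: each worker pays for its own TCP and TLS handshake, which is exactly the inefficiency described above.

```python
import http.client
from concurrent.futures import ThreadPoolExecutor

HOST = "example.com"                                          # placeholder site
PATHS = ["/app.js", "/style.css", "/data.json", "/hero.jpg"]  # made-up assets

def fetch(path):
    # Every worker opens a brand-new TCP + TLS connection: the cost of parallelism.
    conn = http.client.HTTPSConnection(HOST)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()
        return path, resp.status
    finally:
        conn.close()

# Browsers historically capped this at around six connections per host,
# partly for the DoS-lookalike reason mentioned above.
with ThreadPoolExecutor(max_workers=4) as pool:
    for path, status in pool.map(fetch, PATHS):
        print(path, status)
```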

But... what if we could transfer multiple files at once over a single connection? This is exactly what SPDY and later HTTP/2 do: multiplexing. I'm not going to go into the specifics, but essentially you can request multiple files at once and the web server interleaves the responses as separate streams on the same connection, making things much more efficient so pages load faster.
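
For a feel of what multiplexing buys you, here's a sketch using the third-party httpx library (assuming `pip install "httpx[http2]"`; the host and paths are hypothetical). All four requests share one connection instead of four.

```python
import asyncio
import httpx  # third-party: pip install "httpx[http2]"

PATHS = ["/app.js", "/style.css", "/data.json", "/hero.jpg"]  # made-up assets

async def main():
    async with httpx.AsyncClient(http2=True, base_url="https://example.com") as client:
        # All four requests go out concurrently but share ONE connection;
        # HTTP/2 interleaves them as independent streams.
        responses = await asyncio.gather(*(client.get(p) for p in PATHS))
        for resp in responses:
            print(resp.url, resp.status_code, resp.http_version)  # e.g. "HTTP/2"

asyncio.run(main())
```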

HTTP/2 really only works over TLS, although that's not technically required; it's mostly because all the big browsers only support it over HTTPS. Additionally, HTTP 1.1 had been around forever, and routers in internet exchanges knew exactly what traffic on port 80 was: HTTP traffic. They had a lot of experience with it and would optimise HTTP traffic. However, when you'd throw HTTP/2 traffic at them, in the best case it'd work as expected. Sometimes they'd try to optimise it as if it were HTTP 1.1 traffic, which would make it even slower than HTTP 1.1, and in the worst cases could completely break it. That's one of the main reasons all the browsers only support HTTP/2 over TLS, where those middleboxes can't see or mangle the payload: they wanted to avoid literally breaking the internet.

So now we move on to HTTP/3. HTTP/3 is an even bigger change than HTTP/2 was. HTTP/3 runs on QUIC instead of TCP.

Okay, so after we started using HTTP/2, the next limiting factor became the TCP protocol itself. TCP is kind of slow. Roughly speaking, the server sends a packet to the user, then the user sends a packet back saying, "Okay, I got it, now send the next one." This inefficiency gets amplified by your distance to the server, because every acknowledgement has to make a round trip. TCP also guarantees in-order delivery, so a single lost packet stalls everything behind it, including every stream HTTP/2 multiplexed over that one connection, even though these days packet loss isn't anything like it used to be and most connections have virtually none.

However, developing a new protocol at that level takes many years, because it has to be tested and then implemented at the kernel level of operating systems. What Google did was take UDP and build a protocol on top of it named QUIC. It's also important to note that the thing we call QUIC now is different from Google's original QUIC, which was renamed "Google QUIC" or "gQUIC" and is pretty much abandoned now. So now we've got QUIC, built on top of UDP, which guarantees packet delivery more efficiently and with less overhead, and HTTP/3 runs on top of that.
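
This isn't QUIC itself, but here's a small Python sketch of why UDP makes a usable foundation: the kernel imposes no handshake or delivery guarantees on a UDP socket, so a user-space protocol is free to layer its own, smarter ones on top. The address below is a placeholder.

```python
import socket

ADDR = "192.0.2.1"  # placeholder address (TEST-NET), stands in for a real server

# TCP: connect() runs the SYN / SYN-ACK / ACK handshake -- a full round trip
# to the server -- before a single byte of payload can be sent.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(2)
try:
    tcp.connect((ADDR, 80))  # blocks for the handshake; times out on this placeholder
except OSError:
    pass
tcp.close()

# UDP: no handshake at all; the very first datagram can carry payload.
# QUIC uses this freedom to fold its handshake into TLS 1.3's and to
# retransmit per stream, so one lost packet doesn't stall the others.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", (ADDR, 443))  # fire and forget; the kernel verifies nothing
udp.close()
```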

Aside from running on a very different network stack, there are a few other changes. One of the most notable is that to get HTTPS you don't establish a TLS connection and then run HTTP through it. Security and encryption are built into HTTP/3: TLS 1.3 is part of the QUIC handshake itself. There's no such thing as unencrypted HTTP/3.

This brings me to my point about OpenSSL. Encryption works differently in HTTP/3, so you need a TLS library that supports it. Currently, the only one I'm aware of that supports HTTP/3 is BoringSSL, Google's fork of OpenSSL. However, BoringSSL doesn't support some of the stuff that OpenSSL does.

Second, we've got Nginx. Nginx is my web server of choice, but it doesn't officially support HTTP/3 yet. There is a patch by Cloudflare, based on their quiche library, that adds HTTP/3 support to Nginx, but honestly, it's kind of jank. Someone was maintaining it on the AUR, but the package was taken down, so I've been having to maintain it myself.

Third, UDP is inefficient. I don't mean to say that it's somehow slower than TCP on a fundamental level; quite the opposite. It just hasn't had 30+ years of software optimisation. For pretty much the entire history of the Internet, traffic has been almost exclusively TCP. SSH, Telnet, XMPP, IRC, FTP, SMTP, POP3, IMAP, Gopher, SFTP, CUPS, NFS, SMB, and most importantly HTTP (until now) all use TCP, so getting TCP to perform as well as possible has been a high priority for network stacks. The same can't be said for UDP, and serving HTTP/3 traffic uses around 2.5~3 times as much CPU to handle the connections because of it.
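
If you want to poke at that yourself, here's a crude loopback sketch comparing the CPU cost of pushing the same bytes through TCP and UDP sockets. It only measures the raw syscall path; real HTTP/3 stacks also do ACKs, congestion control, and crypto in user space, which is where most of that 2.5~3x goes, so treat this as an illustration, not a rigorous benchmark.

```python
import socket
import time

PAYLOAD = b"x" * 1400  # roughly one MTU-sized packet
ROUNDS = 50_000

def bench_udp():
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = rx.getsockname()
    start = time.process_time()              # CPU time, not wall clock
    for _ in range(ROUNDS):
        tx.sendto(PAYLOAD, addr)
        rx.recv(2048)                         # loopback: the datagram is already there
    elapsed = time.process_time() - start
    tx.close(); rx.close()
    return elapsed

def bench_tcp():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    tx = socket.create_connection(srv.getsockname())
    rx, _ = srv.accept()
    start = time.process_time()
    for _ in range(ROUNDS):
        tx.sendall(PAYLOAD)
        rx.recv(len(PAYLOAD), socket.MSG_WAITALL)  # read exactly one payload back
    elapsed = time.process_time() - start
    tx.close(); rx.close(); srv.close()
    return elapsed

print(f"TCP: {bench_tcp():.2f}s CPU  UDP: {bench_udp():.2f}s CPU")
```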

As for support: I'm not saying that because there isn't much support currently it's not worth adding; far from it. I'm just saying that, at least for now, shutting down HTTP/3 support won't affect much, because not much supports it. When the software-side things I mentioned previously are in better shape, I plan to add support back. For now it's too much work, and people smarter than me are working on it, so I'll just let them do their thing and use it when they're done.

And finally, the documentation. There just isn't a lot of material about HTTP/3 because it's so new. I mean, when I first set it up I wasn't sure if I'd done it right. I found two websites that would test a domain for HTTP/3 support: one said it was working fine and the other said it was configured incorrectly (I later found out it was set up correctly). Once it becomes more common there will be more information available and setup will be a lot easier, but for right now that isn't the case.
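
One low-tech sanity check I could have used: servers advertise HTTP/3 to browsers via the `Alt-Svc` response header, which you can read over a plain HTTPS request. This only proves the server is advertising h3, not that QUIC traffic actually flows (UDP 443 can still be blocked by a firewall); the URL is a placeholder.

```python
import urllib.request

URL = "https://example.com/"  # placeholder for the site under test

with urllib.request.urlopen(URL) as resp:
    alt_svc = resp.headers.get("Alt-Svc")

# Something like 'h3=":443"; ma=86400' (or 'h3-29=...' for the draft
# versions) means the server is offering HTTP/3.
print(alt_svc or "no Alt-Svc header: HTTP/3 not advertised")
```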

When I started writing this I didn't intend for it to be so long. It was going to be more of an announcement, but here we are...

thank you for reading my blog post