There has been a bubbling stream of interest in HTTP/2 and what the move to it means for SEO. It will still be quite some time before the protocol is implemented widely across the internet, but in the meantime I would like to share some observations. This is a consolidation of the research I have done, along with a smattering of my own opinion.
What is HTTP/2?
HTTP/2 is a protocol: a set of rules governing the format of the data two parties exchange. Ultimately, it is the means two entities use to communicate. When you talk on a CB radio, the two parties need some common understanding for signaling when you are done speaking and it is the other's turn. Commonly, you would say “over” at the end of your sentence to indicate you are finished. “Why, yes, I believe that is chopped up fine enough, Medea, over!”
This is, at its core, what a protocol does. HTTP/2 is simply an advancement of the HTTP protocol. A handful of servers already support it, such as Apache, NGINX and IIS (Windows 10 and Windows Server 2016 only at this time).
Will HTTP/2 work with HTTP/1 and HTTP/1.1?
Absolutely. It is the client (web browser) that suggests the protocol, not the web server. If the client says it can use HTTP/2, the web server will respond with that protocol. If the client specifies HTTP/1 or HTTP/1.1, the server simply serves that up instead. The web server can serve all three protocols natively. Clients will not need multiple web servers to serve the different protocols; if a web server supports HTTP/2, it will also support HTTP/1 and HTTP/1.1.
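In practice, this negotiation happens during the TLS handshake via ALPN (Application-Layer Protocol Negotiation): the client advertises the protocols it can speak, and the server picks one. A minimal sketch using Python's standard ssl module (no connection is actually made here):

```python
import ssl

# The client advertises the protocols it speaks, in order of preference,
# via ALPN during the TLS handshake.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])  # prefer HTTP/2, fall back to 1.1

# After wrapping a socket and completing the handshake, the negotiated
# protocol is available from conn.selected_alpn_protocol(): "h2" if the
# server agreed to HTTP/2, or "http/1.1" otherwise.
```

If the server knows nothing about HTTP/2, negotiation simply falls through to HTTP/1.1, which is why the upgrade is invisible to older clients.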
What Needed Advancement?
HTTP dates back to 1989, a time when web pages didn't exist in their current incarnation and no one could predict the evolution they would take. Web pages didn't contain the level of graphics, functionality or jazz-hands that we currently enjoy. There are four main issues that HTTP/2 takes on, which we will address:
- Multiplexing vs. Head-of-Line Blocking
- Binary Headers
- Header Compression
- Server Push
Multiplexing vs. Head-of-Line Blocking
Under HTTP/1.1, browsers typically open only four to six simultaneous TCP connections per host. This means if you have six large images being downloaded, all other downloading stops until one of those connections finishes and a new one can start. This is the Head-of-Line Blocking problem that HTTP/1.1 (with pipelining) tried to fix. With HTTP/2, this entire sequence changes to a single multiplexed download rather than 4-6 individual ones.
If you have ever run site latency reports and looked at download waterfalls, this will make even more sense.
In a waterfall chart, each horizontal line represents a single asset that needed to be downloaded, and the length of the line denotes how long the download took. Only 4-6 of those downloads can happen at once. Whether you have a homepage with 89 requests (Walgreens), 158 (Wal-Mart), or 156 (Anthropologie), it doesn't matter: the downloads run in parallel, but only 4-6 at a time.
It is hard to visualize something virtual, but think about having four boxes that fill up with sand at slightly different speeds. Lots of boxes sit waiting while you fill the first four. Once a box is filled, it is moved aside and the next one starts. The boxes are filled independently, but each has to finish before another can start. All the while, header messages are going back and forth for each asset, each over its own TCP connection. This is how it works today.
Multiplexing is more like having a single box as large as the combined bandwidth of the connection between client and web server, rather than one box per asset. You can pour in as many sandbags as you want, as long as you don't exceed the box's length and width (which, in this example, represent the available network bandwidth).
This means you can download a lot of things all at once, which is much more efficient for your operating system, bandwidth and resources. The downloads and the header messages all move at the same time rather than one at a time. This should improve the speed at which pages load and render. There is also a huge benefit to having a single, multiplexed TCP connection versus the default individual connections. We will talk about the headers in the next section, but there is a huge improvement in speed for them, too.
Binary Headers and Compression
HTTP headers are decidedly clunky: a hodgepodge of formats and commands, all sent in clear text (there is a small caveat for HTTPS here, but the headers themselves are essentially the same). Clear text means everything is encoded as characters in a text character set, which brings further overhead. HTTP/2 uses binary headers instead of clear text; these are significantly smaller and carry much less overhead.
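Concretely, every HTTP/2 frame begins with a fixed nine-byte binary header (RFC 7540, section 4.1): a 24-bit payload length, an 8-bit type, 8 flag bits, and a 31-bit stream identifier. A quick sketch of packing and unpacking one:

```python
import struct

def pack_frame_header(length: int, frame_type: int, flags: int, stream_id: int) -> bytes:
    """Build the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    return (struct.pack(">I", length)[1:]                 # 24-bit payload length
            + bytes([frame_type, flags])                  # 8-bit type, 8-bit flags
            + struct.pack(">I", stream_id & 0x7FFFFFFF))  # 1 reserved bit + 31-bit stream id

def unpack_frame_header(header: bytes):
    length = int.from_bytes(header[0:3], "big")
    frame_type, flags = header[3], header[4]
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A HEADERS frame (type 0x1) with the END_HEADERS flag (0x4) on stream 1:
header = pack_frame_header(length=64, frame_type=0x1, flags=0x4, stream_id=1)
print(len(header), unpack_frame_header(header))  # 9 (64, 1, 4, 1)
```

Nine fixed bytes that a machine can parse directly, versus a variable-length block of text that has to be scanned character by character.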
Secondly, HTTP/2 uses a different type of compression than the usual GZIP or BZIP algorithms. The working group decided to move away from the DEFLATE algorithms (partly over security concerns such as the CRIME attack) to HPACK, a scheme designed specifically for headers. These savings have been found to be particularly beneficial for mobile connections.
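To see why HPACK helps, here is a toy sketch of its indexed-header idea (RFC 7541): common header name/value pairs live in a static table, so a frequently seen header can be sent as a single byte with the high bit set, instead of its full clear-text form. Real HPACK also adds Huffman coding and a dynamic table; this shows only the static lookup.

```python
# A few entries from HPACK's static table (RFC 7541, appendix A).
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":path", "/"): 4,
    (":scheme", "https"): 7,
    (":status", "200"): 8,
}

def encode_indexed(name: str, value: str) -> bytes:
    """Encode a header as a one-byte 'indexed header field' (high bit set)."""
    return bytes([0x80 | STATIC_TABLE[(name, value)]])

request_headers = [(":method", "GET"), (":path", "/"), (":scheme", "https")]
wire = b"".join(encode_indexed(n, v) for n, v in request_headers)

clear_text = "GET / HTTP/1.1\r\n"  # the rough HTTP/1.1 equivalent
print(len(wire), "bytes on the wire vs", len(clear_text.encode()), "bytes of clear text")
```

Three bytes instead of sixteen for the request line alone, and the savings compound for cookie-heavy headers repeated on every request.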
Server Push

This is an undoubtedly interesting aspect, and perhaps the hardest to predict. The idea is that a web server can push assets to the client (browser) without receiving a request for them first. This might sound like an AJAX call, but AJAX still begins with a request in the background, one that usually results in an information exchange that updates the DOM.
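As a sketch of how this looks in server configuration: some nginx releases (roughly the 1.13.9 through 1.25 era) offered an `http2_push` directive to send an asset alongside the page that references it. Push has seen limited adoption and has since been removed from newer nginx and Chrome releases, so treat this as illustrative only; the paths below are hypothetical.

```
server {
    listen 443 ssl http2;
    server_name example.com;

    location = /index.html {
        # When the browser asks for index.html, proactively push the
        # stylesheet it will need, before the browser requests it.
        http2_push /css/main.css;
    }
}
```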
HTTPS Only?

This is the oddest part of this discussion, because there was massive disagreement within the working group over whether HTTP/2 should require HTTPS and not offer a non-secure option.
There was enough good argument not to require it at the protocol level. Peculiarly, though, the browsers that support HTTP/2 at this time (Safari, Firefox, Chrome, IE 11 on Windows 10) will only accept a secure HTTP/2 connection. So while the protocol itself doesn't require encryption, the current browsers do; in practice, then, HTTP/2 is used entirely as a secure protocol.
What Does This Mean for SEO?

There isn't anything in this protocol that is SEO-specific, but there are certainly SEO benefits to reducing site latency. The improvements from multiplexing, compressed binary headers and server push should make pages load faster without changing much, if any, code. This is not a silver bullet, but another tool in the toolbox for tiered recommendations. One word of caution: this upgrade is almost always done by the web development/engineering group. It isn't a small task and can take a lot of planning. It will take many years for HTTP/2 to fully settle in as the default HTTP protocol.
If you would like to read more about the speed differences between HTTP/1.1, SPDY and HTTP/2, please read the article below.