This page explores the theoretical performance differences between HTTP/1.1, HTTP/2, and HTTP/2 Server Push.
You can find the source code on GitHub: lawrenceching/jetty-http2-example
The demo exposes one endpoint:
/api/posts/:id
where id ranges from 0 to 20, simulating the loading of 21 blog posts. To simulate network slowness, I added a 5-second delay to every API response.
Browsers limit the number of concurrent TCP connections. Chrome, for example, allows only 6 connections per domain.
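To see what this limit implies for the demo, here is a small back-of-the-envelope model. The batch arithmetic is my own sketch, not code from the demo; it assumes every request takes exactly the demo's 5-second delay and that the browser fills a freed connection slot immediately.

```java
// Model how a per-domain connection limit serializes requests into batches.
class ConnectionLimitModel {

    // Number of sequential "waves" needed to issue n requests
    // over at most `limit` concurrent connections: ceil(n / limit).
    static int batches(int n, int limit) {
        return (n + limit - 1) / limit;
    }

    // Total time to finish all n requests, in milliseconds,
    // assuming each request takes exactly perRequestMillis.
    static long totalMillis(int n, int limit, long perRequestMillis) {
        return (long) batches(n, limit) * perRequestMillis;
    }

    public static void main(String[] args) {
        // 21 posts (ids 0..20), Chrome's 6-connection limit, 5s per request.
        System.out.println(batches(21, 6));           // 4 batches
        System.out.println(totalMillis(21, 6, 5000)); // 20000 ms
    }
}
```

So even before looking at the waterfall, we expect the 21 post requests to complete in 4 waves of roughly 5 seconds each.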
The screenshot below is the waterfall when I load my demo page over HTTP/1.1.
It clearly shows that the /api/posts/:id requests are queued: only 6 run at a time because of the per-domain connection limit.
If I switch to HTTP/2, in theory all API calls can be triggered at once, because HTTP/2 multiplexes many streams over a single TCP connection.
And the waterfall proves it.
It takes 10 seconds to load the page (5s to load index.html, 5s to load all posts).
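The 10-second figure can be checked with the same kind of rough model. This is a sketch of my own, assuming the post requests start only after index.html has finished loading and every response takes exactly 5 seconds; it ignores connection setup and other overhead.

```java
// Rough page-load model: index.html loads first, then the 21 post
// requests are issued. HTTP/1.1 batches them 6 at a time; HTTP/2
// multiplexes them all over one connection.
class PageLoadModel {
    static final long DELAY_MS = 5000; // the demo's artificial delay
    static final int POSTS = 21;       // ids 0..20

    static long http1Millis(int connectionLimit) {
        int batches = (POSTS + connectionLimit - 1) / connectionLimit;
        return DELAY_MS + batches * DELAY_MS; // index.html + batched posts
    }

    static long http2Millis() {
        return DELAY_MS + DELAY_MS; // index.html + all posts in parallel
    }

    public static void main(String[] args) {
        System.out.println(http1Millis(6)); // 25000 ms under HTTP/1.1
        System.out.println(http2Millis());  // 10000 ms under HTTP/2
    }
}
```

Under these assumptions HTTP/2 cuts the load time from about 25 seconds to 10, purely by removing the connection-limit queueing.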
Server Push allows the server to send assets to the client before the browser requests them.
So, while the browser is still loading index.html, the server can already start pushing the blog posts to the browser. This cuts another 5 seconds off the page load time.
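In a Servlet 4.0 container such as recent Jetty, the push can be initiated from the handler that serves index.html. The following is a minimal sketch, not the demo's actual code; the class name is hypothetical and only the /api/posts/:id paths come from the demo:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.PushBuilder;

// Sketch: while serving index.html, push the 21 post responses so they
// are already in the browser's push cache when the page asks for them.
public class IndexServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        PushBuilder push = req.newPushBuilder();
        if (push != null) { // null when the client is not on HTTP/2
            for (int id = 0; id <= 20; id++) {
                push.path("/api/posts/" + id).push();
            }
        }
        resp.setContentType("text/html");
        resp.getWriter().write("<html>...</html>"); // serve index.html as usual
    }
}
```

Note that newPushBuilder() returns null when push is unavailable (for example, over HTTP/1.1), so the same servlet degrades gracefully.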
The waterfall above shows the result with HTTP/2 Server Push enabled, pushing all posts while the browser was fetching index.html.
You can see a huge improvement here, because each call to /api/posts/:id
now takes only around 1 to 2 ms.
If you look at the Initiator column, all /api/posts/:id
requests were initiated by Server Push. While index.html was loading, Chrome was already receiving the /api/posts/:id
responses and storing them in its cache. So when index.html finished loading and the page called /api/posts/:id
, Chrome served the responses from cache without making any real network requests.