
HTTP/1 ushered in the modern age of web browsing — images, video and more — back in 1997. In 2015, HTTP/2 arrived and massively improved the speed and security of connections between browsers and servers. In the next few years we will see HTTP/3, which is set to increase speed again. However, adoption has been slow: even five years after HTTP/2, many servers still haven't updated their protocols.

Don't be stuck in the past

Speed matters. Analytics tell us that websites taking longer than 5 seconds on first load lose half the people visiting them, while websites that load within 3 seconds on first load are proven to have a higher retention rate and, ultimately, a better conversion rate.

Speak with us to see how we can improve your loading speeds and user experience. 

 

The onset of HTTP/2 is upon us. Now over 75% of your users' browsers support the low-latency transfer protocol, yet adoption has been slow. – Patrick Hamann

Protocol – a set of rules to follow

TCP Protocol – a set of rules computers follow to talk to each other with accuracy. If I send it, I'll make sure you've gotten it, even if I have to stop the world. You'll get it or I'll die trying.
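A minimal sketch of that guarantee using Python's standard `socket` module — a toy echo over localhost (the `run_server` helper is just for illustration):

```python
import socket
import threading

# Toy demonstration of TCP's reliability: every byte sent is
# delivered, in order, or the connection dies trying.
def run_server(server_sock):
    conn, _ = server_sock.accept()
    data = b""
    while len(data) < 5:
        data += conn.recv(1024)        # TCP hands us every byte, in order
    conn.sendall(b"got: " + data)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=run_server, args=(server,)).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"hello")               # TCP acks and retransmits under the hood
reply = b""
while len(reply) < len(b"got: hello"):
    reply += client.recv(1024)
print(reply)                           # b'got: hello'
client.close()
server.close()
```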

UDP Protocol – another set of rules for computers to talk to each other. I’ll send it, I don’t care if you get it. But if you do get it, it’ll be way faster.
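The fire-and-forget style looks like this in Python — no handshake, no acknowledgement. Over localhost the datagram almost always arrives, but nothing in the protocol promises it:

```python
import socket

# UDP: fire-and-forget datagrams; no connection, no delivery guarantee.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hi", ("127.0.0.1", port))   # no handshake, no ack, no retry

data, addr = receiver.recvfrom(1024)        # on localhost it usually arrives
print(data)  # b'hi'
```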

HTTP Protocol – a set of rules built on top of TCP for doing all the HTTP stuff, like requesting web pages and images.

HTTP/1.0 – new TCP connection (a connection based on the TCP protocol) for EVERY request/response.
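You can watch HTTP/1.0 hang up after every response with Python's standard library — the minimal `Handler` below is a throwaway stand-in for a real server:

```python
import http.server
import socket
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.0"           # closes after each response
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):           # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(server.server_address)
sock.sendall(b"GET / HTTP/1.0\r\n\r\n")
response = b""
while True:
    chunk = sock.recv(1024)
    if not chunk:                           # server closed the TCP connection
        break
    response += chunk
print(response.splitlines()[0])             # b'HTTP/1.0 200 OK'
server.shutdown()
```

A second request would need a brand-new TCP handshake — that's the per-request cost HTTP/1.1 set out to remove.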

HTTP/1.1 – can “keep alive” the same TCP connection for every request/response, but each request/response must still be sent serially. Since modern browsers need to send multiple requests/responses in parallel (e.g. fetching CSS and JS at the same time), they’d bypass this limitation by … making multiple TCP connections. This defeats the purpose of HTTP/1.1 and makes “keep alive” more like “walking dead.”
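Keep-alive in action, sketched with Python's `http.client` against a throwaway local server: two requests travel over the same TCP connection, but strictly one after the other:

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"      # enables keep-alive
    def do_GET(self):
        body = self.path.encode()      # echo the requested path back
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):      # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection(*server.server_address)
paths = []
for path in ("/style.css", "/app.js"):
    conn.request("GET", path)          # same TCP connection, reused
    paths.append(conn.getresponse().read().decode())  # serially: must finish
print(paths)                           # ['/style.css', '/app.js']
conn.close()
server.shutdown()
```

Note that the second request cannot even be sent until the first response has been fully read — that's the serial limitation browsers dodge by opening several connections at once.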

HTTP/2 – using ONE TCP connection, it can send multiple requests/responses in parallel! But TCP can’t tell the difference between any given request/response and is paranoid about sending everything accurately. Therefore, if one data packet goes missing, everything comes to a halt until that packet is resent. This means you could have 3 files being sent, and packet loss on one file will hold up all 3 from transmitting. In networks with “high” packet loss (2%), HTTP/2 can be slower than HTTP/1.1.

HTTP/2 TCP Head of Line Blocking: this problem where TCP stops the party to find the lost packet before sending anything else is known as “head of line blocking.” TCP’s benevolent heroism in finding lost packets is finally interfering with its day job.

QUIC – a new protocol built on top of UDP. Since we can’t deal with TCP’s paranoia, we’ll go with the risk-taking UDP. But, the QUIC protocol makes UDP trade in its black leather jacket and motorcycle and start checking to see if things are actually sent and received. By the way, QUIC doesn’t stand for a damn thing, it’s just quick without the K. Naming problems in programming persist.

QUIC Streams – in this new protocol, using a single UDP connection, we can send parallel requests/responses! Even better, it can tell the difference between the parallel requests/responses. This means that if one request/response experiences packet loss, the others aren’t stopped. UDP has also started wearing a suit and tie now.
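The difference between TCP's head-of-line blocking and QUIC's independent streams can be sketched as a toy simulation — plain Python, not real networking. One connection-level sequence space versus per-stream sequencing, with one packet arriving late:

```python
# Toy simulation: four packets arrive on one connection, but the packet with
# connection seq 1 (stream "a") is lost and only retransmitted at the end.
# Each tuple: (conn_seq, stream, stream_seq, payload).
arrivals = [
    (0, "a", 0, "a0"),
    (2, "b", 0, "b0"),
    (3, "c", 0, "c0"),
    (1, "a", 1, "a1"),  # the retransmitted packet, arriving late
]

def deliver_tcp(arrivals):
    """TCP-style: one sequence space; a gap blocks EVERY stream behind it."""
    next_seq, buffered, delivered = 0, {}, []
    for conn_seq, stream, _, payload in arrivals:
        buffered[conn_seq] = (stream, payload)
        while next_seq in buffered:          # flush only contiguous data
            delivered.append(buffered.pop(next_seq))
            next_seq += 1
    return delivered

def deliver_quic(arrivals):
    """QUIC-style: per-stream sequencing; a gap blocks only its own stream."""
    next_seq, buffered, delivered = {}, {}, []
    for _, stream, stream_seq, payload in arrivals:
        buffered.setdefault(stream, {})[stream_seq] = payload
        while next_seq.setdefault(stream, 0) in buffered[stream]:
            delivered.append((stream, buffered[stream].pop(next_seq[stream])))
            next_seq[stream] += 1
    return delivered

print(deliver_tcp(arrivals))
# b0 and c0 are stuck waiting behind the lost packet:
# [('a', 'a0'), ('a', 'a1'), ('b', 'b0'), ('c', 'c0')]
print(deliver_quic(arrivals))
# b0 and c0 sail through; only stream "a" waits for its own retransmit:
# [('a', 'a0'), ('b', 'b0'), ('c', 'c0'), ('a', 'a1')]
```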

HTTP/3 – the HTTP protocol re-implemented on top of the QUIC protocol (which, as above, sits on top of UDP). This means parallel requests/responses without the head-of-line blocking problem, and faster requests/responses in general.
