Solaris

HTTP/2, the first update to HTTP in 16 years, has been finalized



 

Today, the next major version of HTTP took a big step toward becoming a reality; it’s been officially finalized and now moves towards being fully standardized.

 

According to a blog post by Mark Nottingham, the chair of the IETF HTTP Working Group, the standard was completed today and is on its way to the RFC Editor to go through editorial processes before being published as a standard.

 

HTTP/2 is a huge deal; it’s the next big version of the Hypertext Transfer Protocol, marking the largest change since 1999 when HTTP 1.1 was adopted.

 

The new standard brings a number of benefits to one of the Web’s core technologies, such as faster page loads, longer-lived connections, more items arriving sooner and server push.
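
As a rough illustration of the server-push part, here's a minimal sketch in Go (not from the spec or the article; the http.Pusher interface only exists in later Go releases, and the asset path and certificate files are made up for the example):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// On an HTTP/2 connection the ResponseWriter also implements
	// http.Pusher, so the server can start sending the stylesheet
	// before the browser has parsed the HTML and asked for it.
	// Over HTTP/1.1 the type assertion fails and nothing extra happens.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/static/app.css", nil); err != nil {
			log.Println("push failed:", err)
		}
	}
	fmt.Fprintln(w, `<html><head><link rel="stylesheet" href="/static/app.css"></head><body>hello</body></html>`)
}

func main() {
	http.HandleFunc("/", handler)
	http.HandleFunc("/static/app.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		fmt.Fprintln(w, "body { font-family: sans-serif; }")
	})
	// Browsers only speak HTTP/2 over TLS; cert.pem/key.pem are assumed
	// to be a self-signed pair generated for local testing.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```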

 

HTTP/2 uses the same HTTP APIs that developers are familiar with, but offers a number of new features they can adopt.
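
For example, fetching a page looks exactly the same as before; which protocol actually gets used is decided during connection setup. A minimal Go sketch, assuming a Go release whose standard net/http client negotiates HTTP/2 over TLS on its own (the URL is just a placeholder):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The same client API is used whether the server ends up speaking
	// HTTP/1.1 or HTTP/2; the version is negotiated during the TLS
	// handshake and never shows up in this code.
	resp, err := http.Get("https://example.com/") // placeholder URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// resp.Proto reports what was actually negotiated, e.g. "HTTP/2.0".
	fmt.Println("protocol:", resp.Proto, "status:", resp.Status)
}
```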

 

One notable change is that HTTP requests will be 'cheaper' to make. The Web community has long told developers to avoid adding too many HTTP requests to their pages, which led to optimization techniques like inlining and concatenation to cut down on requests. With HTTP/2, a new multiplexing feature allows many requests to be delivered at the same time, so the page load isn't blocked.
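
As a rough sketch of what that means for clients, here are several requests fired concurrently; an HTTP/2-capable client can multiplex them as separate streams over one connection instead of queuing them. Go is only used for illustration, and the asset URLs are made up:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// Hypothetical page assets; over HTTP/2 these concurrent requests
	// can ride the same TCP connection as independent streams, so no
	// single slow asset holds up the others.
	urls := []string{
		"https://example.com/style.css",
		"https://example.com/app.js",
		"https://example.com/logo.png",
	}

	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			resp, err := http.Get(url)
			if err != nil {
				fmt.Println(url, "error:", err)
				return
			}
			defer resp.Body.Close()
			fmt.Println(url, resp.Status, resp.Proto)
		}(u)
	}
	wg.Wait()
}
```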

 

HTTP/2 also uses significantly fewer connections, hopefully resulting in lower load for servers and networks. Nottingham previously published a number of other improvements coming to the standard on his blog.

 

The new HTTP standard is based on Google's SPDY protocol, which a number of sites and services already use to improve latency and security and deliver faster page loads. Google announced just a few days ago that it plans to switch Chrome fully to HTTP/2.

 

Developers wishing to test HTTP/2 before it becomes official can already do so in Firefox and Chrome, and downloadable test servers are available for trying the improvements out first-hand. More information is available in the HTTP/2 FAQ.
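
For instance, one way to experiment locally is to stand up a small TLS server and point Firefox or Chrome at it. This is a hedged sketch, assuming a Go release that enables HTTP/2 on TLS listeners by default and a self-signed certificate pair generated for testing:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Tiny local server to poke at with a browser. Visiting
	// https://localhost:8443/ shows which protocol the browser
	// actually negotiated (browsers only do HTTP/2 over TLS).
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "you are speaking %s\n", r.Proto)
	})
	// cert.pem/key.pem: assumed self-signed certificate for local testing.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```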

 

It should be a relatively short time before the standard passes through the RFC Editor and is published for use in its final form.

 

Source




Tiddy-bits:

IPv4 is decades older, yet no love for IPv6 :(

I don't get how it relates...




Pretty sure 002 is asking why IPv6 isn't seeing this kind of welcoming adoption.  Sure, it's around, but IPv4 is still used 90% of the time.

That's an issue with ISPs and hosting providers not making the investment, rather than a technological limitation.




Pretty sure 002 is asking why IPv6 isn't seeing this kind of welcoming adoption.  Sure, it's around, but IPv4 is still used 90% of the time.

More like almost 100% of the time.

That's an issue with ISPs and hosting providers not making the investment, rather than a technological limitation.

Most people (I say most, because there might be that one guy) who have IPv6 are using dual-stack, meaning they have both an IPv6 and an IPv4 address. This is because a lot of ISPs are still IPv4-only, so being only on IPv6 would mean excluding those users. So it's both a technological limitation and a matter of ISPs and hosting providers not putting enough effort into making IPv6 a priority.
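
For example, here's a quick sketch in Go (the hostname is just a placeholder) that shows whether a name publishes both kinds of records:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// A dual-stacked host publishes both A (IPv4) and AAAA (IPv6)
	// records, so clients on either network can reach it.
	// example.com is just a placeholder name.
	ips, err := net.LookupIP("example.com")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		if ip.To4() != nil {
			fmt.Println("IPv4:", ip)
		} else {
			fmt.Println("IPv6:", ip)
		}
	}
}
```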



Most people (I say most, because there might be that one guy) who have IPv6 are using dual-stack, meaning they have both an IPv6 and an IPv4 address. This is because a lot of ISPs are still IPv4-only, so being only on IPv6 would mean excluding those users. So it's both a technological limitation and a matter of ISPs and hosting providers not putting enough effort into making IPv6 a priority.

Oh wow, I had no idea about that. I always thought there was some sort of abstraction layer that made them semi-interoperable with additional nodes. I mean, they probably exist in special applications, but I didn't know they weren't in public use.
