Baking acceleration into the web itself

In my last blog post, I argued that a lack of concurrency is the root of web slowness. The good news is that the solution to low concurrency already exists. It's called server hints. Server hints are information sent along with the primary object of a web page to optimize the loading of subsequent nested objects. Exactly what we're looking for! Unfortunately, only Chrome has implemented a rudimentary version of server hints, and it's currently broken.

So I built my own version of Chromium with code changes that work around the reported bug, in order to show what server hints can do. I added the following hints to my simple.html:

[Image: the subresource hints added to simple.html]
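In Chrome's experimental implementation, these hints take the form of link elements in the head of the page. Here is a minimal sketch of what the added markup might look like (the exact attributes in my test page may differ slightly):

    <!-- Hints in the head of simple.html telling the browser it can start
         fetching the nested objects right away, instead of discovering them
         one at a time as the page loads. -->
    <head>
      <link rel="subresource" href="blue.js">
      <link rel="subresource" href="green.js">
      <link rel="subresource" href="yellow.js">
    </head>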

With those hints in place, I was able to improve both concurrency and page load time by roughly a factor of two.

[Image: waterfall chart for simple.html loaded with hints]

Note in the above waterfall chart that objects blue.js, green.js, and yellow.js are fetched at the same time and that the page load time has gone from 320 milliseconds to 165 milliseconds.

For a more dramatic example, I created this page consisting of 100 nested objects, again fetched one at a time, and compared it to the performance of this page with hints, using my modified Chromium browser. The page with hints loaded more than 10 times faster over FIOS. So that you don't have to take only my word for it, here is the output of the page for objects 95-100 with no hints:

[Image: output for objects 95-100 without hints]

And here is the output for objects 95-100 for the same page with hints:

[Image: output for objects 95-100 with hints]
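To make concrete what "nested, fetched one at a time" means here, below is a minimal sketch of how such a chain can be built; the file names and helper function are hypothetical, not the exact test page.

    <!-- Sketch of a chained test page: each object is discovered only after the
         previous one has executed, so without hints the browser fetches
         obj1.js, obj2.js, ... obj100.js strictly one at a time. -->
    <script>
      // Hypothetical helper: each objN.js ends by calling loadNext(N) to pull
      // in the next object in the chain.
      function loadNext(n) {
        if (n >= 100) return;                      // stop after the 100th object
        var s = document.createElement('script');
        s.src = 'obj' + (n + 1) + '.js';           // only now is the next object known
        document.head.appendChild(s);
      }
      loadNext(0);                                 // kick off the chain with obj1.js
    </script>

With hints, the same page would also list obj1.js through obj100.js up front, so the browser can request them all at once instead of waiting on the chain.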

Of course, this page was expressly constructed to demonstrate the benefit of improved concurrency, but it is still exciting to see that a modern browser can process so many simultaneous objects without major degradation in performance. Assuming Chrome fixes the above-mentioned bug, and that the other browsers follow suit in implementing server hint functionality, we have a big part of the solution to making the web dramatically faster in hand.

But where will our hints come from? Are we to expect web developers to manually edit their HTML files, as I did, to include the objects they expect their pages to need? That would be a full-time job for sites with frequently changing content, which includes most popular ones. And the mashup presents an even more difficult hurdle: how can a primary site provide hints about the nested objects needed to render third-party content it does not manage?

Clearly we need a real-time source of data that accounts for all of the objects needed to render the page. We really want to know exactly what happened inside the browser when the page was loaded. We want the waterfall. Consider the following:

[Diagram: the web learning and hinting feedback loop]

This diagram describes a feedback loop in which browsers upload waterfall data back to the primary web server once they have finished loading the page. Next, in what I call "web learning," the web server combines that data with waterfall data from other users' visits to generate hints. The server then injects those hints into the page so that the next user can experience a faster page load. The feedback loop allows the web server to know exactly what is happening when the page is loaded and to suggest optimizations back to the browser. It makes the web self-optimizing.
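To make the first half of the loop concrete, here is a minimal sketch of the kind of script that could capture and upload the waterfall, assuming the Resource Timing API and a hypothetical /waterfall-upload endpoint on the primary web server:

    // Minimal sketch of the browser-side half of the loop: once the page has
    // finished loading, read the waterfall via the Resource Timing API and
    // post it back to the server. The /waterfall-upload endpoint is hypothetical.
    window.addEventListener('load', function () {
      var waterfall = performance.getEntriesByType('resource').map(function (entry) {
        return {
          url: entry.name,          // the nested object that was fetched
          start: entry.startTime,   // ms after navigation when the request began
          duration: entry.duration  // how long the fetch took
        };
      });
      navigator.sendBeacon('/waterfall-upload', JSON.stringify(waterfall));
    });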

What needs to happen to realize this concept?

  • Add support for server hints to all web browsers.
  • Modify the open-source PageSpeed module, which already rewrites pages to improve web performance, to inject JavaScript into the primary page, along the lines of the snippet sketched above, that reads the waterfall data once the page has finished loading and uploads it back to the server.
  • Further modify PageSpeed to retrieve the uploaded waterfall data and push it to a web learning module (a toy sketch of one possible learning policy follows this list).
  • Lastly, modify PageSpeed to query the web learning module when a browser requests a primary page, and to inject the resulting server hints into the response.
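As a toy sketch of what the web learning module might do with the uploaded waterfalls, consider a policy as simple as "hint any object that appears in nearly every recent visit." The function below is one possible policy of my own devising, not a real PageSpeed interface:

    // Toy "web learning": given waterfalls uploaded from many visits to a page,
    // return the objects that showed up in at least `threshold` of those visits.
    // These URLs would then be injected into the primary page as server hints.
    function generateHints(waterfalls, threshold) {
      threshold = threshold || 0.9;
      var counts = {};
      waterfalls.forEach(function (waterfall) {
        var seen = {};
        waterfall.forEach(function (entry) { seen[entry.url] = true; });
        Object.keys(seen).forEach(function (url) {
          counts[url] = (counts[url] || 0) + 1;
        });
      });
      return Object.keys(counts).filter(function (url) {
        return counts[url] / waterfalls.length >= threshold;
      });
    }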

To some extent, server hints with web learning bakes the intelligence found in web acceleration proxies, which incidentally is what I've been working on for the past 15 years, into the web itself. And by making the web more self-optimizing, this approach lets web designers focus more on making great web sites and worry less about performance.

Author

Peter Lepeska

Impatient, over-caffeinated web surfer and software engineer who has spent much of the last 15 years thinking about how to make the web faster.
