
Why are pages slow? - Programming Adventures


URL: http://themvcblog.typepad.com/programming/2012/12/why-are-pages-slow.html



Why are pages slow to load? Consider the following culprits:

• Number of HTTP requests

• Byte count

• Blocking rendering while waiting

• Latency

• Poor cacheability

The WebKit timeline

Aside from a good text editor, the most important tools for mobile web developers are the WebKit Developer Tools. You can get a lot of work done just using Chrome or Safari on your desktop before you get serious about testing and optimizing on an actual device. Using the device can be unwieldy, so particularly in the building stage of development, WebKit is a great tool. If you have a Mac and iOS 6, the Safari Web Inspector becomes amazingly powerful (more on this later). When diagnosing performance issues, start in the network pane. In Safari, this is hiding in the Instrument view called Network Requests.

There’s a lot of great information here, but for now we’ll focus on the Network tab, which features a beautiful waterfall graph that shows us everything we need to know about the loading of the page. If you look closely at the image below, you can see how a sample site called the Birds gets loaded.

The light color in the bars represents latency and the dark represents download. Looking at this chart you can see that none of the external resources were loaded until the browser parsed that part of the page. You can also see how it didn’t start fetching the image until after it had downloaded jQuery. It took 1.4 seconds on my fast connection at home to get the page loaded. If you look at the top bar you can see that latency (back-end performance) wasn’t bad. But 1.4 seconds is still pretty slow for such a simple page.
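
The waterfall shows you where the time went; if you also want a number you can log from the page itself, the Navigation Timing API exposes the same milestones to JavaScript. Here is a minimal sketch (Chrome supports this API; Safari support varies, and the values are only final after the load event has finished):

    window.addEventListener("load", function () {
      // loadEventEnd isn't populated until the load handler finishes,
      // so read the values on the next tick.
      setTimeout(function () {
        var t = window.performance.timing;
        var waiting = t.responseStart - t.requestStart;   // latency portion
        var total = t.loadEventEnd - t.navigationStart;   // full page load
        console.log("Waiting: " + waiting + "ms, total load: " + total + "ms");
      }, 0);
    });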

Number of HTTP requests

Every external resource on your page requires a separate HTTP request. An HTTP request isn’t as simple as just downloading the data; there’s a certain amount of overhead in every request. So if all requests were made one after another, many small files would be much, much slower than one large file.

Browsers, of course, can download multiple files in parallel. If you look back at the image, you’ll see that most of the assets were downloaded in parallel. The HTTP/1.1 spec recommends two parallel connections per hostname, and modern browsers can open many more. Safari on iOS supports up to six requests in parallel per hostname. By adding additional hostnames (perhaps by setting up aliases or subdomains), you can download even more files in parallel. Nevertheless, each request must still pay the penalty of the HTTP overhead.

It may seem odd that parallelism doesn’t make this overhead go away, but executing two downloads simultaneously isn’t twice as fast as one: creating each new request has a cost, and each in-flight download also costs CPU and memory.

For larger files, such as large images, the equation changes. Because the bulk of the request time for these files is generally the download itself, more parallelism is better. For that reason (and some others) it makes sense to serve images and other assets from a domain separate from your site. At Yahoo!, Steve Souders and the YSlow team found that creating two aliases for a domain, to allow more parallel downloads, produced a distinct performance improvement for large files.

As you can imagine, because there is still a parallel request limit, at some point the browser must wait for requests to finish before starting the next downloads. That means a site served entirely from one domain is necessarily slower at first load than one spread across domains. However, because each extra domain requires an extra DNS lookup, adding domains eventually makes things slower. Using at least two but fewer than five domains is the YSlow rule of thumb.
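
In markup, spreading assets across a couple of hostnames is just a matter of where the URLs point. A rough sketch, using made-up static1/static2 hostnames (your own aliases or CDN hostnames would go here):

    <!-- Two asset hostnames let the browser open more parallel connections;
         the hostnames below are placeholders, not a recommendation. -->
    <link rel="stylesheet" href="http://static1.example.com/css/site.css">
    <script src="http://static1.example.com/js/jquery.js"></script>
    <img src="http://static2.example.com/images/bird.jpg" alt="A bird">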

Another consideration is browser cookies. If a cookie matches the domain and path of a request, it is sent (that is, uploaded) with that request. So if you set several kilobytes of cookies on your domain with the first request, those bytes will be sent, uncompressed, in the headers of every subsequent request to that domain. The server also has to read those cookies before it can read the body of the request. Cookies can turn a tiny request into a very large one.
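
One common way to avoid this is to keep cookies scoped to the application’s hostname and serve static assets from a hostname that never receives cookies. A hypothetical example (hostnames and the cookie value are made up):

    // The cookie is scoped to www.example.com, so requests to
    // static.example.com (where images, CSS, and scripts live)
    // go out without the cookie bytes in their headers.
    document.cookie = "session=abc123; path=/; domain=www.example.com";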

SPDY and HTTP pipelining

You may have asked yourself why you have to pay the HTTP overhead penalty for each request. If all the requests are to the same domain, why not just leave the connection open and stream down more data?

If you have, you’re not the only one. Two competing solutions are emerging. One, SPDY (pronounced “speedy”), is a new protocol developed by Google that is intended to replace HTTP. The other is pipelining, which is specified in HTTP/1.1 but not yet implemented in all browsers. Both would allow the browser to use the same connection for multiple assets, overcoming the limitations of parallelism and making multiple smaller files almost certainly superior to the current best practice of fewer, larger files.

Byte count

The next thing that slows pages down is probably the most obvious: the size of the download. Pages always start small, but when you add JavaScript libraries, styles, and most of all images, pages can get orders of magnitude larger. Anything you can do to reduce the overall size of the files downloaded is time well spent.
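
Minification and compression are the cheapest wins here. For example, reference the minified build of a library rather than the development build (the sizes below are rough, circa-2012 jQuery figures, not exact numbers):

    <!-- jquery.js     : roughly 250 KB uncompressed, development build
         jquery.min.js : roughly 90 KB minified, about 33 KB once gzipped -->
    <script src="/js/jquery.min.js"></script>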

Blocking rendering while waiting

Some changes don’t actually speed things up, but they make the page seem much faster to the user. User feedback is critical to responsiveness. When the user is staring at a blank page while the browser loads, he doesn’t know what’s happening. He can’t tell if the connection has been lost or if the page is slow. If you can make sure that the user can see your page, even if all the assets aren’t loaded, the perceived performance will be much better. For example, script tags block rendering of the HTML that follows them until the script has been fetched, parsed, and executed. When you put four or five external scripts in the head, the user is forced to wait until all of those scripts are loaded before he sees anything at all.
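
One common fix is to reference scripts just before the closing body tag (or mark them with the defer attribute) so the markup above them can render first. A sketch, with illustrative file names:

    <body>
      <h1>The Birds</h1>
      <p>This content renders before the scripts below are fetched.</p>
      <!-- Scripts at the end of the body (or marked defer) no longer
           block the rendering of the content above them. -->
      <script src="/js/jquery.js"></script>
      <script src="/js/site.js"></script>
    </body>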

Latency

Network connections are measured by bandwidth (bits per second) and latency (milliseconds). Latency is the delay added to a request by the connection. A typical home network connection might have a download speed of 8 megabits per second and a latency of around 15ms. A typical 3G connection might have a 500 kbps download speed and 100ms of latency. So not only is the download speed much slower, but the latency is also much higher.

Latency is annoying for users because although once a download starts it might be reasonably fast, the wait for the download to start can be quite painful. High latency dramatically increases the problems caused by large numbers of requests by adding a lot of time to each round trip. A header redirect, for example, might not be noticeable on a good broadband connection. But on a mobile device over 3G it might add 200ms to the page load. 200ms is a very noticeable delay.
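
A rough back-of-the-envelope illustration, using the figures above and assuming (unrealistically, for simplicity) that ten requests happen one after another:

    Broadband, 15 ms latency : 10 requests x 15 ms  = 150 ms spent waiting
    3G, 100 ms latency       : 10 requests x 100 ms = 1,000 ms spent waiting

The bytes transferred are identical in both cases; the extra 850ms is pure waiting.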

Poor cacheability

We’ll talk more about optimizing caching in a future article. For now, just remember that several of the PageSpeed and YSlow rules are designed to make sure that your cache is set up right so the browser doesn’t end up re-fetching data it already has.
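
As a taste of what “set up right” means, a static asset that rarely changes might be served with response headers along these lines (the values are illustrative, not a recommendation):

    Cache-Control: public, max-age=2592000
    Expires: Thu, 31 Jan 2013 00:00:00 GMT

With headers like these, the browser can reuse its cached copy for a month without making another request at all.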

