Page Speed

The goal of all of this is to make pages/​sites fast for the user.

There are many ways to do that…

Page Speed

Goals, in the order I worry about them:

  1. Minimize the number of bytes needed.
  2. Deliver the bytes/​responses quickly.
  3. Minimize the number of HTTP requests needed.
  4. Make sure the browser draws the page quickly.

HTTP Caching

Both user agents (browsers) and proxy servers can cache content to save network traffic and time later.

That is, they can save content from URLs they fetched in memory or on disk. Loading from there later is faster than a network request.

HTTP Caching

A proxy cache is generally shared by users in a building/​campus/​ISP. Must be manually configured, so uncommon.

Caching in the browser and a proxy server

HTTP Caching

The browser's cache is more commonly used: on each request, the browser checks its cache (in memory or on disk) for the requested content.

HTTP Caching

The server can communicate lots of info about how the resource can be cached:

HTTP/1.1 200 OK
Last-modified: Wed, 1 Sep 2021 13:00:00 GMT
Expires: Wed, 1 Sep 2021 19:00:00 GMT
Etag: "53cde564015c0"
Vary: accept-encoding,accept-language
Cache-control: max-age=21600, public
Content-type: text/html; charset=utf-8

<!DOCTYPE html>
<html><head>…

HTTP Caching

The user agent can communicate what it has cached with the If-modified-since and If-none-match headers.

GET /~ggbaker/test.html HTTP/1.1
Host: cmpt470.csil.sfu.ca
If-modified-since: Wed, 1 Sep 2021 12:00:00 GMT
If-none-match: "3e3073913b100", "53cde564015c0"

HTTP Caching

Best case when “requesting” a resource: cached copy came with an Expires header that is in the future.

No request necessary: zero bytes transferred. Use cached copy. Hooray!

HTTP Caching

Next best case: a request is made with an If-modified-since or If-none-match header from the cached version. The server can respond and confirm that the cached copy is still okay.

HTTP/1.1 304 Not Modified
Last-modified: Wed, 1 Sep 2021 13:00:00 GMT
Expires: Wed, 1 Sep 2021 23:00:00 GMT
Etag: "53cde564015c0"
Vary: accept
Cache-control: max-age=21600, public

Response has no message body, just the headers: only a few hundred bytes. Cache can record new expiry time.

HTTP Caching

Worst case: nothing in cache or cached copy isn't the current version.

Response will be a 200 OK with new contents. Cache can store that.

HTTP Caching

The response header Vary can be used by a shared cache to determine if different users get the same cached content.

Vary: accept-encoding,accept-language
Cache-control: max-age=21600, public

If these headers are different in the next request, the cached copy cannot be used.
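As a rough illustration (not taken from any real cache implementation), a shared cache can be thought of as keying its entries on the URL plus the values of the headers named in Vary. The function below is a made-up sketch of that idea:

def vary_cache_key(url, vary_header, request_headers):
    # Key on the URL plus the request's value for each header named in Vary.
    varying = [h.strip().lower() for h in vary_header.split(",") if h.strip()]
    return (url,) + tuple(request_headers.get(h, "") for h in varying)

# Same URL, different Accept-language: different keys, so these two users
# do not share a cached copy.
key_en = vary_cache_key("/page", "accept-encoding,accept-language",
                        {"accept-encoding": "gzip", "accept-language": "en"})
key_fr = vary_cache_key("/page", "accept-encoding,accept-language",
                        {"accept-encoding": "gzip", "accept-language": "fr"})
assert key_en != key_fr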

HTTP Caching

For static content, things are fairly easy. You can configure the server to handle everything.

Set a reasonable expiry time. The server will handle Etags and 304 responses for you.

HTTP Caching

Dynamic content is not cached by default: there's no way for the server to guess how long it's safe to do so.

The programmer can set an Expires header where possible. Can also generate 304 responses.

Frameworks often provide support for caching. e.g. Django caching sets HTTP headers and caches server-side. Can be combined with a reverse proxy cache for huge speedup (more later).
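For example (a sketch only, with a made-up view and helper, not CourSys code), Django's decorators can set the caching headers and answer conditional requests:

import datetime
from django.http import HttpResponse
from django.views.decorators.cache import cache_control, cache_page
from django.views.decorators.http import condition

def article_updated(request, article_id):
    # Hypothetical helper: in real code, look up when this content last changed.
    return datetime.datetime(2021, 9, 1, 13, 0, tzinfo=datetime.timezone.utc)

@cache_control(max_age=21600, public=True)        # sets the Cache-control header
@condition(last_modified_func=article_updated)    # answers If-modified-since with 304
def article_view(request, article_id):
    return HttpResponse("…the full page, only generated when needed…")

# Or cache the whole rendered response server-side for 10 minutes:
# @cache_page(60 * 10)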

Static Assets

Remember that even “completely dynamic” sites will have static resources shared by many pages (or every page).

e.g. course front page is <10 kB of dynamically-generated HTML, and ≈1 MB of static assets (compressed JS, CSS, images, fonts). The static resources change very infrequently.

Static Assets

Also remember that many pages may (should) share the same static resources. As users navigate around your site, they will already have those loaded. Those should be cached with an expiry time: 0 bytes transferred on subsequent pages.

e.g. every other CourSys page has very similar static assets (usually exactly the same). Expiry one year in the future.

Static Assets

Important implication: both your CSS and JS code should be external:

<link rel="stylesheet" href="style.css" />
<script src="code.js"></script>

Those can be cached across pages. These in your template cannot:

<style>h2 { font-weight: bold; }</style>
<script>function foo() { … }</script>

Static Assets

One option: common assets may already be in the user's cache before they ever visit your site.

Probably not cached: http://mysite.example.com/jslibs/jquery-3.6.0.js

Probably cached for most people:

  • https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js
  • https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js

Static Assets

It's easy to end up with static assets many times larger than your main (HTML) content, generating dozens of HTTP requests for users.

e.g. <script> for jQuery, jQueryUI, jQuery plugin 1, 2, 3, your site behaviour code, your module behaviour. Seven separate requests.

Static Assets

Also, the code we write tends to be written verbosely for humans.

function my_function(number) {
    var good_variable_name = 7;
    /* do some arithmetic */
    return good_variable_name + number;
}

This could be minified to a lot fewer bytes, while staying equivalent as far as the compiler/interpreter is concerned:

function my_function(a){var b=7;return b+a}

Static Assets

Asset management tools can solve both problems.

The idea: we work with the n CSS and JS files we want; the tool concatenates and minifies them before delivering to the user.

Static Assets

Result: the codebase is still nice, but the user makes ≈1 HTTP request for the very-compact code.

Bonus: asset manager can create output file names unique to the content. If content changes, file name changes, so you can cache forever without worry.
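As a rough illustration of both ideas (not how any particular tool does it), this sketch concatenates some source files and names the output after a hash of its content; real asset managers would also minify, which is omitted here:

import hashlib
from pathlib import Path

def build_bundle(sources, out_dir="static/build"):
    # Concatenate the sources (a real tool would also minify here).
    combined = "\n".join(Path(p).read_text() for p in sources)
    # Name the output after its content: new content means a new name,
    # so the old name can safely be cached forever.
    digest = hashlib.sha256(combined.encode("utf-8")).hexdigest()[:12]
    out = Path(out_dir) / ("site." + digest + ".js")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(combined)
    return out   # e.g. static/build/site.3f2a9c81d0b4.js

# build_bundle(["jquery.js", "plugins.js", "site.js"])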

Images

Don't spend hours setting up an asset management toolchain to minify 50 kB of JavaScript while ignoring a 2 MB JPEG.

Images are the majority of the bytes needed to display many pages.

Images

The format and how you deliver it matters a huge amount.

  • Use SVG for simple-shape, vector-like things. Minify and compress for transport.
  • Use JPEG for photos. Choose the highest compression where you're still happy with the quality.
  • Use PNG for other bitmap images. Choose the lowest colour depth where you're happy with the quality.

Images

There are several tools that can help minimize image size.

  • OptiPNG: lossless PNG recompressor.
  • pngquant: lossy reduction in PNG palettes and recompression.
  • gzip: standard for compressing SVG: image.svg → image.svgz. Or compress with HTTP content encoding.
  • SVGO and svgcleaner: optimize SVG images.
  • jpeg2png: for rescuing images that have been abused with JPEG compression.
  • Squoosh: an online image compression tool.

Images

Make sure you scale (bitmap) images to the size you need on the page: sending an image larger than you need is a waste.

JPEG image from my phone camera: 4608×3456 and ≈6 MB.

Scaled to 1280×960 (still large for the web) and saved with quality 50 (I couldn't tell the difference): ≈180 kB.
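That kind of rescaling is a quick job with the Pillow library (a sketch; file names, dimensions, and quality are whatever suits your page):

from PIL import Image

img = Image.open("phone-photo.jpg")                    # e.g. 4608×3456, ≈6 MB
img.thumbnail((1280, 960))                             # scale down, keeping aspect ratio
img.save("photo-web.jpg", quality=50, optimize=True)   # much smaller, as above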

Images

My experience: I rarely find a page (with images) that I can't shrink without noticeable quality loss. Often by several times.

You can’t be a web performance expert without being an image expert. @tobint, according to @grigs

Page Drawing

The browser must download CSS and JS files linked from the <head> before drawing the page.

If those are slow to download, the user won't see the page for a long time.

Page Drawing

First, minimize the amount and number of CSS and JS files to download. Remove if possible. Minify. Combine.

Page Drawing

If possible, don't load JS until the end of the page: put the <script> tag at the end of the <body>.

Good: page can draw before JS downloads.

Bad: the page is first drawn without whatever updates the JS code makes (if any). Users see a flash of unmodified (or non-interactive) content.

Page Drawing

I tend to worry more about getting the content to the browser quickly. A fast server + compression + caching should get the HTML, CSS, and JS there in very little time.

Seeing an incomplete page and waiting for the JS functionality to arrive is frustrating.

Page Drawing

Slow page loading gets worse with more JS in the way.

Building a page with React (or similar) + many AJAX requests for page content is almost certainly going to be slower than just rendering HTML.

Delivering Bytes

Getting the bytes to the user quickly is a bigger question.

  • Have enough bandwidth.
  • Make your dynamic pages fast enough.
  • Have enough servers.
  • Have a server “close” to the user.

We'll talk about some of these later.

The Goal

Remember that what we're trying to do is get users the pages fast. There are many steps in the system that control (or slow) that.

The goal? Under 4 seconds? 3 seconds? 2 seconds?

In any case: you want the page delivered quickly, but you have plenty of time if you don't screw anything up.

Checking Pages

There are several tools that check many useful things about your page delivery.

While you're at it:

Deploying Servers

In order to have a web application deployed, you need a server configured for it.

One way: get a server. Install what you need. Edit some config files. Copy over your code. Go.

Deploying Servers

Two problems with that:

  • You'll forget what you actually did. Can't replicate the config, or figure out why it failed yesterday.
  • In general, you need multiple servers and need to scale up/down automatically.

Deploying Servers

Configuration management tools help with both.

The idea: you write a recipe for how your server gets deployed. That recipe lives in your codebase. You can run it as often as you need to.

Changes are versioned so you know what happened yesterday. New identical servers can be created automatically (or easily).

Deploying Servers

There are many configuration management tools: Chef, Puppet, Fabric, Ansible. All of them express steps like:

  1. Install package nginx.
  2. Put my Nginx config file (from my code repo) in /etc/nginx/sites-available/default (on the server).
  3. Run a command to tell Nginx to reload its config.
  4. Install package gunicorn and its config file.
  5. Create directory /opt/data with permissions 0755.
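For example, those steps might look something like this with Fabric (one of the tools above). This is only a sketch: the host name and repo file paths are placeholders, and a real recipe would be more careful about errors and idempotency:

from fabric import Connection

def deploy(host="app1.example.com"):
    c = Connection(host)
    # 1. Install package nginx.
    c.sudo("apt-get install -y nginx")
    # 2. Put our Nginx config (from the code repo) in place on the server.
    c.put("deploy/nginx-default", "/tmp/nginx-default")
    c.sudo("mv /tmp/nginx-default /etc/nginx/sites-available/default")
    # 3. Tell Nginx to reload its config.
    c.sudo("systemctl reload nginx")
    # 4. Install gunicorn and its config file.
    c.sudo("apt-get install -y gunicorn")
    c.put("deploy/gunicorn.conf.py", "/tmp/gunicorn.conf.py")
    c.sudo("mv /tmp/gunicorn.conf.py /etc/gunicorn.conf.py")
    # 5. Create directory /opt/data with permissions 0755.
    c.sudo("mkdir -p /opt/data")
    c.sudo("chmod 0755 /opt/data")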

Deploying Servers

Can run that recipe to do the initial deployment, or update the configuration on all servers after a change.

Recipes should be idempotent so re-running is safe.

Deploying Servers

Docker handles configuration and deployment differently, but with conceptually the same goals. Services (web server, application server, database, etc) each run in their own container. The configuration is, for each service…

  1. Start with a container image that includes (most of) the software you want (Nginx, Postgres, Python, etc).
  2. Modify it as necessary so it does what you need: set environment variables, update config files, etc.
  3. Start it and destroy/​restart if you need to change anything.
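To keep the examples in one language, here's that idea sketched with the Docker SDK for Python instead of the usual Dockerfile/compose file; the image, environment variable, and mounted config path are placeholders:

import docker

client = docker.from_env()
# 1 + 2: start from a stock image, configured with environment variables
# and a config file mounted from our repo (paths here are made up).
db = client.containers.run(
    "postgres:13",
    detach=True,
    environment={"POSTGRES_PASSWORD": "not-a-real-password"},
    volumes={"/home/me/project/postgres.conf":
             {"bind": "/etc/postgresql/postgresql.conf", "mode": "ro"}},
    name="myapp-db",
)
# 3: to change anything, destroy the container and start a fresh one.
db.stop()
db.remove()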

Deploying Servers

While developing, you (and all group members) can run/​re-run the recipe on a VM (or collection of containers) and have a consistent setup.

Could later deploy to Amazon EC2 with many (auto-scaled) servers. Could have several different recipes (for database, application, caching servers, development VM).