A while back I was able to shave off ~33 seconds from my page load time by fixing how I load fonts.

Surprising, I know! In my previous attempts to follow cutting-edge best practices I made an honest mistake:

<link rel="preload" href="/fonts/rubik.woff2"/>
<link rel="preload" href="/fonts/rubik-bold.woff2"/>
<link rel="preload" href="/fonts/domine.woff2"/>
<link rel="stylesheet" href="/css/fonts.css" media="print" onload="this.media='all'">

Looks fast, right? I’m preloading my font files so they show up as fast as possible and then doing a little asynchronous switcheroo to prevent my @font-face rule stylesheet from blocking. Well… I fell into a trap. Preloading three fonts meant other things wouldn’t load as quickly, and this became a bottleneck that I didn’t notice due to Service Workers or my fast connection. So I burned it all down and went back to plain ol’ CSS @font-face with font-display: swap.

@font-face {
  font-family: "Rubik";
  font-display: swap;
  src: url("/fonts/rubik.woff2") format("woff2"),
       url("/fonts/rubik.woff") format("woff");
}

...

The font-display property is an amazing new tool that didn’t exist the last time I worked on my site. font-display lets me specify how and when I’d like my fonts to be applied. The swap value means my fonts will “swap” in whenever they’re ready without blocking the page load, which is what I was trying to do with that lazy CSS trick. Now that my fonts are applied lazily, I don’t even need the separate stylesheet anymore.

It’s a bit counterintuitive, but backing away from preloading tricks resulted in my site becoming much faster, with fewer requests, and easier to maintain. It’s totally possible layering in preload again would produce a tangible benefit, but I learned a lesson to take a more iterative approach with better monitoring.

This is all to say…

This story is less about webfont performance and is actually framing for another point I’m trying to make. I, Dave Rupert, a person who cares about web performance, a person who reads web performance blogs, a person who spends lots of hours trying to keep up on best practices, a person who co-hosts a weekly podcast about making websites and speaks with web performance professionals… somehow goofed and added 33 SECONDS to my page load.

I find that Web Performance isn’t particularly difficult once you understand the levers you can pull: reduce kilobytes, reduce requests, unblock rendering, defer scripts. But maintaining performance, that’s a different story entirely…
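
To make a couple of those levers concrete, here’s a rough sketch in markup; the file names are hypothetical:

<!-- “Defer scripts”: fetch in parallel, execute only after the HTML is parsed -->
<script src="/js/app.js" defer></script>

<!-- “Unblock rendering”: load a non-critical stylesheet without blocking first paint
     (the same media="print" switcheroo from the snippet above) -->
<link rel="stylesheet" href="/css/non-critical.css" media="print" onload="this.media='all'">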

Over time on any project I’ve ever worked on, uncontrollable factors start to creep in: TTFB slowdowns from your web host, Marketing goes on a tracking script spree in Google Tag Manager, those slow third-parties have slow third-parties, CPUs have major vulnerabilities, clearing technical debt gets pushed to the backlog, and browsers make minor tweaks to loading algorithms.

“Best Practices”, I tend to feel, change every 6~12 months:

  • “Use preload” obviously didn’t scale for me.
  • “Use blurry images” could probably be “Use loading="lazy" with height and width” now (see the sketch after this list), but some people feel that loading="lazy" is too eager! 🤷
  • “Do {X,Y,Z} for fonts” is a never-ending enterprise.
  • “Bundle your JavaScript” is currently morphing into “Only bundle for IE11”…
  • Oh, and you should be writing all your non-UI code in workers now…
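
As a rough sketch of that second bullet, the current advice looks something like this in markup; the image path, dimensions, and alt text are placeholders:

<!-- width and height let the browser reserve space and avoid layout shift;
     loading="lazy" defers the fetch until the image is near the viewport -->
<img src="/images/some-photo.jpg" alt="A description of the photo"
     width="800" height="600" loading="lazy">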

The list goes on. It’s also worth noting that Web Performance fixes usually fall under the umbrella of architecture. Updating a best practice tends to be a deep fix that involves updating a build process, a global template, or a low-level, highly-depended-on component. Doing and undoing those types of performance changes takes effort.

In my experience, 99% of the time Web Performance boils down to two problems:

  1. “You wrote too much JavaScript.”
  2. “You have unoptimized images.”

Which, okay. Fine. The JavaScript is pretty hard to dig out of and most companies simply won’t. That’s unfortunate for their users, and the self-justification from developers is just white noise to me at this point. The images issue, however, is a fairly ubiquitous one, and there are three ways out of that jam:

  1. Pay some service forever.
  2. Concoct some ginormous build process.
  3. Constant vigilance and manual labor to ensure best quality per kilobyte.

And doesn’t that just sum up nearly every problem in web development? Use a third-party, build some crufty thing yourself, or spend massive amounts of energy establishing a weak process-based culture. Sometimes I wonder what life would be like if I always chose Option #1.

Automating away some problems

I like doing web performance manually, so that I have leathered hands that understand the problem. But given that web performance is a shifting problem and a garden you must tend to, perhaps automating away what I can is a smart choice.

I have a short to-do list of automated processes I’d like to set up and evaluate before passing these recommendations on to a client. Perhaps I can carve some time out in these dreary times to experiment and play, but that’s asking a lot.

I’m looking at adding Lighthouse CI as a GitHub Action. This way, I could solve the “constant vigilance” problem and move the needle from manual monitoring to being proactively notified of any performance regressions. It would also be cool to use Lighthouse as a Netlify Build Plugin since I typically use Netlify as a CI, but I don’t think it collects historical data at this point.

Another route would be using something like SpeedCurve or Calibre to introduce monitoring. While these services are great, they’re priced more for larger (more competitive) businesses than my personal site. $20~$60 per month for my blog would be a 2.5x~5x increase in operation costs.

I’m also looking at using Thumbor as an Image CDN to solve the unoptimized image problem. I’ve long dreamed of being able to self-host my own media server that does all the lossless optimization and adaptive serving (like WebP when supported) for me. This adds a ~$5/month maintenance cost on DigitalOcean just to slim down images. I also just learned that Netlify Large Media has image transformation, but it requires moving over to Git LFS and doesn’t appear to do adaptive serving.
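
For context, an image CDN handles that “WebP when supported” negotiation on the server (typically via the Accept header). The markup-level equivalent, if you wanted to do it by hand, looks roughly like this; the file paths are placeholders:

<!-- The browser uses the first <source> whose type it supports and falls back to the JPEG -->
<picture>
  <source srcset="/images/some-photo.webp" type="image/webp">
  <img src="/images/some-photo.jpg" alt="A description of the photo" width="800" height="600">
</picture>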

I feel if I could get these two automations ironed out, I’d have an easier time maintaining performance at a minimal cost. If you’ve had success on this, I’d love to know more about your setup.