My first Service Worker

I’ve made no secret of the fact that I’m really excited about Service Workers. I’m not alone. At the Coldfront conference in Copenhagen, pretty much every talk mentioned Service Workers.

Obviously I’m excited about what Service Workers enable: offline caching, background processes, push notifications, and all sorts of other goodies that allow the web to compete with native. But more than that, I’m really excited about the way that the Service Worker spec has been designed. Instead of being an all-or-nothing technology that you have to bet the farm on, it has been deliberately crafted to be used as an enhancement on top of existing sites (oh, how I wish that web components would follow a similar path).

I’ve got plenty of ideas on how Service Workers could be used to enhance a community site like The Session or the kind of events sites that we produce at Clearleft, but to begin with, I figured it would make sense to use my own personal site as a playground.

To start with, I’ve already conquered the first hurdle: serving my site over HTTPS. Service Workers require a secure connection. But you can play around with running a Service Worker locally if you run a copy of your site on localhost.

That’s how I started experimenting with Service Workers: serving on localhost, and stopping and starting my local Apache server with apachectl stop and apachectl start on the command line.

That reminds me of another interesting use case for Service Workers: it’s not just about the user’s network connection failing (say, going into a train tunnel); it’s also about your web server not always being available. Both scenarios are covered equally.

I would never have even attempted to start if it weren’t for the existing examples from people who have been generous enough to share their work.

Also, I knew that Jake was coming to FF Conf so if I got stumped, I could pester him. That’s exactly what ended up happening (thanks, Jake!).

So if you decide to play around with Service Workers, please, please share your experience.

It’s entirely up to you how you use Service Workers. I figured for a personal site like this, it would be nice to:

  1. Explicitly cache resources like CSS, JavaScript, and some images.
  2. Cache the homepage so it can be displayed even when the network connection fails.
  3. For other pages, have a fallback “offline” page to display when the network connection fails.

So now I’ve got a Service Worker up and running on adactio.com. It will only work in Chrome, Android, Opera, and the forthcoming version of Firefox …and that’s just fine. It’s an enhancement. As more and more browsers start supporting it, this Service Worker will become more and more useful.

How very future friendly!

The code

If you’re interested in the nitty-gritty of what my Service Worker is doing, read on. If, on the other hand, code is not your bag, now would be a good time to bow out.

If you want to jump straight to the finished code, here’s a gist. Feel free to take it, break it, copy it, improve it, or do anything else you want with it.

To start with, let’s establish exactly what a Service Worker is. I like this definition by Matt Gaunt:

A service worker is a script that is run by your browser in the background, separate from a web page, opening the door to features which don’t need a web page or user interaction.

register

From inside my site’s global JavaScript file—or I could do this from a script element inside my pages—I’m going to do a quick bit of feature detection for Service Workers. If the browser supports it, then I’m going to register my Service Worker by pointing to another JavaScript file, which sits at the root of my site:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js', {
    scope: '/'
  });
}

The serviceworker.js file sits in the root of my site so that it can act on any requests to my domain. If I put it somewhere like /js/serviceworker.js, then it would only be able to act on requests to the /js directory.

Once that file has been loaded, the installation of the Service Worker can begin. That means the script will be installed in the user’s browser …and it will live there even after the user has left my website.

install

I’m making the installation of the Service Worker dependent on a function called updateStaticCache that will populate a cache with the files I want to store:

self.addEventListener('install', function (event) {
  event.waitUntil(updateStaticCache());
});

That updateStaticCache function will be used for storing items in a cache. I’m going to make sure that the cache has a version number in its name, exactly as described in the Guardian’s use case. That way, when I want to update the cache, I only need to update the version number.

var staticCacheName = 'static';
var version = 'v1::';

Here’s the updateStaticCache function that puts the items I want into the cache. I’m storing my JavaScript, my CSS, some images referenced in the CSS, the home page of my site, and a page for displaying when offline.

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
};

Because those items are part of the return statement for the Promise created by caches.open, the Service Worker won’t install until all of those items are in the cache. So you might want to keep them to a minimum.

You can still put other items in the cache, and not make them part of the return statement. That way, they’ll get added to the cache in their own good time, and the installation of the Service Worker won’t be delayed:

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      cache.addAll([
        '/path/to/somefile',
        '/path/to/someotherfile'
      ]);
      return cache.addAll([
        '/path/to/javascript.js',
        '/path/to/stylesheet.css',
        '/path/to/someimage.png',
        '/path/to/someotherimage.png',
        '/',
        '/offline'
      ]);
    });
}

Another option is to use completely different caches, but I’ve decided to just use one cache for now.
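
If I did want to split things up, a hypothetical install step with two caches might look something like this (this isn’t what I’m doing yet):

function updateStaticCache() {
  return Promise.all([
    caches.open(version + 'static')
      .then(function (cache) {
        return cache.addAll([
          '/path/to/javascript.js',
          '/path/to/stylesheet.css'
        ]);
      }),
    caches.open(version + 'pages')
      .then(function (cache) {
        return cache.addAll([
          '/',
          '/offline'
        ]);
      })
  ]);
}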

activate

When the activate event fires, it’s a good opportunity to clean up any caches that are out of date (by looking for anything that doesn’t match the current version number). I copied this straight from Nicolas’s code:

self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys()
      .then(function (keys) {
        return Promise.all(keys
          .filter(function (key) {
            return key.indexOf(version) !== 0;
          })
          .map(function (key) {
            return caches.delete(key);
          })
        );
      })
  );
});

fetch

The fetch event is fired every time the browser is going to request a file from my site. The magic of Service Worker is that I can intercept that request before it happens and decide what to do with it:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  ...
});

POST requests

For a start, I’m going to just back off from any requests that aren’t GET requests:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
  );
  return;
}

That’s basically just replicating what the browser would do anyway. But even here I could decide to fall back to my offline page if the request doesn’t succeed. I do that using a catch clause appended to the fetch statement:

if (request.method !== 'GET') {
  event.respondWith(
      fetch(request)
          .catch(function () {
              return caches.match('/offline');
          })
  );
  return;
}

HTML requests

I’m going to treat requests for pages differently to requests for files. If the browser is requesting a page, then here’s the order I want:

  1. Try fetching the page from the network first.
  2. If that doesn’t work, try looking for the page in the cache.
  3. If all else fails, show the offline page.

First of all, I need to test to see if the request is for an HTML document. I’m doing this by sniffing the Accept headers, which probably isn’t the safest method:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {
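
An alternative, in browsers that support it, would be to check the request’s mode instead of (or as well as) the Accept header:

if (request.mode === 'navigate' || request.headers.get('Accept').indexOf('text/html') !== -1) {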

Now I try to fetch the page from the network:

event.respondWith(
  fetch(request)
);

If the network is working fine, this will return the response from the site and I’ll pass that along.

But if that doesn’t work, I’m going to look for a match in the cache. Time for a catch clause:

.catch(function () {
  return caches.match(request);
})

So now the whole event.respondWith statement looks like this:

event.respondWith(
  fetch(request)
    .catch(function () {
      return caches.match(request)
    })
);

Finally, I need to take care of the situation when the page can’t be fetched from the network and it can’t be found in the cache.

Now, I first tried to do this by adding a catch clause to the caches.match statement, like this:

return caches.match(request)
  .catch(function () {
    return caches.match('/offline');
  })

That didn’t work and, for the life of me, I couldn’t figure out why. Then Jake set me straight. It turns out that the promise returned by caches.match always resolves …even if it resolves with undefined. So a catch clause will never be triggered. Instead I need to return the offline page if the response from the cache is falsey:

return caches.match(request)
  .then(function (response) {
    return response || caches.match('/offline');
  })

With that cleared up, my code for handling HTML requests looks like this:

event.respondWith(
  fetch(request, { credentials: 'include' })
    .catch(function () {
      return caches.match(request)
        .then(function (response) {
          return response || caches.match('/offline');
        })
    })
);

Actually, there’s one more thing I’m doing with HTML requests. If the network request succeeds, I stash the response in the cache.

Well, that’s not exactly true. I stash a copy of the response in the cache. That’s because you’re only allowed to read the body of a response once. So if I want to do anything with it, I have to clone it:

var copy = response.clone();
caches.open(version + staticCacheName)
  .then(function (cache) {
    cache.put(request, copy);
  });

I do that right before returning the actual response. Here’s how it fits together:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {
  event.respondWith(
    fetch(request, { credentials: 'include' })
      .then(function (response) {
        var copy = response.clone();
        caches.open(version + staticCacheName)
          .then(function (cache) {
            cache.put(request, copy);
          });
        return response;
      })
      .catch(function () {
        return caches.match(request)
          .then(function (response) {
            return response || caches.match('/offline');
          })
      })
  );
  return;
}

Okay. So that’s requests for pages taken care of.

File requests

I want to handle requests for files differently to requests for pages. Here’s my list of priorities:

  1. Look for the file in the cache first.
  2. If that doesn’t work, make a network request.
  3. If all else fails, and it’s a request for an image, show a placeholder.

Step one: try getting the file from the cache:

event.respondWith(
  caches.match(request)
);

Step two: if that didn’t work, go out to the network. Now remember, I can’t use a catch clause here, because caches.match will always return something: either a response or undefined. So here’s what I do:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request);
    })
);

Now that I’m back to dealing with a fetch statement, I can use a catch clause to take care of the third and final step: if the network request doesn’t succeed, check to see if the request was for an image, and if so, display a placeholder:

.catch(function () {
  if (request.headers.get('Accept').indexOf('image') !== -1) {
    return new Response('<svg>...</svg>',  { headers: { 'Content-Type': 'image/svg+xml' }});
  }
})

I could point to a placeholder image in the cache, but I’ve decided to send an SVG on the fly using a new Response object.
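
By way of illustration (this isn’t my exact markup), an on-the-fly placeholder could be as simple as:

return new Response('<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300"><rect width="400" height="300" fill="#ccc"/></svg>', { headers: { 'Content-Type': 'image/svg+xml' }});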

Here’s how the whole thing looks:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request)
        .catch(function () {
          if (request.headers.get('Accept').indexOf('image') !== -1) {
            return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
          }
        })
    })
);

The overall shape of my code to handle fetch events now looks like this:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  // Non-GET requests
  if (request.method !== 'GET') {
    event.respondWith(
      ... 
    );
    return;
  }
  // HTML requests
  if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    event.respondWith(
      ...
    );
    return;
  }
  // Non-HTML requests
  event.respondWith(
    ...
  );
});

Feel free to peruse the code.

Next steps

The code I’m running now is fine for a first stab, but there’s room for improvement.

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit just a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

If you fancy having a go at coding that up, let me know.
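
For what it’s worth, here’s a rough sketch of how that trimming might work (assuming a hypothetical separate cache just for pages, which isn’t how my current code is organised):

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
    .then(function (cache) {
      cache.keys()
        .then(function (keys) {
          if (keys.length > maxItems) {
            // delete the oldest entry, then check again
            cache.delete(keys[0])
              .then(function () {
                trimCache(cacheName, maxItems);
              });
          }
        });
    });
}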

Lessons learned

There were a few gotchas along the way. I already mentioned the fact that caches.match will always return something so you can’t use catch clauses to handle situations where a file isn’t found in the cache.

Something else worth noting is that this:

fetch(request);

…is functionally equivalent to this:

fetch(request)
  .then(function (response) {
    return response;
  });

That’s probably obvious but it took me a while to realise. Likewise:

caches.match(request);

…is the same as:

caches.match(request)
  .then(function (response) {
    return response;
  });

Here’s another thing… you’ll notice that sometimes I’ve used:

fetch(request);

…but sometimes I’ve used:

fetch(request, { credentials: 'include' } );

That’s because, by default, a fetch request doesn’t include cookies. That’s fine if the request is for a static file, but if it’s for a potentially-dynamic HTML page, you probably want to make sure that the Service Worker request is no different from a regular browser request. You can do that by passing through that second (optional) argument.

But probably the trickiest thing is getting your head around the idea of Promises. Writing JavaScript is generally a fairly procedural affair, but once you start dealing with then clauses, you have to come to grips with the fact that the contents of those clauses will return asynchronously. So statements written after the then clause will probably execute before the code inside the clause. It’s kind of hard to explain, but if you find problems with your Service Worker code, check to see if that’s the cause.
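
Here’s a trivial, made-up example of the kind of ordering that tripped me up:

caches.match(request)
  .then(function (response) {
    console.log('two: this runs later, once the cache lookup has resolved');
  });
console.log('one: this runs first');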

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.

Updates

I got some very useful feedback from Jake after I published this…

Expires headers

By default, JavaScript files on my server are cached for a month. But a Service Worker script probably shouldn’t be cached at all (or cached for a very, very short time). I’ve updated my .htaccess rules accordingly:

<FilesMatch "serviceworker.js">
  ExpiresDefault "now"
</FilesMatch>

Credentials

If a request is initiated by the browser, I don’t need to say:

fetch(request, { credentials: 'include' } );

It’s enough to just say:

fetch(request);

Scope

I set the scope parameter of my Service Worker to be “/” …but because the Service Worker is sitting in the root directory anyway, I don’t really need to do that. I could just register it with:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}

If, on the other hand, the Service Worker file were sitting in a folder, but I wanted it to act on the whole site, then I would need to specify the scope:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/path/to/serviceworker.js', {
    scope: '/'
  });
}

…and I’d also need to send a special header. So it’s probably easiest to just put Service Worker scripts in the root directory.
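
For the record, that special header is Service-Worker-Allowed. In Apache it might look something like this (assuming mod_headers is enabled):

<FilesMatch "serviceworker.js">
  Header set Service-Worker-Allowed "/"
</FilesMatch>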

Responses

Jeff Hampton

“My first Service Worker” ift.tt/1Ow9m1Y I’ve made no secret of the fact that I’m really excited about Service Workers. I’m not a…

cancelBubble

My first Service Worker - excited about Service Workers: offline caching, background processes, push notifications bit.ly/1MMcP64

brandonrozek.com

I’m excited to say that I’ve written my first service worker for brandonrozek.com. What is a service worker? A service worker provides an extra layer between the client and the server. The exciting part about this is that you can use service workers to deliver an offline experience. (Cached versions of your site, offline pages, etc.)

Service workers are currently supported in Chrome, Opera, and Firefox nightly. You don’t have to worry too much about browser support because the Service Worker spec was written in a progressively enhanced way, meaning it won’t break your existing site 🙂

Caveats

You need HTTPS to be able to use service workers on your site. This is mainly for security reasons: imagine if a third party could control all of the networking requests on your site. If you don’t want to go out and buy an SSL certificate, there are a couple of free ways to go about this: 1) Cloudflare 2) Let’s Encrypt.

Service workers are promise-heavy. Promises contain a then clause which runs code asynchronously. If you’re not accustomed to this idea, please check out this post by Nicolas Bevacqua. Now onto making the service worker! If you want to skip to the final code, scroll down to the bottom. Unless you don’t like my syntax highlighting, in which case you can check out this gist.

Register the service worker

Place service-worker.js on the root of your site. This is so the service worker can access all the files in the site. Then in your main javascript file, register the service worker.


if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js', {
    scope: '/'
  });
}

Install the service worker

The first time the service worker runs, it emits the install event. At this time, we can load the visitor’s cache with some resources for when they’re offline. Every so often, I like to change up the theme of the site. So I have version numbers attached to my files. I would also like to invalidate my cache with this version number. So at the top of the file I added


var version = 'v2.0.24:';

Now, to specify which files I want the service worker to cache for offline use. I thought my home page and my offline page would be good enough.


var offlineFundamentals = [
  '/',
  '/offline/'
];

Since cache.addAll() hasn’t been implemented yet in any of the browsers, and the polyfill implementation didn’t work for my needs, I pieced together my own.


var updateStaticCache = function() {
  return caches.open(version + 'fundamentals').then(function(cache) {
    return Promise.all(offlineFundamentals.map(function(value) {
      var request = new Request(value);
      var url = new URL(request.url);
      if (url.origin != location.origin) {
        request = new Request(value, {mode: 'no-cors'});
      }
      return fetch(request).then(function(response) {
        var cachedCopy = response.clone();
        return cache.put(request, cachedCopy);
      });
    }))
  })
};

Let’s go through this chunk of code.

  1. Open the cache called 'v2.0.24:fundamentals'
  2. Go through each of the offlineFundamentals URLs, fetch it (using no-cors mode if it comes from another origin), and put the response into the cache

Now we call it when the install event is fired.


self.addEventListener("install", function(event) { event.waitUntil(updateStaticCache())
})

With this we now cached all the files in the offlineFundamentals array during the install step.

Clear out the old cache

Since we’re caching everything, if you change one of the files, your visitor wouldn’t get the changed file. Wouldn’t it be nice to remove old files from the visitor’s cache? Every time the service worker finishes the install step, it releases an activate event. We can use this to look and see if there are any old cache containers on the visitor’s computer. From Nicolas’ code. Thanks for sharing 🙂


var clearOldCaches = function() {
  return caches.keys().then(function(keys) {
    return Promise.all(
      keys
        .filter(function (key) {
          return key.indexOf(version) != 0;
        })
        .map(function (key) {
          return caches.delete(key);
        })
    );
  })
}

  1. Check the names of each of the cache containers
  2. If they don’t start with the correct version number, delete them

Call the function when the activate event fires.


self.addEventListener("activate", function(event) { event.waitUntil(clearOldCaches())
});
Intercepting fetch requests

The cool thing about service workers is that they can handle file requests. We could cache all files requested for offline use, and if a fetch for a resource failed, then the service worker can look for it in the cache or provide an alternative. This is a large section, so I’m going to attempt to break it down as much as I can.

Limit the cache

If the visitor started browsing all of the pages on my site, his or her cache would start to get bloated with files. To not burden my visitors, I decided to only keep the latest 25 pages and latest 10 images in the cache.


var limitCache = function(cache, maxItems) {
  cache.keys().then(function(items) {
    if (items.length > maxItems) {
      cache.delete(items[0]);
    }
  })
}

We’ll call it later in the code.

Fetch from network and cache

Every time I fetch a file from the network I throw it into a specific cache container: 'pages' for HTML files, 'images' for images, and 'assets' for any other file. This is so I can handle the cache limiting above more easily. Defined within the fetch event.


var fetchFromNetwork = function(response) {
  var cacheCopy = response.clone();
  if (event.request.headers.get('Accept').indexOf('text/html') != -1) {
    caches.open(version + 'pages').then(function(cache) {
      cache.put(event.request, cacheCopy).then(function() {
        limitCache(cache, 25);
      })
    });
  } else if (event.request.headers.get('Accept').indexOf('image') != -1) {
    caches.open(version + 'images').then(function(cache) {
      cache.put(event.request, cacheCopy).then(function() {
        limitCache(cache, 10);
      });
    });
  } else {
    caches.open(version + 'assets').then(function add(cache) {
      cache.put(event.request, cacheCopy);
    });
  }
  return response;
}

When the network fails

There are going to be times when the visitor cannot access the website. Maybe they went into a tunnel while they were riding a train? Or maybe your site went down. I thought it would be nice for my readers to be able to look over my blog posts again regardless of an internet connection. So I provide a fallback. Defined within the fetch event.


var fallback = function() {
  if (event.request.headers.get('Accept').indexOf('text/html') != -1) {
    return caches.match(event.request).then(function (response) {
      return response || caches.match('/offline/');
    })
  } else if (event.request.headers.get('Accept').indexOf('image') != -1) {
    return new Response('<svg width="400" height="300" role="img" aria-labelledby="offline-title" viewBox="0 0 400 300" xmlns="http://www.w3.org/2000/svg"><title id="offline-title">Offline</title><g fill="none" fill-rule="evenodd"><path fill="#D8D8D8" d="M0 0h400v300H0z"/><text fill="#9B9B9B" font-family="Helvetica Neue,Arial,Helvetica,sans-serif" font-size="72" font-weight="bold"><tspan x="93" y="172">offline</tspan></text></g></svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
  }
}

  1. Is the request for an HTML file? Then look for it in the cache; if it isn’t there, respond with the offline page.
  2. Is the request for an image? Then respond with a placeholder SVG.

Handle the request

First off, I’m only handling GET requests.


if (event.request.method != 'GET') {
  return;
}

For HTML files, grab the file from the network. If that fails, then look for it in the cache. Network then cache strategy


if (event.request.headers.get('Accept').indexOf('text/html') != -1) {
  event.respondWith(fetch(event.request).then(fetchFromNetwork, fallback));
  return;
}

For non-HTML files, follow this series of steps

  1. Check the cache
  2. Does a cache exist for this file?

    • Yes. Then show it
    • No. Then grab it from the network and cache it.

Cache then network strategy


event.respondWith(
  caches.match(event.request).then(function(cached) {
    return cached || fetch(event.request).then(fetchFromNetwork, fallback);
  })
)

For different strategies, take a look at Jake Archibald’s offline cookbook.

Conclusion

With all of that, we now have a fully functioning offline-capable website! I wouldn’t have been able to implement this myself if it wasn’t for some of the awesome people I mentioned earlier sharing their experience. So share, share, share! With that sentiment, I’ll now share the full code for service-worker.js. Update: There is a new version of this code over at this blog post.


var version = 'v2.0.24:';
var offlineFundamentals = [
  '/',
  '/offline/'
];

//Add core website files to cache during serviceworker installation
var updateStaticCache = function() {
  return caches.open(version + 'fundamentals').then(function(cache) {
    return Promise.all(offlineFundamentals.map(function(value) {
      var request = new Request(value);
      var url = new URL(request.url);
      if (url.origin != location.origin) {
        request = new Request(value, {mode: 'no-cors'});
      }
      return fetch(request).then(function(response) {
        var cachedCopy = response.clone();
        return cache.put(request, cachedCopy);
      });
    }))
  })
};

//Clear caches with a different version number
var clearOldCaches = function() {
  return caches.keys().then(function(keys) {
    return Promise.all(
      keys
        .filter(function (key) {
          return key.indexOf(version) != 0;
        })
        .map(function (key) {
          return caches.delete(key);
        })
    );
  })
}

/* limits the cache
   If cache has more than maxItems then it removes the first item in the cache */
var limitCache = function(cache, maxItems) {
  cache.keys().then(function(items) {
    if (items.length > maxItems) {
      cache.delete(items[0]);
    }
  })
}

//When the service worker is first added to a computer
self.addEventListener("install", function(event) {
  event.waitUntil(updateStaticCache())
})

//Service worker handles networking
self.addEventListener("fetch", function(event) {

  //Fetch from network and cache
  var fetchFromNetwork = function(response) {
    var cacheCopy = response.clone();
    if (event.request.headers.get('Accept').indexOf('text/html') != -1) {
      caches.open(version + 'pages').then(function(cache) {
        cache.put(event.request, cacheCopy).then(function() {
          limitCache(cache, 25);
        })
      });
    } else if (event.request.headers.get('Accept').indexOf('image') != -1) {
      caches.open(version + 'images').then(function(cache) {
        cache.put(event.request, cacheCopy).then(function() {
          limitCache(cache, 10);
        });
      });
    } else {
      caches.open(version + 'assets').then(function add(cache) {
        cache.put(event.request, cacheCopy);
      });
    }
    return response;
  }

  //Fetch from network failed
  var fallback = function() {
    if (event.request.headers.get('Accept').indexOf('text/html') != -1) {
      return caches.match(event.request).then(function (response) {
        return response || caches.match('/offline/');
      })
    } else if (event.request.headers.get('Accept').indexOf('image') != -1) {
      return new Response('<svg width="400" height="300" role="img" aria-labelledby="offline-title" viewBox="0 0 400 300" xmlns="http://www.w3.org/2000/svg"><title id="offline-title">Offline</title><g fill="none" fill-rule="evenodd"><path fill="#D8D8D8" d="M0 0h400v300H0z"/><text fill="#9B9B9B" font-family="Helvetica Neue,Arial,Helvetica,sans-serif" font-size="72" font-weight="bold"><tspan x="93" y="172">offline</tspan></text></g></svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
    }
  }

  //This service worker won't touch non-get requests
  if (event.request.method != 'GET') {
    return;
  }

  //For HTML requests, look for file in network, then cache if network fails.
  if (event.request.headers.get('Accept').indexOf('text/html') != -1) {
    event.respondWith(fetch(event.request).then(fetchFromNetwork, fallback));
    return;
  }

  //For non-HTML requests, look for file in cache, then network if no cache exists.
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      return cached || fetch(event.request).then(fetchFromNetwork, fallback);
    })
  )
});

//After the install event
self.addEventListener("activate", function(event) {
  event.waitUntil(clearOldCaches())
});

# Saturday, November 14th, 2015 at 12:00am


adactio.com

Remy posted a screenshot to Twitter last week.

That “Add To Home Screen” dialogue is not something that Remy explicitly requested (though, of course, you can—and should—choose to add adactio.com to your home screen). That prompt appears in Chrome on Android as the result of a fairly simple algorithm based on a few factors:

  1. The website is served over HTTPS. My site is.
  2. The website has a manifest file. Here’s my JSON manifest file.
  3. The website has a Service Worker. Here’s my site’s Service Worker script (although a little birdie told me that the Service Worker script can be as basic as a blank file).
  4. The user visits the website a few times over the course of a few days.
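
For anyone who hasn’t seen one, the manifest file mentioned above is just a small JSON file; a minimal hypothetical example looks something like this:

{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "browser",
  "icons": [
    {
      "src": "/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}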

I think that’s a reasonable set of circumstances. I particularly like that there is no way of forcing the prompt to appear.

There are some carrots in there: Want to have the user prompted to add your site to their home screen? Well, then you need to be serving on a secure connection, and you’d better get on board that Service Worker train.

Speaking of which, after I published a walkthrough of my first Service Worker, I got an email bemoaning the lack of browser support:

I was very much interested myself in this topic, until I checked on the “Can I use…” site the availability of this technology. In one word “limited”. Neither Safari nor IOS Safari support it, at least now, so I cannot use it for implementing mobile applications.

I don’t think this is the right way to think about Service Workers. You don’t build your site on top of a Service Worker—you add a Service Worker on top of your existing site. It has been explicitly designed that way: you can’t make it the bedrock of your site’s functionality; you can only add it as an enhancement.

I think that’s really, really smart. It means that you can start implementing Service Workers today and as more and more browsers add support, your site will appear to get better and better. My site worked fine for fifteen years before I added a Service Worker, and on the day I added that Service Worker, it had no ill effect on non-supporting browsers.

Oh, and according to the Webkit five year plan, Service Worker support is on its way. This doesn’t surprise me. I can’t imagine that Apple would let Google upstage them for too long with that nice “add to home screen” flow.

Alas, Mobile Safari’s glacial update cycle means that the earliest we’ll see improvements like Service Workers will probably be September or October of next year. In the age of evergreen browsers, Apple’s feast-or-famine approach to releasing updates is practically indistinguishable from stagnation.

Still, slowly but surely, game-changing technologies are landing in browsers. At the same time, the long-term problems with betting on native apps are starting to become clearer. Native apps are still ahead of what can be accomplished on the web, but it was ever thus:

The web will always be lagging behind some other technology. I’m okay with that. If anything, I see these other technologies as the research and development arm of the web. CD-ROMs, Flash, and now native apps show us what authors want to be able to do on the web. Slowly but surely, those abilities start becoming available in web browsers.

The pace of this standardisation can seem infuriatingly slow. Sometimes it is too slow. But it’s important that we get it right—the web should hold itself to a higher standard. And so the web plays the tortoise while other technologies race ahead as the hare.

It’s interesting to see how the web could take the desirable features of native—offline support, smooth animations, an icon on the home screen—without sacrificing the strengths of the web—linking, responsiveness, the lack of App Store gatekeepers. That kind of future is what Alex is calling progressive apps:

Critically, these apps can deliver an even better user experience than traditional web apps. Because it’s also possible to build this performance in as progressive enhancement, the tangible improvements make it worth building this way regardless of “appy” intent.

Flipkart recently launched something along those lines, although it’s somewhat lacking in the “enhancement” department; the core content is delivered via JavaScript—a fragile approach.

What excites me is the prospect of building services that work just fine on low-powered devices with basic browsers, but that also take advantage of all the great possibilities offered by the latest browsers running on the newest devices. Backwards compatible and future friendly.

And if that sounds like a naïve hope, then I humbly suggest that Service Workers are a textbook example of exactly that approach.

# Sunday, November 15th, 2015 at 10:56pm

Aaron Gustafson

User experience encompasses more than just the interface. Download speed, render performance, and the cost of accessing a site are often overlooked areas when it comes to the practice of UX, but they all affect how users experience what we build on the Web.

I’m always looking for ways to improve these aspects of my own site. And, since it’s my own personal playground, I often use it as a test-bed for new technologies, ideas, and techniques. My latest adventure was inspired by a bunch of articles and posts I’ve linked to recently.

After reading these pieces, I decided to see how much I could do to improve the performance of this site, especially on posts with a lot of images and embedded code samples, like my recent post on form labels.

Using Resource Hints

To kick things off, I followed Malte’s advice and used Resource Hints to prime the pump for any third-party servers hosting assets I use frequently (e.g. Disqus, Twitter, etc.). I used the code Malte references in the AMP Project as my starting point and added two new methods (preconnect() and prefetch()) to my global AG object. With that library code in place, I can call those methods as necessary from within my other JavaScript files. Here’s a simplified extract from my Disqus integration script:

if ( 'AG' in window && 'preconnect' in window.AG ) {
  window.AG.preconnect( '//disqus.com/' );
  window.AG.prefetch( '//' + disqus_shortname + '.disqus.com/count.js' );
}

While a minor addition, the speed improvement in supporting browsers was noticeable.1

Integrating Service Worker

With that in the bag, I set about making my first Service Worker. I started off gently, using Dean’s piece as a guide. I added a WebP conversion bit to my image processing Gulp task to get the files in place and then I created the Service Worker. By default, Dean’s code converts all JPG and PNG requests to WebP responses, so I set it up to limit the requests to only those files being requested directly from my server. I have no way of knowing if WebP equivalents of every JPG and PNG exist on the open web (probably not), but I know they exist on my server. Here’s the updated code:

"use strict";

self.addEventListener('fetch', function(event) {

  var request = event.request,
      url = request.url,
      url_object = new URL( url ),
      re_jpg_or_png = /\.(?:jpg|png)$/,
      supports_webp = false, // pessimism
      webp_url;

  // Check if the image is a local jpg or png
  if ( re_jpg_or_png.test( request.url ) &&
       url_object.origin == location.origin ) {

    // console.log('WORKER: caught a request for a local PNG or JPG');

    // Inspect the accept header for WebP support
    if ( request.headers.has('accept') ) {
      supports_webp = request.headers.get('accept').includes('webp');
    }

    // Browser supports WebP
    if ( supports_webp ) {
      // Make the new URL
      webp_url = url.substr(0, url.lastIndexOf('.')) + '.webp';

      event.respondWith(
        fetch( webp_url, { mode: 'no-cors' } )
      );
    }
  }
});

When I began tucking into the caching possibilities of Service Workers, following Nicolas’ and Jeremy’s posts, I opted to tweak Nicolas’ caching setup a bit. I’m still not completely thrilled with it, but it’s a work in progress. I’m sure I will tweak it as I get more familiar with the technology.

To keep my Service Worker code modularized (like my other JavaScript code), I opted to break it up into separate files and am using Gulp to merge them all together and move the combined file into the root of the site. If you’d like to follow a similar path, feel free to adapt this Gulp task (which builds all of my JavaScript):

var gulp = require('gulp'),
    path = require('path'),
    folder = require('gulp-folders'),
    gulpIf = require('gulp-if'),
    insert = require('gulp-insert'),
    concat = require('gulp-concat'),
    uglify = require('gulp-uglify'),
    notify = require('gulp-notify'),
    rename = require('gulp-rename'),
    //handleErrors = require('handleErrors'),
    source_folder = 'source/_javascript',
    destination_root = 'source',
    destination_folder = destination_root + '/j',
    public_root = 'public',
    public_folder = public_root + '/j',
    rename_serviceworker = rename({ dirname: "../" });

gulp.task('scripts', folder(source_folder, function(folder){
  return gulp.src(path.join(source_folder, folder, '*.js'))
    .pipe(concat(folder + '.js'))
    .pipe(insert.transform(function(contents, file){
      // insert a build time variable
      var build_time = (new Date()).getTime() + '';
      return contents.replace( '{{BUILD_TIME}}', build_time );
    }))
    .pipe(gulp.dest(destination_folder))
    .pipe(gulp.dest(public_folder))
    .pipe(rename({suffix: '.min'}))
    .pipe(uglify())
    .pipe(gulpIf(folder=='serviceworker', rename_serviceworker))
    .pipe(gulp.dest(destination_folder))
    .pipe(gulp.dest(public_folder))
    .pipe(notify({ message: 'Scripts task complete' }));
    //.on('error', handleErrors);
}));

As most of the walkthroughs recommended that you version your Service Worker if you’re doing any caching, I set mine up to be auto-versioned by inserting a timestamp (lines 23-27, above) into my Service Worker header file (line 3, below):

var gulp = require('gulp'),
    path = require('path'),
    folder = require('gulp-folders'),
    gulpIf = require('gulp-if'),
    insert = require('gulp-insert'),
    concat = require('gulp-concat'),
    uglify = require('gulp-uglify'),
    notify = require('gulp-notify'),
    rename = require('gulp-rename'),
    //handleErrors = require('handleErrors'),
    source_folder = 'source/_javascript',
    destination_root = 'source',
    destination_folder = destination_root + '/j',
    public_root = 'public',
    public_folder = public_root + '/j',
    rename_serviceworker = rename({ dirname: "../" });

gulp.task('scripts', folder(source_folder, function(folder){
  return gulp.src(path.join(source_folder, folder, '*.js'))
    .pipe(concat(folder + '.js'))
    .pipe(insert.transform(function(contents, file){
      // insert a build time variable
      var build_time = (new Date()).getTime() + '';
      return contents.replace( '{{BUILD_TIME}}', build_time );
    }))
    .pipe(gulp.dest(destination_folder))
    .pipe(gulp.dest(public_folder))
    .pipe(rename({suffix: '.min'}))
    .pipe(uglify())
    .pipe(gulpIf(folder=='serviceworker', rename_serviceworker))
    .pipe(gulp.dest(destination_folder))
    .pipe(gulp.dest(public_folder))
    .pipe(notify({ message: 'Scripts task complete' }));
    //.on('error', handleErrors);
}));

Service Workers are still pretty new (and modestly supported), but it’s definitely interesting to see what’s possible using them. Like Jeremy, I want to do a bit more exploration into caching and how it may actually increase the monetary cost of accessing a website if not used properly. Like any powerful tool, we need to wield it wisely.

Making Gists Static

On particularly code-heavy posts (yes, like this one), I make liberal use of Gists. They’re quite useful, but the Gist plugin for Jekyll, while good, still requests a script from Github in order to load the pretty printed version of the Gist. On some posts, that can mean 5 or more additional network requests, not to mention execution time for the JavaScript. It’s yet another dependency that could prohibit you from quickly getting to the content you’re looking for. Additionally, if JavaScript should be available, but isn’t, you get nothing (since the noscript content is only evaluated if JavaScript support isn’t available or if a user turns it off).

With all of this in mind, I decided to revise the plugin and make it capable of downloading the JavaScript code directly. It then extracts the HTML markup that the JavaScript would be writing into the page and just embeds it directly. It also caches the result, which is handy for speeding up the build process.

You can grab my fork of the Gist Jekyll Plugin as, well, a Gist. It’s also in the source of this site on Github.

(Hopefully) A Little Faster

All told, these changes have gotten the render time of this site down significantly across the board.2 Even more so on browsers that support Service Workers and Resource Hints. I’ll likely continue tweaking as I go, but I wanted to share my process, code, and thoughts in case any of it might be useful to you in your own work. In the end, it’s all about creating better experiences for our users. How our sites perform is a big part of that.

  1. Sadly I forgot to run some speed tests prior to rolling out this change and I didn’t feel like rolling back the site, so I don’t have solid numbers for you. That said, it seemed to shave nearly 2 seconds off of the load time on heavy pages like the post I mentioned.

  2. Again, I don’t have the numbers, but I am routinely seeing DOMContentLoaded reached between 400-600ms with Service Worker caching in play.

shanehudson.net

Github | Live Demo.

The Challenge

The rules for 10K Apart are (taken from their site):

  • Size — Your total initial download can’t be over 10kB. You can lazy-load additional resources, but your project must be usable in 10kB or less. Scrutinize your project’s performance.

  • Interoperability — Your project must work equally well in all modern browsers. We may look at it in Lynx too. Or Opera Mini. Your code should have standards.

  • Accessibility — Everybody should be able to use your awesome creation. Interaction methods, screen sizes, contrast, assistive technologies… it’s all about creating opportunity. Embrace inclusive design.

  • Progressive Enhancement - The Web is a messy place and you never know what technologies will be available in your user’s devices and browsers. Build in layers.

  • Libraries — This time around, we want you to account for every bit of code you use, so you can use a library or parts of one, but it counts against your 10k if you load it by default. Use only what you need.

The Concept

Call me crazy, but for this progressive enhancement focussed competition I decided to create a canvas web app called Albus. Obviously, it doesn’t make much sense to create a drawing app for something that requires an experience that works without JavaScript. However, progressive enhancement to me means you build upon the minimum viable product—the baseline—to enhance with features for the browsers that support them.

So while on most modern browsers you will be able to draw on the canvas, the actual concept is a website that generates line art from a photo.

The baseline requirement is that you must be able to put in a photo and it will return line art. The browsers that don’t support canvas, or where JavaScript is unavailable, will use server-side generation while other browsers will be able to use service workers and generate it locally while offline. That, in my humble opinion, is how progressive enhancement should work.

I previously made a very basic version of this as an example for my book JavaScript Creativity, I had wanted to make it for a long time. This competition was the excuse I’ve been looking for to carry on and do something quite interesting with the concept. In some respects the limitations of the competition are quite freeing, because it means I can’t get too carried away and there is a deadline!

The Name

Albus.

Despite the obvious Wizarding World connotations (most of my side projects have codenames inspired by JK Rowling), Albus is one of the Latin words for white. As white light is the sum of all wavelengths of visible light, I thought it would work well for a colouring app.

Also, it is short, so fewer bytes!

Planning

I started with pen and paper for this one, partially because I wanted to write my ideas out just to figure out whether it was even possible to make a progressively enhanced colouring app. Turns out, it is. Even on the oldest of web browsers, you can use the server to generate the line art and print out the image to colour in with pencils.

As you can see from the photo, I wrote a wish list of things such as drag and drop, a gallery, and a range of brushes. Below that I wrote a list of must-haves:

  • Be usable offline
  • Be usable without JS
  • Have a default that can be printed

You will note that I wrote “be usable”; not for one minute did I think that every device and every browser would have the full experience. But I knew that in order for it to work well, and to fit the rules of the competition, it must be usable in all situations: without JS, without CSS, and without a connection.

I also wrote “range of brush types if possible”; I knew that if I had loads of different ideas for brushes then it would not get finished and it would be far too large. Since I had this in mind from the beginning, I was able to write the code in such a way that new brushes can be added over time and lazy loaded, as they don’t need to be loaded before use.

Client Side

I had the basics of the edge detection working with drag and drop from the example I had written for JavaScript Creativity, so I used that as the basis and refactored it so that the code was small but still readable. Although the submitted version only has one brush, I knew I wanted multiple brushes to be possible. To do that I made an interface for brushes, so that any brush would have startPath and mouseMove methods. With those, brushes can easily define their own unique styles: lots of brushes would use a rotating image, but others could be algorithmic or just totally random.
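
Here’s a minimal sketch of what a brush conforming to that interface might look like. Only the startPath and mouseMove method names come from the description above; the drawing logic and the way the context and coordinates are passed in are assumptions for illustration.

<code>
// A hypothetical brush implementing the interface described above.
// Assumes the app passes in a 2D canvas context and canvas coordinates.
var simpleBrush = {
  startPath: function (ctx, x, y) {
    ctx.beginPath();
    ctx.moveTo(x, y);
  },
  mouseMove: function (ctx, x, y) {
    ctx.lineTo(x, y);
    ctx.stroke();
  }
};
</code>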

I also needed to build the interface. I decided to use radio buttons because it meant I could use as little code as possible. To change the colour I simply used an input with type=color, so browsers that support it show a colour picker and others show a text box where you can enter the colour manually. The alternative (and original idea) was to create a custom colour picker; while that is easy enough to do, it seemed like a waste of valuable file size and time. The solution is ideal for creating a progressive experience in under 10KB, just by using modern features and keeping it simple.
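
As a rough illustration of why this keeps the code small, reading the colour is the same one-liner whether the browser renders a picker or a plain text box. The selector and fallback colour here are assumptions, not the actual Albus markup.

<code>
// Works the same whether the browser shows a colour picker or a text box.
var colourInput = document.querySelector('input[type="color"]'); // assumed markup
var brushColour = colourInput ? colourInput.value : '#000000';
</code>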

Server

Right, so I have the client-side edge detection working nicely with drag and drop etc. So what do I do now? Ah… yes, we need server-side generation for when JS isn’t available.

How?

Well, after much research, I realised it is quite awkward trying to do image processing with Node.js. Interestingly, while writing this I have just noticed that not for a second did I consider using a different server-side language. At the same time, I already had a working implementation client-side, so it made sense to use that. To do so, I used PhantomJS (thanks to Aaron Gustafson for making me realise Node-Phantom automatically installs PhantomJS). This meant that I was able to create a page purely for edge detection at /edgedetect/{{filename}} that I could render in Phantom and return to the client. I originally just copied and pasted the code, but have since refactored it so that both HTML files use the same edge detection script. An advantage of this, other than being the ‘right’ way, is that in the future different edge detection algorithms can easily be used.
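
To give a flavour of the approach, here’s a rough sketch of rendering that edge detection page in Phantom from Node.js. It assumes the promise-based phantom npm package, a made-up port and filename; it isn’t the project’s actual code.

<code>
// Sketch only: render the /edgedetect/{{filename}} page in PhantomJS
// and grab the result as a base64 PNG to send back to the client.
var phantom = require('phantom');

function generateLineArt(filename) {
  var instance;
  return phantom.create()
    .then(function (ph) {
      instance = ph;
      return ph.createPage();
    })
    .then(function (page) {
      return page.open('http://localhost:3000/edgedetect/' + filename)
        .then(function () {
          return page.renderBase64('PNG');
        });
    })
    .then(function (png) {
      instance.exit();
      return png;
    });
}
</code>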

The competition’s limitations don’t apply to server-side code, but I tried to ensure everything was small anyway. My dependencies (which may change) are listed below, with a rough server sketch after the list:

  • Handlebars - For templating, mostly just to switch the image to the generated one or an image from the gallery.
  • Hapi - I usually use Express but often have to add body-parser and other small things that Hapi does automatically, so I decided to use Hapi.
  • Inert - One thing Hapi no longer has is the ability to route to static files, so I used Inert for the JS and CSS files.
  • Phantom - I have used Phantom to run the edge detection and render it server-side.
  • Vision - This is used for the Handlebars views.
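
For context, here’s roughly how those pieces fit together. This is a hedged sketch assuming the pre-v17 Hapi API that was current at the time, not the actual Albus server code; the port, paths, and directory names are made up.

<code>
var Hapi = require('hapi');
var server = new Hapi.Server();
server.connection({ port: 3000 });

// Inert serves the static JS and CSS; Vision wires up Handlebars templating.
server.register([require('inert'), require('vision')], function (err) {
  if (err) { throw err; }

  server.views({
    engines: { html: require('handlebars') },
    path: 'views' // assumed directory
  });

  server.route({
    method: 'GET',
    path: '/{file*}',
    handler: { directory: { path: 'public' } } // assumed static directory
  });

  server.start(function () {
    console.log('Server running at', server.info.uri);
  });
});
</code>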

Service Worker

On my website I am using a service worker that I “stole” from Jeremy Keith. Since I am not too knowledgeable about service workers, I used the same one for Albus. I then modified it and cut out a lot of the code, as Albus is generally not going to change content.

After the first load, browsers with service workers will not need to download anything to use Albus. It will also work offline without any issue, because all browsers that support service workers can do the client-side edge detection.
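
A stripped-down, cache-first worker along those lines might look something like this. The cache name and file list are placeholders, not the ones Albus actually uses.

<code>
var cacheName = 'albus-v1';                              // placeholder name
var filesToCache = ['/', '/css/app.css', '/js/app.js'];  // placeholder paths

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(cacheName).then(function (cache) {
      return cache.addAll(filesToCache);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      // Serve from the cache first; fall back to the network.
      return cached || fetch(event.request);
    })
  );
});
</code>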

Design

Everything I have written about so far is to do with design: how it will work across browsers and the structure of the code. Most people, however, will think of the visuals. For this project, I worked backwards. Before even thinking about how it was going to look, I needed to prove that it was even possible to make a colouring app in less than 10KB that could do the baseline of edge detection even without JS.

Turns out it is possible, so I started thinking about how it should look and feel. Albus needed to work well on all browsers, but my prototype treated small screens as second-rate, so I needed to make sure it works really well on mobile and that the UI is mobile-first. When I started looking at how it should look, with all of the must-haves working, the first load was 5.2KB. So I had a bit of room for CSS, and could possibly lazy load icons or something like that.

The most important part of the design is to work well on mobile, so the toolbox is crucial. Instead of Photoshop-style icons down the side of the screen, I decided to use a modal/dialog box (using the same styling as the splash screen). This means I didn’t need any icons, so better performance and accessibility. On mobile it now works really well. I think it works nicely on desktop too, but some people may prefer a floating toolbox… that can easily be changed in the future.

I made the decision to use a text-only logo instead of a nicely designed one. Partially this is because it is quicker, but that’s just an excuse, as I could easily lazy load an image or SVG. In truth, I am useless at designing logos, so I went without!

Paper and Crayons

Some browsers don’t support canvas, or even JS and CSS. So, as long as the main edge detection is working, I decided that it makes sense to provide the processed image for download. That way, even in the oldest of browsers, it can be opened with other software or printed.

I strongly believe that progressive enhancement benefits everyone, so instead of only showing the download for older browsers I decided to show it for everyone. Most people will use the basic colouring tools, but others now have the ability to easily create line art from a photo then open it in Photoshop if they wish.

Does it fit the rules?

  • Size - Yes. The main required files came to just over 10KB without compression. With compression, everything came to just under 7KB.
  • Interoperability - Yes. Edge detection, the baseline requirement, works in Lynx. Modern browsers are able to enhance it with modern features.
  • Accessibility - Mostly. There can always be improvements with accessibility, and I did run out of time trying to make sure it worked really well. But in general, it is quite accessible.
  • Progressive Enhancement - Yes. Progressive enhancement was one of my main aims, to prove it could be done despite the nature of the site.
  • Libraries - I have used as few libraries as possible.

Problems and Solutions

Dotted Edge Detection

I found that on a lot of the images I tried, edge detection didn’t work very well. To improve the quality I changed the threshold and added a pre-processing blur (thanks to Chris Heilmann for showing me canvas’s native blur). I lazy load Fabien Loison’s StackBlur for browsers that don’t support the native blur. I thought I could use a CSS filter, but it turns out that doesn’t show up when you use getImageData on the canvas.
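
The feature test for the native blur can be as simple as checking for the filter property on the 2D context. The sketch below assumes StackBlur has already been lazy loaded and uses that library’s canvasRGB signature; treat the radius and the exact wiring as assumptions rather than the real Albus code.

<code>
function preBlur(canvas, ctx, radius) {
  if (typeof ctx.filter === 'string') {
    // Native canvas filter (the blur Chris Heilmann pointed out)
    ctx.filter = 'blur(' + radius + 'px)';
    ctx.drawImage(canvas, 0, 0);
    ctx.filter = 'none';
  } else {
    // Fallback: lazy-loaded StackBlur working directly on the pixel data
    StackBlur.canvasRGB(canvas, 0, 0, canvas.width, canvas.height, radius);
  }
}
</code>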

Drawing on mobile

In my ultimate wisdom, I forgot how mobiles work. Originally I couldn’t get touch working on the canvas, but eventually realised I wasn’t actually looking for targetTouches. In the future this could be changed to allow multitouch.

<code>
// Use the mouse position when it exists; otherwise fall back to the first touch point.
var clientX = e.clientX || e.targetTouches[0].clientX;
var clientY = e.clientY || e.targetTouches[0].clientY;
</code>

Service Worker Re-downloads on install

I noticed that when the service worker adds files to the cache, it re-downloads files that have already been downloaded. Jake Archibald said that this can be fixed, but I haven’t got around to it.
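
I don’t know which fix Jake had in mind, but one hedged possibility is to skip anything that’s already cached during install, so repeat installs don’t re-fetch everything (this reuses the placeholder cacheName and filesToCache from the sketch above).

<code>
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(cacheName).then(function (cache) {
      return Promise.all(filesToCache.map(function (url) {
        return cache.match(url).then(function (cached) {
          return cached || cache.add(url); // only fetch what's missing
        });
      }));
    })
  );
});
</code>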

Responsive and Print

Resizing a canvas is tricky business. I ran out of time before fixing a bug where the canvas would resize in strange ways. This means that the canvas painting isn’t aligned with the base image.

Lynx shows hidden content

I used a checkbox hack to create the open and close buttons for the dialogs. It turns out Lynx shows these checkboxes, as there is no way to hide them without CSS or the hidden attribute. So my fix was to add them with JS, from a template in the HTML. Not pretty, but it fixed the issue. For most projects it is probably fine to ignore this.
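
The fix can be a few lines: keep the checkboxes inside a template element so text browsers like Lynx never render them, and clone them into the page with JavaScript. The element ID here is made up for illustration.

<code>
var controls = document.getElementById('dialog-controls'); // assumed <template> id
if (controls && 'content' in controls) {
  document.body.appendChild(controls.content.cloneNode(true));
}
</code>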

Drag and Drop causes previous brush strokes to turn to outlines

Another bug I haven’t fixed yet is that if you draw on the canvas and then drag in another image, it will edge-detect the brush strokes. Clearing the canvas before edge detection should fix this, but didn’t seem to when I tried it.

Screenshots

This year’s 10K Apart is about creating a good experience on the web while focussing on progressive enhancement and small page sizes. I don’t usually enter competitions, but having not coded for fun in a while, it seemed like a good idea.

# Wednesday, October 5th, 2016 at 12:00am

zeldmanproduction.wpcomstaging.com

12 LESSONS from An Event Apart San Francisco – №6: We work with technology every day. And every day it seems like there’s more and more technology to understand: graphic design tools, build tools, frameworks and libraries, not to mention new HTML, CSS, and JavaScript features landing in browsers. How should we best choose which technologies to invest our time in? When we decide to weigh up the technology choices that confront us, what are the best criteria for doing that?

Jeremy Keith was the seventh speaker at An Event Apart San Francisco this month. His presentation, Evaluating Technology, set out to help us evaluate tools and technologies in a way that best benefits the people who use the websites we design and develop. We looked at some of the hottest new web technologies, like service workers and web components, and dug deep beneath the hype to find out whether they will really change life on the web for the better.

Days of future past

It’s easy to be overwhelmed by all the change happening in web design and development. Things make more sense when we apply an appropriate perspective. Although his presentation often dealt with “bleeding-edge” technologies (i.e. technologies that are still being figured out and just beginning to be supported in some browsers and devices), Jeremy’s framing perspective was that of the history of computer science—a field, pioneered by women, that evolved rationally.

Extracting the unchanging design principles that gave rise to the advances in computer science, Jeremy showed how the web evolved from these same principles, and how the seemingly dizzying barrage of changes taking place in web design and development today could be understood through these principles as well—providing a healthy means to decide which technologies benefit human beings, and which may be discarded or at least de-prioritized by busy designer/developers working to stay ahead of the curve.

Resistance to change

“Humans are allergic to change,” computer science pioneer Grace Hopper famously said. Jeremy showed how that very fear of change manifested itself in the changes human beings accept: we have 60 seconds in a minute and 24 hours in a day because of counting systems first developed five thousand years ago. Likewise, we have widespread acceptance of HTML in large part because its creator, Tim Berners-Lee, based it on a subset of elements familiar from an already accepted markup language, SGML.

How well does it fail?

In our evaluating process, Jeremy argued, we should not only concern ourselves with how well a technology works, but also how well it fails. When XHTML 2.0 pages contained an error, the browser was instructed not to skip that error but to shut down completely. Thus, XHTML 2.0 was impractical and did not catch on. In contrast, when an HTML page contains an error or new element, the browser skips what it does not understand and renders the page. This allows us to add new elements to HTML over time, with no fear that browsers will choke on what they don’t understand. This fact alone helps account for the extraordinary success of HTML over the past 25 years.

Likewise, service workers, a powerful new technology that extends our work even when devices are offline, fails well, because it is progressively enhanced by design. If a device or browser does not support service workers, the content still renders.

Jeremy argued that pages built on fragile technologies—technologies which are powerful when they work, but which fail poorly—are a dangerous platform for web content. Frameworks that require JavaScript, for example, offer developers extraordinary power, but at a price: the failure of even a small script can result in no content at all. Service worker technology also offers tremendous power, but it fails well, so it is safe to use in the creation of responsive sites and web applications.

On progressive web apps

Likewise, progressive web apps, when designed responsively and with progressive enhancement, are a tremendously exciting web development. But when they are designed the wrong way, they fail poorly, making them a step backward for the web.

Jeremy used the example of The Washington Post’s Progressive Web App, which has been much touted by Google, who are a driving force behind the movement for progressive web apps. A true progressive web app works for everyone. But The Washington Post’s progressive web app demands that you open it on your phone. This kind of retrograde door-slam is like the days when we told people they must use Flash, or must use a certain browser or platform, to view our work. This makes it the antithesis of progressive.

Dancing about architecture

There was much, much more to Jeremy’s talk—one of the shortest hours I’ve ever lived through, as 100 years of wisdom was applied to a dizzying array of technologies. Summarizing it here is like trying to describe the birth of your child in five words or less. Fortunately, you can see Jeremy give this presentation for yourself at several upcoming An Event Apart conference shows in 2017.

The next AEA event, An Event Apart St. Louis, takes place January 30-February 1, 2017. Tomorrow I’ll be back with more takeaways from another AEA San Francisco 2016 speaker.

Also published in Medium.

# Monday, February 12th, 2024 at 9:50pm

3 Likes

# Liked by ⓕⓣ on Sunday, November 8th, 2015 at 12:43am

# Liked by Chris Smith-Hill on Monday, December 28th, 2015 at 4:55am

# Liked by Matthias Pfefferle on Monday, September 23rd, 2019 at 6:35pm

Related posts

Am I cached or not?

Complementing my site’s service worker strategy with an extra interface element.

Timing out

A service worker strategy for dealing with lie-fi.

Going Offline—the talk of the book

…of the T-shirt.

Move Fast and Don’t Break Things by Scott Jehl

A presentation at An Event Apart Seattle 2019.

Push without notifications

Making use of the real-time nature of push notifications without the annoying notification part.

Related links

Add a Service Worker to Your Site | CSS-Tricks - CSS-Tricks

Damn, I wish I had thought of giving this answer to the prompt, “What is one thing people can do to make their website better?”

If you do nothing else, this will be a huge boost to your site in 2022.

Chris’s piece is a self-contained tutorial!

Getting Started with PWAs [Workshop]

The slides from Aaron’s workshop at today’s PWA Summit. I really like the idea of checking navigator.connection.downlink and navigator.connection.saveData inside a service worker to serve different or fewer assets!

Progressier | Make your website a PWA in 42 seconds

This in an intriguing promise (there’s no code yet):

A PWA typically requires writing a service worker, an app manifest and a ton of custom code. Progressier flattens the learning curve. Just add it to your html template — you’re done.

I worry that this one line of code will pull in many, many, many, many lines of JavaScript.

Service Workers | Go Make Things

Chris Ferdinandi blogs every day about the power of vanilla JavaScript. For over a week now, his daily posts have been about service workers. The cumulative result is this excellent collection of resources.

The amazing power of service workers | Go Make Things

So, why would you want to use a service worker? Here are some cool things you can do with it.

Chris lists some of the ways a service worker can enhance user experience.

Previously on this day

14 years ago I wrote Collectivism

Social(ist) networking.

17 years ago I wrote Streaming my life away

Squishing RSS feeds together.

19 years ago I wrote Generating thumbnails with PHP

Looking through the family photo diaries over at the Guardian website made me realise how much I like having thumbnails in picture galleries.

20 years ago I wrote Linkage

Let this serve as a practical demonstration of the multitudinous uses of the hyperlink.

21 years ago I wrote iLove

Even though I already own an iBook, I can’t help giving the newly released models a longing look. Their bang to buck ratio is incredibly high. The bottom of the range model has twice the memory and hard drive capacity of my aging model.

21 years ago I wrote Dark Side Switch Campaign

thedarkside.com/switch:

22 years ago I wrote Apple Renderings

This is fun: a page of speculative designs for future Macs.