Fast Google Fonts with Cloudflare Workers

2018-11-22

Google Fonts is one of the most common third-party resources on the web, but carries with it significant user-facing performance issues. Cloudflare Workers running at the edge is a great solution for fixing these performance issues, without having to modify the publishing system for every site using Google Fonts.

This post walks through the implementation details for how to fix the performance of Google Fonts with Cloudflare Workers. More importantly, it also provides code for doing high-performance content modification at the edge using Cloudflare Workers.

Google fonts are SLOW

First, some background. Google Fonts provides a rich selection of royalty-free fonts for sites to use. You select the fonts you want to use, and end up with a simple stylesheet URL to include on your pages, as well as styles to use for applying the fonts to parts of the page:

<link href="https://fonts.googleapis.com/css?family=Open+Sans|Roboto+Slab"
      rel="stylesheet">
<style>
body {
 font-family: 'Open Sans', sans-serif;
}
h1 {
 font-family: 'Roboto Slab', serif;
}
</style>

Your visitor’s browser fetches the CSS file as soon as the HTML for the page is available. The browser will request the underlying font files when the browser does layout for the page and discovers that it needs fonts for different sections of text.

The way Google fonts are served, the CSS is on one domain (fonts.googleapis.com) and the font files are on a different domain (fonts.gstatic.com). Since they are on separate domains, the fetch for each resource takes a minimum of four round trips back and forth to the server: One each for the DNS lookup, establishing the socket connection, negotiating TLS encryption (for https) and a final round trip for the request itself.

Network waterfall diagram showing 4 round trips each for the font CSS and font file.

The requests can’t be done in parallel because the fonts aren’t known about until after the CSS has been downloaded and the styles applied to the page. In a best-case scenario, this translates to eight round-trips before the text can be displayed (the text which was already available in the browser as soon as the HTML was available). On a slower 3G connection with a 300ms round-trip time, this adds up to a 2.4 second delay. Best case!

There is also a problem with resource prioritization. When you have all of the requests coming from the same domain on the same HTTP/2 connection they can be scheduled against each other. Critical resources (like CSS and fonts) can be pushed ahead in the queue and delivered before lower priority resources (like images). Since Google Fonts (and most third-party resources) are served from a different domain than the main page resources, they cannot be prioritized and end up competing with each other for download bandwidth. This can cause the actual fetch times to be much longer than the best-case of 8 round trips.

Network waterfall showing the CSS and font downloads competing with low-priority images for bandwidth.

Users will see the images and skeleton for the page long before the actual text is displayed:

The page starts painting at 3.3 seconds with partial images and no text.

The text finally displays at 6.2 seconds into the page load.

Fixing Google Fonts performance

The paths to fixing the performance issues and making fonts lightning-fast are different for the CSS and for the font files themselves. We can reduce the total number of round trips to one:

  1. Embed the CSS directly in the HTML.

  2. Proxy the Google Font files through the page origin.

Before and after diagram: the embedded CSS loads immediately as part of the HTML and the fonts load before the images.

Optimizing CSS delivery

For the CSS, the easy answer is to just download the CSS file that Google hosts and either serve it yourself directly or place it into the HTML as an embedded stylesheet. The problem with this approach is that Google Fonts serves a browser-specific CSS file, different for each browser, so it can serve newer formats and use newer features when supported while still providing custom font support for older browsers.

With Cloudflare Workers, we can dynamically replace the external CSS reference with the browser-specific stylesheet content at fetch time when the HTML is requested by the browser. This way, the embedded CSS will always be up to date and support the capabilities of whichever browser is making the request. This completely eliminates any round trips for fetching the CSS for the fonts.
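
For illustration, here is roughly what that replacement looks like for a modern browser (the @font-face rule is abbreviated and the font file path is hypothetical; the exact CSS Google returns varies by browser and font):

<link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">

becomes an inline style block containing the browser-specific CSS:

<style>
@font-face {
  font-family: 'Open Sans';
  font-style: normal;
  font-weight: 400;
  src: local('Open Sans'), local('OpenSans'),
       url(https://fonts.gstatic.com/s/opensans/v15/example.woff2) format('woff2');
  unicode-range: U+0000-00FF;
}
</style>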

Optimizing font file delivery

For the font files themselves, we can eliminate all of the round trips except for the fetch itself by serving the font files directly from the same domain as the HTML. This brings with it the added benefit of serving fonts over the same HTTP/2 connection as the rest of the page resources, allowing them to be prioritized correctly and not compete for bandwidth.

Specifically, when we embed the CSS into the HTML response, we rewrite all of the font URLs to use the same domain as the HTML. When those rewritten requests arrive at the worker the URL is transformed back to the original URL served by Google and fetched by the worker (as a proxy). The worker fetches are routed through Cloudflare’s caching infrastructure so they will be cached automatically at the edge. The actual URL rewriting is pretty simple but effective. We take font URLs that look like this:

https://fonts.gstatic.com/s/...

and we just prepend the page domain to the front of the URL:

https://www.example.com/fonts.gstatic.com/s/...

That way, when they arrive at the edge, the worker can look at the path of a request and know right away that it is a proxy request for a Google font. At this point, rewriting the original URL is trivial. On the extremely odd chance that a page on your site actually has a path that starts with /fonts.gstatic.com/, those resources would break and something else should be appended to the URL to make sure they are unique.
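
A minimal sketch of the two transformations (the variable names here are illustrative; the real logic is in the entry-point and fetchCSS functions below):

// In the CSS returned to the browser, make the font URLs relative to the page origin:
//   https://fonts.gstatic.com/s/...  ->  /fonts.gstatic.com/s/...
css = css.replace(/https:\/\/fonts\.gstatic\.com\//g, '/fonts.gstatic.com/');

// In the worker, map a proxied request back to the original Google URL:
//   /fonts.gstatic.com/s/...  ->  https://fonts.gstatic.com/s/...
const originalUrl = 'https:/' + new URL(request.url).pathname;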

Optimization Results

In practice, the results can be quite dramatic. On this test page for example, the wait time for fonts to become available dropped from 5.5 seconds to 1 second (an 81% improvement!):

Network waterfall diagram showing the fonts loading immediately after the HTML.

Visually, the improvement to the user experience is also quite dramatic. Instead of seeing a skeleton page of images followed by the text eventually appearing, the text is available (and correctly styled) immediately and the user can start consuming the content right away:

The first paint happens much sooner at 2.5 seconds with all of the text displayed while the original page is still blank.

At 3.3 seconds the original page finally starts to paint, displaying part of the images and no text.

At 4.4 seconds the optimized page is visibly complete while the original page still has not displayed any text.

At 6.2 seconds the original page finally displays the text content.

One thing that I didn’t notice in the initial testing and only discovered when looking at the side-by-side video is that the fonts are only correctly styled in the optimized case. In the original case it took longer than Chrome’s 3-second timeout for the fonts to load and it fell back to the system fonts. Not only was the experience much faster; it was also styled correctly with the custom fonts right from the beginning.

Optimizing Google fonts with a Cloudflare worker

The full Cloudflare Worker code for implementing the font optimization is available on GitHub. Buckle in, because this is quite a bit more involved than most of the samples in the documentation.

At a high level this code:

  • Filters all requests and determines if a request is for a proxied font or HTML (and passes all other requests through unmodified).

  • For font requests, rewrites the URL and passes the fetch request through to the Google servers.

  • For HTML requests:

    • Passes the request through unmodified to the origin server.

    • Returns non-UTF-8 content unmodified.

    • Processes the HTML response in streaming chunks as it is available.

    • Replaces Google font stylesheet link tags with style tags containing the CSS and the font URLs rewritten to proxy through the origin.

The code here is slightly simplified to make it clearer to understand the flow. The full code on GitHub adds support for an in-memory worker cache for the CSS (in addition to the persistent cache API) and provides query parameters for toggling the HTML rewriting on and off (for testing).

The content modification is all done by operating on the HTML as strings (with a combination of regular expressions and string matches). This is much faster and lighter weight than parsing the HTML into a virtual DOM, operating on it and converting back to HTML. It also allows for incremental processing of the HTML as a stream.

Entry Point

The addEventListener("fetch") call is the main entry point for any worker and houses the JavaScript for intercepting inbound requests. If the handler does nothing, then requests pass through normally and the worker stays out of the path for processing the response. Our goal is to minimize the amount of work the worker has to do to determine whether a request is one it is interested in.

In the case of the proxied font requests, we can just look at the request URL and see that the path starts with /fonts.gstatic.com/. To identify requests for HTML content we can look at the Accept header on the request. Every major browser I have tested includes text/html in the list of content types that it will accept when requesting a document. On the off chance that there is a browser that doesn't include it in the Accept header, the HTML will just be passed through and returned unmodified. The goal with everything here is to fail safe and just return unoptimized content for any edge cases that aren't covered. This way nothing breaks; it just doesn't get the added performance boost.
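
For example, a typical browser navigation request carries an Accept header along these lines (the exact value varies by browser and version):

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Requests for images, scripts, fonts and XHR generally either omit text/html or use more specific types, so they fall through and are passed along untouched.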

addEventListener("fetch", event => {
 
 const url = new URL(event.request.url);
 const accept = event.request.headers.get('accept');
 if (event.request.method === 'GET' &&
     url.pathname.startsWith('/fonts.gstatic.com/')) {
 
    // Proxy the font file requests. The path already starts with
    // /fonts.gstatic.com/ so prepending 'https:/' rebuilds the original
    // https://fonts.gstatic.com/... URL.
    event.respondWith(proxyRequest('https:/' + url.pathname,
                                  event.request));
 
 } else if (accept && accept.indexOf("text/html") !== -1) {
 
   // Process the HTML
   event.respondWith(processHtmlRequest(event.request, event));
 
 }
})

Request Proxy

The proxying of the font requests is pretty straightforward. Since we are crossing origins, it is generally a bad idea to just reuse the existing request object with a new URL; that can leak user data like cookies to a third party. Instead, we make a new request, clone a subset of the headers and pass the new fetch request back for the Worker runtime to handle.

The fetch path between workers and the outside Internet goes through the Cloudflare cache so the actual font files will only be fetched from Google if they aren’t already in the cache. Even in that case, the connection from Cloudflare’s edge to Google’s font servers is much faster (and more reliable) than the end-user’s connection from the browser. Even on a cache miss, it is an insignificant delay.

async function proxyRequest(url, request) {
 
 // Only pass through a subset of request headers
 let init = {
   method: request.method,
   headers: {}
 };
 const proxyHeaders = ["Accept",
                       "Accept-Encoding",
                       "Accept-Language",
                       "Referer",
                       "User-Agent"];
 for (let name of proxyHeaders) {
   let value = request.headers.get(name);
   if (value) {
     init.headers[name] = value;
   }
 }
 const clientAddr = request.headers.get('cf-connecting-ip');
 if (clientAddr) {
   init.headers['X-Forwarded-For'] = clientAddr;
 }
 
 // Only include a strict subset of response headers
 const response = await fetch(url, init);
 if (response) {
   const responseHeaders = ["Content-Type",
                            "Cache-Control",
                            "Expires",
                            "Accept-Ranges",
                            "Date",
                            "Last-Modified",
                            "ETag"];
   let responseInit = {status: response.status,
                       statusText: response.statusText,
                       headers: {}};
   for (let name of responseHeaders) {
     let value = response.headers.get(name);
     if (value) {
       responseInit.headers[name] = value;
     }
   }
   const newResponse = new Response(response.body, responseInit);
   return newResponse;
 }
 
 return response;
}

In addition to filtering the request headers, we also filter the response headers sent back to the browser. If you're not careful you could end up in a situation where a third party is setting cookies on your origin or even turning on something like HTTP Strict Transport Security for your site.

Streaming HTML Processing

The HTML path is more complicated because we are going to intercept and modify the content itself.

In processing the HTML request, the first thing we want to do is make sure it is actually an HTML response. If it is something else, then we should get out of the way and let the response stream back to the browser as it normally would. It is very possible that a PDF document, file download, or even a directly opened image was requested with an Accept header that includes text/html. It is critical to check the actual content that is being responded with to make sure it is something we want to inspect and possibly modify.

The easiest way to modify a response is to just wait for the response to be fully complete, process it as a single block of HTML, and then pass the modified HTML back to the browser:

async function processHtmlRequest(request) {
 
 // Fetch from origin server.
 const response = await fetch(request)
 const contentType = response.headers.get("content-type");
 if (contentType && contentType.indexOf("text/html") !== -1) {
  
   let content = await response.text();
   content = await modifyHtmlChunk(content, request);
 
   // Create a cloned response with our modified HTML
   return new Response(content, response);
 }
 return response;
}

This works reasonably well if you are sure that all of the HTML is UTF-8 (or ASCII) and the server returns all of the HTML at once, but there are some pretty serious concerns with doing it this way:

  • The memory use can be unbounded and only limited by the size of the largest HTML response (possibly causing the worker to be terminated for using too much memory).

  • Significant delay will be added to any pages where the server flushes the initial content early and then does some expensive/slow work before returning the rest of the HTML.

  • This only works if the text content uses an encoding that JavaScript can decode directly as UTF-8. Any other character encodings will fail to decode.

For our worker, we are going to process the HTML stream incrementally as it arrives from the server and pass it through to the browser as soon as possible (and pass through any content that isn't UTF-8 unmodified).

Processing HTML as a stream

First we are going to look at what it takes to process the HTML stream incrementally. We will leave the character encoding changes out for now to keep things (relatively) simple.

To process the stream incrementally, we need to generate a new fetch response to pass back from the worker that uses a TransformStream for its content. That will allow us to pass the response itself back immediately and write to the stream as soon as we have content to add. We pass all of the other headers through unmodified.

async function processHtmlRequest(request) {
 
 // Fetch from origin server.
 const response = await fetch(request)
 const contentType = response.headers.get("content-type");
 if (contentType && contentType.indexOf("text/html") !== -1) {
  
   // Create an identity TransformStream (a.k.a. a pipe).
   // The readable side will become our new response body.
   const { readable, writable } = new TransformStream();
 
   // Create a cloned response with our modified stream
   const newResponse = new Response(readable, response);
 
   // Start the async processing of the response stream (NO await!)
   modifyHtmlStream(response.body, writable, request);
 
   // Return the in-process response so it can be streamed.
   return newResponse;
 }
 return response;
}

The key thing here is to not wait for the async modifyHtmlStream async function to complete before passing the new response back from the worker. This way the initial headers can be sent immediately and the response will continue to stream anything written into the TransformStream until it is closed.

Processing the HTML stream in chunks as it arrives is a little tricky. The HTML stream will arrive in chunks of arbitrary sizes as strings. We need to add some protection to make sure that a chunk boundary doesn’t split a link tag. If it does, and we don’t account for it, we can miss a stylesheet link (or worse, process a partial link URL with the wrong style type). To make sure we don’t split link tags, we search from the end of the string for “<link “ and from the start of the last link tag we search forward for a closing “>” tag. If we don’t find one, then there is a partial link tag and we split the string just before the link tag starts. We process everything up to the split link tag and keep the partial tag to prepend it to the next chunk of data that arrives.
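
For example, if a chunk happens to end in the middle of a link tag like this:

...</p><link rel="stylesheet" href="https://fonts.googleapis

then everything up to the final <link is processed and written to the browser right away, and the partial tag is held back and prepended to the next chunk before that chunk is processed.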

An alternative would be to keep accumulating data and only process it when there is no split link tag at the end, but this way we can return more data to the browser sooner.

When the incoming stream is complete, we process any partial data left over from the previous chunk and close the output stream (ending the response to the browser).

async function modifyHtmlStream(readable, writable, request) {
 const reader = readable.getReader();
 const writer = writable.getWriter();
 const encoder = new TextEncoder();
 let decoder = new TextDecoder();
 
 let partial = '';
 let content = '';
 
 for(;;) {
   // Read the next chunk of HTML from the fetch request.
   const { done, value } = await reader.read()
 
   if (done) {
 
     // Send any remaining fragment and complete the request.
     if (partial.length) {
       partial = await modifyHtmlChunk(partial, request);
       await writer.write(encoder.encode(partial));
       partial = '';
     }
     break;
 
   }
  
   try {
     let chunk = decoder.decode(value, {stream:true});
 
      // Add the inbound chunk to the remaining partial chunk
     // from the previous read.
     content = partial + chunk;
     partial = '';
 
     // See if there is an unclosed link tag at the end (and if so,
     // carve it out to complete when the remainder comes in).
     const linkPos = content.lastIndexOf('<link');
     if (linkPos >= 0) {
        const linkClose = content.indexOf('>', linkPos);
       if (linkClose === -1) {
         partial = content.slice(linkPos);
         content = content.slice(0, linkPos);
       }
     }
 
     if (content.length) {
       // Do the actual HTML modifications on the current chunk.
       content = await modifyHtmlChunk(content, request);
     }
   } catch (e) {
     // Ignore the exception
   }
 
   // Send the processed HTML chunk to the requesting browser.
   if (content.length) {
     await writer.write(encoder.encode(content));
     content = '';
   }
 }
 
 await writer.close()
}

One thing I was initially worried about was having to modify the “content-length” response header from the original response since we are modifying the content. Luckily, the worker takes care of that automatically and it isn’t something you have to implement.

There is a try/catch handler around the processing in case something goes horribly wrong with the decode.

The actual HTML rewriting is handled in “modifyHtmlChunk”. This is just the logic for processing the incoming data as incremental chunks.

Dealing with character encodings other than UTF-8

We intentionally skipped over handling character encodings other than UTF-8 up until now. To handle arbitrary pages you need to be able to cope with other character encodings. The Worker runtime only supports decoding UTF-8, but we need to make sure that we don't break any content that isn't UTF-8 (or similar). To do this, we detect the current encoding if it is specified, and anything that isn't UTF-8 is passed through unmodified. In case the content type cannot be detected, we also detect decode errors and pass content through unmodified when they occur.

The HTML charset can be specified in the content-type response header or as a <meta charset> tag in the HTML itself.

For the response headers it is pretty simple. When we get the original response, we see if there is a charset in the content-type header. If there is, we extract the value, and if it isn't a supported charset we just pass the response through unmodified.

   // Workers can only decode utf-8. If it is anything else, pass the
   // response through unmodified
   const VALID_CHARSETS = ['utf-8', 'utf8', 'iso-8859-1', 'us-ascii'];
   const charsetRegex = /charset\s*=\s*([^\s;]+)/mgi;
   const match = charsetRegex.exec(contentType);
   if (match !== null) {
     let charset = match[1].toLowerCase();
     if (!VALID_CHARSETS.includes(charset)) {
       return response;
     }
   }
  
   // Create an identity TransformStream (a.k.a. a pipe).
   // The readable side will become our new response body.
   const { readable, writable } = new TransformStream();
 
   // Create a cloned response with our modified stream
   const newResponse = new Response(readable, response);
 
   // Start the async processing of the response stream
   modifyHtmlStream(response.body, writable, request, event);

For the cases where there is a <meta charset> tag in the HTML (and possibly no header) things get a bit more complicated. If at any point an unsupported charset is detected, then we pipe the incoming byte stream directly into the output stream unmodified. We first decode the first chunk of the HTML response using the default decoder. Then, if a meta tag is found in the HTML, we extract the charset. If it isn't a supported charset, then we enter passthrough mode. If at any point the input stream can't be decoded (likely because of an invalid charset), we also enter passthrough mode and pipe the remaining content through unprocessed.

async function modifyHtmlStream(readable, writable, request, event) {
 const reader = readable.getReader();
 const writer = writable.getWriter();
 const encoder = new TextEncoder();
 let decoder = new TextDecoder("utf-8", {fatal: true});
 
 let firstChunk = true;
 let unsupportedCharset = false;
 
 let partial = '';
 let content = '';
 
 try {
   for(;;) {
     const { done, value } = await reader.read();
     if (done) {
       if (partial.length) {
         partial = await modifyHtmlChunk(partial, request, event);
         await writer.write(encoder.encode(partial));
         partial = '';
       }
       break;
     }
 
     let chunk = null;
     if (unsupportedCharset) {
       // Pass the data straight through
       await writer.write(value);
       continue;
     } else {
       try {
         chunk = decoder.decode(value, {stream:true});
       } catch (e) {
         // Decoding failed, switch to passthrough
         unsupportedCharset = true;
         if (partial.length) {
           await writer.write(encoder.encode(partial));
           partial = '';
         }
         await writer.write(value);
         continue;
       }
     }
 
     try {
       // Look inside of the first chunk for a HTML charset or
       // content-type meta tag.
       if (firstChunk) {
         firstChunk = false;
         if (chunkContainsInvalidCharset(chunk)) {
           // switch to passthrough
           unsupportedCharset = true;
           if (partial.length) {
             await writer.write(encoder.encode(partial));
             partial = '';
           }
           await writer.write(value);
           continue;
         }
       }
 
       content = partial + chunk;
       partial = '';
 
       // See if there is an unclosed link tag at the end (and if so,
       // carve it out to complete when the remainder comes in).
       const linkPos = content.lastIndexOf('<link');
       if (linkPos >= 0) {
          const linkClose = content.indexOf('>', linkPos);
         if (linkClose === -1) {
           partial = content.slice(linkPos);
           content = content.slice(0, linkPos);
         }
       }
 
       if (content.length) {
         content = await modifyHtmlChunk(content, request, event);
       }
     } catch (e) {
       // Ignore the exception
     }
     if (content.length) {
       await writer.write(encoder.encode(content));
       content = '';
     }
   }
 } catch(e) {
   // Ignore the exception
 }
 
 try {
   await writer.close();
 } catch(e) {
   // Ignore the exception
 }
}

There is a helper that scans for the charset in both forms of meta tag that can set the charset:

function chunkContainsInvalidCharset(chunk) {
 let invalid = false;
 const VALID_CHARSETS = ['utf-8', 'utf8', 'iso-8859-1', 'us-ascii'];
 
 // meta charset
 const charsetRegex = /<\s*meta[^>]+charset\s*=\s*['"]([^'"]*)['"][^>]*>/mgi;
 const charsetMatch = charsetRegex.exec(chunk);
 if (charsetMatch) {
   const docCharset = charsetMatch[1].toLowerCase();
   if (!VALID_CHARSETS.includes(docCharset)) {
     invalid = true;
   }
 }
 // content-type
 const contentTypeRegex = /<\s*meta[^>]+http-equiv\s*=\s*['"]\s*content-type[^>]*>/mgi;
 const contentTypeMatch = contentTypeRegex.exec(chunk);
 if (contentTypeMatch) {
   const metaTag = contentTypeMatch[0];
   const metaRegex = /charset\s*=\s*([^\s"]*)/mgi;
   const metaMatch = metaRegex.exec(metaTag);
   if (metaMatch) {
     const charset = metaMatch[1].toLowerCase();
     if (!VALID_CHARSETS.includes(charset)) {
       invalid = true;
     }
   }
 }
 return invalid;
}
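
To get a feel for its behavior, here are a few hypothetical inputs and the results they would produce:

chunkContainsInvalidCharset('<meta charset="shift_jis">');  // true - switch to passthrough
chunkContainsInvalidCharset('<meta http-equiv="Content-Type" content="text/html; charset=EUC-JP">');  // true
chunkContainsInvalidCharset('<meta charset="utf-8">');  // false - safe to modify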

HTML Business Logic

Finally, we can start the actual logic for embedding the font CSS. The basic logic is:

  • Use a regex to find link tags for Google fonts css.

  • Fetch the browser-specific version of the CSS (from cache if possible).

    • The fetch logic (discussed later) modifies the font URLs in the CSS to proxy through the worker.

  • Replace the link tag with a style block with the CSS content.

async function modifyHtmlChunk(content, request, event) {
 const fontCSSRegex = /<link\s+[^>]*href\s*=\s*['"]((https?:)?\/\/fonts.googleapis.com\/css[^'"]+)[^>]*>/mgi;
 let match = fontCSSRegex.exec(content);
 while (match !== null) {
   const matchString = match[0];
   if (matchString.indexOf('stylesheet') >= 0) {
     const fontCSS = await fetchCSS(match[1], request, event);
     if (fontCSS.length) {
       // See if there is a media type on the link tag
       let mediaStr = '';
       const mediaMatch = matchString.match(/media\s*=\s*['"][^'"]*['"]/mig);
       if (mediaMatch) {
         mediaStr = ' ' + mediaMatch[0];
       }
       // Replace the link tag with an inline style block containing the css
       let cssString = "<style" + mediaStr + ">\n";
       cssString += fontCSS;
       cssString += "\n</style>\n";
       content = content.split(matchString).join(cssString);
     }
   }
   // Advance to the next match (even for non-stylesheet links,
   // otherwise the loop would never terminate).
   match = fontCSSRegex.exec(content);
 }
 
 return content;
}

The fetching (and modifying) of the CSS is a little more complicated than a straight passthrough because we want to cache the result when possible. We cache the responses locally using the worker’s Cache API. Since the response is browser-specific, and we don’t want to fragment the cache too crazily, we create a custom cache key based on the browser user agent string that is basically browser+version+mobile.

Some plans have access to named cache storage, but to work with all plans it is easiest if we just modify the font URL that gets stored in cache and append the cache key to the end of the URL as a query parameter. The cache URL never gets sent to a server but is useful for local caching of different content that shares the same URL. For example:

https://fonts.googleapis.com/css?family=Roboto&chrome71

If the CSS isn’t available in the cache then we create a fetch request for the original URL from the Google servers, passing through the HTML url as the referer, the correct browser user agent string and the client’s IP address in a standard proxy X-Forwarded-For header. Once the response is available we store it in the cache for future requests.

For browsers that can't be identified by user agent string, a generic request for the CSS is sent using the user agent string from Internet Explorer 8 to get the lowest-common-denominator fallback CSS.

The actual modification of the CSS just uses a regex to find the font URLs and rewrite them so they are prefixed with the page origin.

async function fetchCSS(url, request, event) {
 let fontCSS = "";
 if (url.startsWith('/'))
   url = 'https:' + url;
 const userAgent = request.headers.get('user-agent');
 const clientAddr = request.headers.get('cf-connecting-ip');
 const browser = getCacheKey(userAgent);
 const cacheKey = browser ? url + '&' + browser : url;
 const cacheKeyRequest = new Request(cacheKey);
 let cache = null;
 
 let foundInCache = false;
 // Try pulling it from the cache API (wrap it in case it's not implemented)
 try {
   cache = caches.default;
   let response = await cache.match(cacheKeyRequest);
   if (response) {
      fontCSS = await response.text();
     foundInCache = true;
   }
 } catch(e) {
   // Ignore the exception
 }
 
 if (!foundInCache) {
   let headers = {'Referer': request.url};
   if (browser) {
     headers['User-Agent'] = userAgent;
   } else {
     headers['User-Agent'] =
       "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)";
   }
   if (clientAddr) {
     headers['X-Forwarded-For'] = clientAddr;
   }
 
   try {
     const response = await fetch(url, {headers: headers});
     fontCSS = await response.text();
 
     // Rewrite all of the font URLs to come through the worker
     fontCSS = fontCSS.replace(/(https?:)?\/\/fonts\.gstatic\.com\//mgi,
                               '/fonts.gstatic.com/');
 
      // Add the css to the cache for future requests. (The full version on
      // GitHub also stores it in an in-memory FONT_CACHE.)
     try {
       if (cache) {
         const cacheResponse = new Response(fontCSS, {ttl: 86400});
         event.waitUntil(cache.put(cacheKeyRequest, cacheResponse));
       }
     } catch(e) {
       // Ignore the exception
     }
   } catch(e) {
     // Ignore the exception
   }
 }
 
 return fontCSS;
}

Generating the browser-specific cache key is a little sensitive since browsers tend to clone each other's user agent strings and add their own information to them. For example, Edge includes a Chrome identifier and Chrome includes a Safari identifier, etc. We don't necessarily have to handle every browser string, since anything unrecognized will fall back to the least common denominator (TTF files without unicode-range support), but it is helpful to catch as many of the large mainstream browser engines as possible.

function getCacheKey(userAgent) {
 let os = '';
 const osRegex = /^[^(]*\(\s*(\w+)/mgi;
 let match = osRegex.exec(userAgent);
 if (match) {
   os = match[1];
 }
 
 let mobile = '';
 if (userAgent.match(/Mobile/mgi)) {
   mobile = 'Mobile';
 }
 
 // Detect Edge first since it includes Chrome and Safari
 const edgeRegex = /\s+Edge\/(\d+)/mgi;
 match = edgeRegex.exec(userAgent);
 if (match) {
   return 'Edge' + match[1] + os + mobile;
 }
 
 // Detect Chrome next (and browsers using the Chrome UA/engine)
 const chromeRegex = /\s+Chrome\/(\d+)/mgi;
 match = chromeRegex.exec(userAgent);
 if (match) {
   return 'Chrome' + match[1] + os + mobile;
 }
 
 // Detect Safari and Webview next
 const webkitRegex = /\s+AppleWebKit\/(\d+)/mgi;
 match = webkitRegex.exec(userAgent);
 if (match) {
   return 'WebKit' + match[1] + os + mobile;
 }
 
 // Detect Firefox
 const firefoxRegex = /\s+Firefox\/(\d+)/mgi;
 match = firefoxRegex.exec(userAgent);
 if (match) {
   return 'Firefox' + match[1] + os + mobile;
 }
  return null;
}
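
As an example of what this produces, a desktop Chrome 71 user agent string such as:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.80 Safari/537.36

works out to a cache key of Chrome71Windows, so the cached CSS request built by fetchCSS would look something like:

https://fonts.googleapis.com/css?family=Roboto&Chrome71Windows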

Profit!

Any site served through Cloudflare can implement workers to rewrite its content, but for something like Google Fonts or other third-party resources it gets much more interesting when someone implements it once and everyone else can benefit. With Cloudflare Apps' new worker support, you can bundle up complex worker logic, publish it to the Apps marketplace, and deliver it for anyone else to consume.

If you are a third-party content provider for sites, think about what you might be able to do to leverage workers for your content for sites that are served through Cloudflare.

I get excited thinking about the performance implications of something like a tag manager running entirely on the edge without the sites having to change their published pages and without browsers having to fetch heavy JavaScript to do the page modifications. It can be done dynamically for every request directly on the edge!
