
What do Lighthouse Scores look like across the web?


Introduction

Lighthouse is a free, open-source tool provided by Google to analyse websites and suggest improvements. It is available as a web tool, is built into Chromium-based browsers (Chrome, Edge, Opera, etc.), has a command-line version, and can also be run in popular tools such as WebPageTest.

It runs a number of checks on a website and then produces scores in various categories, along with suggestions for improvements. Running this site's home page through it, for example, leads to the report below:

Example Lighthouse Report

We'll discuss in more detail what each of those scores means, the 6 performance metrics beneath them, and the observations section at the bottom.

The HTTP Archive is a dataset created by crawling popular websites monthly and gathering various stats about what makes up those websites. It uses WebPageTest to do this crawl, which includes a Lighthouse test run with mobile parameters. All the stats are stored in a set of publicly queryable BigQuery tables. As of September 2020 it covers 6.8 million of the most popular websites viewed by Chrome's mobile users that month. This list of websites comes from the Chrome UX Report (CrUX) and is based on Chrome users who have opted in to provide anonymous browsing information.
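If you want to sanity-check that headline number yourself, a minimal query along these lines should do it (a sketch on my part, not a query from this article's analysis; each row in the Lighthouse table holds the report for one site's home page):

#standardSQL
# Count the home pages in the September 2020 mobile Lighthouse crawl
SELECT
  COUNT(0) AS num_sites
FROM
  `httparchive.lighthouse.2020_09_01_mobile`
  #`httparchive.sample_data.lighthouse_mobile_10k` # cheaper test table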

Querying the HTTP Archive dataset can therefore give us interesting insights into the state of the web. In fact the HTTP Archive publishes the Web Almanac, an annual state-of-the-web report that I am heavily involved in. While preparing for the 2020 edition, I started looking at the Lighthouse stats and then discussed them with various people on Twitter and elsewhere, which led to some interesting discussions and discoveries that I thought I'd share in a longer form than 280 characters allows!

The Web Almanac divides its information into 20+ chapters on various topics, some of which use the Lighthouse data amongst other information, but in this post I want to concentrate purely on Lighthouse and look across the topics. So in part it's a look at the state of the web through one tool, and in part it's a look at Lighthouse itself.

Top Level Scores

First up, let's look at the summary scores in each of the five Lighthouse categories. Yes, there are five, despite the screenshot above only showing four, but one of them, as we'll soon see, is treated a bit differently! Anyway, to gather the scores across all the sites the HTTP Archive crawls, I ran the query below.

#standardSQL
SELECT
  '2020_09_01' AS date,
  COUNT(0) AS num_sites,
  percentile,
  APPROX_QUANTILES(performance, 1000)[OFFSET(percentile * 10)] AS performance,
  APPROX_QUANTILES(accessibility, 1000)[OFFSET(percentile * 10)] AS accessibility,
  APPROX_QUANTILES(best_practices, 1000)[OFFSET(percentile * 10)] AS best_practices,
  APPROX_QUANTILES(seo, 1000)[OFFSET(percentile * 10)] AS seo,
  APPROX_QUANTILES(pwa, 1000)[OFFSET(percentile * 10)] AS pwa
FROM (
  SELECT
    CAST(JSON_EXTRACT(report, '$.categories.performance.score') AS NUMERIC) AS performance,
    CAST(JSON_EXTRACT(report, '$.categories.accessibility.score') AS NUMERIC) AS accessibility,
    CAST(JSON_EXTRACT(report, '$.categories.best-practices.score') AS NUMERIC) AS best_practices,
    CAST(JSON_EXTRACT(report, '$.categories.seo.score') AS NUMERIC) AS seo,
    CAST(JSON_EXTRACT(report, '$.categories.pwa.score') AS NUMERIC) AS pwa
  FROM
    `httparchive.lighthouse.2020_09_01_mobile`),
    #`httparchive.sample_data.lighthouse_mobile_10k`), #Cheaper test table
  UNNEST([10, 25, 50, 75, 90]) AS percentile
GROUP BY
  date,
  percentile
ORDER BY
  date,
  percentile

From the above query we get the stats below, to which I've applied colour-coded formatting:

Percentile Performance Accessibility Best Practices SEO PWA
10 8 56 64 69 14
25 16 69 64 80 25
50 31 80 71 86 29
75 55 88 79 92 36
90 80 95 86 99 54

It should be noted these scores are based on Lighthouse 6, which was the latest available at the time of the analysis. Since then Lighthouse 7 has come and gone and Lighthouse 8 is now the latest. The audits are continually being updated, improved and added to, but I don't expect any material differences from what I have presented here. Maybe at some point in the future I'll rerun this analysis and see if anything's changed, either in Lighthouse or the web.

So what does this tell us? The query breaks down the scores in each category into percentiles, rather than just showing averages, as this gives us a greater insight into how the scores are spread across the web. For those not familiar with percentiles, this table tells us that 10% of sites score 8 or less on Performance, and 90% of sites score 80 or less on Performance, or, to put it another way, only 10% of sites score higher than 80 in the Performance category. The 50th percentile is the median score.
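If the OFFSET(percentile * 10) arithmetic in the query above looks cryptic, this minimal sketch (my own illustration, run against the cheaper sample table mentioned in the query comments) shows the idea: APPROX_QUANTILES(x, 1000) returns 1,001 quantile boundaries, so element 500 approximates the median and element 900 the 90th percentile.

#standardSQL
SELECT
  APPROX_QUANTILES(performance, 1000)[OFFSET(500)] AS median_performance,
  APPROX_QUANTILES(performance, 1000)[OFFSET(900)] AS p90_performance
FROM (
  SELECT
    CAST(JSON_EXTRACT(report, '$.categories.performance.score') AS NUMERIC) AS performance
  FROM
    `httparchive.sample_data.lighthouse_mobile_10k`) # cheaper test table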

What's immediately apparent from the table above is that Performance is a very tough category to do well in, with a much larger spread of scores (ignoring the PWA category, which is a special case and which we'll discuss more later). In fact the Performance score is set as a weighted average of metric scores, where each metric is scored against a log-normal distribution anchored to control points around the 50th and 90th percentiles of HTTP Archive data. That complicated sentence basically means the distribution shown above is expected. The other categories are simply weighted sums of the scores of their various audits. This makes it much more difficult to get a high Performance score. In fact, it is impossible for the majority of sites to score highly in this category: if every website improved its performance at once (what a world that would be), then the bar would just be shifted higher and the percentiles would remain roughly the same. That is not to say you shouldn't aim to boost your performance score – your users are also subconsciously comparing your site's speed to the other websites they visit, after all! However, it does go some way to explaining my past experience that it is easier to get to 100 in the other categories with less effort than in the Performance category.
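To make that a little more concrete, here is a sketch of the scoring maths as I understand it from the Lighthouse scoring documentation (my paraphrase, not a formula lifted from the Lighthouse source): each raw metric value is converted to a 0 to 1 score using the complement of a log-normal cumulative distribution fitted to the control points, and the category score is the weighted average of those metric scores.

% score for metric i with raw value x_i, where \Phi is the standard normal CDF
% and \mu_i, \sigma_i are derived from that metric's control points
s_i = 1 - \Phi\!\left( \frac{\ln x_i - \mu_i}{\sigma_i} \right)

% Performance category score as a weighted average of the metric scores
\text{Performance score} = 100 \times \frac{\sum_i w_i \, s_i}{\sum_i w_i}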

Adding in some of the higher percentiles (which I deliberately excluded from the initial table above to avoid skewing your perceptions), to look more at the top end, we see the following:

Percentile Performance Accessibility Best Practices SEO PWA
10 8 56 64 69 14
25 16 69 64 80 25
50 31 80 71 86 29
75 55 88 79 92 36
90 80 95 86 99 54
95 93 97 93 100 54
99 99 100 93 100 64

Here we see only 1% of websites score 99 or higher in the Performance category, while Accessibility and SEO reach the 100 mark. Again, I want to stress that 100 doesn't mean there are no Accessibility or SEO issues with sites that achieve this! To prove the point, Manuel Matuzović wrote Building the most inaccessible site possible with a perfect Lighthouse score, an interesting take on gaming the score the wrong way for a change! I took up the theme and did a similar post on Making the slowest "fast" page.

The Best Practices category is an interesting one, as it is stubbornly stuck on 93 at the 95th and 99th percentiles. As we shall see, this category score is made up of 14 evenly weighted audits (meaning each is worth roughly 100/14 = 7 points), so the table above shows that nearly every site is failing at least one of them. As I'll show below, the No Vulnerable Libraries audit is a particularly poor performer (only 16.61% of sites pass it), which likely goes a long way to explaining the topping out of this score. We'll get into the accuracy of that "No Vulnerable Libraries" audit below, as it is somewhat in dispute.
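The arithmetic behind that ceiling (my own back-of-the-envelope working, not a figure from the Lighthouse documentation) looks like this:

\frac{100}{14} \approx 7.1 \text{ points per audit}
\qquad
\frac{13}{14} \times 100 \approx 92.9 \;\Rightarrow\; \text{a displayed score of } 93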

The Progressive Web Apps (PWA) category looks for particular technologies like service workers, which are very much in the minority across websites (though they are used on a much higher percentage of page loads, as popular sites use them). This explains why those scores are lower when looking across all sites, and I honestly don't expect this to change much in the future, especially given the long tail of existing sites on the web. In fact many tools that use Lighthouse don't actually display the PWA category (for example, the screenshot at the beginning of this article), and even when they do, they remove the PWA score itself and use it more as a placeholder to allow access to the underlying audits.

The big takeaway for me from looking at these high-level scores is that Performance (and PWA!) are treated differently to the others, and that can be a cause of confusion. The Google team have worked very hard on finding the best way to measure performance, and Lighthouse 6 brought in new metrics and weightings, including the Core Web Vitals. In particular the Cumulative Layout Shift (CLS) metric has caused a bit of a storm, especially in the SEO world, with the announcement that these are to be a ranking factor in the future. So far these seem to have been well received in the performance community as a good proxy for real performance as experienced by users, so it is positive that they are featured so prominently in the Performance category.

However, it does leave the other major categories (Accessibility, Best Practices and SEO) a bit behind, and maybe more work needs to be done in those categories to make them more accurately reflect how these topics are experienced by users in the real world? Or perhaps we just need more metrics there so we can push web owners forward? Lighthouse is seen by many as an important indicator of how Google "sees" websites and, while Google can often be seen as using its considerable influence unfairly, I want the web to get better and think Lighthouse can be used to drive us website owners there.

Next let's dig a little deeper into what makes up the scores in each category...

Performance Metrics

The Performance score is calculated from 6 key weighted metrics (including three Core Web Vitals metrics). However, Lighthouse also runs 48 other performance audits that do not directly influence the score, but which likely indirectly influence the metrics the score does measure. These are what you see as the suggestions, or Opportunities, when you run a Lighthouse audit of your website.

To get these metrics, and the percentage of sites passing each audit, we run the SQL below in BigQuery (note that this processes around 2 TB and will therefore cost approximately $10 to run):

#standardSQL
# Get summary of all lighthouse scores for a category
# Note scores, weightings, groups and descriptions may be off in mixed months
# when a new version of Lighthouse rolls out
CREATE TEMPORARY FUNCTION getAudits(report STRING, category STRING)
RETURNS ARRAY<STRUCT<id STRING, weight INT64, audit_group STRING, title STRING, description STRING, score INT64>>
LANGUAGE js AS '''
  var $ = JSON.parse(report);
  var auditrefs = $.categories[category].auditRefs;
  var audits = $.audits;
  $ = null;
  var results = [];
  for (auditref of auditrefs) {
    results.push({
      id: auditref.id,
      weight: auditref.weight,
      audit_group: auditref.group,
      description: audits[auditref.id].description,
      score: audits[auditref.id].score
    });
  }
  return results;
''';

SELECT
  'performance' AS category,
  audits.id,
  COUNTIF(audits.score > 0) AS num_pages,
  COUNT(0) AS total,
  COUNTIF(audits.score IS NOT NULL) AS total_applicable,
  SAFE_DIVIDE(COUNTIF(audits.score > 0), COUNTIF(audits.score IS NOT NULL)) AS pct,
  APPROX_QUANTILES(audits.weight, 100)[OFFSET(50)] AS weight,
  MAX(audits.audit_group) AS audit_group,
  MAX(audits.description) AS description
FROM
  `httparchive.lighthouse.2020_09_01_mobile`,
  #`httparchive.sample_data.lighthouse_mobile_10k`, #Cheaper test table
  UNNEST(getAudits(report, "performance")) AS audits
WHERE
  LENGTH(report) < 20000000 # necessary to avoid out of memory issues. Excludes 16 very large results
GROUP BY
  audits.id
ORDER BY
  category, weight DESC, id

The results of this query are a bit too big to include in this article, but they are available in this public Google Sheets spreadsheet, and I'll share a few snippets here. First up, the 6 metrics that account for the score:

id num_pages total total applicable pct weight audit_group
largest-contentful-paint 1,429,564 6,800,350 6,684,685 21.32% 25 metrics
total-blocking-time 2,245,286 6,811,475 6,006,553 37.35% 25 metrics
first-contentful-paint 3,525,603 6,811,475 6,709,637 52.38% 15 metrics
interactive 1,284,134 6,811,475 6,006,553 21.36% 15 metrics
speed-index 2,740,603 6,811,475 6,704,904 40.76% 15 metrics
cumulative-layout-shift 4,651,649 6,800,350 6,698,649 69.14% 5 metrics

Here we see the weightings as described in the Lighthouse Scoring Calculator. We can see that the new CLS metric is actually the highest-passing metric (69.14% of sites pass it), though note the Lighthouse score is not what will be used for the ranking impact, as that is measured on field data. Read my article on How To Fix Cumulative Layout Shift (CLS) Issues to further improve that. However, Largest Contentful Paint (LCP) is a pain point for 79% of sites, as is Time to Interactive (TTI).

The eagle-eyed amongst you may also have spotted that LCP and CLS were run on 11,125 fewer sites. These are new metrics in the recently rolled-out Lighthouse 6, and it looks like some of the crawlers are still on Lighthouse 5. 99.83% of sites used the new Lighthouse 6, so I don't think these outliers are statistically significant.
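For anyone wanting to check that split themselves, a query along these lines should show it (a sketch on my part; it assumes, as I believe is the case, that the Lighthouse report JSON exposes a top-level lighthouseVersion field):

#standardSQL
# Count sites by the Lighthouse version that produced their report
# (scans the full report column, so this is not a cheap query)
SELECT
  JSON_EXTRACT_SCALAR(report, '$.lighthouseVersion') AS lighthouse_version,
  COUNT(0) AS num_sites
FROM
  `httparchive.lighthouse.2020_09_01_mobile`
  #`httparchive.sample_data.lighthouse_mobile_10k` # cheaper test table
GROUP BY
  lighthouse_version
ORDER BY
  num_sites DESC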

Delving beyond these metrics there are other interesting points. Here are the five lowest passing audits (excluding the weighted ones above):

id num_pages total total applicable pct weight audit_group
font-display 1,625,219 6,811,475 6,726,646 24.06% 0 diagnostics
first-cpu-idle 1,768,675 6,811,475 6,676,622 26.46% 0 metrics
uses-long-cache-ttl 1,823,701 6,811,475 6,730,675 27.04% 0 diagnostics
max-potential-fid 1,860,167 6,811,475 6,709,637 27.70% 0 metrics
render-blocking-resources 1,944,698 6,811,475 6,697,039 28.93% 0 load-opportunities

The Font Display audit is only passed by 24.06% of sites, suggesting use of font-display: optional could be improved for those willing to live with a Flash of Unstyled Text (FOUT). See my Should you self-host Google Fonts? article for more information on this.

First CPU Idle and Max Potential First Input Delay (FID) suggest perhaps we are using too much JavaScript, though in my experience images also take up considerable CPU on the low-end devices that Lighthouse simulates. So make sure you size your images appropriately, both in layout and in bytes sent, to reduce the strain on end devices resizing them, as well as reducing the network impact.

Long cache times are something I actually disagree with, to be honest, and think are over-rated. In my opinion the likelihood of your assets staying in the browser cache is not as high as you think, and short cache times (e.g. an hour or three) still make your site feel zippy as you navigate around it, while meaning any deployment errors will soon fix themselves.

The Render Blocking Resources audit is an interesting one. I've written before about why I don't like inlining CSS, but even then I was surprised to see 28.93% of sites passing this, until I found the source code for this audit (love open source btw!) and saw that it "ignores stylesheets that loaded fast enough to possibly be non-blocking". Testing it on this site, for example (where I use HTTP/2 push for the CSS), I don't fail this audit despite not inlining. This was one of a number of audits that I discovered were more intricate than they may seem at first, and it makes me even more impressed by Lighthouse!

Related to the above two points, I think that some web performance tools, like Lighthouse, perhaps over-emphasise the first load time to the detriment of the second load time. While the initial load time is important, IMHO these tools should also be looking at ensuring a performant experience as people browse around your website (though I admit that not everyone does, and some visitors will be single-page visitors). The long cache time audit is basically the only one in Lighthouse that considers second load time, and even then it is only indirectly related, since as long as some cache times are set, in-session browsing should be faster. Personally I'll forgive a slight delay in a website loading, and possibly blame my network (especially on mobile) rather than the website itself, but I will 100% blame the site if clicking on another page gives the same slow load experience. Anyway, I digress...

I'll leave Performance here and let readers delve into the raw data to see what else they can find – comment below if you do find any interesting nuggets! Those who do look at the other stats might also notice that some audits (e.g. critical-request-chains) have 0 sites passing them. This is because some of the audits don't come up with a score but provide more information instead (e.g. the chain of requests forming the critical path). As those are less easy to map to a summary spreadsheet, I haven't delved into them further.

Accessibility Metrics

Moving on to Accessibility, we can run the same query as for Performance but change the category (pro tip: use a UNION ALL and run all 5 categories at once to save yourself some money; a sketch of that approach follows below). We see there are 51 audits captured, of which 22 influence the total Accessibility score in a weighted average, as listed in the Accessibility Scoring article. The pass percentages of those 22 weighted audits are shown in the table after the sketch:
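Here's what that combined query can look like (a sketch rather than the exact query used to produce the tables in this post): define the getAudits UDF from the Performance section once, then stitch the per-category results together with UNION ALL so several categories are pulled in a single job. Only two categories are shown here to keep it short; the others follow the same pattern, and the cheaper sample table is used so it can be tried without a big bill.

#standardSQL
CREATE TEMPORARY FUNCTION getAudits(report STRING, category STRING)
RETURNS ARRAY<STRUCT<id STRING, weight INT64, audit_group STRING, description STRING, score INT64>>
LANGUAGE js AS '''
  var $ = JSON.parse(report);
  var auditrefs = $.categories[category].auditRefs;
  var audits = $.audits;
  $ = null;
  var results = [];
  for (const auditref of auditrefs) {
    results.push({
      id: auditref.id,
      weight: auditref.weight,
      audit_group: auditref.group,
      description: audits[auditref.id].description,
      score: audits[auditref.id].score
    });
  }
  return results;
''';

SELECT
  'accessibility' AS category,
  audits.id,
  APPROX_QUANTILES(audits.weight, 100)[OFFSET(50)] AS weight,
  SAFE_DIVIDE(COUNTIF(audits.score > 0), COUNTIF(audits.score IS NOT NULL)) AS pct
FROM
  `httparchive.sample_data.lighthouse_mobile_10k`, # cheaper test table
  UNNEST(getAudits(report, 'accessibility')) AS audits
WHERE
  LENGTH(report) < 20000000 # avoid out of memory issues on very large reports
GROUP BY
  audits.id

UNION ALL

SELECT
  'seo' AS category,
  audits.id,
  APPROX_QUANTILES(audits.weight, 100)[OFFSET(50)] AS weight,
  SAFE_DIVIDE(COUNTIF(audits.score > 0), COUNTIF(audits.score IS NOT NULL)) AS pct
FROM
  `httparchive.sample_data.lighthouse_mobile_10k`, # cheaper test table
  UNNEST(getAudits(report, 'seo')) AS audits
WHERE
  LENGTH(report) < 20000000
GROUP BY
  audits.id

ORDER BY
  category, weight DESC, id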

id num_pages total total applicable pct weight audit_group
aria-allowed-attr 3,342,675 6,782,042 3,469,250 96.35% 10 a11y-aria
aria-hidden-body 6,697,276 6,770,949 6,697,735 99.99% 10 a11y-aria
aria-required-attr 3,622,497 6,782,042 3,657,978 99.03% 10 a11y-aria
aria-required-children 3,391,591 6,782,042 3,657,976 92.72% 10 a11y-aria
aria-required-parent 3,621,030 6,782,042 3,657,978 98.99% 10 a11y-aria
aria-roles 3,619,429 6,782,042 3,657,741 98.95% 10 a11y-aria
aria-valid-attr 3,452,899 6,782,042 3,469,735 99.51% 10 a11y-aria
aria-valid-attr-value 3,333,294 6,782,042 3,469,735 96.07% 10 a11y-aria
button-name 2,522,871 6,782,042 3,761,967 67.06% 10 a11y-names-labels
image-alt 3,488,991 6,782,042 6,396,177 54.55% 10 a11y-names-labels
label 973,238 6,782,042 3,629,773 26.81% 10 a11y-names-labels
meta-viewport 3,958,881 6,782,042 6,046,393 65.48% 10 a11y-best-practices
bypass 6,252,309 6,782,042 6,644,249 94.10% 3 a11y-navigation
color-contrast 1,396,329 6,782,042 6,579,723 21.22% 3 a11y-color-contrast
document-title 6,655,738 6,782,042 6,729,419 98.91% 3 a11y-names-labels
duplicate-id-active 3,615,654 6,770,949 3,991,362 90.59% 3 a11y-navigation
html-has-lang 5,282,696 6,782,042 6,729,419 78.50% 3 a11y-language
html-lang-valid 5,306,327 6,782,042 5,342,829 99.32% 3 a11y-language
link-name 1,946,209 6,782,042 6,613,258 29.43% 3 a11y-names-labels
list 4,832,332 6,782,042 5,397,508 89.53% 3 a11y-tables-lists
listitem 4,996,927 6,782,042 5,380,236 92.88% 3 a11y-tables-lists
heading-order 3,505,837 6,770,949 6,014,415 58.29% 2 a11y-navigation

We can see it is the usual suspects, missing labels, colour contrast and link text, which are the lowest scoring. This has been covered well before in last year's Web Almanac chapter on Accessibility, and in other publications like WebAIM's Million report, so I won't dwell on these depressing stats too much here, but instead talk more about Lighthouse's methodology and whether its accessibility score is something to be taken seriously.

In all automated checks, but accessibility in particular, it is impossible to fully test a site for "passing", and trying to measure whether a site is accessible or not based on easily tested metrics is an impossible challenge. Certain things cannot be tested easily, and other tests will give rise to false positives and therefore call into question the validity of the audit.

In my experience Lighthouse does a good job of keeping the latter to a minimum: it only includes the less noisy checks in the scoring, and leaves the other checks as 0-weighted advice that can be reported on as observations without penalising the site. That's not to say it is perfect, or that it doesn't get it wrong on occasion, but in general I think it does a pretty good job. So, in general, the points it highlights are real issues and should be addressed by website owners; IMHO there is no reason NOT to get a score of 100 in the Lighthouse Accessibility audit with a bit of care and attention.

A score of 100 won't indicate that the site is fully accessible, but a score of less than 100 means there is a good chance it is not. This is different to Performance where a score of 100 is very, very difficult (and probably requires making some tough choices about what you have on your site). In general a score of 90 in Performance is great and 95 is fantastic. But for Accessibility you should be aiming for the 100 in my opinion.

Nothing beats a manual audit completed by an accessibility expert of course, but that is more difficult and time-consuming (and impossible to complete on 6.8 million websites for an overall view like this!), so I still think there is value in what tools like Lighthouse provide in highlighting the basics. These basics should be addressed before you complete a manual audit, and being more aware of them can only lead to better development and less clean-up in future.

In the accessibility space there are other dedicated tools, like WebAIM's WAVE Web Accessibility Evaluation Tool and Deque's axe, which might make Lighthouse's offering seem less authoritative in comparison. However, Lighthouse actually uses axe-core to perform its audit, with most of the WCAG 2 A and WCAG 2 AA rules, so don't dismiss Lighthouse as a "lighter" or "lesser" tool in this space, as I have been guilty of in the past. Yes, it is easier to get a good Accessibility score in Lighthouse than maybe it should be, but again there is a balance to be struck here between providing useful feedback to the majority of website owners, and making that feedback less useful due to noise! I would still recommend running the other tools on your site, and they will give more detail, but also more noise. However, don't dismiss Lighthouse's offering at the same time.

Best Practice Metrics

The Lighthouse Best Practices audits are a smaller collection of 17 audits (14 of which are equally weighted towards the total score) that don't fit into the other categories but are, as the name suggests, things websites should be doing to create the best websites and experiences for their end users.

id num_pages total total applicable pct weight audit_group
appcache-manifest 6,729,137 6,782,042 6,730,674 99.98% 1 best-practices-general
charset 6,618,224 6,770,949 6,693,116 98.88% 1 best-practices-browser-compat
deprecations 6,492,702 6,782,042 6,730,675 96.46% 1 best-practices-general
doctype 5,819,488 6,782,042 6,730,675 86.46% 1 best-practices-browser-compat
errors-in-console 3,595,846 6,782,042 6,730,675 53.42% 1 best-practices-general
external-anchors-use-rel-noopener 2,373,448 6,782,042 6,730,615 35.26% 1 best-practices-trust-safety
geolocation-on-start 6,692,318 6,782,042 6,730,675 99.43% 1 best-practices-trust-safety
image-aspect-ratio 5,658,654 6,782,042 6,730,533 84.07% 1 best-practices-ux
image-size-responsive 3,288,479 6,770,949 6,719,507 48.94% 1 best-practices-ux
is-on-https 4,563,502 6,782,042 6,730,675 67.80% 1 best-practices-trust-safety
no-unload-listeners 3,761,283 6,770,949 6,719,647 55.97% 1 best-practices-general
no-vulnerable-libraries 1,126,110 6,782,042 6,781,303 16.61% 1 best-practices-trust-safety
notification-on-start 6,730,675 6,782,042 6,730,675 100.00% 1 best-practices-trust-safety
password-inputs-can-be-pasted-into 6,729,531 6,782,042 6,730,614 99.98% 1 best-practices-ux

As you can see, they include checks like whether a doctype is set, whether HTTPS is used, and whether there are errors in the console, etc. In most cases they are fairly easy to resolve, sites should score highly for most of these audits, and indeed we see most sites pass them. Like Accessibility, I would recommend shooting for 100 here.

However, as mentioned above, only 16.61% of sites pass the "No Vulnerable Libraries" audit, and therefore this one audit is what often holds sites back from the perfect 100 score. It is a depressing fact, but JavaScript Libraries Are Almost Never Updated Once Installed, so there are a huge number of old jQuery installs and the like with vulnerabilities that cause this audit to flag. Some major players like WordPress and Joomla even ship these older versions, though they claim to have patched out the vulnerabilities. Due to their popularity, some tools like WebPageTest and Snyk have written special exceptions to ignore those "false positives" (primarily to avoid noise for their users and, I suspect, to avoid being inundated with support tickets themselves!). Lighthouse has taken a harder line, and I can't say I disagree with this stance to be honest. The truth is we need to get better at maintaining our libraries (as an aside, here's an interesting Twitter conversation on that).

On a related note, it would be good if Lighthouse had a dedicated Security category. While we are well served by tools like SSL Labs, SecurityHeaders.com and Mozilla Observatory, to name but a few, it would still be nice to see some of the checks they perform integrated into Lighthouse as a one-stop shop. Between checking TLS config, security headers, and some of the other audits in this category (vulnerable libraries, password pasting, etc.), it feels like there's more than enough to fill a dedicated Security category. There's been a long-standing issue open for this in Lighthouse, but little progress. Given the importance of security on the web, and Google taking a leading role in pushing things like HTTPS, this is disappointing.

SEO Metrics

The Lighthouse SEO category runs only 15 audits (one of which, Structured Data, does not appear to run anything yet and is not included in the score). The 15 audits are shown below:

id num_pages total total applicable pct weight audit_group
canonical 3,584,060 6,782,042 3,724,135 96.24% 1 seo-content
crawlable-anchors 4,215,415 6,770,949 6,719,588 62.73% 1 seo-crawl
document-title 6,655,738 6,782,042 6,729,419 98.91% 1 seo-content
font-size 5,732,238 6,782,042 6,710,271 85.42% 1 seo-mobile
hreflang 6,699,165 6,782,042 6,730,675 99.53% 1 seo-content
http-status-code 6,730,675 6,782,042 6,730,675 100.00% 1 seo-crawl
image-alt 3,488,991 6,782,042 6,396,177 54.55% 1 seo-content
is-crawlable 6,633,682 6,782,042 6,727,279 98.61% 1 seo-crawl
link-text 5,711,976 6,782,042 6,730,615 84.87% 1 seo-content
meta-description 4,439,472 6,782,042 6,730,675 65.96% 1 seo-content
plugins 6,685,955 6,782,042 6,730,586 99.34% 1 seo-content
robots-txt 5,072,082 6,782,042 5,447,685 93.11% 1 seo-crawl
tap-targets 5,500,226 6,782,042 6,730,042 81.73% 1 seo-mobile
viewport 5,996,595 6,782,042 6,730,675 89.09% 1 seo-mobile
structured-data 0 6,782,042 0 0

Like Accessibility, this category is prone to being dismissed by SEO experts more used to their own tools, but again I think that's a missed opportunity. The audits above are all good things to check, and rarely cause false positives.

Like the other categories, it is important to understand what is being checked. The canonical check doesn't check that a canonical link is set - few enough sites set these, and they are only needed if there will be multiple variants, so that would be prone to false positives. Instead it checks that, if it is set, it is set to what Lighthouse considers a valid value. Now you can argue about whether its definition of valid is correct (I in particular think you can rel="canonical" to a page on another domain, though apparently Microsoft's Bing and Yahoo used to disagree), but in general you'd imagine you'll be running Lighthouse on the canonical page, so that should not be a big deal. It is similar with the hreflang check which, again, only flags if it is set to an invalid value.

One thing that all the Lighthouse SEO checks have in common is that they are technical (or on-site) SEO checks. Lighthouse is about auditing the page itself, and doesn't use any off-page resources to check keywords, backlinks, domain authority, competitor analysis or suchlike. Such information would be nice, but that is getting a bit into the dark arts of SEO and, especially given Google's weight in this area and its secrecy as to exactly how its search engine works, it's unsurprising they stick to the more easily measured statistics.

One advantage Lighthouse does have over some other SEO tools and crawlers is that it executes JavaScript (like Googlebot itself does), so its audits should be completed on the DOM produced after JavaScript has run. This can produce different results from a technical SEO perspective (both better and worse!), but it should reflect what Googlebot sees, rather than the raw HTML returned for the page, which some crawlers use and which may not reflect what the final page looks like.

This category is one that most sites should easily be able to score highly in, and it is no surprise to see very high pass rates for most of the audits above. Similar to Accessibility and Best Practices, I think a score of 100 is easily achievable and is what sites should be aiming for. The lack of image alt attributes is disappointing (more so from an Accessibility perspective than an SEO one in my opinion), but most of the rest of the scores are fairly high.

I think the main issue with the SEO metrics is that there could be more of them. Structured Data is an obvious one that is already being worked on, but other items could be added to extend this category and provide more guidance to websites (particularly those without the ability to pay for dedicated SEO expertise). I'm not sure exactly what audits could be added (word counts, headings, noindex checks and/or favicons, off the top of my head), but it definitely feels like there is more scope here.

PWA Metrics

And so to the last "category": PWA. I use the term category in inverted commas as it is clearly treated in a different light to the other 4, with many Lighthouse instances not showing the score. I always thought that meant the page didn't qualify as a PWA but, in digging into this, I found even sites that are considered PWAs have the score hidden. Digging around the Lighthouse GitHub repo, it seems this is intentional, even though a score is calculated and available when querying the data directly (like we are doing here).

The PWA category runs 17 audits (14 of which count towards the hidden score) in various weightings:

id num_pages total total applicable pct weight audit_group
load-fast-enough-for-pwa 1,876,086 6,782,042 6,706,336 27.97% 7 pwa-fast-reliable
works-offline 62,527 6,782,042 6,730,675 0.93% 5 pwa-fast-reliable
installable-manifest 146,669 6,782,042 6,782,042 2.16% 2 pwa-installable
is-on-https 4,563,502 6,782,042 6,730,675 67.80% 2 pwa-installable
redirects-http 4,742,365 6,782,042 6,689,418 70.89% 2 pwa-optimized
viewport 5,996,595 6,782,042 6,730,675 89.09% 2 pwa-optimized
apple-touch-icon 2,391,350 6,782,042 6,730,675 35.53% 1 pwa-optimized
content-width 5,371,210 6,782,042 6,730,675 79.80% 1 pwa-optimized
maskable-icon 9,404 6,770,949 6,770,949 0.14% 1 pwa-optimized
offline-start-url 52,224 6,782,042 6,730,657 0.78% 1 pwa-fast-reliable
service-worker 71,388 6,782,042 6,730,675 1.06% 1 pwa-installable
splash-screen 128,145 6,782,042 6,782,042 1.89% 1 pwa-optimized
themed-omnibox 269,834 6,782,042 6,730,675 4.01% 1 pwa-optimized
without-javascript 6,532,429 6,782,042 6,689,492 97.65% 1 pwa-optimized
pwa-cross-browser 0 6,782,042 0 0
pwa-each-page-has-url 0 6,782,042 0 0
pwa-page-transitions 0 6,782,042 0 0

Loads fast enough for PWA gets a very heavy weighting compared to the others, and it's understandable why: for a PWA to compete with a native app it needs to feel fast. It is based on Time to Interactive (TTI), but TTI has had its weighting in the Performance category more than halved, from 33% to 15%, in Lighthouse 6, so perhaps this is a little simplistic? Maybe the Core Web Vitals are better metrics to use here - or would that just be repeating the Performance category?

The rest of the audits check for specific characteristics that are considered necessary to be a PWA (manifests and service workers), plus a whole load of best-practice items that are helpful even if your website is not a PWA. It is unsurprising to see the non-PWA-specific audits getting much, much higher results, and very low results – less than 2% – for the PWA-specific ones. As I alluded to above, PWAs are a very niche technology at present and, although usage is growing, I expect it will be a very long time before we see significant usage of this technology on home pages, unless a major player like WordPress uses it by default.

Given the relatively low uptake, I do question whether PWA deserves its own category or should simply be lumped under the Best Practices category. As I stated earlier, I personally think a Security category is much more important than this one, and would fit in better with the other four categories. PWAs have been pushed for a while now, the opportunities are indeed powerful, and support is pretty much there in all browsers now (avoiding arguments like installability, which has PWA advocates giving out about Safari). However, the same can be said of many other technologies (e.g. WebAssembly) and they don't (and shouldn't, btw) get their own Lighthouse category.

Conclusion

Well, that's a healthy dose of analysis (and opinion!) on what Lighthouse scores look like across the web. I've learnt a few things about the state of the web, but also about how Lighthouse works. It is a very impressive tool and the more I learn about it, the more impressed I am by it. I've noted some ways I think it could be improved in this article, but that's not to take away from the impressive feat it already manages, or the hard work that goes into it! Even as it stands it does a lot, and more website owners would do well to use the tool and address the issues it points out. I've written a post on how to automatically run it, and the considerations to be taken into account when doing that.

The differences between Performance and the other categories are interesting, and site owners would do well to understand them too. While 100 across the board is a lovely thing to boast about on Twitter, it may involve making too many compromises in the Performance category with little actual perceivable gain to the end user. A definite risk with any scoring tool is people "chasing the grade" when time is better spent elsewhere, so site owners need to realise this and get comfortable with a 90+ score in this category. The different way Performance is measured, however, also creates issues for the other 3 main categories as, in my opinion, missed points there highlight real issues that are usually easy enough to fix, and site owners should be aiming for 100 in those categories. And then there's the PWA category, which is in its own little world. ¯\_(ツ)_/¯

Let me know below if you spot anything else of interest, or if you spot any mistakes in this article. As I said before, the raw results from the above queries are available in this Google Sheet for you to explore too.

This analysis started from working on the Web Almanac, and if this article is the sort of thing that piques your interest, then I'd strongly suggest you check out the 2019 Web Almanac for a lot of similar analysis of websites using Lighthouse and a lot of other data from the HTTP Archive and the Chrome UX Report. We're also hard at work on this year's edition, and in the next few months we should be publishing the 2020 Web Almanac with all-new info to delve into.

Many thanks to Matt Hobbs for putting up with the typos in early drafts of this. The remaining typos and other errors are still my fault and not his but it is definitely more readable now thanks to his input :-)

Thanks also to David Fox, who spotted a problem in the SQL and suggested only counting applicable sites for each audit, so the data was updated slightly from first publish, though the overall theme of the post has not changed.

