Why improving only your Lighthouse score doesn't guarantee a fast website

In a nutshell...

Proud of your 100% Lighthouse score? Well deserved!

However, keep in mind that this only gives you a partial view of your site's performance. Explore the differences between Lighthouse metrics and other tools, understand their impact on performance indicators, and discover why tracking real users is essential for a complete picture.

Lighthouse performance report for La FOSSE

Optimize speed...

This moment is familiar to everyone: you're working to optimize a site's loading speed, paying attention to every millisecond. To measure performance, you use Google Lighthouse in Chrome's developer tools (DevTools), a common choice for evaluating speed improvements.


After generating dozens of reports and applying all the recommendations, you finally get a perfect 100% score on Google Lighthouse! It’s gratifying, of course, and you might even consider mentioning it in a request for a raise. But don’t rely on this score alone. Google Lighthouse is a good indicator of certain improvements, but it doesn’t reflect the full reality of site performance for real users. It’s just one tool among many, offering an incomplete perspective.

As you can see, even some of the world’s great names fall short of perfection.

To run a report with Lighthouse, open DevTools, go to the Lighthouse tab, and generate the report. You can configure Lighthouse to test on throttled connections, or create separate reports for mobile and desktop. Integrated into Chrome and Google's PageSpeed Insights tool, it produces results in record time: just 10 to 15 seconds. However, while Lighthouse is practical and fast, it does not offer a complete view of actual performance, and its results may differ from those of other tools.
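Under the hood, the performance score Lighthouse reports is simply a weighted average of a handful of normalized metric scores. The sketch below illustrates the idea; the weights follow Lighthouse v10 and are an assumption here, since they change between versions:

```javascript
// Sketch: Lighthouse's performance score is a weighted average of
// individual metric scores, each already normalized to the 0..1 range.
// These weights mirror Lighthouse v10 and may differ in other versions.
const WEIGHTS = {
  'first-contentful-paint': 0.10,
  'speed-index': 0.10,
  'largest-contentful-paint': 0.25,
  'total-blocking-time': 0.30,
  'cumulative-layout-shift': 0.25,
};

// metricScores: map of metric id -> normalized score in [0, 1]
function performanceScore(metricScores) {
  let total = 0;
  for (const [id, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[id];
  }
  return Math.round(total * 100); // Lighthouse displays 0..100
}
```

This is why two reports can show the same overall score while hiding very different metric profiles: a slow LCP can be masked by an excellent TBT, and vice versa.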

This does not reflect the experience of “real users”.

In web performance, not all data is created equal. Lighthouse reports are based on simulated, or "synthetic," data, obtained through assumptions. This approach is what makes Lighthouse fast. For example, by throttling the connection speed in Lighthouse, you can test both slow and fast browsing conditions. By default, Lighthouse uses a fast connection, but it can be adjusted to simulate slower loads, although this remains an estimate of performance on a different connection.

Once again, the test environment is a simulation, not a faithful representation of reality. The throttling conditions used probably don't correspond to the real connection speeds of an average user on the site, who might have a faster connection or a slower processor. What Lighthouse offers is closer to an instant, simplified test.

Simulated data is useful for rapid testing under controlled conditions, but it sacrifices accuracy by assuming connection speeds and averages that may deviate from your users' reality.

Although simulated throttling is the default setting in Lighthouse, it is also possible to apply throttling methods that are closer to real-life conditions. These tests take longer but provide more accurate results. For more realistic testing with Lighthouse, consider online tools such as DebugBear or WebPageTest.
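To see why simulated throttling is only an estimate, consider how a timing can be recomputed from assumed connection parameters rather than observed on a real slow connection. The toy function below uses numbers in the spirit of Lighthouse's "Slow 4G" defaults (roughly 1.6 Mbps downlink, 150 ms round-trip time); both the formula and the defaults are simplifying assumptions for illustration:

```javascript
// Toy illustration of simulated throttling: estimate how long a
// resource takes to download under an *assumed* connection, instead
// of measuring a real one. Defaults loosely mirror Lighthouse's
// "Slow 4G" profile (~1.6 Mbps down, 150 ms RTT) -- assumptions here.
function estimateDownloadMs(bytes, { throughputKbps = 1600, rttMs = 150 } = {}) {
  const transferMs = (bytes * 8) / throughputKbps; // bits / (kbits/s) = ms
  return rttMs + transferMs; // one round trip plus transfer time
}
```

A real user on "slow 4G" rarely sees a constant 1.6 Mbps; bandwidth fluctuates, which is exactly the gap between a simulation and field data.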

This does not affect Core Web Vitals scores.

Core Web Vitals are the standard performance criteria defined by Google. They go far beyond a simple "Your page loaded in X seconds" report by analyzing a series of finer details that provide a better understanding of how the page loads, which resources block other elements, the impact of slow user interactions, as well as how the page moves as resources and content load. Matt Zeunert has an excellent article on this subject on Smashing Magazine, which explains each indicator in detail.

The main idea here is that the simulated data generated by Lighthouse can (and often does) differ from the measurements obtained with other performance tools. I've taken the time to explain this in depth in another article. The important thing to remember is that Lighthouse scores do not affect Core Web Vitals data. This is because Core Web Vitals is based on data collected from real users, extracted from the Chrome User Experience Report (CrUX), which aggregates field data over a rolling 28-day window. Although CrUX data therefore lags slightly behind the present, it offers a far more faithful view of actual user behavior and browsing conditions than the data simulated in Lighthouse.
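CrUX doesn't hand you raw sessions; it exposes coarse histograms per metric, and Core Web Vitals are assessed at the 75th percentile. The sketch below estimates that p75 from CrUX-style bins by linear interpolation; the bin values in the test are made-up sample data, not real CrUX output:

```javascript
// Sketch: CrUX exposes field data as coarse histograms, and Core Web
// Vitals thresholds are assessed at the 75th percentile (p75).
// Given CrUX-style bins ({ start, end, density }), estimate p75 by
// interpolating within the bin where the cumulative density crosses 0.75.
function p75(bins) {
  let cumulative = 0;
  for (const { start, end, density } of bins) {
    if (cumulative + density >= 0.75) {
      const fraction = (0.75 - cumulative) / density;
      return start + fraction * ((end ?? start) - start);
    }
    cumulative += density;
  }
  return bins[bins.length - 1].start; // open-ended last bin
}
```

This is the fundamental contrast with Lighthouse: a percentile over thousands of real visits versus one synthetic run.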

The crucial point I’d like to make is that Lighthouse is simply not suited to accurately measuring Core Web Vitals indicators.

I've highlighted the essential point: in the real world, users have a variety of experiences on the same page. It's not as if you visit a site, let it load, then close it without doing anything. In fact, you're far more likely to interact with the page. However, when it comes to certain Core Web Vitals indicators, such as INP (Interaction to Next Paint), which measures how long the page takes to visually respond after a user interaction, Lighthouse simply can't measure that!
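To make it concrete why INP needs real interactions: per web.dev's description, INP is chosen from all of a visit's interaction latencies by taking the worst one, while ignoring one of the worst for every 50 interactions (approximating a high percentile). A simplified sketch of that selection, under those stated assumptions:

```javascript
// Sketch of INP selection from field data, per web.dev's description:
// sort interaction durations worst-first, then skip one of the worst
// for every 50 interactions (a rough ~p98). Without any interactions
// to observe -- Lighthouse's situation -- there is nothing to select.
function inp(durationsMs) {
  if (durationsMs.length === 0) return undefined;
  const sorted = [...durationsMs].sort((a, b) => b - a);
  const ignored = Math.min(Math.floor(durationsMs.length / 50), sorted.length - 1);
  return sorted[ignored];
}
```

A lab run that never clicks, taps, or types produces an empty durations list, which is exactly why Lighthouse cannot report a meaningful INP.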

The same applies to indicators such as Cumulative Layout Shift (CLS), which evaluates the "visual stability" of the page layout. Layout shifts often occur further down the page, after the user has scrolled. If Lighthouse used CrUX data (which it doesn't), it could rely on real data from users who interact with the page and encounter CLS problems. However, since Lighthouse simply waits until the page is loaded without ever interacting with its content, it cannot detect the layout shifts that interaction triggers, and so it underestimates the CLS that real users experience.
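CLS itself is defined (per web.dev) by grouping individual layout-shift scores into "session windows": shifts less than one second apart belong to the same window, a window is capped at five seconds, and CLS is the largest window total. A simplified sketch of that aggregation, assuming chronologically ordered shift records:

```javascript
// Sketch of CLS aggregation as described on web.dev: group layout
// shift scores into session windows (gap < 1s between shifts, window
// capped at 5s) and report the largest window total.
// shifts: [{ time: ms, score: number }] in chronological order.
function cls(shifts) {
  let best = 0;
  let windowStart = -Infinity; // start time of the current window
  let prevTime = -Infinity;    // time of the previous shift
  let sum = 0;                 // running total for the current window
  for (const { time, score } of shifts) {
    if (time - prevTime > 1000 || time - windowStart > 5000) {
      windowStart = time; // gap too long or window too old: start fresh
      sum = 0;
    }
    sum += score;
    prevTime = time;
    best = Math.max(best, sum);
  }
  return best;
}
```

Shifts triggered by scrolling or clicking minutes into a visit open new session windows that a load-and-stop lab run never records.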

But it’s still a “good starting point”.

The bottom line is this: a Lighthouse report is very useful for generating results quickly, thanks to the simulated data it uses. In this sense, Lighthouse can be seen as a “practical test” and even a first step in identifying opportunities for performance optimization.

However, this only gives a partial view. To get a more complete picture, you need to use a tool based on real user data. Tools that leverage CrUX data are good for this, but that data is aggregated over a rolling 28-day window, so it may not reflect the most recent user behaviors and interactions. Although the dataset is refreshed daily, and it is possible to examine historical records to obtain larger samples, it is not real-time.

The best solution is to use a tool that tracks users in real time.
If you want to get an overall picture of your site’s performance, use Lighthouse, but don’t stop there, because you’ll only get a partial view. You’ll need to go deeper into your analyses and examine performance with real-user monitoring to get a more complete and accurate assessment.
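As a sketch of what real-user monitoring can look like, the snippet below builds the payload such a setup would send for each metric sample, with the browser wiring shown in comments (it only runs in a page). The `/rum` endpoint and payload shape are assumptions for illustration; the `web-vitals` library and its `onLCP`/`onINP`/`onCLS` callbacks are Google's real open-source package:

```javascript
// Minimal real-user-monitoring sketch. The payload shape and the
// /rum endpoint are illustrative assumptions, not a standard.
function buildSample(metric, page) {
  return JSON.stringify({
    name: metric.name,   // e.g. "LCP", "INP", "CLS"
    value: metric.value, // ms for LCP/INP, unitless for CLS
    id: metric.id,       // unique per page load, for deduplication
    page,                // which page the sample came from
  });
}

// In the browser, you would wire it up roughly like this:
//   import { onLCP, onINP, onCLS } from 'web-vitals';
//   const send = (metric) =>
//     navigator.sendBeacon('/rum', buildSample(metric, location.pathname));
//   onLCP(send); onINP(send); onCLS(send);
```

Aggregating these samples on your own backend gives you the real-time, real-user view that neither Lighthouse nor the 28-day CrUX window can provide.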

Would you like to discuss it with us?