How to improve Core Web Vitals in a Frontity project

Google has announced that it will start using Core Web Vitals to rank websites in May 2021. Google is already storing this information, and you can access it in your Google Search Console, in the Core Web Vitals section. Alternatively, you can run a check of any website at PageSpeed Insights. If you use this tool, bear in mind that the data Google will take into account is the Field data, which corresponds to real data based on user experience. The Lab data, on the other hand, is a simulation of a worst-case scenario using a heavily throttled connection. It can be useful as well, but it doesn’t reflect real user sessions.

Having said this, we have been checking out some Frontity websites and they score pretty well in the Field data, but we would like to do some research to ensure that passing the Core Web Vitals is easily achievable in a Frontity project. In the end, we would like to produce:

  • Improvements in the framework.
  • Recommendations and resources for Frontity users to implement in their themes and packages.
  • Explanations on how to measure the performance improvements.

The idea of this topic is to discuss different things we could try, share the results of the different tests, and share resources that could be interesting for anyone. We will run the tests mainly on our own website, frontity.org, and on our blog, which is using the twentytwenty-theme at this moment.

Any feedback, idea or useful resource is really welcome :slightly_smiling_face:


Summary

In order to have the information easily available, we will keep this summary in the opening post updated, but bear in mind that the information below will change.

Tests

In order to keep track of the tests, we will use the following Google Sheets, where we will store the different hypotheses we want to test and the results once we have run them. We could add more metrics if we want. For example, in some cases it could be interesting to compare the Time to First Byte or the bundle size.

– Google Sheets

Interesting Resources


As a first step, I took a quick look at the Cumulative Layout Shift on the frontity.org homepage and how to improve it.

I’ve recorded this video trying to explain it:

We might need to add something similar to the .hero-animation.wp-block-column class to fix this on desktop as well.

What do you think? Any idea if this makes sense or how to solve it?


I’ve also seen that the Cumulative Layout Shift of the blog posts could be improved if we start using native lazy loading and define the width and height attributes. With this change, the CLS seems to drop to almost zero. I’ve made another video explaining it:

Let me know what you think. Should we implement this in our web and the twentytwenty-theme?
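As a rough sketch of the idea (the helper below is hypothetical, not part of Frontity or the theme): if the content HTML contains `<img>` tags without dimensions, we could inject `loading="lazy"` plus explicit width and height so the browser reserves the space before the image downloads.

```javascript
// Hypothetical sketch: add native lazy loading and explicit dimensions to
// <img> tags in a post's HTML so the browser can reserve the image's box
// before it loads, avoiding layout shift. A real implementation would read
// the true dimensions per image instead of taking them as parameters.
function addLazyDimensions(html, width, height) {
  // Only touch <img> tags that don't already declare a loading attribute.
  return html.replace(
    /<img\b(?![^>]*\bloading=)/g,
    (match) => `${match} loading="lazy" width="${width}" height="${height}"`
  );
}

// Example: a WordPress content fragment without dimensions.
const input = '<p><img src="/photo.jpg" alt="Photo"></p>';
console.log(addLazyDimensions(input, 1200, 675));
// → '<p><img loading="lazy" width="1200" height="675" src="/photo.jpg" alt="Photo"></p>'
```

In Frontity this kind of transformation would more naturally live in an html2react processor, but the string version above shows the attributes involved.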


Both things make sense. To avoid CLS we have to make sure that everything has a height set up beforehand.

I think setting a width is not as important, unless there are elements that can be pushed left or right when the images load, which is usually not the case if the image is the only element at that height.

What I wonder is whether using fixed width and height attributes works for responsive layouts.

I am really bad with CSS, but I know that @David and @orballo used to use this CSS padding hack to solve this.
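For context, the padding hack works because percentage padding is computed from the element's width, so a wrapper with a padding-bottom matching the image's aspect ratio reserves the right amount of vertical space at any viewport width. A tiny sketch of the calculation (the helper name is mine, not from any codebase):

```javascript
// The classic "padding hack": percentage padding is resolved against the
// element's *width*, so a wrapper with padding-bottom equal to
// (height / width) * 100% keeps the image's aspect ratio responsively.
// The image is then absolutely positioned inside the wrapper.
function paddingHackPercent(width, height) {
  return `${(height / width) * 100}%`;
}

// A 1200x675 image (16:9) needs a wrapper with padding-bottom: 56.25%.
console.log(paddingHackPercent(1200, 675)); // → "56.25%"
```

This is why the hack keeps working for responsive layouts where a fixed pixel height would not.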

There is an ongoing conversation about this topic in this thread: Image component: Avoid layout shift on image load. Maybe we could rename it to “Image component: Avoid layout shift on image load” and continue the conversation there?

I guess the goal could be to create an Image component that is 100% compatible with Gutenberg and never causes a layout shift, no matter what.


There are a lot of things going on related to the Core Web Vitals, so, in order to keep this as organized as possible, we have been thinking about how to proceed here. This is the workflow we have in mind:

  1. We will start gathering interesting resources that could be useful, do our own research, and create hypotheses that we think could help improve the Core Web Vitals. In the end, we should come up with one list of tests to improve the LCP, another to improve the CLS, and another for the FID.

  2. Once these lists are ready, although they can always grow, we will run all the tests on localhost. We will use Lighthouse CI in the CLI for this, and we will share the results in this topic. This tool easily produces a report that can be exported as JSON, so we would end up with two reports, one before the change and another after it. These reports can be easily compared using tools like Lighthouse CI Diff.

    We will try to measure different kinds of URLs. For example, we will run the tests for our homepage, an archive in our blog, and a post in our blog.

  3. After we have run all the tests locally, we will go through all of them one by one and decide what to do next. We will see whether it makes sense to discard a test, implement it on a staging site and check the results there, or implement it on a real website to start getting real data.

    Regarding this, we are considering creating a package (or a setting in some packages) to send relevant performance data directly to your Analytics services. Anyway, this would come later once we start with this phase.

  4. After running the relevant tests on real websites, we should be able to decide whether each change makes sense. With this information, we will write down a final summary of things that could be improved in the framework or the themes.

For reference, this is the file we used to do local tests with @lhci/cli:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      // Build and serve the production Frontity site before auditing.
      startServerCommand: "npx frontity build && npx frontity serve",
      url: ["http://localhost:3000"],
      // Run Lighthouse several times and aggregate to reduce variance.
      numberOfRuns: 10,
      settings: {
        onlyCategories: ["performance"],
      },
    },
    upload: {
      // Upload the reports to Google's free temporary public storage.
      target: "temporary-public-storage",
    },
  },
};

After that, we run LHCI using lhci autorun.


@santosguillamot, I was thinking that maybe we should start sending the Core Web Vitals events to our GA now: GitHub - GoogleChrome/web-vitals: Essential metrics for a healthy site. That way we can start learning what type of insights we can get and how to display the data before we start doing tests.

We could do it with a small package in GitHub - frontity/frontity.org: The Frontity project of the frontity.org site.
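To make the idea concrete, here is a sketch of the event payload that pattern sends to Universal Analytics. The function name and exact field choices are my assumption, based on the `{ name, delta, id }` metric objects the web-vitals library reports and the mapping suggested in Google's guide:

```javascript
// Hypothetical sketch: map a web-vitals metric object to a Universal
// Analytics event payload, following the pattern from Google's guide.
function toAnalyticsEvent({ name, delta, id }) {
  return {
    eventCategory: "Web Vitals",
    eventAction: name,
    // CLS values are small decimals, so the guide multiplies them by 1000
    // before rounding; the other metrics are already in milliseconds.
    eventValue: Math.round(name === "CLS" ? delta * 1000 : delta),
    // The metric id lets you aggregate events from the same page load.
    eventLabel: id,
    // Mark as non-interaction so these events don't affect bounce rate.
    nonInteraction: true,
  };
}

console.log(toAnalyticsEvent({ name: "CLS", delta: 0.05, id: "v2-123" }));
// eventValue is 50 (0.05 * 1000, rounded)
```

In the browser this payload would be passed to ga("send", "event", …) from the web-vitals callbacks, or pushed to the dataLayer if we go through GTM.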

What do you think?

Let’s implement it. I agree it’s better to start learning as soon as possible.

I think we won’t need a new package because we are using Google Tag Manager and we can configure it following this guide (everything is done in GTM). I’m already working on it. I’ll let you know.


I finally used this other guide because we are still using Universal Analytics and not Google Analytics 4. It’s already implemented on frontity.org. I’ve run some tests and it seems to be working; let’s see when we have a bit more data.

I also found that in the repo you shared they are discussing Web Vitals Metrics for Single Page Applications, which could be interesting. At the end they mention that “they’re working on some content related to SPAs for web.dev/vitals.”


Another test I would suggest is to mark and sweep unused properties serialized on the backend. In my local prototype, the minified, uncompressed first document for mars-theme is 448 kB; applying a mark-and-sweep technique to the unused properties, the initial minified and uncompressed first document for mars-theme drops to 226 kB.

I’ve explained it better in this video.
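For anyone unfamiliar with the term, here is a minimal sketch of the mark-and-sweep idea (all names are hypothetical, not from the prototype): wrap the state in a Proxy during server rendering to mark every property that is actually read, then sweep (omit) the unread ones before serializing the initial state into the HTML document.

```javascript
// Mark phase: a Proxy records which top-level state keys the render reads.
function createTracker(state) {
  const used = new Set();
  const proxy = new Proxy(state, {
    get(target, key) {
      used.add(key); // mark this property as used
      return target[key];
    },
  });
  return { proxy, used };
}

// Sweep phase: keep only the marked keys when serializing.
function sweep(state, used) {
  return Object.fromEntries(
    Object.entries(state).filter(([key]) => used.has(key))
  );
}

// Example: the render only reads `title`, so `unusedHugeList` is dropped
// from the serialized initial state, shrinking the first document.
const state = { title: "Hello", unusedHugeList: [1, 2, 3] };
const { proxy, used } = createTracker(state);
const html = `<h1>${proxy.title}</h1>`;
const serialized = sweep(state, used);
console.log(serialized); // only contains `title`
```

A real implementation would need to track nested paths and handle properties read only on the client, which is where the interesting edge cases are.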


It’s an interesting approach. Thanks for sharing, Cris!

I have some questions about the approach and how it would work in some situations, but let’s not get into details right now. We could add it to the list of possible tests and talk about the details later :slightly_smiling_face::+1:

Keep them coming! :grinning_face_with_smiling_eyes:

For those using a lot of YouTube videos embedded in the content, it may be interesting to take a look at GitHub - paulirish/lite-youtube-embed: A faster youtube embed.

It seems that embeds are a constant issue on a lot of websites, so this could help in that regard. I guess users could create an html2react processor to handle this.
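As a string-level sketch of what such a processor would do (the helper is hypothetical, and a real html2react processor would operate on parsed nodes rather than on the raw HTML): swap each YouTube iframe for a `<lite-youtube>` element, which defers loading the heavy player until the user clicks.

```javascript
// Hypothetical sketch: replace YouTube <iframe> embeds in content HTML with
// <lite-youtube> elements from paulirish/lite-youtube-embed, so the real
// player (and its scripts) only load on user interaction.
function replaceYouTubeEmbeds(html) {
  return html.replace(
    /<iframe[^>]*src="https:\/\/www\.youtube\.com\/embed\/([\w-]+)[^"]*"[^>]*><\/iframe>/g,
    (_, videoId) => `<lite-youtube videoid="${videoId}"></lite-youtube>`
  );
}

const input =
  '<iframe src="https://www.youtube.com/embed/dQw4w9WgXcQ" allowfullscreen></iframe>';
console.log(replaceYouTubeEmbeds(input));
// → '<lite-youtube videoid="dQw4w9WgXcQ"></lite-youtube>'
```

The lite-youtube-embed script and styles would still need to be loaded once on the page for the custom element to work.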

This is an interesting tool to analyze the cumulative layout shift: Cumulative Layout Shift Debugger (CLS) - webvitals.dev

This is the result of measuring frontity.org: Cumulative Layout Shift Debugger (CLS) - webvitals.dev
