There are plenty of articles on this topic, of varying freshness and usefulness, but most of them offer only the simplest, most common recommendations, familiar to anyone even slightly acquainted with the web. Something like: "Move scripts to the bottom of the page" or "Minify your files". But what do you do when the basics were already covered during development, and the scores are still far from ideal?
Let's try to tackle this problem. We'll talk about the points that the PageSpeed Insights recommendations describe only in general terms, for example: "Minimize main-thread work" or "Reduce JavaScript execution time". Great recommendations, but what exactly do they require?
Below are examples of questions that can confuse developers who are not deeply immersed in performance optimization.
- What should be done with analytics and third-party widgets?
- Which is faster: an image sequence or a video?
- Does it make sense to set up a CDN?
- Is it normal that the scores fluctuate even though you haven't changed anything?
And other similar situations from practice. Of course, we won't be able to cover everything in one article, but we'll try to touch on the most common problems that are a bit more involved than "a 20-megabyte uncompressed image was loaded on the page" or "someone forgot to enable GZIP on the server".
Until a few years ago, many of these recommendations didn't matter much unless a site loaded so slowly that no special metrics were needed to notice it. But with recent changes to Google's SERP policy, optimization carries more weight. So let's first take a quick look at the latest changes to Google PageSpeed.
Changes in Google PageSpeed
At the end of 2018, the old PageSpeed engine was replaced with Lighthouse, which is also built into Google Chrome. The main difference from previous versions is the scoring: points are now awarded not only for following recommendations but also directly for speed. Page load is evaluated along several timing dimensions:
- at what point after the load starts the content becomes visible;
- when the page becomes interactive – when you can click or enter data;
- how long it takes for everything to load and for all operations to complete.
The measured values are then compared with other recently tested sites from the database and converted into points. It is precisely because of this comparison against the "average across all sites" that scores can drift in one direction or another over time. In addition, the coefficients in the scoring algorithm and the parameters of the "test devices" can change, and even in back-to-back runs the result may differ by several points, although nothing on the site has changed.
Here’s what it says on the official website:
"Many factors affect the measured load speed to varying degrees. The main ones are local network conditions, client hardware performance, and contention for client resources."
In practice, almost anything can have an effect, so for a more reliable assessment it is recommended to take not one measurement but three to five, at different times of day. The report is built around three key metrics:
- Largest Contentful Paint (LCP) – the time it takes to render the largest content element visible in the viewport. Desirable: up to 2.5 seconds; acceptable: up to 4 seconds.
- First Input Delay (FID) – the delay between the user's first interaction and the browser's response. Desirable: up to 100 ms; acceptable: up to 300 ms.
- Cumulative Layout Shift (CLS) – cumulative layout shift, a visual-stability score for the page. It is aimed at combating pop-up ads and blocks that appear and disappear without user action. If content "jumps" around, the score drops. Normal: up to 0.1; acceptable: up to 0.25.
These three indicators from the report make up Core Web Vitals, which means they significantly affect the ranking of search results. An important condition: at least 75% of the site's pages must meet the thresholds of all three metrics; otherwise, Google treats the site as poorly optimized. At the same time, the indicators carry different weights in the total score – for example, LCP and FID affect the final score more than CLS.
In November 2020, Google confirmed that Core Web Vitals would become a ranking factor starting May 2021. The recommendations are still there, but they are no longer directly tied to the score. Following them is not guaranteed to improve the situation – some may even make things worse, or be unacceptable in terms of browser support. Previously, obviously slow sites would often get excellent scores while fast sites were rated poorly. Now, with the algorithm changes, it is speed and user experience that matter. All the marketing bells and whistles that people love to cram onto the first screen will now inevitably drag the score down, no matter how carefully you tune them. Hence a recommendation for the future: keep the page, and especially the first screen, as light as possible, leaving only what is truly necessary.
How to improve performance?
So what can we do to improve the overall performance picture?
Asynchronous loading is widely recommended. Keep in mind, though, that "asynchronous" and "not affecting load speed" are different things. Async loading lets resources download in parallel without blocking one another, but the total time still grows. Moreover, if you have already moved your scripts to the bottom of the page, adding async attributes changes nothing in terms of blocking content rendering. So asynchrony is not a panacea, only a way to partially improve the situation. It is far more effective to ship as little code as possible in the first place – for example, not to load an entire library but only the components this particular page actually needs.
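As a quick illustration of the difference discussed above, here is a minimal sketch of the three script-loading modes (the file names are placeholders):

```html
<!-- Blocks HTML parsing while it downloads and executes: -->
<script src="/js/legacy.js"></script>

<!-- Downloads in parallel, executes as soon as it arrives, order not
     guaranteed — suitable for independent code such as analytics: -->
<script async src="/js/analytics.js"></script>

<!-- Downloads in parallel, executes after the document is parsed, in
     source order — suitable for code that touches the DOM: -->
<script defer src="/js/app.js"></script>
```

Note that `async` and `defer` only change *when* a script blocks; the bytes still have to be downloaded either way.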
It follows that it is better to split the build into parts, so that each page pulls in only what it actually needs – and it is better to split even that code in two. The first part is the "critical" code responsible for the application skeleton and the content of the first screen: inline it directly into the page. Everything else goes below, and interactive code that requires user action goes at the very end of the page.
Analytics and similar scripts connected through GTM are an especially frequent source of problems. GTM itself does not slow the page down – what slows it down is what you put in the container. In fact, embedding those same scripts directly in the site code can make things worse. There are only a few options, for example:
- You can set up caching on your server and refresh it via cron, but beyond the sheer inconvenience of such a solution, it may not improve anything. Configuring the container itself to load scripts on an event, however, can help – what we want is lazy, event-driven loading. Ideally, loading of such scripts would be postponed until the page has fully loaded, but the analytics team will most likely object: if a user leaves the site before the full load, no data is collected at all. At a minimum, target the DOM Ready event and collect only the most critical metrics there.
- You can use a service worker to cache part or all of the HTML page; the worker then updates the cache only when the page content changes.
- Specify the dimensions of images and videos in their tags – this is still useful. If the images are responsive, at least specify the aspect ratio (for example, 16:9), so the browser can reserve space for them.
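The event-based container loading described above can be sketched roughly like this (a simplified example: `GTM-XXXXXXX` is a placeholder ID, and the real GTM snippet also initializes `dataLayer`):

```html
<script>
  // Inject the GTM loader only once the chosen event fires,
  // instead of pasting the snippet inline in <head>.
  function loadGTM() {
    var s = document.createElement('script');
    s.async = true;
    s.src = 'https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX';
    document.head.appendChild(s);
  }

  // Most aggressive: wait until the page has fully loaded.
  window.addEventListener('load', loadGTM);

  // Softer compromise for the analytics team: fire on DOM Ready instead.
  // document.addEventListener('DOMContentLoaded', loadGTM);
</script>
```

The trade-off is exactly the one described above: the later the event, the better the score, but the more short visits go unrecorded.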
Some optional things can be removed from the page entirely and pulled in only after user action. This also applies to scripts that perform specific tasks, such as calling third-party resources, rebuilding data charts, or displaying captchas. This avoids loading extra code and also reduces the "unused code" warnings.
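A sketch of loading a heavy third-party script only on first interaction – here a hypothetical captcha script whose URL and form markup are placeholders:

```html
<form id="signup">
  <input id="email" type="email" placeholder="Email">
</form>
<script>
  // The captcha is only needed once the user starts filling in the form,
  // so load it on first focus; { once: true } removes the listener after.
  document.getElementById('email').addEventListener('focus', function () {
    var s = document.createElement('script');
    s.async = true;
    s.src = 'https://example.com/captcha.js'; // hypothetical URL
    document.head.appendChild(s);
  }, { once: true });
</script>
```

Users who never touch the form never pay for the script, and it disappears from the "unused code" audit for the initial load.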
Use the HTTP/2 protocol – it transfers data between browser and server faster and more efficiently. Unlike HTTP/1.x, it does not open a separate connection for each file but multiplexes downloads in parallel over a single connection. As a result, HTTP/2 also reduces the load on the server and saves its resources.
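For reference, a minimal sketch of enabling HTTP/2 in nginx (paths and domain are placeholders; in practice browsers require TLS for HTTP/2, and newer nginx releases offer a separate `http2 on;` directive):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
}
```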
If your audience is not limited to one geographic region, set up a CDN. A distributed content delivery network reduces the load on your hosting by spreading files across multiple edge servers. Images and videos are the most resource-intensive content, so serve them via the CDN or move them to a subdomain.
As for choosing one animation approach over another, everything depends on image quality and duration. Thanks to far more advanced compression algorithms, video is preferable to an image sequence. Within video, the WebM format is preferable, as it was designed specifically for streaming over the Internet and is usually much smaller than MP4. Solutions like GIF animation, meanwhile, trigger a PageSpeed warning asking you to use more modern formats. The choice between WebGL and video for animation is much less clear-cut – the result is hard to predict in advance. In general, weigh the size of each option against its duration.
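Replacing a GIF with a short video along the lines described above might look like this (file names are placeholders; the MP4 source is a fallback for browsers without WebM support):

```html
<!-- Explicit width/height reserve space and avoid layout shifts;
     muted + playsinline are required for autoplay on most browsers. -->
<video autoplay muted loop playsinline width="640" height="360">
  <source src="/media/animation.webm" type="video/webm">
  <source src="/media/animation.mp4" type="video/mp4">
</video>
```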
A related question sometimes comes up: on some sites the images first appear blurry and then sharpen – is that the site speeding up? Visually, yes; in reality, no. This technique first loads small, lightweight images and then asynchronously pulls in the high-quality ones. It is not a partial "fast" download of a large file but an extra preload of a small one. For the user this is a pleasant experience, but it will not improve the metrics.
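The placeholder technique just described can be sketched like this (file names are placeholders): a tiny blurred preview is shown immediately, and the full image swaps in once it has downloaded.

```html
<img id="hero" src="/img/hero-tiny.jpg" data-src="/img/hero-full.jpg"
     width="1200" height="600" alt="Hero" style="filter: blur(8px);">
<script>
  // Preload the full-size image off-screen, then swap it in.
  var img = document.getElementById('hero');
  var full = new Image();
  full.onload = function () {
    img.src = full.src;
    img.style.filter = 'none';
  };
  full.src = img.dataset.src;
</script>
```

Note that the total bytes transferred actually increase slightly – which is exactly why the metrics don't improve.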
Don't forget about fonts: in addition to using appropriate formats, it's recommended to use rel="preload" for them, just as you would for scripts.
Also, specify font-display: swap; in your styles – this lets the browser show a fallback system font while the main one loads.
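Putting both font recommendations together, a sketch with a placeholder font file might look like this:

```html
<!-- Preload the font early; crossorigin is required for font preloads. -->
<link rel="preload" href="/fonts/inter.woff2" as="font"
      type="font/woff2" crossorigin>
```

```css
/* swap: render text immediately in a fallback font, then swap in
   the custom font once it arrives. */
@font-face {
  font-family: "Inter";
  src: url("/fonts/inter.woff2") format("woff2");
  font-display: swap;
}
```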
Sliders and floating sidebars are a common cause of layout shifts unrelated to pop-up ads. With sliders, the problem is that only the slider wrapper exists on the page at first, and all the contents and sizes are set after the script initializes, triggering a resize and a recalculation of the coordinates of the entire page – so specify the size of the target block in your styles in advance. For floating sidebars, instead of changing offsets with scripts, use the position: sticky property.
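Both fixes come down to a few lines of CSS (class names are placeholders):

```css
/* Reserve the slider's final footprint before the script initializes it,
   so initialization doesn't shift the rest of the page. */
.slider-wrapper {
  aspect-ratio: 16 / 9; /* or a fixed min-height */
}

/* Keep the sidebar pinned while scrolling without any script-driven
   repositioning. */
.sidebar {
  position: sticky;
  top: 16px;
}
```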