The Secret Formula: Lesser-known Tips For Improved Web Performance
Contributor: Bhavya Saggi
There are plenty of articles on how to set up a robust infrastructure, and even more on how to make it resilient and fast. But beyond optimising the infrastructure and web services, the individual web pages served to users must also be optimised to hold your consumers' attention. This calls for smooth content delivery and efficient on-page SEO.
Following this premise, the common optimisations were already in place at my workplace, Makaan.com, but I figured I could shave a few extra seconds off the web pages and deliver more content (to search-engine crawlers and users). What follows is a summary of the actions, optimisations and experiments performed to this effect.
White-space inside
<style> tags can be eliminated by using such tags as sparingly as possible and moving their content to an external file, which can later be minified and uglified using tasks readily available under the major build automation tools.
For HTML templates or static HTML files, whitespace between HTML tags should be eliminated, either via regex replacement or via the template engine's built-in optimisation.
In our case, we had many inline
<script> tags containing relevant information in JSON format. Therefore, we simply overrode the
<script> tag in our template renderer to wrap the contents of said script tags in
JSON.stringify(JSON.parse(...)), and for other script tags we applied a simple regex-based whitespace removal.
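As a minimal sketch of the idea (function and variable names here are illustrative, not our actual renderer code): the JSON round-trip strips all insignificant whitespace, while a regex collapses whitespace between tags.

```javascript
// Minify an inline JSON payload: parsing and re-serialising drops all
// insignificant whitespace and newlines.
function minifyJson(text) {
  return JSON.stringify(JSON.parse(text));
}

// Collapse whitespace between HTML tags (a crude regex approach).
function stripInterTagWhitespace(html) {
  return html.replace(/>\s+</g, '><');
}

const json = minifyJson('{\n  "city": "Gurgaon",\n  "listings": 42\n}');
// json === '{"city":"Gurgaon","listings":42}'

const html = stripInterTagWhitespace('<ul>\n  <li>a</li>\n  <li>b</li>\n</ul>');
// html === '<ul><li>a</li><li>b</li></ul>'
```

Note that the regex approach is deliberately crude: it would also collapse whitespace inside `<pre>` blocks, which is why template-level optimisation is preferable where available.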
Use of Link tags to hint future resources
After setting up the HTML, it's time we checked the external resources. Resources are requested by:
- Explicit inclusion via a
<link> tag in the HTML document.
- Resources needed by other external resources, e.g. a
background-image in a CSS style, or a dependency of a JS file.
<link rel="dns-prefetch" href="resource">
This directive hints the browser to prematurely resolve the DNS query for a domain, so that future resources from that domain resolve faster. dns-prefetch is most effective when used for the CDN domain, or for declaring alternate domains if domain sharding is employed.
<link rel="preconnect" href="resource" crossorigin>
Moving a step further than dns-prefetch, the preconnect directive tells the browser to actually set up the connection to the domain.
This is most effective when used in conjunction with a domain that utilises HTTP/2, as HTTP/2 keeps a connection open till it is explicitly closed.
<link rel="preload" href="resource" as="type">
If a resource is sure to be used on a webpage but is declared deep in a dependency chain, it can be indicated to the browser beforehand using this directive. preload allows the browser to fetch the resource, keep it in its cache, and serve it from cache when the resource is actually requested.
This can be used effectively to mention font/image files that are referenced further down in a CSS stylesheet.
<link rel="prefetch" href="resource">
Similar to preload, the prefetch directive hints the browser to request the resource, but in the background during idle time.
Link prefetch is supported by most modern browsers, with the exception of Safari, iOS Safari, and Opera Mini.
<link rel="prerender" href="resource">
Prerendering is very similar to prefetching in that it gathers resources that the user may navigate to next. The difference is that prerendering actually renders the entire page in the background.
Hence, it is most effective for indicating the next likely HTML navigation target.
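Putting these directives together, a document head might declare the following (the domains and file paths are illustrative, not Makaan.com's actual assets):

```html
<head>
  <!-- Resolve DNS early for a sharded image domain -->
  <link rel="dns-prefetch" href="//img2.example-cdn.com">
  <!-- Open a full connection to the primary CDN -->
  <link rel="preconnect" href="https://cdn.example.com" crossorigin>
  <!-- Fetch a font referenced deep inside the stylesheet -->
  <link rel="preload" href="/fonts/site-font.woff2" as="font" type="font/woff2" crossorigin>
  <!-- Grab the next page's bundle during idle time -->
  <link rel="prefetch" href="/js/search-page.js">
  <!-- Render the most likely next navigation in the background -->
  <link rel="prerender" href="https://www.example.com/search">
</head>
```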
Ergo, we added all the CDN domains under 'preconnect' link tags, and the image sprites and font files under 'preload' link tags.
Chunking, Segmenting & Deferring Page Resources
The maximum payload of a single TCP/IP packet is ~64KB, which means we can transfer about that much data in a single RTT (round-trip time) before the network waits for the next packets. Hence:
- Very large files take many packets (which may arrive out of order), and the network waits till all are received before they can be processed.
- Very small files resolve within a single RTT, but multiple requests pollute the network with a copious amount of packets.
Therefore, the experience can be improved by chunking external resource files into independent modules of 40–50KB. But this brings its own problem: multiple independent modules, all requested and executed in parallel (thanks to HTTP/2), choke the network and the browser's main thread. To solve this, manual intervention is needed to segment resource requests/execution into three phases:
- First-fold resources (inline resources)
Resources critical for rendering the first fold should be inlined in the HTML document itself, so the page does not break while external files are still in flight, that is, when the definition of a function has not arrived yet but is already being used.
- Page-Essential Resources (linked resources)
Only resources which provide basic functionality for the webpage should be 'linked' in the
<head> of the HTML document, to reduce the number of requests the browser sends and avoid choking the network. This means adding only the CSS for basic, static components, and only the JS for essential functionality (e.g. event binding or tracking).
- Lazy-loaded resources
Heavy and dynamic resources should be lazy-loaded, and fetched only once the browser's load event has fired. You may even go further and delay a few resources until well after that event. This allows us to manually defer resources and, in a somewhat hacky manner, define a sequential order for them.
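A minimal sketch of the third phase (the bundle path is hypothetical): inject a script tag for a heavy, non-essential bundle only after the window load event has fired.

```html
<script>
  // Defer a heavy, non-essential bundle until the page has fully loaded.
  window.addEventListener('load', function () {
    var script = document.createElement('script');
    script.src = '/js/gallery-widget.js'; // hypothetical lazy bundle
    script.async = true;
    document.body.appendChild(script);
  });
</script>
```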
While images give a webpage a more user-friendly, visual medium, they are usually also among its heaviest resources.
One solution for a smooth visual experience is to defer image loading on the webpage. To achieve this, we placed a dummy image (usually a 1px JPG) in the src attribute of each
<img> tag and the URL of the original image in its
data-src attribute. Upon the 'domInteractive' or 'pageLoad' event, we initiate an IntersectionObserver and start replacing the src of image tags with the value of the data-src attribute as the images come into view.
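A sketch of this pattern (image paths are illustrative, not our production code):

```html
<img src="/img/placeholder-1px.jpg" data-src="/img/listing-photo.jpg" alt="Property photo">
<script>
  // Swap data-src into src as each image scrolls into the viewport.
  document.addEventListener('DOMContentLoaded', function () {
    var observer = new IntersectionObserver(function (entries) {
      entries.forEach(function (entry) {
        if (entry.isIntersecting) {
          var img = entry.target;
          img.src = img.getAttribute('data-src');
          observer.unobserve(img); // each image only needs loading once
        }
      });
    });
    document.querySelectorAll('img[data-src]').forEach(function (img) {
      observer.observe(img);
    });
  });
</script>
```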
To extend user experience and gain SEO/accessibility brownie points, popular tools suggest:
- Provide an
alt attribute for all image (<img>) tags.
- Serve images in a modern format (e.g. WebP), if the browser supports it.
- Resize and serve images at the same size as required on the webpage.
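The latter two suggestions can be combined using the HTML <picture> element (file paths here are illustrative), letting the browser pick a supported format and an appropriate size:

```html
<picture>
  <!-- Browsers that support WebP pick this source... -->
  <source type="image/webp"
          srcset="/img/listing-480.webp 480w, /img/listing-960.webp 960w">
  <!-- ...others fall back to JPEG, sized via srcset/sizes -->
  <img src="/img/listing-480.jpg"
       srcset="/img/listing-480.jpg 480w, /img/listing-960.jpg 960w"
       sizes="(max-width: 600px) 480px, 960px"
       alt="Property listing photo">
</picture>
```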
Use HTML5 semantic Tags & Follow W3C Validations
Because webpages have become highly interactive and content-rich, HTML5 semantic tags (e.g. <header>, <nav>, <article>, <footer>) should be used to convey the structure and meaning of the content.
Furthermore, modern browsers are resilient and often auto-correct malformed HTML (e.g. a tag you forgot to close), but we should still make sure we provide content that is as close to perfect as possible; the HTML document should therefore pass the W3C validations. Not only does this help browsers render the document with ease, it also makes your content comprehensible to crawlers.
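A skeletal example of a semantically structured page (the content is illustrative):

```html
<body>
  <header>
    <nav><!-- primary navigation links --></nav>
  </header>
  <main>
    <article>
      <h1>2BHK Apartment in Gurgaon</h1>
      <section><!-- listing details --></section>
    </article>
    <aside><!-- related listings --></aside>
  </main>
  <footer><!-- contact and legal links --></footer>
</body>
```

Beyond SEO, these landmarks are what screen readers use to let users jump between regions of the page.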
Update & Optimise Nginx configuration
We use Nginx as our primary front-facing load balancer and web server. The following are general optimisations that should be performed to get the most out of it.
- Enable http/2
HTTP/2 (h2), being the successor of HTTP/1.x (h1), provides a range of optimisations, some of which include header compression and request multiplexing.
HTTP/2 also comes with an additional feature, 'Server Push', which can be enabled in Nginx from v1.13.9 onward (discussed at a later stage). One limitation is that, in practice, browsers support HTTP/2 only over
https, which leads us to SSL-related improvements.
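In Nginx, enabling HTTP/2 is a one-word addition to the TLS listen directive (server name and certificate paths below are illustrative):

```nginx
server {
    # 'http2' on the TLS listener enables h2 for supporting clients
    listen 443 ssl http2;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/example.crt;  # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/example.key;
}
```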
- Enable OCSP stapling
Online Certificate Status Protocol (OCSP) is used to check the revocation status of X.509 digital certificates. Nginx allows appending ("stapling") a time-stamped OCSP response, signed by the Certificate Authority, to the initial TLS handshake, eliminating the need for clients to contact the CA and reducing the time taken to set up an SSL connection.
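A sketch of the corresponding Nginx directives (the trusted-certificate path and resolver addresses are illustrative choices):

```nginx
# Staple a CA-signed OCSP response to the TLS handshake
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/ca-chain.pem;  # CA chain, illustrative path
# Resolver used by Nginx to fetch OCSP responses from the CA
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
```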
- Extend SSL cache.
Nginx allows sharing of the SSL session cache between its worker processes. The cache size is specified in bytes; one megabyte can store about 4000 sessions, reducing SSL resolution time for a recurring user. Session tickets can also be used as an alternative to the SSL cache: with session tickets, information about the session is handed to the client. If a client has a session ticket, it can present it to the server and a full re-negotiation is not necessary.
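The relevant directives look like this (the cache size and timeout values are example choices, to be tuned per traffic profile):

```nginx
# Share TLS sessions across all workers; ~1MB stores about 4000 sessions
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;
# Session tickets hand session state to the client instead of caching it server-side
ssl_session_tickets on;
```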
- Enable Server-push
When a connection is HTTP/2-enabled, asset files can be pushed proactively: with Server Push, the server sends a resource directly to the client without the client asking for it, saving one round-trip per asset. Since v1.13.9, Nginx natively supports Server Push by pushing the assets defined in "Link: preload" response headers.
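Both styles in Nginx (asset paths are illustrative): push an asset explicitly, or honour the upstream application's preload headers.

```nginx
location = /index.html {
    # Push a known-critical asset explicitly...
    http2_push /css/critical.css;   # illustrative asset
    # ...or push whatever the upstream declares via "Link: ...; rel=preload" headers
    http2_push_preload on;
}
```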
Furthermore, refer to the [Mozilla SSL Configuration Generator] as a boilerplate for generating a secure and optimised server configuration, as per your needs.
The discussed optimisations are quite generic and can easily be extended to other web-servers (e.g. Apache, IIS).
The culmination of the set of aforementioned activities is evident in the following snapshot of ‘Average Page-Load Time’ for Makaan.com (taken from Google Analytics).
Not only did we manage to reduce the size of each asset (including HTML, CSS, and JS files), we also chunked and linearised the delivery of assets over the HTTP/2 protocol. This gave us a ~60KB drop in average data transmitted up to page-load, and an effective boost of ~2 seconds in page-load time across the entire website!
For further detailed explanation, here’s a waterfall-snapshot for a webpage on Makaan.com
The following are popular webpage optimisation tools which should be used and referred to for a better user experience.
- TestMySite — Provides a comprehensive performance and User Engagement result from a suite of tests.
- PageSpeed Insights — A tool which scores your webpage out of 100, based upon a list of optimisations which are expected to be present.
- Webpagetest — A utility which lets you visualise the webpage's performance via metrics & screenshots.
- Lighthouse — A client-side utility which can be run in the browser itself (it can be found in Google Chrome's DevTools).
- Google Structured-Data Testing Tool — Google’s Utility to verify & validate ‘Structured Data Markup’ on a webpage.
- Google Mobile-Friendly Test — Google’s Utility to verify & validate if a webpage performs & works as expected on Mobile Devices.
- W3C Validator — The Markup Validator is a free service by W3C that helps check the validity of Web documents.