Tachometer – Statistically rigorous benchmark runner for the web

From the Polymer team:

Tachometer is a tool for running benchmarks in web browsers. It uses repeated sampling and statistics to reliably identify even tiny differences in runtime.

To compare two files, run it like so:

npx tachometer variant1.html variant2.html

Tachometer will open Chrome and load each HTML file, measuring the time between calls to bench.start() and bench.stop(), which you add to your own code. It round-robins between the files, running each at least 50 times, and outputs a comparison table.
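
Each benchmark file is a plain HTML page that imports the bench.js module tachometer serves alongside it; roughly like this, with the measured work as a placeholder:

<html>
  <body>
    <script type="module">
      // tachometer serves /bench.js next to your benchmark file
      import * as bench from '/bench.js';

      bench.start();
      // ... the code you want to measure ...
      bench.stop();
    </script>
  </body>
</html>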

What’s very interesting is that it supports swapping out NPM dependencies, so you can measure how a dependency change impacts performance. Tachometer comes with WebDriver plugins for Chrome, Safari, Firefox, and Internet Explorer.

Tachometer →

Partytown: Run Third-Party Scripts off the Main Thread in a Web Worker

The folks from Builder.io set out to create a way to prevent third-party scripts from blocking the main thread. The result is Partytown, which runs third-party scripts within a web worker.

Partytown is able to sandbox and isolate third-party scripts within a web worker and allow, or deny, access to main thread APIs. This includes Cookies, localStorage, userAgent, etc. Because the code must go through Partytown’s Proxy in order to access the main thread, Partytown also has the ability to log every read and write, and even restrict access to certain DOM APIs.

It works by creating JavaScript Proxies that replicate and forward calls to main-thread APIs (such as DOM operations), executing those calls through synchronous XHR requests. Pretty crazy, right?! 🤯
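
Conceptually, and this is a simplified illustration rather than Partytown’s actual code, the worker side looks something like this (the /proxytown endpoint and response shape are made up; Partytown intercepts comparable requests with a service worker on the main thread):

// A Proxy in the worker that forwards property reads to the main thread.
// Synchronous XHR is still allowed inside workers, so worker code can
// read e.g. document.cookie as if it were running on the main thread.
const windowProxy = new Proxy({}, {
  get(_target, prop) {
    const xhr = new XMLHttpRequest();
    xhr.open('POST', '/proxytown', false); // third argument false = synchronous
    xhr.send(JSON.stringify({ access: ['window', String(prop)] }));
    return JSON.parse(xhr.responseText).value;
  },
});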

To mark third-party scripts to run in a Partytown web worker, set the type attribute of its opening script tag to text/partytown.

<script type="text/partytown">
  // Third-party analytics scripts
</script>

It also comes with integrations for frameworks like React.

Partytown (GitHub) →
Introducing Partytown: Run Third-Party Scripts From a Web Worker →

ct.css – Let’s take a look inside your <head>

Harry Roberts created a utility CSS file ct.css that x-rays your site’s <head>:

Your <head> is the single biggest render-blocking part of your page—ensuring it is well-formed is critical. ct.css is a diagnostic CSS snippet that exposes potential performance issues in your page’s <head> tags.

<link rel="stylesheet" href="https://csswizardry.com/ct/ct.css" class="ct" />

The CSS basically adds display: block; to all script and style includes. Using attribute selectors, it can then highlight the problematic ones. For example:

head script[src] {
  display: block;
  border-style: var(--ct-is-problematic);
  border-color: var(--ct-warn);
}

head script[src]::before {
  content: "[Blocking Script – " attr(src) "]";
}

The legend for the output is this:

  • Red: This is an error and should be addressed.
  • Orange: This could be problematic in certain scenarios.
  • Green: This is fine and is purely informational.
  • Solid: This file is the problem.
  • Dashed: Other file(s) are causing problems with this file.

Also available as a bookmarklet that injects the code for you.

ct.css →

Don’t attach tooltips to document.body

Very good performance deep dive on why you shouldn’t attach tooltips to document.body, but to a div that’s a direct child of the <body>.

Tooltips in our app were taking >80ms, and during this time the main thread was blocked and you couldn’t interact with anything. The main reason for the slowness of Tooltip was “Recalculate Style” being called at the end of the mouseover event call stack, which takes a lot of time. We noticed the tooltip performance was inversely proportional to the number of DOM nodes currently in the document.
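
A minimal sketch of the fix (all names are illustrative): create one dedicated container as a direct child of <body> and mount every tooltip there, instead of appending straight to document.body:

// Create the container once, as a direct child of <body>
const tooltipContainer = document.createElement('div');
document.body.appendChild(tooltipContainer);

function showTooltip(text, x, y) {
  const tip = document.createElement('div');
  tip.className = 'tooltip';
  tip.textContent = text;
  tip.style.position = 'fixed';
  tip.style.transform = `translate(${x}px, ${y}px)`;
  tooltipContainer.appendChild(tip); // not document.body.appendChild(tip)
  return tip;
}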

Don’t attach tooltips to document.body →

When going for Pure CSS Tooltips, you won’t run into this problem 😉

How to optimize ORDER BY RANDOM()

Doing an ORDER BY RAND() in SQL is bad. Very bad. As Tobias Petry details (and as Bernard Grymonpon always used to tell at local meetups):

Ordering records in a random order involves these operations:

  1. Load all rows into memory matching your conditions
  2. Assign a random value RANDOM() to each row in the database
  3. Sort all the rows according to this random value
  4. Retain only the desired number of records from all sorted records
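
In other words, the naive version below (using the repositories table from his examples) forces the database to touch and sort every matching row just to keep three:

-- The naive approach: RAND() is evaluated for every row, then all rows are sorted
SELECT * FROM repositories ORDER BY RAND() LIMIT 3;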

His solution is to pre-add randomness to each record, in an extra column. For this he uses the geometric POINT datatype. In Postgres he then uses the following query, which orders the records by their distance to a newly generated random point.

SELECT * FROM repositories ORDER BY randomness <-> point(0.753,0.294) LIMIT 3;
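
For that query to work, the randomness column must be populated and indexed up front. A sketch of the setup, assuming his table and column names (the GiST index is what allows the <-> distance operator to avoid a full scan):

-- Add a point column, fill it with random coordinates, and index it with GiST
ALTER TABLE repositories ADD COLUMN randomness POINT;
UPDATE repositories SET randomness = point(random(), random());
CREATE INDEX repositories_randomness ON repositories USING gist(randomness);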

~

In MySQL you also have a POINT type (ever since MySQL 5.7.6) that you can use. However, I don’t really see how that would work there, as you’d need to calculate the distance for each record using a call to ST_Distance:

SET @randomx = RAND();
SET @randomy = RAND();
SELECT *, ST_Distance(POINT(@randomx, @randomy), randomness) AS distance FROM repositories ORDER BY distance DESC LIMIT 0,3;

💁‍♂️ Using EXPLAIN on the query above verifies that it doesn’t use an index, and thus scans all records.

What I do see working instead is using a single float column to hold the pre-generated randomness:

-- Add column + index
ALTER TABLE `repositories` ADD `randomness` FLOAT(17,16) UNSIGNED NOT NULL;
ALTER TABLE `repositories` ADD INDEX(`randomness`);

-- Update existing records. New records should have this number pre-generated before inserting
UPDATE `repositories` SET randomness = RAND() WHERE 1;

With that column in place, you could then do something like this:

SET @randomnumber = RAND(); -- This number would typically be generated by your PHP code, and then be injected as a query param
SELECT * FROM repositories WHERE randomness < @randomnumber ORDER BY randomness DESC LIMIT 0,3;

Unlike the query using POINT(), this last query will leverage the index created on the randomness column 🙂

~

How to optimize ORDER BY RANDOM() →

Via Freek

Performance-Testing the F1 websites

Jake Archibald has published a nice series in which he performance-tests all the F1 sites. Not only does he dig into waterfall charts, he also points out a few simple things that could be applied to improve overall loading performance.

As a bonus he also tested the Google I/O site, which uses an SPA approach. After 9 seconds of showing nothing, a spinner eventually appears. Only at the 26-second mark (!) was all data prepared and the page rendered. As Jake puts it:

Imagine you went to a restaurant, took a seat, and 20 minutes later you still haven’t been given a menu. You ask where it is, and you’re told “oh, we’re currently cooking you everything you might possibly ask for. Then we’ll give you the menu, you’ll pick something, and we’ll be able to give you it instantly, because it’ll all be ready”. This sounds like a silly way to do things, but it’s the strategy the I/O site is using.

I really recommend you read all the posts thoroughly, as there’s a ton to learn from them!

Who has the fastest F1 website in 2021? Part 1 →
Performance-testing the Google I/O site →

Too Many SVGs Clogging Up Your Markup? Try use

Good reminder by Georgi Nikoloff to have one (visually hidden) SVG that contains several layers, which you can then include further down in your code.

SVG has a <defs> tag that lets us declare something like our graph footer just once and then simply reference it — using <use> — from anywhere on the page to render it as many times as we want.

That way you end up with fewer DOM nodes, fewer bytes transferred, and less memory consumed.
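
A rough sketch of the pattern (the chart-footer id and its contents are made up for illustration):

<!-- Declared once, visually hidden -->
<svg width="0" height="0" style="position: absolute;">
  <defs>
    <g id="chart-footer">
      <line x1="0" y1="18" x2="100" y2="18" stroke="currentColor" />
      <text x="0" y="14" font-size="6">Legend</text>
    </g>
  </defs>
</svg>

<!-- Referenced as many times as needed -->
<svg viewBox="0 0 100 20"><use href="#chart-footer" /></svg>
<svg viewBox="0 0 100 20"><use href="#chart-footer" /></svg>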

Too Many SVGs Clogging Up Your Markup? Try use →

Debugging Layout Shifts

Over at web.dev, Katie Hempenius teaches us how to identify and fix layout shifts using the Layout Instability API and DevTools.
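
The Layout Instability API side of this comes down to observing layout-shift performance entries; a minimal sketch:

// Log every layout shift that wasn't caused by recent user input,
// together with the elements that moved (entry.sources)
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) {
      console.log('Layout shift score:', entry.value, entry.sources);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });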

What I take away from this is that you can easily spot them using DevTools: in the Rendering panel you can enable an option to highlight areas of layout shift:

To enable Layout Shift Regions in DevTools, go to Settings → More Tools → Rendering → Layout Shift Regions then refresh the page that you wish to debug. Areas of layout shift will be briefly highlighted in purple.

Debugging layout shifts →

JavaScript performance beyond bundle size

Nolan Lawson, on how we might focus too much on JavaScript bundle size:

Performance is a multi-faceted thing. It would be great if we could reduce it down to a single metric such as bundle size, but if you really want to cover all the bases, there are a lot of different angles to consider.

He digs into other factors that have an influence:

  • Parse/compile time
  • Execution time
  • Power usage
  • Memory usage
  • Disk usage

JavaScript performance beyond bundle size →

Before You React.memo()

Dan Abramov shares two techniques to try before resorting to optimizing with React.memo():

In this post, I want to share two different techniques. They’re surprisingly basic, which is why people rarely realize they improve rendering performance.

These techniques are complementary to what you already know! They don’t replace memo or useMemo, but they’re often good to try first.

Covered Solutions:

  1. Move State Down (see the sketch below)
  2. Lift Content Up
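
As a rough sketch of the first technique, modeled on the example in Dan’s post (component names are mine): the input state moves down into a small component, so typing no longer re-renders the expensive sibling through a shared stateful parent.

import { useState } from 'react';

function App() {
  return (
    <>
      <ColorPicker />   {/* re-renders on every keystroke */}
      <ExpensiveTree /> {/* unaffected: App itself holds no state */}
    </>
  );
}

function ColorPicker() {
  // The state lives here now, not in App
  const [color, setColor] = useState('red');
  return (
    <>
      <input value={color} onChange={(e) => setColor(e.target.value)} />
      <p style={{ color }}>Hello, world!</p>
    </>
  );
}

function ExpensiveTree() {
  // Stand-in for the artificially slow component from the post
  const start = performance.now();
  while (performance.now() - start < 100) {} // burn 100ms per render
  return <p>I'm a very slow component tree.</p>;
}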

Before You memo() →