When you’re building a Docker image, it’s important to keep its size under control: small images mean faster transfers and faster deployments.
Wish I had found this post before I started playing with Docker, as it’s packed with solid advice I had to find out “along the way” myself. Its four tips:
1. Find the right balance with the cache layers
2. Use .dockerignore files
3. Use the multi-stage builds feature
4. Choose the right base image
Number 3 especially was an eye-opener to me when I first discovered it. Basically it boils down to this: don’t run npm install/composer install/npm run build directly in your “main” image, but do it in a separate build stage and afterwards copy its results into your main image.
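A minimal sketch of what that looks like with multi-stage builds, assuming a hypothetical Node app whose npm run build writes static output to dist/:

```dockerfile
# Build stage: install dependencies and build. Nothing from this
# stage ends up in the final image unless explicitly copied over.
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: start from a small base image and only copy in
# the build output from the "build" stage above.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

Everything installed in the build stage, node_modules included, stays behind; only the copied dist/ output ends up in the image you actually ship.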
The WebKit blog, on how to optimize your pages so that they don’t drain the battery of your visitors’ devices:
Users spend a large proportion of their online time on mobile devices, and a significant fraction of the rest is users on untethered laptop computers. For both, battery life is critical. In this post, we’ll talk about factors that affect battery life, and how you, as a web developer, can make your pages more power efficient so that users can spend more time engaged with your content.
The three areas covered are scripting, painting (the way things get rendered/animated), and networking (requests over the network).
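To give one concrete example of the scripting advice: drive animations with requestAnimationFrame rather than timers, and stop scheduling frames while the page is hidden. A small sketch of that idea, where drawFrame is a hypothetical function standing in for your own rendering code:

```js
let rafId = null;

function tick(timestamp) {
  drawFrame(timestamp); // hypothetical: your own paint/animation logic
  rafId = requestAnimationFrame(tick);
}

document.addEventListener('visibilitychange', () => {
  if (document.hidden && rafId !== null) {
    cancelAnimationFrame(rafId); // no frames while the tab is hidden
    rafId = null;
  } else if (!document.hidden && rafId === null) {
    rafId = requestAnimationFrame(tick); // resume when visible again
  }
});

rafId = requestAnimationFrame(tick);
```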
Essential Image Optimization is a free online eBook by Addy Osmani:
Images take up massive amounts of internet bandwidth because they often have large file sizes. According to the HTTP Archive, 60% of the data transferred to fetch a web page is images composed of JPEGs, PNGs and GIFs. As of July 2017, images accounted for 1.7MB of the content loaded for the 3.0MB average site.
Per Tammy Everts, adding images to a page or making existing images larger have been proven to increase conversion rates. It’s unlikely that images will go away and so investing in an efficient compression strategy to minimize bloat becomes important.
Essentially Addy pushes two key takeaways (a small automation sketch follows the list):

- We should all be automating our image compression
- Everyone should be compressing their images efficiently
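What such automation could look like, as a minimal sketch using the imagemin Node package, assuming imagemin v7 with the imagemin-mozjpeg and imagemin-pngquant plugins; the globs, quality settings, and output folder are all assumptions:

```js
const imagemin = require('imagemin');
const imageminMozjpeg = require('imagemin-mozjpeg');
const imageminPngquant = require('imagemin-pngquant');

(async () => {
  // Compress every JPG/PNG in images/ and write the results to build/images/
  const files = await imagemin(['images/*.{jpg,png}'], {
    destination: 'build/images',
    plugins: [
      imageminMozjpeg({ quality: 75 }),          // lossy JPEG via MozJPEG
      imageminPngquant({ quality: [0.6, 0.8] })  // lossy PNG via pngquant
    ]
  });
  console.log(`${files.length} images optimized`);
})();
```

Wire a script like this into your build step and no unoptimized image ever reaches production.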
Since earlier this week, Flipboard is a website too. As they wanted it to mimic their mobile apps, it had to sport lots of animations. During their first tests they found the DOM to be too slow (although that’s not entirely true; see this video and its description, for example). And then, an epiphany:
Most modern mobile devices have hardware-accelerated canvas, so why couldn’t we take advantage of this? HTML5 games certainly do. But could we really develop an application user interface in canvas?
And so they did: they built react-canvas for this, “high performance <canvas> rendering for React components”. It reminds me of Letterpress, which is an optimized OpenGL scene, and of acko.net, which renders the site through a WebGL layer.
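Using react-canvas looks roughly like this, loosely adapted from its README; the exact style values here are assumptions (Text needs explicit dimensions to be laid out):

```jsx
var React = require('react');
var ReactCanvas = require('react-canvas');

var Surface = ReactCanvas.Surface;
var Text = ReactCanvas.Text;

var MyComponent = React.createClass({
  render: function () {
    // Everything inside <Surface> is drawn onto a single <canvas> element
    return (
      <Surface top={0} left={0} width={window.innerWidth} height={window.innerHeight}>
        <Text style={{ top: 0, left: 0, width: window.innerWidth, height: 20, lineHeight: 20, fontSize: 14 }}>
          Here is some text
        </Text>
      </Surface>
    );
  }
});
```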
Where possible, it’s best to try automating image optimization so that it’s a first-class citizen in your build chain. To help, I thought I’d share some of the tools I use for this.
The post not only contains a list of Grunt plugins one can use, but also a few command-line and online tools. I’ve been using TinyPNG for quite some time now before uploading images to my blog; it can shave up to 83% (!) off a screenshot created in OS X.
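If Grunt is already part of your build, a minimal configuration sketch for the grunt-contrib-imagemin plugin could look like this (the src/dist paths are assumptions):

```js
// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    imagemin: {
      dynamic: {
        files: [{
          expand: true,                 // dynamic file expansion
          cwd: 'src/img/',              // source folder (assumed)
          src: ['**/*.{png,jpg,gif}'],  // which images to process
          dest: 'dist/img/'             // output folder (assumed)
        }]
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-imagemin');
  grunt.registerTask('default', ['imagemin']);
};
```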
Update 2017: By now it’s clear that a byte of JS and a byte of JPG are not equal in cost: the JS still needs to be parsed, evaluated, and executed after it’s downloaded, which comes at a cost too.
Before you go worrying about how to minify every last library or shave tests out of Modernizr, try and see if you can remove just one photo from your design. It will make a bigger difference.