10 best practices to containerize Node.js web applications with Docker

Solid list of tips by the folks over at Snyk:

By the time you’re at number 8 your mind may have wandered, but don’t skip that step! It not only allows you to build smaller images but also prevents unnecessary files (read: security risks) from being left inside your container. Basically you install your dependencies in one throwaway container, and then copy the resulting folder into your main container.
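As a rough sketch of what that multi-stage approach can look like for a Node.js app (the image tags and file names here are illustrative, not taken from the article):

# Stage 1: throwaway container that installs the dependencies
FROM node:12-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production

# Stage 2: the actual image; only the installed dependencies are copied over
FROM node:12-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY . .
USER node
CMD ["node", "server.js"]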

10 best practices to containerize Node.js web applications with Docker →

`dive` – A tool for exploring a Docker Image, Layer Contents, and discovering ways to shrink the size of your Docker/OCI Image

You can use dive to help you optimize your Docker image layers.

Say you have these two layers in your Dockerfile:

RUN wget http://xcal1.vodafone.co.uk/10MB.zip -P /tmp
RUN rm /tmp/10MB.zip

Then you’ll end up with 10MB of wasted space. dive will tell you, so that you can combine these into one optimized layer:

RUN wget http://xcal1.vodafone.co.uk/10MB.zip -P /tmp && rm /tmp/10MB.zip

You can also integrate it as a build step in your CI/CD pipeline, and make it break the build in case you don’t meet a certain quota.
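A minimal sketch of what that could look like, assuming you use dive’s CI mode together with a .dive-ci file holding your thresholds (the values below are just examples):

# .dive-ci: thresholds that make the analysis pass or fail
rules:
  lowestEfficiency: 0.95
  highestWastedBytes: 20MB
  highestUserWastedPercent: 0.10

# In the pipeline, run dive non-interactively against the freshly built image
CI=true dive my-image:latest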

Installation via Homebrew:

brew install dive

dive →

🔗 Related: In How to build smaller Docker images you can find some practical tips to keep your docker image size under control.

How are Docker Layers bundled into Docker Images?

It’s impossible to work with docker containers without docker images. In this post I want to talk about what makes docker images possible: the overlay filesystems.

Interesting to know how things work behind the scenes.
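If you want to get a feel for the mechanism, you can play with an overlay mount yourself on a Linux machine (the directories below are just examples):

# Files from lowerdir and upperdir show up combined under the mount point;
# writes only land in upperdir, leaving lowerdir untouched, just like image layers
mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged
sudo mount -t overlay overlay -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged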

How are docker images built? A look into the Linux overlay file-systems and the OCI specification →

Speed up your Docker builds in Google Cloud Build with Kaniko Cache

When building Docker images locally, Docker will leverage its build cache:

When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. As each instruction is examined, Docker looks for an existing image in its cache that it can reuse, rather than creating a new (duplicate) image.

Therefore it is important that you carefully consider the order of the steps in your Dockerfile.
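The classic example of that, sketched here for a Node.js app (file names are illustrative): copy the dependency manifests and install them first, and only then copy the rest of the source, so that a code change doesn’t invalidate the npm install layer.

FROM node:12-alpine
WORKDIR /app
# Only invalidated when the dependency manifests change
COPY package.json package-lock.json ./
RUN npm ci
# Invalidated on every code change, but the layers above stay cached
COPY . .
CMD ["node", "server.js"]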

~

When using Google Cloud Build however, there is – by default – no cache to fall back to. As you’re paying for every second spent building, it’d be handy to have some caching in place. Currently there are two options to do so:

  1. Using the --cache-from argument in your build config
  2. Using the Kaniko cache

⚠️ Note that the same rules as with the local cache layers apply for both scenarios: if you constantly change a layer in the earlier stages of your Docker build, it won’t be of much benefit.

~

Using the --cache-from argument (ref)

The easiest way to increase the speed of your Docker image build is by specifying a cached image that can be used for subsequent builds. You can specify the cached image by adding the --cache-from argument in your build config file, which will instruct Docker to build using that image as a cache source.

To make this work you’ll first need to pull the previously built image from the registry, and then refer to it using the --cache-from argument:

steps:
# Pull the previously built image to use as a cache source (don't fail on the very first build)
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    docker pull gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest || exit 0
# Build the image, using the pulled image as cache
- name: 'gcr.io/cloud-builders/docker'
  args: [
          'build',
          '-t', 'gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest',
          '--cache-from', 'gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest',
          '.'
        ]
images: ['gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest']

~

Using the Kaniko cache (ref)

Kaniko cache is a Cloud Build feature that caches container build artifacts by storing and indexing intermediate layers within a container image registry, such as Google’s own Container Registry, where it is available for use by subsequent builds.

To enable it, replace the cloud-builders/docker worker in your cloudbuild.yaml with the kaniko-project/executor.

steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=XXh

When using Kaniko, images are automatically pushed to Container Registry as soon as they are built. You don’t need to specify your images in the images attribute, as you would when using cloud-builders/docker.

Here’s a comparison of a first and second run:

From 8+ minutes down to 55 seconds with one simple change to our cloudbuild.yaml 🙂


Easily build and push Docker images with the `build-push-action` GitHub Action

The Docker folks have released their first GitHub Action, build-push-action, which builds and pushes Docker images and will log in to a Docker registry if required.

Building and pushing an image becomes really easy:

uses: docker/build-push-action@v1
with:
  username: ${{ secrets.DOCKER_USERNAME }}
  password: ${{ secrets.DOCKER_PASSWORD }}
  repository: myorg/myrepository
  tags: latest

Amongst other options you can also define a registry, which defaults to Docker Hub.
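For context, here’s a rough sketch of a complete workflow using it against a custom registry (the registry URL and repository name are made-up placeholders):

name: Build and push Docker image
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: docker/build-push-action@v1
      with:
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}
        registry: registry.example.com
        repository: myorg/myrepository
        tags: latest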

Build and push Docker images GitHub Action →

Going Serverless with Google Cloud Run

Recently I was invited as a speaker to Full Stack Ghent and PHP-WVL. At both events I brought a new talk called “Going Serverless with Google Cloud Run”.

Cloud Run is a fully managed compute platform by Google that automatically scales stateless containers. By abstracting away all infrastructure management, we developers can focus on what matters most: building great applications.

In this talk I’ll show you not only how to deploy PHP/Node applications onto Cloud Run, but also how to create a build pipeline using either Google Cloud Build or GitHub Actions.

The slides are up on slidr.io, and also embedded below:

Thanks to the organisers for having me, and thanks to the attendees for coming to see me. I hope you all had fun attending this talk. I know I had fun making it (and presenting it) 🙂

💁‍♂️ If you are a conference or meetup organiser, don’t hesitate to contact me to come speak at your event.

Delete untagged image refs in Google Container Registry, as a service, with gcr-cleaner

GCR Cleaner deletes untagged images in Google Container Registry. This can help reduce costs and keep your container images list in order.

GCR Cleaner is designed to be deployed as a Cloud Run service and invoked periodically via Cloud Scheduler.

Clever! All commands to install this one are provided.
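To give an idea of the “invoked periodically via Cloud Scheduler” part, a hedged sketch of what such a job could look like (the job name, Cloud Run URL, and service account are placeholders; the repo has the exact commands):

# Hit the gcr-cleaner Cloud Run service every Monday at 03:00
gcloud scheduler jobs create http gcr-cleaner-weekly \
  --schedule "0 3 * * 1" \
  --http-method POST \
  --uri "https://gcr-cleaner-xxxxxxxx-uc.a.run.app/http" \
  --oidc-service-account-email "gcr-cleaner-invoker@my-project.iam.gserviceaccount.com"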

gcr-cleaner →

How to build smaller Docker images

When you’re building a Docker image it’s important to keep the size under control. Having small images means ensuring faster deployment and transfers.

Wish I had found this post before I started playing with Docker, as it is packed with solid advice that I had to find out “along the way” myself.

In short:

  1. Find the right balance with the cache layers
  2. Use .dockerignore files (example below)
  3. Use the multi-stage builds feature
  4. Choose the right base image
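For number 2, a typical .dockerignore for a Node.js project could look something like this (the entries are just examples, tune them to your own project):

# .dockerignore: keep these out of the build context (and thus out of the image)
node_modules
npm-debug.log
.git
.env
Dockerfile
.dockerignore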

Number 3 especially was an eye-opener to me when I first discovered it. Basically it boils down to this: don’t do an npm install/composer install/npm build directly in your “main” image, but do it in a separate container and afterwards copy its results into your main image.

How to build a smaller Docker image →

Reverse-engineer a Dockerfile from a Docker image with dfimage

Might come in handy:

Similar to how the docker history command works, this Python script is able to re-create the Dockerfile (approximately) that was used to generate an image using the metadata that Docker stores alongside each image layer.

$ docker pull laniksj/dfimage
Using default tag: latest
latest: Pulling from dfimage

$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm laniksj/dfimage"

$ dfimage imageID
FROM buildpack-deps:latest
RUN useradd -g users user
RUN apt-get update && apt-get install -y bison procps
RUN apt-get update && apt-get install -y ruby
ADD dir:03090a5fdc5feb8b4f1d6a69214c37b5f6d653f5185cddb6bf7fd71e6ded561c in /usr/src/ruby
WORKDIR /usr/src/ruby
RUN chown -R user:users .
USER user
RUN autoconf && ./configure --disable-install-doc
RUN make -j"$(nproc)"
RUN make check
USER root
RUN apt-get purge -y ruby
RUN make install
RUN echo 'gem: --no-rdoc --no-ri' >> /.gemrc
RUN gem install bundler
ONBUILD ADD . /usr/src/app
ONBUILD WORKDIR /usr/src/app
ONBUILD RUN [ ! -e Gemfile ] || bundle install --system

Docker File From Image →

Via Patrick