When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. As each instruction is examined, Docker looks for an existing image in its cache that it can reuse, rather than creating a new (duplicate) image.
Therefore it is important to carefully consider the order of the instructions in your Dockerfile.
When using Google Cloud Build, however, there is by default no cache to fall back to. Since you're paying for every second spent building, it'd be handy to have some caching in place. Currently there are two options to do so:
Using the --cache-from argument in your build config
Using the Kaniko cache
⚠️ Note that the same rules as with local cache layers apply in both scenarios: if you frequently change a layer in an early stage of your Docker build, caching won't bring much benefit.
The easiest way to speed up your Docker image build is to specify a cached image to be used for subsequent builds. You can do this by adding the --cache-from argument to your build config file, which instructs Docker to use that image as a cache source.
To make this work you’ll first need to pull the previously built image from the registry, and then refer to it using the --cache-from argument:
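A minimal cloudbuild.yaml sketch of this pattern (the image name `my-image` is a placeholder for your own):

```yaml
steps:
  # Pull the previously built image; "|| exit 0" keeps the build
  # going when the image doesn't exist yet (cold cache)
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker pull gcr.io/$PROJECT_ID/my-image:latest || exit 0']
  # Build, using the pulled image as a cache source
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '-t'
      - 'gcr.io/$PROJECT_ID/my-image:latest'
      - '--cache-from'
      - 'gcr.io/$PROJECT_ID/my-image:latest'
      - '.'
images: ['gcr.io/$PROJECT_ID/my-image:latest']
```

The `|| exit 0` on the pull step matters: without it, the very first build (when no cached image exists yet) would fail outright.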
Kaniko cache is a Cloud Build feature that caches container build artifacts by storing and indexing intermediate layers within a container image registry, such as Google’s own Container Registry, where it is available for use by subsequent builds.
To enable it, replace the cloud-builders/docker worker in your cloudbuild.yaml with the kaniko-project/executor.
When using Kaniko, images are automatically pushed to Container Registry as soon as they are built. You don’t need to specify your images in the images attribute, as you would when using cloud-builders/docker.
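A minimal cloudbuild.yaml sketch using the Kaniko executor (again, `my-image` is a placeholder; `--cache-ttl` controls how long cached layers stay valid):

```yaml
steps:
  - name: 'gcr.io/kaniko-project/executor:latest'
    args:
      - '--destination=gcr.io/$PROJECT_ID/my-image:latest'
      - '--cache=true'
      - '--cache-ttl=6h'
```

Note the absence of an `images` attribute: Kaniko pushes to the registry itself via `--destination`.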
Here’s a comparison of a first and second run:
From over 8 minutes down to 55 seconds with one simple change to our cloudbuild.yaml 🙂
Recently I was invited as a speaker to Full Stack Ghent and PHP-WVL. At both events I brought a new talk called “Going Serverless with Google Cloud Run”.
Cloud Run is a fully managed compute platform by Google that automatically scales stateless containers. By abstracting away all infrastructure management, it lets us developers focus on what matters most: building great applications.
In this talk I show not only how to deploy PHP/Node applications to Cloud Run, but also how to create a build pipeline using either Google Cloud Build or GitHub Actions.
When you're building a Docker image, it's important to keep its size under control. Small images mean faster deployments and transfers.
Wish I had found this post before I started playing with Docker, as it is packed with solid advice that I had to figure out along the way myself:
Find the right balance with the cache layers
Use .dockerignore files
Use the multi-stage builds feature
Choose the right base image
Especially number 3 was an eye-opener to me when I first discovered it. It boils down to this: don't run npm install/composer install/npm build directly in your "main" image; run them in a separate build stage and afterwards copy the results into your main image.
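A minimal multi-stage Dockerfile sketch of that idea, assuming a Node app whose `npm run build` outputs to `/app/dist`:

```Dockerfile
# Stage 1: install dependencies and build in a throwaway container
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: copy only the build output into the final image;
# node_modules and build tooling never reach the image you ship
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

The final image contains only nginx and the built assets, not the hundreds of megabytes of dev dependencies used to produce them.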
Similar to how the docker history command works, this Python script can approximately re-create the Dockerfile that was used to generate an image, using the metadata Docker stores alongside each image layer.
$ docker pull laniksj/dfimage
Using default tag: latest
latest: Pulling from dfimage
$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm laniksj/dfimage"
$ dfimage imageID
RUN useradd -g users user
RUN apt-get update && apt-get install -y bison procps
RUN apt-get update && apt-get install -y ruby
ADD dir:03090a5fdc5feb8b4f1d6a69214c37b5f6d653f5185cddb6bf7fd71e6ded561c in /usr/src/ruby
RUN chown -R user:users .
RUN autoconf && ./configure --disable-install-doc
RUN make -j"$(nproc)"
RUN make check
RUN apt-get purge -y ruby
RUN make install
RUN echo 'gem: --no-rdoc --no-ri' >> /.gemrc
RUN gem install bundler
ONBUILD ADD . /usr/src/app
ONBUILD WORKDIR /usr/src/app
ONBUILD RUN [ ! -e Gemfile ] || bundle install --system