The serverless gambit: Building ChessMsgs.com on Cloud Run

Interesting read on how Greg Wilson built ChessMsgs.com, a site for playing chess games by sending links back and forth.

Instead of tweeting moves back and forth, players tweet links back and forth, and those links go to a site that renders the current chessboard, allows a new move, and creates a new link to paste back to the opponent. I wanted this to be 100% serverless, meaning that it will scale to zero and have zero maintenance requirements.

The board’s state is represented as a string using Forsyth–Edwards Notation (FEN):

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
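
For reference, a FEN string consists of six space-separated fields; for the starting position above they break down as follows:

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR   → piece placement, from rank 8 down to rank 1
w                                             → side to move (white)
KQkq                                          → castling availability (both sides, both wings)
-                                             → en passant target square (none)
0                                             → halfmove clock (for the fifty-move rule)
1                                             → fullmove number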

That same FEN is also fed to a service that generates static images of the board, for use in the meta tags.
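
To make that concrete: a link can carry the FEN (and the move just played) as query parameters, so the service itself never has to store any game state. Note that the parameter name below is illustrative; the actual ChessMsgs URL scheme may differ:

# A hypothetical link a player would paste to their opponent, with the FEN URL-encoded:
https://chessmsgs.com/?fen=rnbqkbnr%2Fpppppppp%2F8%2F8%2F4P3%2F8%2FPPPP1PPP%2FRNBQKBNR%20b%20KQkq%20e3%200%201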

The serverless gambit: Building ChessMsgs.com on Cloud Run →
ChessMsgs Source (GitHub) →

What’s new / coming to Google Cloud Run?

At Google Cloud Next ’20, Cloud Run Product Manager Steren Giannini (@steren) walked us through some of the new things that are coming to Google Cloud Run.

Some highlights:

  • VPC Peering
  • Gradual Rollouts (with custom URLs per tagged release)
  • New Load-Balancing Features (Multi Region, Cloud CDN, IAP)
  • Easy Continuous Deployment
  • Min instances (no cold starts; see the gcloud sketch after this list)
  • 4 vCPUs / 4 GB RAM
  • 1 hour request timeouts
  • Graceful shutdowns (SIGTERM)
  • Server-Side Streaming
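
To give two of those a concrete shape (min instances and gradual rollouts), here's a rough gcloud sketch; service name, image, and revision are placeholders, and exact flags may vary between gcloud releases:

# Keep at least one warm instance around, avoiding cold starts:
gcloud run services update my-service --min-instances=1

# Deploy a new revision under a tag, without routing production traffic to it;
# the tagged revision gets its own custom URL for testing:
gcloud run deploy my-service --image gcr.io/$PROJECT_ID/my-service --tag=candidate --no-traffic

# Once happy, gradually shift traffic towards it:
gcloud run services update-traffic my-service --to-revisions=my-service-00042-abc=10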

🧐 Unfamiliar with Google Cloud Run? Watch the video of a talk I recently gave on the subject.

Going Serverless with Google Cloud Run (JSConf.be)

Back in June I was invited to speak at JSConf.be. This year’s edition focused on DevSecOps and Security. My talk “Going Serverless with Google Cloud Run” — which I had previously presented at Full Stack Ghent and PHP-WVL — was a perfect match for it.

Cloud Run is a fully managed compute platform by Google that automatically scales stateless containers. By abstracting away all infrastructure management, it lets us developers focus on what matters most: building great applications.

In this talk I’ll show you not only how to deploy PHP/Node applications onto Cloud Run, but also how to create a build pipeline using either Google Cloud Build or GitHub Actions.
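
Stripped of the pipeline around it, the deployment itself boils down to two commands (project ID, service name, and region below are placeholders):

# Build the container image with Cloud Build, then deploy it to Cloud Run:
gcloud builds submit --tag gcr.io/$PROJECT_ID/my-app
gcloud run deploy my-app \
    --image gcr.io/$PROJECT_ID/my-app \
    --platform managed \
    --region europe-west1 \
    --allow-unauthenticated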

In mid-August the video was released; I’ve embedded it below:

The slides are up on slidr.io, and also embedded below:

Thanks to the organisers for having me, and thanks to the attendees for coming to see me. I hope you all had fun attending this talk. I know I had fun making it (and presenting it) 🙂

💁‍♂️ If you are a conference or meetup organiser, don’t hesitate to contact me to come speak at your event.

Speed up your Docker builds in Google Cloud Build with Kaniko Cache

When building Docker images locally, Docker will leverage its build cache:

When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. As each instruction is examined, Docker looks for an existing image in its cache that it can reuse, rather than creating a new (duplicate) image.

Therefore it is important that you carefully order your Docker steps, putting the least frequently changing ones first.

~

When using Google Cloud Build however, there – by default – is no cache to fall back to. As you’re paying for every second spent building, it’d be handy to have some caching in place. Currently there are two options to do so:

  1. Using the --cache-from argument in your build config
  2. Using the Kaniko cache

⚠️ Note that the same rules as with the local cache layers apply for both scenarios: if you constantly change a layer in the earlier stages of your Docker build, it won’t be of much benefit.

~

Using the --cache-from argument (ref)

The easiest way to increase the speed of your Docker image build is by specifying a cached image that can be used for subsequent builds. You can specify the cached image by adding the --cache-from argument in your build config file, which will instruct Docker to build using that image as a cache source.

To make this work you’ll first need to pull the previously built image from the registry, and then refer to it using the --cache-from argument:

steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    docker pull gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest || exit 0
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t', 'gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest',
    '--cache-from', 'gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest',
    '.'
  ]
images: ['gcr.io/$PROJECT_ID/[IMAGE_NAME]:latest']

~

Using the Kaniko cache (ref)

Kaniko cache is a Cloud Build feature that caches container build artifacts by storing and indexing intermediate layers within a container image registry, such as Google’s own Container Registry, where it is available for use by subsequent builds.

To enable it, replace the cloud-builders/docker worker in your cloudbuild.yaml with the kaniko-project/executor.

steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=XXh

When using Kaniko, images are automatically pushed to Container Registry as soon as they are built. You don’t need to specify your images in the images attribute, as you would when using cloud-builders/docker.

Here’s a comparison of a first and second run:

From over 8 minutes down to 55 seconds, with one simple change to our cloudbuild.yaml 🙂


Going Serverless with Google Cloud Run

Recently I was invited as a speaker to Full Stack Ghent and PHP-WVL. At both events I brought a new talk called “Going Serverless with Google Cloud Run”.

Cloud Run is a fully managed compute platform by Google that automatically scales stateless containers. By abstracting away all infrastructure management, it lets us developers focus on what matters most: building great applications.

In this talk I’ll show you not only how to deploy PHP/Node applications onto Cloud Run, but also how to create a build pipeline using either Google Cloud Build or GitHub Actions.

The slides are up on slidr.io, and also embedded below:

Thanks to the organisers for having me, and thanks to the attendees for coming to see me. I hope you all had fun attending this talk. I know I had fun making it (and presenting it) 🙂

💁‍♂️ If you are a conference or meetup organiser, don’t hesitate to contact me to come speak at your event.

Delete untagged image refs in Google Container Registry, as a service, with gcr-cleaner

GCR Cleaner deletes untagged images in Google Container Registry. This can help reduce costs and keep your container images list in order.

GCR Cleaner is designed to be deployed as a Cloud Run service and invoked periodically via Cloud Scheduler.

Clever! All commands needed to set it up are provided.
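
Condensed, the setup looks something like this. Beware: the image path, endpoint, and payload here are assumptions from memory; the repo's README has the authoritative commands:

# Deploy GCR Cleaner as a private Cloud Run service:
gcloud run deploy gcr-cleaner \
    --image gcr.io/gcr-cleaner/gcr-cleaner \
    --platform managed \
    --no-allow-unauthenticated

# Have Cloud Scheduler invoke it nightly, authenticating via a service account:
gcloud scheduler jobs create http gcr-cleaner-job \
    --schedule '0 3 * * *' \
    --http-method POST \
    --uri 'https://gcr-cleaner-xxxx.a.run.app/http' \
    --oidc-service-account-email gcr-cleaner@my-project.iam.gserviceaccount.com \
    --message-body '{"repos": ["gcr.io/my-project/my-image"]}'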

gcr-cleaner (GitHub) →

google/cloud-functions-framework – Google Cloud Functions Framework for PHP

google/cloud-functions-framework is an open source FaaS (Function as a Service) Framework for writing portable PHP functions.

An example function looks like this:

<?php

use Symfony\Component\HttpFoundation\Request;

function helloHttp(Request $request)
{
    return "Hello World from PHP HTTP function!" . PHP_EOL;
}

One can invoke it locally by executing the included router as follows:

export FUNCTION_TARGET=helloHttp
export FUNCTION_SIGNATURE_TYPE=http
export FUNCTION_SOURCE=index.php
php -S localhost:8080 vendor/bin/router.php

Alternatively you can run it in a Docker container (which you can then deploy to Cloud Run):

docker build . \
    -f vendor/google/cloud-functions-framework/examples/hello/Dockerfile \
    -t my-cloud-function

docker run -p 8080:8080 \
    -e FUNCTION_TARGET=helloHttp \
    -e FUNCTION_SIGNATURE_TYPE=http \
    my-cloud-function

Installation via Composer:

composer require google/cloud-functions-framework

google/cloud-functions-framework →

⚠️ There is a sample Dockerfile included with the repo, but a first glance tells me it needs some polishing: it uses a full-blown GAE_RUNTIME Docker base image and installs the Composer dependencies directly into the image itself, instead of relying on multi-stage builds.

Cloud Run vs App Engine: What’s the difference?

Simple and to-the-point article, with a few commands included, by Dirk Hoekstra:

In a nutshell, you give Google’s Cloud Run a Docker container containing a webserver. Google will run this container and create an HTTP endpoint.

With Google’s App Engine however you tell Google how your app should be run. The App Engine will create and run a container from these instructions.

I’ve been using Cloud Run for a few projects recently, and I must say I really like it! It just works™ without me having to think about it too much, whilst still allowing me to fine-tune the environment.

Cloud Run vs App Engine →

💵 This linked article is stuck behind Medium’s metered paywall, which may prevent you from reading it. Open the link in an incognito window to bypass Medium’s ridiculous reading limit.

Running the same Node.js code on Google Cloud Functions, App Engine, and Cloud Run

Google Cloud has a number of options to run your code. We can deploy a function to Cloud Functions, an app to App Engine, or an app with a custom runtime (a Docker container) to Cloud Run.

In this post, the same code snippet is deployed to all three of these Google Cloud Platform offerings. Locally, the Functions Framework is used.
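
In shorthand, the local run and the three deploys look something like this for a Node.js HTTP function (function/service names are placeholders; the post walks through the actual commands):

# Locally, via the Functions Framework:
npx @google-cloud/functions-framework --target=helloHttp

# Cloud Functions:
gcloud functions deploy helloHttp --runtime nodejs10 --trigger-http

# App Engine (driven by an app.yaml):
gcloud app deploy

# Cloud Run (driven by a container image):
gcloud builds submit --tag gcr.io/$PROJECT_ID/hello
gcloud run deploy hello --image gcr.io/$PROJECT_ID/hello --platform managed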

Portable code migrating across Google Cloud’s serverless platforms →

Building a Website Screenshot API with Puppeteer and Google Cloud Functions

Here’s the source of a Google Cloud Function that uses Puppeteer to take a screenshot of a given website and stores the resulting screenshot in a bucket on Google Cloud Storage:

const puppeteer = require('puppeteer');
const { Storage } = require('@google-cloud/storage');

const GOOGLE_CLOUD_PROJECT_ID = "screenshotapi";
const BUCKET_NAME = "screenshot-api-net";

exports.run = async (req, res) => {
  res.setHeader("content-type", "application/json");
  
  try {
    const buffer = await takeScreenshot(req.body);
    
    let screenshotUrl = await uploadToGoogleCloud(buffer, "screenshot.png");
    
    res.status(200).send(JSON.stringify({
      'screenshotUrl': screenshotUrl
    }));
    
  } catch(error) {
    res.status(422).send(JSON.stringify({
      error: error.message,
    }));
  }
};

async function uploadToGoogleCloud(buffer, filename) {
  const storage = new Storage({
    projectId: GOOGLE_CLOUD_PROJECT_ID,
  });

  const bucket = storage.bucket(BUCKET_NAME);
  const file = bucket.file(filename);

  await uploadBuffer(file, buffer);

  // Make the uploaded screenshot publicly readable.
  await file.makePublic();

  return `https://${BUCKET_NAME}.storage.googleapis.com/${filename}`;
}

async function takeScreenshot(params) {
  const browser = await puppeteer.launch({
    args: ['--no-sandbox'],
  });
  const page = await browser.newPage();
  await page.goto(params.url, { waitUntil: 'networkidle2' });

  const buffer = await page.screenshot();

  await page.close();
  await browser.close();

  return buffer;
}

// Wrap the callback-style file.save() in a Promise, propagating errors.
function uploadBuffer(file, buffer) {
  return new Promise((resolve, reject) => {
    file.save(buffer, (err) => (err ? reject(err) : resolve()));
  });
}

Usage:

curl -X POST -d '{"url": "https://github.com"}' https://google-cloud-endpoint/my-function

Building a Website Screenshot API →

💡 If I were to run this in production I’d extend the code to first check whether a screenshot already exists in the bucket, and – if it’s not too old – redirect to it.
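
Sketched with gsutil for illustration (inside the function itself, the exists() method on the Storage File object would do the same check):

# Only regenerate when no screenshot exists yet (illustrative; a production version
# would also compare the object's age against a maximum):
gsutil stat gs://screenshot-api-net/screenshot.png \
    && echo 'already cached, redirect to it' \
    || echo 'not cached yet, generate it'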