JSONbox – Free HTTP based JSON Storage

jsonbox.io lets you store, read & modify JSON data over HTTP APIs for free. Copy the URL below and start sending HTTP requests to play around with your data store.

Oh, this will come in handy for workshops and quick Proofs of Concept:

curl -X POST 'https://jsonbox.io/demobox_6d9e326c183fde7b' \
    -H 'content-type: application/json' \
    -d '{"name": "Jon Snow", "age": 25}'

// {"_id":"5d776a25fd6d3d6cb1d45c51","name":"Jon Snow","age":25,"_createdOn":"2019-09-10T09:17:25.607Z"}

Don’t know about the retention policy though 😉
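Reading and modifying use that same box URL. A quick sketch of the other verbs — the record id is the one from the response above; check the jsonbox docs for the exact API shape:

```shell
# Read all records in the box
curl -X GET 'https://jsonbox.io/demobox_6d9e326c183fde7b'

# Update a single record by its _id
curl -X PUT 'https://jsonbox.io/demobox_6d9e326c183fde7b/5d776a25fd6d3d6cb1d45c51' \
    -H 'content-type: application/json' \
    -d '{"name": "Jon Snow", "age": 26}'

# Delete it again
curl -X DELETE 'https://jsonbox.io/demobox_6d9e326c183fde7b/5d776a25fd6d3d6cb1d45c51'
```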

jsonbox.io →

Using AWS’ “Serverless Image Handler” to roll your own Image Transform Service

Amazon AWS has offered a Serverless Image Handler for a while that allows you to spin up an AWS Lambda function to create your own private little image transform service that is inexpensive, fast, and fronted by the CloudFront content delivery network (CDN).

Whenever an image is uploaded to the bucket, a Lambda function processes it and creates all other required versions.

Under the hood it uses SharpJS, so you could always use their code to make it run on other Cloud Providers 😉

serverless-image-handler Source Code (GitHub) →
Setting Up Your Own Image Transform Service →

Ship legacy JavaScript and CSS files in a Webpack Project with webpack-merge-and-include-globally

One of the projects that I’m working on is quite reliant on jQuery and Bootstrap. As we’re introducing new features (such as a few React-based components, and Stylus for CSS) into said project, we’ve also introduced Webpack. Now, we don’t want to run jQuery or Bootstrap through Babel (using Webpack), but we do want to keep on shipping them untouched to the user.

However, we do want those files to “live” in our Webpack ecosystem. Basically we just want to concatenate those assets, so that we can ship one single legacy.js and one legacy.css file to the user. This is where webpack-merge-and-include-globally comes into play:

Webpack plugin to merge your source files together into single file, to be included in index.html, and achieving same effect as you would by including them all separately through <script> or <link>.

Here’s how we use the plugin:

import webpack from 'webpack';
import MergeIntoSingleFilePlugin from 'webpack-merge-and-include-globally';

const config = (env) => ({
	// …

	plugins: [
		// …

		new MergeIntoSingleFilePlugin({
			files: {
				// map each output file to the source files to concatenate, e.g.:
				'js/legacy.js': [
					'node_modules/jquery/dist/jquery.min.js',
					'node_modules/bootstrap/dist/js/bootstrap.min.js',
				],
				'css/legacy.css': [
					'node_modules/bootstrap/dist/css/bootstrap.min.css',
				],
			},
		}),
	],
});

export default config;

With this in place we include legacy.js and legacy.css in the project to keep things working as they were before. Our “new” CSS and JS is written separately and is run through the entire Webpack build pipeline, like one would normally do. In the end we end up with four includes:

<!-- legacy files, concatenated -->
<script src="/js/legacy.js"></script>
<link rel="stylesheet" href="/css/legacy.css" />

<!-- our new files, compiled through Babel/Stylus/etc. -->
<script src="/js/app.js"></script>
<link rel="stylesheet" href="/css/styles.css" />

Installation per npm/yarn

yarn add -D webpack-merge-and-include-globally


Did this help you out? Like what you see?
Consider donating.

I don’t run ads on my blog nor do I do this for profit. A donation, however, would always put a smile on my face. Thanks!

☕️ Buy me a Coffee ($3)

Visually Search using your Phone’s Camera with The Web Perception Toolkit

The Web Perception Toolkit is an open-source library that provides the tools for you to add visual search to your website. The toolkit works by taking a stream from the device camera, and passing it through a set of detectors. Any markers or targets that are identified by the detectors are mapped to structured data on your site, and the user is provided with customizable UI that offers them extended information.

This mapping is defined using Structured Data (JSON-LD). Here’s a barcode for example:

{
  "@context": "https://schema.googleapis.com/",
  "@type": "ARArtifact",
  "arTarget": {
    "@type": "Barcode",
    "text": "012345678912"
  },
  "arContent": {
    "@type": "WebPage",
    "url": "http://localhost:8080/demo/artifact-map/products/product1.html",
    "name": "Product 1",
    "description": "This is a product with a barcode",
    "image": "http://localhost:8080/demo/artifact-map/products/product1.png"
  }
}

When the user now scans an object with that barcode (as defined in arTarget), the description page (defined in arContent) will be shown on screen.

Besides barcodes, other supported detectors include QR codes, geolocation, and 2D images. ML-based image classification is not supported yet, but is planned.

The Web Perception Toolkit: Getting Started →
Visual searching with the Web Perception Toolkit →
Web Perception Toolkit Repo (GitHub) →

Implementing Dark Mode on adactio.com

Jeremy recently implemented “Dark Mode” on his site. Thanks to CSS Custom Properties the implementation is pretty straightforward (also see my writeup here).
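The custom-property approach boils down to something like this — a minimal sketch, not Jeremy’s exact code; the variable names are illustrative:

```css
/* light (default) palette */
:root {
  --background-color: #fff;
  --text-color: #333;
}

/* dark palette, applied when the OS/browser prefers a dark color scheme */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #111;
    --text-color: #eee;
  }
}

body {
  background-color: var(--background-color);
  color: var(--text-color);
}
```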

But of course, Jeremy added some extra details that make the difference:

  1. In Dark Mode images are toned down to make them blend in better, as detailed by Melanie Richards:

    @media (prefers-color-scheme: dark) {
      img {
        filter: brightness(.8) contrast(1.2);
      }
    }

  2. Using the picture element, and media queries on the various source elements it contains, you can ship a dark version of an image (such as a map) to the visitor:

    <picture>
      <source media="(prefers-color-scheme: dark)" srcset="https://api.mapbox.com/styles/v1/mapbox/dark-v10/static...">
      <img src="https://api.mapbox.com/styles/v1/mapbox/outdoors-v10/static..." alt="map">
    </picture>

Implementing Dark Mode on adactio.com →

GitHub CI Workflow for PHP applications

Mattias Geniar has shared his GitHub Workflow to make GitHub do the CI work for PHP applications:

on: push
name: Run phpunit testsuite

jobs:
  phpunit:
    runs-on: ubuntu-latest
    container:
      image: mattiasgeniar/php73

    steps:
    - uses: actions/checkout@v1
      with:
        fetch-depth: 1

    - name: Install composer dependencies
      run: |
        composer install --prefer-dist --no-scripts -q -o;

    - name: Prepare Laravel Application
      run: |
        cp .env.example .env
        php artisan key:generate

    - name: Compile assets
      run: |
        yarn install --pure-lockfile
        yarn run production --progress false

    - name: Set custom php.ini settings
      run: echo 'short_open_tag=off' >> /usr/local/etc/php/php.ini

    - name: Run Testsuite
      run: vendor/bin/phpunit tests/

Tests get run in a custom mattiasgeniar/php73 Docker container, which contains PHP 7.3 and extensions (including imagick).

With some fiddling you can easily adjust this to:

  1. Run these tests for pull requests too (just like you can automatically run linters on pull requests)
  2. Extend it to become a CI/CD pipeline to automatically deploy your code too

A GitHub CI workflow tailored to modern PHP applications →

Simple Scroll Snapping Carousel (Flexbox Layout / Grid Layout)

Here are two small scroll-snapping carousels that I made. In the top one the items are laid out using CSS Flexbox, whereas the second one uses CSS Grid.

The code also works fine with arbitrarily sized .scroll-item elements; they don’t need to have the same width.
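The essence of such a carousel is only a few lines of CSS. Here’s a minimal flexbox sketch — the class names are illustrative, not necessarily the exact ones from the demo:

```css
.scroll-container {
  display: flex;
  overflow-x: auto;
  /* snap each child to the start edge of the container */
  scroll-snap-type: x mandatory;
}

.scroll-item {
  flex: 0 0 auto; /* items keep their own width; widths may differ */
  scroll-snap-align: start;
}
```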

ℹ️ Want to know more about scroll snapping? Check out Well-Controlled Scrolling with CSS Scroll Snap by Google, and Practical CSS Scroll Snapping on CSS Tricks.

😅 Great to see that CSS Scroll Snapping moved away from the snapList, as first proposed in 2013.


Working with symlinked packages in PHP

When developing a PHP library/package, you most likely also have a project on disk that consumes said library. Take a WordPress plugin for example: to test it, you need a WordPress installation — both are linked but separate projects. To speed up development you can tell Composer to use the local version of the package, instead of having to copy files from folder to folder as you develop.


I’m assuming a folder structure like this:

bramus in ~/repos
$ tree -L 1
├── my-library
└── my-project

If my-project were to use the published version of my-library, you would run the following command:

bramus in ~/repos/my-project
$ composer require author/my-library

The ~/repos/my-project/composer.json would then look as follows:

{
    // …
    "require": {
        "author/my-library": "*"
    }
}


During development you don’t want to be editing the copy of my-library which is contained inside the my-project repo. What you want is my-project to use the local version of ~/repos/my-library, so that you can directly edit those.

To solve this, Composer allows you to configure the package sources using the repositories option of its configuration. To refer to the local copy adjust ~/repos/my-project/composer.json so that it has an entry pointing to ~/repos/my-library/:

{
    // …
    "require": {
        "author/my-library": "*"
    },
    "repositories": [
        {
            "type": "path",
            "url": "../my-library"
        }
    ]
}
With this addition, re-run the composer require command, and it’ll symlink ~/repos/my-library/ into ~/repos/my-project/vendor/author/my-library/:

bramus in ~/repos/my-project
$ composer require author/my-library

- Installing author/my-library (dev)
Symlinked from ../my-library

When you now edit code inside ~/repos/my-library/, ~/repos/my-project will automatically be up-to-date 🙂

💁‍♂️ You can also use repositories to refer to privately published repositories. No need to fiddle with Satis or the like.
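For a private repository that would look something like this — a sketch, with a hypothetical package URL; the vcs type makes Composer fetch straight from the Git host, assuming your SSH key has access:

```json
{
    "repositories": [
        {
            "type": "vcs",
            "url": "git@github.com:author/my-private-library.git"
        }
    ]
}
```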


Idiosyncrasies of the HTML parser

Highly interesting book (in the making) by Simon Pieters, on how HTML parsers work:

The HTML parser is a piece of software that processes HTML markup and produces an in-memory tree representation (known as the DOM).

The HTML parser has many strange behaviors. This book will highlight the ins and outs of the HTML parser, and contains almost-impossible quizzes.

Not for beginning audiences!

Idiosyncrasies of the HTML parser →

Overriding the PHP version to use when installing Composer dependencies

If you have a (legacy) PHP project running on a legacy server (running PHP 5.4.27 for example), but are locally developing with a more modern PHP version (PHP 7.4 for example), you might end up installing dependencies that are not compatible with the PHP version on the server.

To bypass this, you can tell Composer, via its config property, which PHP version you’re running on production:

{
    // …
    "config": {
        "platform": {
            "php": "5.4.27"
        }
    }
}
When you now run composer require package, it will only install dependencies that are compatible with that PHP version (instead of the PHP version that’s executing Composer).

💁‍♂️ You might also want to run that legacy application locally in a Docker container with said PHP version. The Docker West PHP Docker Images will come in handy, although versions prior to PHP 5.6 are not supported.

🔐 Even better, of course, is to phase out the production server running that outdated/unsupported/insecure PHP version …
