Fixing the valet share 301 Redirect Loop

One of the nice things about Laravel Valet is that it includes an easy way to make your local site publicly available. For this it has Ngrok built in. To use it, just run the valet share command, and your local site will be shared through an Ngrok subdomain.

However, when combining valet share with valet secure (which serves your local site over HTTPS) it won’t work: you’ll end up in a 301 redirect loop when visiting your site through the Ngrok domain.

To fix this issue there are two options:

  1. The old way: Manually edit your site's Nginx config file and remove the block that listens on port 80.

    As Valet issue #382 details, you can fix this error by manually editing the Nginx configuration for your site.

    Say your site is mysite.test, then its Nginx config file can be found at ~/.config/valet/Nginx/mysite.test.

    💁‍♂️ Can’t find the file at said location?

    Note that Valet versions < 2.1.0 use the ~/.valet folder instead of the ~/.config/valet/ folder

    Inside the ~/.config/valet/Nginx/mysite.test file, look for a block that looks like this:

    server {
        listen 80;
        server_name mysite.test;
        return 301 https://$host$request_uri;
    }

    Now remove that block entirely (or comment it out using the # sign), save the file, and restart Valet using valet restart.

    Using valet share will now work fine 🙂

  2. The new way: Upgrade Valet to version 2.1.3 or newer

    It came to my attention through Valet issue #148 that Valet 2.1.3 was released just three days ago and that it contains an out-of-the-box fix for this bug … yay! 🎉

    To upgrade your Valet install run this:

    # update package
    composer global update
    # Make Valet do its housekeeping
    valet install

    Using valet share will now work fine 🙂

    Note that when coming from Valet < 2.1.0 the ~/.valet folder will have moved to ~/.config/valet/

There ya go 🙂

Did this help you out? Like what you see?
Thank me with a coffee.

I don't do this for profit but a small one-time donation would surely put a smile on my face. Thanks!

☕️ Buy me a Coffee (€3)

To stay in the loop you can follow @bramus or follow @bramusblog on Twitter.

Chrome 66 to Untrust Symantec-issued Certificates

Chrome is really tightening up the security game here. In Chrome 66 it will untrust Symantec-issued SSL/TLS certificates, after Symantec repeatedly screwed up by wrongly issuing certificates for domains, including itself.

Thanks to a decision in September by Google to stop trusting Symantec-issued SSL/TLS certs, from mid-April Chrome browser users visiting websites using a certificate from the security biz issued before June 1, 2016 or after December 1, 2017 will be warned that their connection is not private and someone may be trying to steal their information. They will have to click past the warning to get to the website.

This will also affect certs that use Symantec as their root of trust even if they were issued by an intermediate organization. For example, certificates handed out by Thawte, GeoTrust, and RapidSSL that rely on Symantec will be hit by Google’s crackdown. If in doubt, check your cert’s root certificate authority to see if it’s Symantec or not.
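One way to do that check locally is with openssl. A minimal sketch, with a throwaway self-signed certificate standing in for a real chain (the file names are assumptions; for a real check, point -certfile at your site’s actual chain file):

```shell
# For demonstration only: generate a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out chain.pem \
  -days 1 -subj "/CN=demo"

# Print the subject and issuer of every certificate in the chain,
# so you can spot a Symantec (or GeoTrust/Thawte/RapidSSL) root.
openssl crl2pkcs7 -nocrl -certfile chain.pem | openssl pkcs7 -print_certs -noout
```

The crl2pkcs7 trick exists purely to feed a file containing multiple PEM certificates through openssl in one go.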

Arkadiy Tetelman recently ran an experiment and made an inventory of how many sites in the Alexa Top 1 Million will be affected by this.

Among the 100,000 affected sites are several well-known domains (some have gotten a new certificate by now).

Quantifying Untrusted Symantec Certificates →
Chrome’s Plan to Distrust Symantec Certificates →
Beware the looming Google Chrome HTTPS certificate apocalypse! →

Extended Validation Is Broken is an interesting site by Ian Carroll. See it? Take a closer look at the certificate.

Yes, that’s one for “Stripe, Inc” … but, that’s not “Stripe, Inc” is it?

This site uses an EV certificate for “Stripe, Inc”, that was legitimately issued by Comodo. However, when you hear “Stripe, Inc”, you are probably thinking of the payment processor incorporated in Delaware. Here, though, you are talking to the “Stripe, Inc” incorporated in Kentucky. This problem can also appear when dealing with different countries.

Yes, what Ian did was register a company with the same name in another state. And it’s easy-peasy to do so:

From incorporation to issuance of the EV certificate, I spent less than an hour of my time and about $177. $100 of this was to incorporate the company, and $77 was for the certificate. It took about 48 hours from incorporation to the issuance of the certificate.

When it comes to homograph attacks, browsers counter them by showing punycode in the address bar; for this EV problem, though, I’m very curious if and how it can be fixed.

Let this be a reminder to always be cautious. Type in addresses manually when in doubt. When still in doubt after that, don’t proceed.

Extended Validation Is Broken →

Get HTTPS working on localhost, with green padlock

In On “Secure Contexts” in Firefox, HTTPS for local development, and a potential nice gesture by Chrome I said:

One of the things that’s still not really frictionless for your local development domains is the use of certificates.

To circumvent the use of self-signed certificates I explained in said article how I have a dedicated domain with all subdomains pointing to 127.0.0.1. That way I can use proper certificates for my local dev needs.

In How to get HTTPS working on your local development environment in 5 minutes another technique – one which works with all domains, even localhost itself – is unearthed: create a root certificate yourself, and trust it on your system. Any certificate linked to that root CA will then also be trusted, thus any host using it will have a valid certificate no matter how it’s being accessed (curl, browser, etc.)
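The gist of that technique can be sketched with openssl (file names and the 30-day validity are assumptions, and a real setup would also add a subjectAltName for the browser’s benefit):

```shell
# 1. Create a root key and a self-signed root CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout rootCA.key -out rootCA.pem \
  -days 30 -subj "/CN=Local Dev Root CA"

# 2. Create a key and a certificate signing request for localhost
openssl req -newkey rsa:2048 -nodes -keyout localhost.key -out localhost.csr \
  -subj "/CN=localhost"

# 3. Sign the CSR with the root CA
openssl x509 -req -in localhost.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -out localhost.crt -days 30

# 4. Any client that trusts rootCA.pem will now accept localhost.crt
openssl verify -CAfile rootCA.pem localhost.crt
```

After trusting rootCA.pem once on your system (e.g. via Keychain Access on macOS), every certificate you sign with it is trusted too.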

I like this “hack” because you only have to trust the generated root CA once (and not each certificate separately). I guess if you combine it with the dedicated-domain approach you can also get it to work on your smartphone talking to local dev stuff (or is there any way to have manual DNS entries on your iOS device?).

How to get HTTPS working on your local development environment →

On “Secure Contexts” in Firefox, HTTPS for local development, and a potential nice gesture by Chrome

👋 This post also got published on Medium. If you like it, please give it some love and a clap over there.

Earlier today, in a post entitled Secure Contexts Everywhere, it was announced on the Mozilla Security Blog that Firefox from now on will only expose new features such as new CSS properties to secure contexts:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR. In contrast, a new CSS color keyword would likely not be restricted to secure contexts.

Whilst I somewhat applaud this move, especially in the age of data theft and digital espionage, it’s – pardon my French – quite a ballsy thing to do. CSS Properties, really?

You might not be entirely aware of it, but features like Geolocation, the Payment Request API, Web Bluetooth, Service Workers, etc. are already locked down to secure contexts only.

But what is this secure context they speak of exactly? Is it HTTPS? Or is it more than that?

Let’s take a look at the MDN Web Docs on Secure Contexts:

A context will be considered secure when it’s delivered securely (or locally), and when it cannot be used to provide access to secure APIs to a context that is not secure. In practice, this means that for a page to have a secure context, it and all the pages along its parent and opener chain must have been delivered securely.

Roughly translated: http://localhost/ is considered to be a secure context, so that one will have the latest and greatest. However, the docs aren’t that clear on whether the definition of “local” involves actual DNS resolving or not:

Locally delivered files such as http://localhost and file:// paths are considered to have been delivered securely.

What about development domains – think project.local or project.test – pointing to 127.0.0.1? Right now we’re left in the dark here …

Do note however that Mozilla “will provide developer tools to ease the transition to secure contexts and enable testing without an HTTPS server”, but the related bug is still unassigned.


It’s best to have your local web development stack mimic the production environment as closely as possible. Over the years we’ve seen the rise of tools like Vagrant and Docker to easily allow us to do so: you can run the same OS and version with the exact same Apache version with the exact same PHP version with the exact same MySQL version with … oh you get the point.

One of the things that’s still not really frictionless right now for your local development domains is the use of certificates.

Yes, you could use self-signed certificates for this, but that’s:

  • Not always possible: The Fetch API for example doesn’t allow the ignoring of self-signed certificates.
  • Asking for potential security problems the moment you go into production: I once read a post about some production code that still had the --insecure flag enabled on the curl requests 🤦‍♂️. It was put there in the first place because of the self-signed certificates used during development.

Another approach – and one that I am taking – is to use an actual domain for your dev needs. Each project I build is reachable via a subdomain on that dedicated domain, all pointing to 127.0.0.1 thanks to an entry in my hosts file. Since it’s an actual domain I am using, I can use an actual and correctly signed certificate along with it when it comes to HTTPS.
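For completeness, such a hosts file entry simply maps each dev subdomain to the loopback address (the domain below is a made-up example):

```
127.0.0.1 project1.devdomain.example
127.0.0.1 project2.devdomain.example
```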

At the cost of only a domain name (you can get your certificates for free via Let’s Encrypt, which now supports wildcards too) that’s quite a nice solution I think.


Back in December 2017 version 63 of Google Chrome was released. One of the changes included is that Chrome now requires https for .dev and .foo domains. This was ill-received by quite a lot of developers as most of them – including me before – use(d) the .dev TLD for local development purposes.

And yes, the pain is real … just do a quick search on Twitter for chrome https dev and you’ll see that lots of developers have wasted – and still are wasting – many hours on this breaking change.

Yes, there’s a (lengthy) workaround available to get the green lock in Chrome, but I wouldn’t really recommend it. It would require you to repeat a similar procedure in all browsers you have running/are testing in, and this per .dev domain you have. On top of that it won’t fix any out-of-browser contexts.

Could Google have prevented this .dev debacle? Of course. But I don’t see a reversal of this change coming any time soon though …

Can Google still do something about it? Well … I have an idea about this:

Just imagine if Google were to point all .dev domains to 127.0.0.1, that’d truly be paving the cowpaths! Add free SSL certificates for all .dev domains along with that and we’d jump quite far. With announcements like Secure Contexts Everywhere (referenced at the very top of this page) this idea seems like a real winner:

  1. (Chrome) Us developers can keep using the .dev TLD for projects in development.
  2. Us developers would no longer need self-signed certificates for projects in development. We could use certificates directly signed by Google’s CA (or by Let’s Encrypt in case Google doesn’t feel like providing certificates).
  3. (Firefox) Domains with the .dev TLD (and with an SSL Certificate) can be considered as a Secure Context, just like any other domain reachable via HTTPS.

One can dream, right?



Monitoring for the encrypted web with “Oh Dear!”

Because there’s more to HTTPS than just monitoring certificate expiration dates.

Next to SSL Certificate Expirations, Oh Dear! also scans for Mixed Content, Revoked (Intermediate) Certificates, the use of bad or insecure ciphers, etc.

Knowing that this service is built by Dries Vints, Freek Van der Herten, and Mattias Geniar tells me that this app will be ace!

Oh Dear! →

Mixed Content and Responsive Images

Interesting issue Jonathan Snook ran into when switching a site over to HTTPS. Even though images from HTTP resources should still get loaded by the browser (as they are Passive Mixed Content, and thus tolerated), they weren’t:

After some digging, I noticed that the images that weren’t loading were those defined using the <picture> element. Surely that’s not expected behaviour, is it? Turns out, it is.

As Jonathan found out the hard way, the Mixed Content Spec does actively block images, but only when they are defined in an imageset – which is the case for <picture>.
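A minimal illustration (with a hypothetical insecure image URL): the bare img below is tolerated as passive mixed content, while the same URL inside picture goes through an imageset request and gets blocked:

```html
<!-- Tolerated as passive mixed content: loads, but no green padlock -->
<img src="http://cdn.example.com/photo.jpg" alt="photo">

<!-- Blocked: srcset is fetched as an imageset request -->
<picture>
  <source srcset="http://cdn.example.com/photo.jpg">
  <img src="http://cdn.example.com/photo.jpg" alt="photo">
</picture>
```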

Of course it is recommended to migrate all resources to HTTPS and not to run with Passive Mixed Content because browsers, today, will not give your site the green checkmark and, one day, will most likely block Passive Mixed Content too.

Mixed Content and Responsive Images →

Want to scan your site for Mixed Content? Use Mixed Content Scan, a tool I wrote to do just that 🙂

Marking HTTP As Non-Secure

My name is Bramus and I approve this message:

We, the Chrome Security Team, propose that user agents (UAs) gradually change their UX to display non-secure origins as affirmatively non-secure. We intend to devise and begin deploying a transition plan for Chrome in 2015. The goal of this proposal is to more clearly display to users that HTTP provides no data security.

It soon will be 2015, time to embrace HTTPS.

Marking HTTP As Non-Secure →

Mixed Content Scan: Scan your HTTPS-enabled website for Mixed Content


With my recent move to HTTPS I wasn’t sure if there were any pages left on my site that had Mixed Content or not.

If an HTTPS page includes content retrieved through regular, cleartext HTTP, then the connection is only partially encrypted. […] When a webpage exhibits this behavior, it is called a mixed content page. (src)

As modern browsers block most Mixed Content from being downloaded this may leave your HTTPS-enabled website broken.

To check this I wrote a little PHP CLI app to scan an HTTPS website for Mixed Content. The script starts crawling at a given URL, and processes the page:

  • All contained img[src], iframe[src], script[src], and link[href][rel="stylesheet"] elements are checked for being Mixed Content or not.
  • All contained a[href] elements linking to the same or a deeper level are successively crawled and scanned for Mixed Content.
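The core of that first check boils down to spotting http:// URLs in those attributes of a page served over HTTPS. A quick-and-dirty shell approximation (page.html is a made-up sample; the actual scanner is a PHP app):

```shell
# A hypothetical sample page with one insecure resource
cat > page.html <<'HTML'
<img src="http://cdn.example.com/pic.png">
<script src="https://cdn.example.com/app.js"></script>
HTML

# Flag src/href attributes that still point at plain http://
grep -Eo '(src|href)="http://[^"]*"' page.html
```

This only matches the low-hanging fruit; the real tool parses the DOM and crawls linked pages as described above.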

The script itself will start scanning and give feedback whilst running. When Mixed Content is found, the URLs will be shown on screen:

[2014-12-10 15:38:31] 00000 -
[2014-12-10 15:38:32] 00001 -
[2014-12-10 15:38:33] 00002 -
…
[2014-12-10 15:38:57] 00056 -


Invoke the script by passing it the root URL to scan, e.g.:

$ php bin/scanner.php https://www.example.com/

To speed things up it’s also possible to define a set of ignore patterns. The default ignore patterns defined are those for a WordPress installation:

return [
    '^{$rootUrl}/page/(\d+)/$', // Paginated Overview Links
    // '^{$rootUrl}/(\d+)/(\d+)/', // Single Post Links
    '^{$rootUrl}/tag/', // Tag Overview Links
    '^{$rootUrl}/author/', // Author Overview Links
    '^{$rootUrl}/category/', // Category Overview Links
    '^{$rootUrl}/(\d+)/(\d+)/$', // Monthly Overview Links
    '^{$rootUrl}/(\d+)/$',  // Year Overview Links
    '^{$rootUrl}/comment-subscriptions', // Comment Subscription Link
    '^{$rootUrl}/(.*)?wp\-(.*)\.php', // WordPress Core File Links
    '^{$rootUrl}/archive/', // Archive Links
    '\?replytocom\=', // Replyto Links
];
The {$rootUrl} token in each pattern will be replaced with the (root) URL passed into the script.

Mixed Content Scan →

Special thanks go out to Mathias Bynens for making a few suggestions and additions to Mixed Content Scan.
