One of the nice things about Laravel Valet is that it includes an easy way to make your local site publicly available. For this it has ngrok built in: just run the valet share command, and your local site will be shared through a *.ngrok.io subdomain.
However, combining valet share with valet secure (which serves your local site over HTTPS) won't work: you'll end up in a 301 redirect loop when visiting your site through the *.ngrok.io domain.
To fix this issue there are two options:
The old way: Manually edit your site's Nginx config file and remove the block that listens on port 80.
As Valet issue #382 details, you can fix this error by manually editing the Nginx configuration for your site.
Say your site is mysite.test, then its Nginx config file can be found at ~/.config/valet/Nginx/mysite.test.
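For illustration – the exact contents vary per Valet version – the server block to remove is the one that redirects HTTP to HTTPS, roughly like this:

```nginx
# Hypothetical excerpt from ~/.config/valet/Nginx/mysite.test
# This block listens on port 80 and redirects everything to HTTPS;
# through the ngrok tunnel this is what causes the 301 redirect loop.
server {
    listen 127.0.0.1:80;
    server_name mysite.test;
    return 301 https://$host$request_uri;
}
```

After removing the block, restart Valet (valet restart) so Nginx picks up the change.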
Chrome is really tightening up the security game here. Chrome 66 will distrust Symantec-issued SSL/TLS certificates, after Symantec repeatedly screwed up by wrongly issuing certificates for domains, including google.com itself.
Thanks to a decision in September by Google to stop trusting Symantec-issued SSL/TLS certs, from mid-April Chrome browser users visiting websites using a certificate from the security biz issued before June 1, 2016 or after December 1, 2017 will be warned that their connection is not private and someone may be trying to steal their information. They will have to click past the warning to get to the website.
This will also affect certs that use Symantec as their root of trust even if they were issued by an intermediate organization. For example, certificates handed out by Thawte, GeoTrust, and RapidSSL that rely on Symantec will be hit by Google’s crackdown. If in doubt, check your cert’s root certificate authority to see if it’s Symantec or not.
Arkadiy Tetelman recently ran an experiment and made an inventory of how many sites in the Alexa Top 1 Million will be affected by this.
Among the 100,000 affected sites we find/found (some have gotten a new certificate by now): icloud.com, tesla.com, wechat.com, etc.
Yes, that’s one for “Stripe, Inc” … but that’s not the “Stripe, Inc” you’re thinking of, is it?
This site uses an EV certificate for “Stripe, Inc”, that was legitimately issued by Comodo. However, when you hear “Stripe, Inc”, you are probably thinking of the payment processor incorporated in Delaware. Here, though, you are talking to the “Stripe, Inc” incorporated in Kentucky. This problem can also appear when dealing with different countries.
Yes, what Ian did was register a company with the same name in another state. And it’s easy-peasy to do so:
From incorporation to issuance of the EV certificate, I spent less than an hour of my time and about $177. $100 of this was to incorporate the company, and $77 was for the certificate. It took about 48 hours from incorporation to the issuance of the certificate.
When it comes to homograph attacks, browsers show punycode in the address bar, but I’m very curious if and how this issue can be fixed.
Let this be a reminder to always be cautious. Type in addresses manually when in doubt. When still in doubt after that, don’t proceed.
One of the things that’s still not really frictionless for your local development domains is the use of certificates.
To circumvent the use of self-signed certificates I explained in said article how I have a dedicated domain with all subdomains pointing to 127.0.0.1. That way I can use proper certificates for my local dev needs.
In How to get HTTPS working on your local development environment in 5 minutes another technique – one which works with all domains, even localhost itself – is unearthed: create a root certificate yourself, and trust it on your system. Any certificate linked to that root CA will then also be trusted, thus any host using it will have a valid certificate no matter how it’s being accessed (curl, browser, etc.)
I like this “hack” because you only have to trust the generated root CA once (and not each certificate separately). I guess if you combine it with xip.io you can also get it to work on your smartphone talking to local dev stuff (or is there any way to have manual DNS entries on your iOS device?).
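A minimal sketch of that technique using openssl – the names, subjects, and validity periods below are illustrative, not the exact commands from the linked article:

```shell
# 1. Create a root key + self-signed root certificate (the root CA to trust once)
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 \
  -subj "/CN=Local Dev Root CA" -out rootCA.pem

# 2. Create a key + certificate signing request for the local host
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=localhost" -out server.csr

# 3. Sign the CSR with the root CA, adding a subjectAltName for localhost
printf 'subjectAltName=DNS:localhost,IP:127.0.0.1\n' > v3.ext
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -days 500 -sha256 -extfile v3.ext -out server.crt

# 4. Verify the chain: the leaf certificate is valid against our own root
openssl verify -CAfile rootCA.pem server.crt   # prints "server.crt: OK"
```

Once rootCA.pem is added to your system’s trust store (on macOS, for example, via Keychain Access), server.crt/server.key can back any local HTTPS server and will be trusted by curl and browsers alike.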
Earlier today, in a post entitled Secure Contexts Everywhere, it was announced on the Mozilla Security Blog that Firefox from now on will only expose new features such as new CSS properties to secure contexts:
Whilst I somewhat applaud this move, especially in the age of data theft and digital espionage, it’s – pardon my French – quite a ballsy thing to do. CSS Properties, really?
A context will be considered secure when it’s delivered securely (or locally), and when it cannot be used to provide access to secure APIs to a context that is not secure. In practice, this means that for a page to have a secure context, it and all the pages along its parent and opener chain must have been delivered securely.
Roughly translated: http://localhost/ is considered to be a secure context, so that one will have the latest and greatest. However, the docs aren’t that clear on whether the definition of “local” involves actual DNS resolving or not:
Locally delivered files such as http://localhost and file:// paths are considered to have been delivered securely.
What about development domains – think project.local or project.test – pointing to 127.0.0.1? Right now we’re left in the dark here …
Do note however that Mozilla “will provide developer tools to ease the transition to secure contexts and enable testing without an HTTPS server”, but the related bug is still unassigned.
It’s best to have your local web development stack mimic the production environment as closely as possible. Over the years we’ve seen the rise of tools like Vagrant and Docker to easily allow us to do so: you can run the same OS and version with the exact same Apache version with the exact same PHP version with the exact same MySQL version with … oh you get the point.
One of the things that’s still not really frictionless right now for your local development domains is the use of certificates.
Yes, you could use self-signed certificates for this, but that’s:
Not always possible: the Fetch API, for example, doesn’t allow ignoring self-signed certificates.
Asking for potential security problems the moment you go into production: I once read a post about some production code that still had the --insecure flag enabled on its curl requests 🤦‍♂️. It was put there in the first place because of the self-signed certificates used during development.
Another approach – and the one that I am taking – is to use an actual domain for your dev needs. Each project I build is reachable via a subdomain on that dedicated domain, all pointing to 127.0.0.1 thanks to an entry in my hosts file. Since it’s an actual domain I am using, I can use an actual, correctly signed certificate along with it when it comes to HTTPS.
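Assuming bramdev.com as the dedicated domain (a made-up name for illustration), the hosts file entries are plain lines like these – the hosts file doesn’t support wildcards, so it’s one line per project:

```
# /etc/hosts – each project is a subdomain resolving to the local machine
127.0.0.1   projectone.bramdev.com
127.0.0.1   projecttwo.bramdev.com
```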
At the cost of only a domain name (you can get your certificates for free via Let’s Encrypt, which now supports wildcards too) that’s quite a nice solution, I think.
Back in December 2017, version 63 of Google Chrome was released. One of the changes included is that Chrome now requires HTTPS for .dev and .foo domains. This was ill-received by quite a lot of developers, as most of them – including me before – use(d) the .dev TLD for local development purposes.
Yes, there’s a (lengthy) workaround available to get the green lock in Chrome, but I wouldn’t really recommend it. It would require you to repeat a similar procedure in all browsers you have running/are testing in, and this per .dev domain you have. On top of that, it won’t fix any out-of-browser contexts.
Could Google have prevented this .dev debacle? Of course. But I don’t see a reversal of this change coming any time soon …
Can Google still do something about it? Well … I have an idea about this:
Imho Google should have opened it up, pointing all domains to 127.0.0.1 / ::1 (and more, cfr. xip.io), and providing free SSL certs for all.
Just imagine if Google were to point all .dev domains to 127.0.0.1, that’d truly be paving the cowpaths! Add free SSL certificates for all .dev domains along with that and we’d jump quite far. With announcements like Secure Contexts Everywhere (referenced at the very top of this page) this idea seems like a real winner:
(Chrome) We developers can keep using the .dev TLD for projects in development.
We developers would no longer need self-signed certificates for projects in development. We could use certificates signed directly by Google’s CA (or by Let’s Encrypt, in case Google doesn’t feel like providing certificates).
(Firefox) Domains with the .dev TLD (and an SSL certificate) can be considered a Secure Context, just like any other domain reachable via HTTPS.
One can dream, right?
Interesting issue Jonathan Snook ran into when switching a site over to HTTPS. Even though images loaded over HTTP should still get rendered by the browser (as they are Passive Mixed Content, and thus tolerated), they weren’t:
After some digging, I noticed that the images that weren’t loading were those defined using the <picture> element. Surely that’s not expected behaviour, is it? Turns out, it is.
As Jonathan found out the hard way, the Mixed Content Spec does actively block images, but only when they are defined in an imageset – which is the case for <picture>.
Of course it is recommended to migrate all resources to HTTPS and not to run with Passive Mixed Content because browsers, today, will not give your site the green checkmark and, one day, will most likely block Passive Mixed Content too.
We, the Chrome Security Team, propose that user agents (UAs) gradually change their UX to display non-secure origins as affirmatively non-secure. We intend to devise and begin deploying a transition plan for Chrome in 2015. The goal of this proposal is to more clearly display to users that HTTP provides no data security.
If an HTTPS page includes content retrieved through regular, cleartext HTTP, then the connection is only partially encrypted. […] When a webpage exhibits this behavior, it is called a mixed content page. (src)
As modern browsers block most Mixed Content from being downloaded this may leave your HTTPS-enabled website broken.
To check this I wrote a little PHP CLI app to scan an HTTPS website for Mixed Content. The script starts crawling at a given URL, and processes the page:
All contained img[src], iframe[src], script[src], and link[href][rel="stylesheet"] elements are checked for being Mixed Content or not.
All contained a[href] elements linking to the same or a deeper level are successively crawled and scanned for Mixed Content.
The script itself will start scanning and give feedback whilst running. When Mixed Content is found, the URLs are shown on screen.
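The per-page check boils down to extracting subresource URLs and flagging those still served over plain HTTP. This is not the author’s PHP tool, but the idea can be sketched in shell (the HTML is inlined here for illustration):

```shell
# Sample markup with one insecure and one secure subresource (illustrative)
html='<img src="http://insecure.example/cat.png"><script src="https://ok.example/a.js"></script>'

# Flag src/href attributes that still use plain http:// (https:// won't match)
echo "$html" | grep -oE '(src|href)="http://[^"]*"'
# prints: src="http://insecure.example/cat.png"
```

A real crawler would of course fetch the page first (e.g. with curl) and use a proper HTML parser rather than a regex, but the core test – “does this subresource URL start with http://?” – is the same.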