'Thousands' of businesses at mercy of miscreants thanks to unpatched Ray AI flaw

Thousands of companies remain vulnerable to a remote-code-execution bug in Ray, an open-source AI framework used by Amazon, OpenAI, and others, that is being abused by miscreants in the wild to steal sensitive data and illicitly mine for cryptocurrency.

This is according to Oligo Security, which dubbed the unpatched vulnerability ShadowRay. The oversight is tracked as CVE-2023-48022, with a critical 9.8 out of 10 CVSS severity rating.

On Tuesday the security shop's Avi Lumelsky, Guy Kaplan, and Gal Elbaz warned that the flaw has been under active exploitation for the past seven months, with criminals using it to compromise medical and video analytics businesses, educational institutes, and others that use the machine-learning software.

"Researchers at Oligo Security have observed instances of CVE-2023-48022 being actively exploited in the wild, making the disputed CVE a 'shadow vulnerability' — a CVE that doesn't show up in static scans but can still lead to breaches and significant losses," the trio wrote.

Ray is a popular open source project overseen by Anyscale, and is used to develop and scale Python-based applications that incorporate machine-learning workloads.

Berenice Flores of Bishop Fox, Bryce Bearchell, and Protect AI disclosed CVE-2023-48022, which stems from the lack of authorization checks in Ray's job submission API, to the project's maintainers last year. They also alerted Anyscale to four other flaws, CVE-2023-6019, CVE-2023-6020, CVE-2023-6021, and CVE-2023-48023, all of which were fixed in November's Ray 2.8.1 release.

At the time, Anyscale said CVE-2023-48022 wasn't a bug, but rather a "long-standing design decision based on how Ray's security boundaries are drawn and consistent with Ray deployment best practices."

Essentially, the job submission API performs, by default, no authorization checks, allowing anyone who can reach the endpoint to add and remove work, access information, and do other things they really shouldn't be able to. Anyscale says the service should be placed behind some kind of protection to stop that from happening; people deploying the software often don't realize this and end up exposing the API to the internet for miscreants to exploit.
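To illustrate the point, here's a minimal sketch against a test cluster you run yourself, using Ray's documented JobSubmissionClient; it shows that, out of the box, no token or password is needed to hand the cluster arbitrary shell commands:

```python
# Minimal sketch, not exploit tooling: run it only against a test cluster you
# own, e.g. one started locally with `ray start --head`.
from ray.job_submission import JobSubmissionClient

# No token, key, or password is asked for when talking to the default dashboard.
client = JobSubmissionClient("http://127.0.0.1:8265")

# The entrypoint string is executed as a shell command on the cluster, which is
# why an internet-exposed dashboard amounts to remote code execution for anyone
# who can reach it.
job_id = client.submit_job(entrypoint="echo 'this ran on the cluster with no auth'")
print(f"Job {job_id} accepted without any authentication")
```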

And as such, organizations have been hit by cyber-crooks exploiting the CVE.

The project's maintainers did, however, say they planned to offer authentication in a future version of the open-source framework. But, as of now, the vulnerability still allows remote attackers to execute code via the job submission API in Ray 2.6.3 and 2.8.0. 

We asked Anyscale what the current state of play is with CVE-2023-48022, and a spokesperson assured us the biz is on the case: "We are currently working on a script that will make it easy for users to verify their configuration and avoid accidental exposure. Additionally, we have notified all Anyscale customers of the vulnerability and that they are not affected."
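That script hadn't been published at the time of writing, but a rough sketch of the kind of check involved, assuming nothing beyond Ray's documented Jobs REST endpoint on the default dashboard port (8265), might look like this; the host names below are placeholders you'd swap for your own head nodes:

```python
# Rough sketch of an exposure check (not Anyscale's official script): if the
# Ray dashboard's Jobs API answers without credentials from outside your
# trusted network, the cluster is open to unauthenticated job submission.
import requests

HOSTS = ["ray-head.example.internal"]  # placeholder: your Ray head nodes
DASHBOARD_PORT = 8265                  # Ray's default dashboard port

for host in HOSTS:
    url = f"http://{host}:{DASHBOARD_PORT}/api/jobs/"
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException:
        print(f"{host}: dashboard not reachable from this vantage point")
        continue
    if resp.status_code == 200:
        print(f"{host}: Jobs API answered with no auth -- lock this port down")
    else:
        print(f"{host}: got HTTP {resp.status_code}; check what is fronting the dashboard")
```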

This CVE has led to a "trove" of sensitive data being leaked from compromised servers, we're told. That includes OpenAI, Stripe, Slack, and database credentials, and on some machines attackers could use this access to encrypt data stores with ransomware. To be clear, it's not that OpenAI and co are themselves vulnerable; it's that Ray's API can be abused to grab credentials for those services from organizations' exposed machines.

The Oligo lot also said they saw evidence that miscreants had stolen password hashes and private SSH keys via the flaw. Because many of the compromised deployments ran with root privileges, the hole also gave attackers access to victims' entire cloud environments and other services running in AWS, Google Cloud, and Microsoft Azure.

Plus, these hijacked clusters are also being abused for cryptocurrency mining, according to Oligo. Most of the nodes have powerful GPUs, which allow attackers to mine coins at the victim organization's expense.

"In other words, attackers choose to compromise these machines not only because they can obtain valuable sensitive information, but because GPUs are very expensive and difficult to obtain, especially these days," the trio said, noting that the on-demand GPU costs in AWS can run to $858,480 a year, per machine. ®
