GhostStripe attack haunts self-driving cars by making them ignore road signs

Six boffins, mostly hailing from Singapore-based universities, have demonstrated it's possible to attack autonomous vehicles by exploiting their reliance on camera-based computer vision, causing them to fail to recognize road signs.

The attack system, dubbed GhostStripe [PDF], is undetectable to the human eye, but could be deadly to Tesla and Baidu Apollo users as it manipulates the type of sensors employed by both brands – complementary metal oxide semiconductor (CMOS) sensors.

Cameras equipped with CMOS sensors capture an image line by line using an electronic rolling shutter – unlike their more expensive alternative, the charge-coupled device (CCD), which collects an entire frame at once.

Due to the way CMOS cameras operate, rapidly flashing diodes can be used to vary the color recorded on each line. For example, the shade of red on a stop sign could look different on each line, depending on the timing between the diode's flash and that line's capture.
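
To make the mechanism concrete, here's a rough Python sketch of a rolling-shutter sensor filming a flickering LED. The row readout time, flicker frequency, and duty cycle are illustrative assumptions, not figures from the GhostStripe paper, but they show why the sensor records alternating bright and dim bands.

```python
import numpy as np

ROWS = 480                # image height in sensor rows
ROW_READOUT_US = 30.0     # assumed time to read out one row, in microseconds
FLICKER_HZ = 1200.0       # assumed LED flicker frequency
DUTY_CYCLE = 0.5          # LED lit for half of each flicker period

def led_is_on(t_us: float) -> bool:
    """Is the attack LED lit at time t (microseconds after frame start)?"""
    period_us = 1e6 / FLICKER_HZ
    return (t_us % period_us) < DUTY_CYCLE * period_us

def capture_column() -> np.ndarray:
    """Capture one image column row by row, as a rolling shutter does."""
    column = np.zeros(ROWS)
    for row in range(ROWS):
        t_us = row * ROW_READOUT_US                    # rows exposed in sequence
        column[row] = 1.0 if led_is_on(t_us) else 0.2  # bright band vs dim band
    return column

col = capture_column()
transitions = np.count_nonzero(np.diff(col))
print(f"{transitions} stripe transitions across {ROWS} rows")
```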

The result is an image full of lines that don't quite match each other. The captured sign is cropped and sent to the classifier, usually based on deep neural networks, for interpretation. Because the image is full of mismatched lines, the classifier doesn't recognize it as a traffic sign.
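
Here's a continuation of the same toy model – our illustration rather than the paper's code – applying that per-row modulation to a synthetic, uniformly red sign crop to show how badly neighbouring lines end up disagreeing.

```python
import numpy as np

def stripe_mask(rows: int, rows_per_period: int = 28) -> np.ndarray:
    """Per-row brightness factor: alternating bands, as a rolling shutter
    records a fast-flickering LED (band width is an assumed value)."""
    lit = (np.arange(rows) % rows_per_period) < rows_per_period // 2
    return np.where(lit, 1.0, 0.4)

# A purely synthetic 64x64 crop of a uniformly red sign, RGB in [0, 1].
crop = np.zeros((64, 64, 3))
crop[..., 0] = 0.8  # red channel

striped = crop * stripe_mask(64)[:, None, None]

# Row-to-row disagreement: large jumps wherever the bands change.
row_means = striped.mean(axis=(1, 2))
print("max row-to-row change:", float(np.abs(np.diff(row_means)).max()))
```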

So far, all of this has been demonstrated before.

These researchers, however, didn't just produce the light distortion once – they sustained it, prolonging the interference. That meant the unrecognizable image wasn't a single anomaly among many accurate frames, but a constant stream of unrecognizable images the classifier couldn't assess, and a serious security concern.

The challenge in producing a consistently distorted image is timing and positioning: the attack has to keep a similar stripe pattern on the sign, frame after frame, as the vehicle moves.

"Thus, a stable attack … needs to carefully control the LED's flickering based on the information about the victim camera's operations and real-time estimation of the traffic sign position and size in the camera's [field of view]," wrote the researchers.

The researchers developed two versions of a stable attack. The first was GhostStripe1, which is not targeted and does not require access to the vehicle, we're told. It employs a vehicle tracker to monitor the victim's real-time location and dynamically adjust the LED flickering accordingly.
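
A rough sketch of that control loop, under our own assumptions (the pinhole-style geometry, readout time, and constants are placeholders, not the authors' implementation): the tracker's distance estimate is turned into the rows the sign will occupy, and the flicker is re-planned so the same stripe pattern keeps landing on it.

```python
from dataclasses import dataclass

ROW_READOUT_US = 30.0   # assumed per-row readout time of the victim camera
IMAGE_ROWS = 480

@dataclass
class TrackEstimate:
    distance_m: float   # tracker's estimate of the camera-to-sign distance

def sign_row_span(track: TrackEstimate) -> tuple[int, int]:
    """Toy pinhole-style estimate of which rows the sign covers on the sensor."""
    apparent_rows = int(min(IMAGE_ROWS, 6000 / track.distance_m))  # assumed scale
    centre = IMAGE_ROWS // 2
    return centre - apparent_rows // 2, centre + apparent_rows // 2

def flicker_plan(track: TrackEstimate, stripes_on_sign: int = 4) -> tuple[float, float]:
    """Return (phase_delay_us, flicker_period_us) so a fixed number of stripe
    bands covers the sign regardless of its apparent size in the frame."""
    top, bottom = sign_row_span(track)
    span_us = (bottom - top) * ROW_READOUT_US
    period_us = span_us / stripes_on_sign    # one flicker cycle per stripe band
    phase_delay_us = top * ROW_READOUT_US    # first band starts at the sign's top row
    return phase_delay_us, period_us

for distance in (40.0, 20.0, 10.0):          # victim car approaching the sign
    print(distance, "m ->", flicker_plan(TrackEstimate(distance_m=distance)))
```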

GhostStripe2 is targeted and does require access to the vehicle, which could perhaps be covertly done by a hacker while the vehicle is undergoing maintenance. It involves placing a transducer on the power wire of the camera to detect framing moments and refine timing control.
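
As a sketch of the idea – again with invented numbers and a stand-in for the power-line transducer – each detected frame-start pulse lets the attacker re-anchor the LED schedule, so the flicker timing no longer drifts relative to the camera's readout.

```python
import time
from typing import Iterator

def frame_start_pulses(frame_period_s: float = 1 / 30) -> Iterator[float]:
    """Stand-in for the power-line transducer: yields frame-start timestamps."""
    t = time.monotonic()
    while True:
        yield t
        t += frame_period_s

for _, frame_start in zip(range(3), frame_start_pulses()):
    # Re-anchor the LED schedule to each measured frame start, removing the
    # drift a free-running flicker would accumulate between frames.
    led_on_at = frame_start + 0.005   # 5 ms after readout begins (assumed delay)
    print(f"frame start {frame_start:.4f}s -> LED fires at {led_on_at:.4f}s")
```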

"Therefore, it targets a specific victim vehicle and controls the victim's traffic sign recognition results," according to the report's authors.

The team tested their system on a real road, using a car equipped with a Leopard Imaging AR023ZWDR, the camera used in Baidu Apollo's hardware reference design. They ran the setup against stop, yield, and speed limit signs.

GhostStripe1 achieved a 94 percent success rate and GhostStripe2 a 97 percent success rate, the researchers claim.

One thing of note was that stronger ambient light decreased the attack's performance. "This degradation occurs because the attack light is overwhelmed by the ambient light," said the team. This suggests hackers would need to consider time and location when planning an attack.
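
A back-of-the-envelope way to see why, using our own simple contrast model rather than anything from the paper: the stripes only register if the LED's contribution is a meaningful share of the total light hitting the sign.

```python
def stripe_contrast(led_lux: float, ambient_lux: float) -> float:
    """Michelson contrast between rows captured with the LED on and off."""
    bright = led_lux + ambient_lux   # rows read out while the LED is lit
    dim = ambient_lux                # rows read out while it is dark
    return (bright - dim) / (bright + dim)

for ambient in (100, 1_000, 10_000, 100_000):   # dusk -> overcast -> full sun
    print(f"ambient {ambient:>7} lux -> stripe contrast {stripe_contrast(500, ambient):.3f}")
```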

Countermeasures are available. Most simply, the CMOS camera could be replaced with a CCD sensor, or the order in which lines are captured could be randomized. Fitting more cameras would also lower the attack's success rate or force a more complicated hack, and striped images could be folded into the AI model's training data so the classifier learns to cope with them.
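
The randomized line-capture idea can be illustrated by extending the earlier toy shutter model; this is our sketch of the principle, not any vendor's implementation. Shuffling the order in which rows are read out leaves the LED's flicker smeared into incoherent speckle instead of long stripes.

```python
import numpy as np

rng = np.random.default_rng(0)
ROWS, ROW_US, PERIOD_US = 480, 30.0, 833.0   # same assumed camera as before

def capture(order: np.ndarray) -> np.ndarray:
    """Read out rows in the given order under the flickering LED, then return
    the image indexed by spatial row rather than by readout order."""
    img = np.empty(ROWS)
    for slot, row in enumerate(order):
        led_on = (slot * ROW_US) % PERIOD_US < PERIOD_US / 2
        img[row] = 1.0 if led_on else 0.2
    return img

def longest_band(img: np.ndarray) -> int:
    """Length of the longest run of identically bright consecutive rows."""
    best = current = 1
    for a, b in zip(img, img[1:]):
        current = current + 1 if a == b else 1
        best = max(best, current)
    return best

sequential = capture(np.arange(ROWS))        # normal rolling shutter: long bands
shuffled = capture(rng.permutation(ROWS))    # randomized readout: bands break up

print("longest band, sequential readout:", longest_band(sequential))
print("longest band, shuffled readout:  ", longest_band(shuffled))
```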

The study joins the ranks of others that have used adversarial inputs to trick the neural networks of autonomous vehicles, including one that forced a Tesla Model S to swerve across lanes.

The research indicates there are still plenty of AI and autonomous vehicle safety concerns to address.

The Register has asked Baidu to comment on its Apollo camera system and will report back should a substantial reply materialize. ®
