
Smart camouflage patch could conceal fighter jets from A.I. recognition tools

No, it’s not a deleted Q gadget from some late-stage Pierce Brosnan 007 movie. Researchers really have created a patch that could effectively disguise aerial vehicles from A.I. image recognition systems designed to autonomously identify military objects.

The technology, developed by researchers at the Netherlands Organisation for Applied Scientific Research, is capable of consistently fooling the state-of-the-art YOLO (You Only Look Once) real-time object-detection system, and potentially other detectors as well. It could be used to help conceal fighter planes from detection by enemy drones.
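
The researchers don't spell out which YOLO implementation or weights they attacked, but to make the setup concrete, here is a minimal sketch of what running an off-the-shelf YOLO detector on an aerial image looks like. The open-source Ultralytics package and the small pretrained "yolov8n.pt" model used here are purely illustrative assumptions, not the team's actual configuration.

```python
# Illustrative only: run a generic pretrained YOLO detector on an example aerial photo.
# The Ultralytics package and weights file are assumptions, not the researchers' setup.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # load a small pretrained YOLO model
results = model("aerial_image.jpg")   # run detection on an example image

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]   # class label predicted for this box
    conf = float(box.conf)                 # detection confidence
    print(f"Detected {cls_name} with confidence {conf:.2f}")
```

An adversarial camouflage patch aims to drive those confidence scores low enough that the detector no longer reports the plane at all.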

“We have shown that a relatively small patch, roughly 10% of the size of the plane, is effective in camouflaging the whole plane against automatic detection,” Ajaya Adhikari, one of the researchers on the project, told Digital Trends. “These small patches seem to be a more practical solution for camouflage than covering the whole plane.”

The high-tech patches are a different twist on camouflage: a disguise intended to fool machine vision rather than human vision. In recent years, research into the field known as “adversarial A.I.” has continued to grow. Adversarial A.I. exploits vulnerabilities in the way that A.I. systems look at images and classify them. Previous examples include work by researchers who were able to get an image-recognition system to classify a 3D-printed turtle as a gun and a baseball as an espresso simply by tweaking their surface patterns.
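
To give a sense of how such an attack works in practice, the sketch below shows the general recipe behind adversarial patches: start from a random block of pixels and repeatedly nudge it so that a detector's confidence in the target object collapses. The detector, apply_patch, and optimize_patch names here are hypothetical placeholders for illustration only, not the TNO team's actual training pipeline.

```python
# Hedged sketch of the general adversarial-patch idea, not the researchers' method.
import torch

def apply_patch(image, patch, top, left):
    """Paste a small patch of pixels onto an image tensor shaped (C, H, W)."""
    patched = image.clone()
    h, w = patch.shape[-2:]
    patched[..., top:top + h, left:left + w] = patch
    return patched

def optimize_patch(detector, images, patch_size=64, steps=200, lr=0.03):
    """Optimize a patch so the detector's confidence drops on patched images."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        total_loss = 0.0
        for image in images:
            patched = apply_patch(image, patch.clamp(0, 1), top=16, left=16)
            # `detector` is a hypothetical differentiable model returning
            # objectness/confidence scores; the attack minimizes the highest score.
            scores = detector(patched.unsqueeze(0))
            total_loss = total_loss + scores.max()

        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()

    return patch.detach().clamp(0, 1)
```

Real attacks typically add further constraints, such as keeping the patch printable and robust to changes in viewing angle and lighting, which is part of why the researchers say field tests with printed patches are still needed.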

“To the best of our knowledge, we are the first who have explored adversarial A.I. techniques for camouflage in aerial surveillance,” Richard den Hollander, the other lead researcher on this latest project, told Digital Trends. “The results of our work show that adversarial camouflage can be considered as a potential alternative to traditional camouflage when using deep learning models for automatic analysis.”

In the abstract describing their work, the researchers write: “Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities, and should therefore be considered in the automated analysis of aerial surveillance imagery.”

Don’t expect the military to start slapping these patches on planes and drones just yet, though. The investigators said that more research is needed to validate the approach. This will include field tests with a printed adversarial patch on real objects viewed from the air, along with investigating how the camouflage affects other detection models.

A paper describing the work, titled “Adversarial Patch Camouflage against Aerial Detection,” is available to read online.
