Fooling artificial intelligence programs is as easy as using a pen and paper



Researchers at OpenAI, a laboratory specializing in machine learning and artificial intelligence, have discovered that state-of-the-art computer vision systems can harbor a glaring flaw: with tools we all have and know how to use, a pen and a piece of paper, these systems can easily be fooled.



As shown in an image published by The Verge, it is enough to write the name of one object and attach it to another to fool the software into misidentifying what it sees. "We refer to these attacks as typographic attacks," the OpenAI researchers explained on their blog. As can be seen in the image shared by OpenAI, the program identifies a green apple as an iPod simply because someone stuck a piece of paper with the word "iPod" written on it onto the fruit.
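The effect can be probed with the open-source CLIP package that OpenAI released alongside the model. Below is a minimal sketch, assuming a local photo (the file name apple_with_note.jpg is a placeholder) of an apple with a handwritten "iPod" label attached; the model scores the image against two candidate captions, and per OpenAI's findings the handwritten text can flip the prediction:

```python
import torch
import clip  # assumes: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder file: a photo of a Granny Smith apple with a
# handwritten "iPod" note stuck to it.
image = preprocess(Image.open("apple_with_note.jpg")).unsqueeze(0).to(device)
labels = ["a Granny Smith apple", "an iPod"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # logits_per_image holds the similarity of the image to each caption
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

# With the note attached, OpenAI reports the "iPod" caption dominating.
print(dict(zip(labels, probs[0])))
```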








The possibility of fooling Tesla's autonomous cars







Adversarial attacks of this kind are not new: researchers have shown, for example, that Tesla's self-driving software could be tricked into changing lanes without warning simply by placing certain stickers on the road.



This type of attack poses a serious threat to a wide variety of artificial intelligence applications in fields such as medicine and the military. As the researchers write on their blog, adversarial images (images crafted or chosen to be misidentified) "represent a real danger for systems that rely on machine vision."



According to the official OpenAI blog, these attacks are similar to the "adversarial images" that can fool commercial computer vision systems, but the newly discovered case shows that they "can be much simpler to produce" than previously known: just write a word on a piece of paper and stick it on an object.



The errors of the newly released CLIP







It is worth remembering that in January the firm launched CLIP, a tool trained on 400 million image-text pairs collected from the Internet, which is capable of instantly recognizing which category the images shown to it belong to. The system recognizes objects, characters, locations, activities, subjects and more.
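This zero-shot recognition works by comparing an image embedding against text embeddings of candidate category descriptions and picking the closest match. A minimal sketch using the same open-source CLIP package (the category list and file name photo.jpg are illustrative assumptions):

```python
import torch
import clip  # assumes: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Arbitrary candidate categories; CLIP was never fine-tuned on these.
categories = ["a dog", "a cat", "a car", "a mountain", "a person cooking"]
text = clip.tokenize(categories).to(device)
image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then rank categories by cosine similarity to the image
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    similarity = (image_features @ text_features.T).squeeze(0)

print(categories[similarity.argmax().item()])
```

Because the categories are supplied as free text at inference time, the same mechanism that makes CLIP so flexible also makes it sensitive to text appearing inside the image itself.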












Now, the researchers say that "many of the associations we have discovered appear to be benign, but we have also found several cases where CLIP holds associations that could be harmful in terms of representation."



They have observed, for example, that the "Middle East" concept (neuron 1895) is associated with terrorism, and that dark-skinned people are associated with gorillas, "reflecting past photo-tagging incidents in other models that we find unacceptable," OpenAI says.