Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms

Image source: LinkedIn
Machine Learning is a hot R&D topic.

One of the strategies used in Machine Learning is learning by means of neural networks. You can get a free introduction to neural networks here.
I also warmly recommend Andrew Ng's introductory course to Machine Learning on Coursera.

Machine Learning neural networks were inspired by biological neural networks; they are easy to apply and highly effective in image processing tasks, like handwritten text recognition.

More complex neural network algorithms are implemented in what is called Deep Learning, using neural networks with many layers.

Typically, a neural network is trained, or learns, from exposure to thousands of 'good' and 'bad' examples of the images to be recognized or classified. For example, a neural network that has to recognize handwritten digits will be exposed to thousands of examples of digits written by different people, even with changes in the orientation of the text.
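
To make this concrete, here is a minimal sketch of such a training setup, using TensorFlow/Keras (my choice of framework, not anything prescribed here) and the classic MNIST handwritten digits dataset:

```python
# Minimal sketch: train a small classifier on thousands of labelled
# examples of handwritten digits (the MNIST dataset).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation='relu'),   # hidden layer
    tf.keras.layers.Dense(10, activation='softmax'), # one output per digit
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# The network 'learns' by seeing thousands of labelled examples.
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```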

In the past, some papers have shown that neural networks can be easily attacked, or confused, by adding carefully crafted perturbations ('filters') to images. While those attacks do not confuse humans at all, they easily defeat even deep neural networks.
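
One well-known example of such an attack is the Fast Gradient Sign Method (FGSM): it nudges every pixel slightly in the direction that most increases the model's error. Here is a minimal sketch, reusing the `model` and data from the snippet above (an illustration only, not the exact method of any particular paper):

```python
# Sketch of FGSM: compute the gradient of the loss with respect to the
# input image, then step each pixel slightly in the sign of that gradient.
import numpy as np
import tensorflow as tf

def fgsm_perturb(model, image, true_label, epsilon=0.1):
    """Return the image plus a tiny perturbation that raises the loss."""
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    y = tf.convert_to_tensor([true_label])
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    # Move each pixel a little in the direction that hurts the model most.
    x_adv = x + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)

x_adv = fgsm_perturb(model, x_test[0], y_test[0])
print("clean prediction:      ", np.argmax(model.predict(x_test[:1])))
print("adversarial prediction:", np.argmax(model(x_adv).numpy()))
```

To a human, the perturbed digit looks essentially identical to the original; the model, however, may now classify it with confidence as a different digit.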

In this article from IEEE Spectrum, the novelty is that the 'highly efficient attacks' on the ML algorithms are as simple as stickers or graffiti painted on real-life traffic signs. Again, those attacks can barely confuse humans, but they completely fool ML algorithms.
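
We can crudely simulate the idea in software by pasting a fixed 'sticker' patch onto an input image and checking whether the prediction changes. This toy sketch is my own illustration, far simpler than the robust physical-world attacks the article describes, and again reuses the `model` from above:

```python
# Toy simulation of a 'sticker' attack: overwrite a small square of
# pixels, roughly like sticking a patch onto a road sign.
import numpy as np

def add_sticker(image, patch_size=6, value=1.0):
    """Return a copy of the image with a solid patch in one corner."""
    patched = image.copy()
    patched[:patch_size, :patch_size] = value  # bright square, top-left corner
    return patched

x_patched = add_sticker(x_test[0])
print("clean prediction:  ", np.argmax(model.predict(x_test[:1])))
print("patched prediction:", np.argmax(model.predict(x_patched[np.newaxis, ...])))
```

The real attacks are far more sophisticated, since the sticker placement is optimized so that the misclassification survives changes in distance, angle, and lighting, but the principle is the same: a small, human-ignorable change to the physical sign can flip the classifier's output.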


