The reason people pay attention to these adversarial examples is *security*. These examples are like computer viruses: specifically tuned and tailored, by inspecting a particular neural network, in order to fool it.
The examples are mostly about classification tasks: is this a stop sign or not? Some remedies are being researched, but the main concern is a car that gets hacked into taking a wrong turn, or a drone that gets fooled into attacking the wrong people. These adversarial examples won't pop up under normal conditions, but a nefarious person could potentially hack a neural network into seeing illusions.
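To make this concrete, here is a minimal sketch of how such a tailored example could be crafted, using the fast gradient sign method as one common recipe. The trained `model`, an input `image` batch, and its true `label` are assumptions here, not something defined in this article, and pixels are assumed to live in the range [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is tiny and invisible to us, but it is computed by
    # inspecting this specific network's gradients, which is exactly what
    # makes it so effective at fooling that network.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```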
E.g. suppose you succeed in training a NN to classify these images as either a chihuahua or a muffin, and it reaches 99% accuracy. What will it do when presented with a picture of a kiwi? Whatever it should do, it will very likely do a very bad job. It has learned to look at subtle textures, the number of black dots and their relative positions. But a kiwi lives in a tangent dimension, all the way up there with the shrew.
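Part of the problem is structural: a two-class softmax has no way to say "none of the above". A toy illustration of this, with a stand-in model and a random tensor playing the role of the kiwi photo (both are hypothetical, just so the snippet runs):

```python
import torch
import torch.nn as nn

# Stand-in for the trained chihuahua-vs-muffin classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))

kiwi = torch.rand(1, 3, 64, 64)            # stand-in for a kiwi photo
probs = torch.softmax(model(kiwi), dim=-1)
print(probs)  # the two probabilities always sum to 1: the network is
              # forced to answer "chihuahua" or "muffin", never "neither"
```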
