The Voronoi partition would create the classification boundary, assuming the populations really are separable. And, yes, it can be a bit random-seeming, because the directions of the separating lines (or hyperplanes) that radiate off into empty space are defined entirely by only two data points, which makes them extremely sensitive to the locations of those data points.

"Another crazy question. I'm wondering about the opposite of what you wrote about the proximity of the classification boundary: your input does not cover the full space - the points are probably denser in some regions than others, and there are empty areas in this space. Based on my understanding of the Voronoi partition (as I know it from research in crystallography), these empty areas will also be assigned to some clusters - practically at random. What if the problem is like situation 1 in the attached picture (you use the classifier trained on the black points to assign the red x)?"
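To make the "assigned practically at random" worry concrete, here is a toy sketch (my own, not from the thread) using scikit-learn's 1-nearest-neighbour classifier, which induces exactly the Voronoi partition of the training points; the coordinates and the far-away query are invented for illustration:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two tiny, well-separated training populations (made-up coordinates).
X = np.array([[0.0, 0.0], [0.0, 1.0],   # class 0
              [3.0, 0.0], [3.0, 1.0]])  # class 1
y = np.array([0, 0, 1, 1])

# A 1-nearest-neighbour classifier labels every query by its nearest training
# point, i.e. it carves the plane into the Voronoi cells of the training set.
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# A query far out in the empty part of the space still gets a hard label...
far_query = np.array([[1.4, 100.0]])
print(clf.predict(far_query))   # -> [0], because (0, 1) is just barely nearest

# ...and which label it gets hinges on the exact position of one or two
# training points: nudge a single class-0 point and a huge unbounded region,
# including our query, changes hands.
X2 = X.copy()
X2[1] = [0.0, 0.8]
clf2 = KNeighborsClassifier(n_neighbors=1).fit(X2, y)
print(clf2.predict(far_query))  # -> [1], now (3, 1) is the nearest point
```

A whole unbounded Voronoi cell flips after shifting one point by 0.2, which is the sensitivity to just two data points mentioned above.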
Let's take Goodfellow's adversarial example again, page 261. @Cuchilainn: "This probably means that the problem is 'ill-posed' in some way."
Sometimes it's easier to use Scotch tape rather than shift a paradigm.
Problem space: it is a panda even when the image is perturbed by a small amount. That's a fact, and this is the reality to be modelled.
Representation space (algorithm): the algo recognises a panda in the unperturbed case, and in the perturbed case it sees a funky gibbon. Your algo must also return a panda in both cases.
Conclusion: the algorithm is wrong. I can't think of any other explanation.
//
The analogy in another domain is inventing negative probabilities and other profanities (name-dropping names like Dirac and Feynman to help the cause) to explain away the cases where the binomial breaks down. It's quite possibly fallacious thinking. I suspect that this is becoming the standard 'scientific method', i.e. fixing and fudging.

Is the "perturbation" a physically accurate one? And was the net trained with examples perturbed in this way?
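For what it's worth: as far as I recall, the panda/gibbon figure on that page is produced with the fast gradient sign method, so the "perturbation" is a worst-case, gradient-derived nudge rather than anything physically motivated, and (unless the net was explicitly adversarially trained) such perturbed examples are not part of the training data. A minimal sketch under those assumptions - `model`/`net`, `panda` and `target` are placeholders, and eps=0.007 is just the value I remember from the example:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, eps=0.007):
    """Fast gradient sign method: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Only the *sign* of the gradient is kept, so every pixel moves by
    # exactly +/- eps: a tiny, worst-case nudge, not a physical distortion.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage -- `net` is any trained classifier, `panda` an image
# tensor of shape [1, 3, H, W] scaled to [0, 1], `target` the true class id:
#   adv = fgsm_perturb(net, panda, torch.tensor([target]))
#   print(net(panda).argmax(1), net(adv).argmax(1))   # these often disagree
```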
Some links.
It would be a problem if a scientist did that (use Scotch tape). It's fine, though, when it's an engineer: one rolls up one's sleeves and gets one's hands dirty in the guts of the algorithm. There are probably several effects that make it fail - they need to be identified and fixed. That can be the way to developing a more robust approach. Just my basic philosophy - you probably know better how it works in both cases.
Maybe you just have too little data to train the classifier properly. If you want to achieve a precision higher than the human eye (vide panda + perturbed panda), I'm wondering if enough data exists.

"I think if this was the whole story, then we wouldn't observe adversarial examples arising from the training set, because then all points would have been comfortably away from the classification boundary. But we do observe them also in the training sets."
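To make "comfortably away from the classification boundary" checkable rather than rhetorical: for a linear classifier the distance of a training point to the boundary is |w·x + b| / ||w||, so one can measure how many training points sit within a given eps of it. A toy sketch with invented data and scikit-learn's LinearSVC (both my own choices, purely illustrative):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Made-up training set; substitute your own (X, y).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-0.25, 1.0, size=(200, 50)),
               rng.normal(+0.25, 1.0, size=(200, 50))])
y = np.array([0] * 200 + [1] * 200)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

# For a linear model f(x) = w.x + b, the distance of a point to the decision
# boundary is |w.x + b| / ||w||.
w, b = clf.coef_.ravel(), clf.intercept_[0]
dist = np.abs(X @ w + b) / np.linalg.norm(w)

# Training points within eps of the boundary can have their predicted label
# flipped by a perturbation of size eps, training-set membership or not.
eps = 0.05
print("closest training point to the boundary:", dist.min())
print("fraction of training points within eps:", (dist < eps).mean())
```

For a deep net there is no closed-form distance, but the same question can be asked empirically by searching for the smallest perturbation that flips each training point's label.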
Humans and animals get a staggering amount of data just by wandering around their environment and interacting with things.

"I wonder about that too! But somehow, humans and other animals cope with that."
Nice. On the other hand: in general, there is so much we simply don't know about how biological brains work that building algorithms based on what we guess about how they work can be a road to nowhere. It's clear we need more than CNNs, not clear what exactly that is, and also not at all clear that we should closely replicate the messy result of evolution by natural selection. E.g. biological brains have a lot of redundancy in them because they have to be resistant to damage (getting whacked on the head). Artificial neural networks don't have to be. And as for that staggering amount of data:
- most of this data is not labelled
- we don't actually know how much of this data is remembered and available for more than one pass of the "learning algorithm"
- we don't know how detailed a representation is remembered
- we know that a lot of what we think we "see" is actually our brain filling in the gaps in what the eyes send to it (this is where many optical illusions come from)
- neural networks also rely on contextual information to classify objects, e.g. the presence of waves in the photo makes it more likely that a CNN will think it saw a whale in it
You also seem to engage in a circular argument in the last two paragraphs of your post, by attempting to explain the superior vision abilities of animals by referring to data that are available to them only thanks to those same vision or other cognitive abilities ("get data from objects" - how do they know they see objects? how do they combine points of light into objects? how do they represent an object in their memory for further processing?). If we train our CNNs to segment bitmaps into objects and remember them, half of our work will be done.
It's a very tricky subject. It's easy to fall into these traps. You should carefully define the terms you're using.
Creatures only pass the mirror test after extensive training about life. Human babies, for example, can't pass the mirror test until somewhere between 13 and 24 months.

"That's not true: at least primates can recognise photographs. Many animals also pass the infamous 'mirror test'."

"There is much we don't know about how humans and animals learn to see. But it is clear that they fail to learn to see if only exposed to disembodied snapshots the way current AI systems are. Even streaming imagery is insufficient. Interacting with the world, moving through it, reaching out and touching things seems totally essential."