 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

June 12th, 2018, 8:18 pm

Do we? Like stupid cows eating wild cherry leaves or, similarly, your very own wife deciding to take a rest in the shade of a manchineel tree :-)
 
User avatar
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

June 12th, 2018, 10:02 pm

I wonder about that too! But somehow, humans and other animals cope with that.
Humans and animals get a staggering amount of data just by wandering around their environment and interacting with things.

The "frames" of the eye's video stream may not be truly independent samples but they are something much more powerful on two levels. First, the motion of the observer or the subject object enhances the ability to segment the focal object from the background. Second, the stream of frames traces a meandering but continuous curve in the classifier space. That would seem to be more powerful data than disconnected points in space with no clues as to whether adjacent points in the space are the same object from a slightly different angle (with the empty space between frame-points being a member of the same subject class) or totally different sample (with the empty space between independent points potentially being a member of other classes).

And within the limits of foveal vision, humans and animals actually get parallel classification data from the multiple objects in the environment, as well as cues for learning correlations in object occurrences (e.g., having recognized a hamburger, the pale rod-like objects next to it are probably french fries, not sticks).
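The hamburger/fries effect can be modelled as a simple evidence-fusion step. A minimal sketch, where the classifier scores and the co-occurrence prior are made-up numbers purely for illustration:

```python
import numpy as np

classes = ["french_fries", "stick"]

# Hypothetical raw classifier scores for the pale rod-like object: ambiguous on its own.
p_visual = np.array([0.45, 0.55])

# Hypothetical co-occurrence prior learned from experience:
# P(class | a hamburger was recognized nearby).
p_given_hamburger = np.array([0.9, 0.1])

# Treat the two sources as independent evidence and renormalize (naive Bayes fusion).
posterior = p_visual * p_given_hamburger
posterior /= posterior.sum()
print(dict(zip(classes, posterior.round(3))))  # fries now win: ~0.88 vs ~0.12
```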

Even at only 10 frames per second, only 6 salient objects in the visual field, and 12 hours of awareness per day, a young creature gathers nearly a billion object-impression samples per year, roughly 70X the size of the full ImageNet database.
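A quick back-of-the-envelope check of those numbers (taking the full ImageNet database at its commonly cited size of roughly 14 million images):

```python
fps = 10                    # frames per second
objects_per_frame = 6       # salient objects in the visual field
hours_awake = 12            # hours of awareness per day

samples_per_day = fps * objects_per_frame * hours_awake * 3600
samples_per_year = samples_per_day * 365
imagenet_size = 14_000_000  # approximate size of the full ImageNet database

print(f"{samples_per_year:,} samples/year")                  # 946,080,000
print(f"{samples_per_year / imagenet_size:.0f}x ImageNet")   # ~68x
```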
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

June 12th, 2018, 10:18 pm

On the other hand:
  • most of this data is not labelled
  • we don't actually know how much of this data is remembered and available for more than one pass of the "learning algorithm"
  • we don't know how detailed a representation is remembered
  • we know that a lot of what we think we "see" is actually our brain filling in the gaps in what the eyes send to it (this is where many optical illusions come from)
  • neural networks also rely on contextual information to classify objects, e.g. the presence of waves in the photo makes it more likely that a CNN will think it saw a whale in it
In general, there is so much we simply don't know about how biological brains work that building algorithms based on what we guess about how they work can be a road to nowhere. It's clear we need more than CNNs; it's not clear what exactly that is, and it's also not at all clear that we should closely replicate the messy result of evolution by natural selection. E.g., biological brains have a lot of redundancy in them because they have to be resistant to damage (getting whacked on the head). Artificial neural networks don't have to be.

You also seem to engage in a circular argument in the last two paragraphs of your post, by attempting to explain the superior vision abilities of animals by referring to data which are available to them thanks to the same vision or other cognitive abilities ("get data from objects" - how do they know they see objects? how do they combine points of light into objects? how do they represent an object in their memory for further processing?). If we train our CNNs to segment bitmaps into objects and remember them, half of our work will be done ;-)

It's a very tricky subject, and it's easy to fall into these traps. You should carefully define the terms you're using.
 
User avatar
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 2:04 am

I'm not saying animals have superior vision per se, only that they have access to superior training data.

There is much we don't know about how humans and animals learn to see. But it is clear that they fail to learn to see if exposed only to disembodied snapshots, the way current AI systems are. Even streaming imagery is insufficient. Interacting with the world, moving through it, and reaching out and touching things seem essential.

It would seem that infants at 12 months already have a sense of "objects", in that they anticipate that an object passing behind another object will reappear on the other side. I would surmise that "object" recognition arises from interacting with the world and discovering that some things can move independently of others, i.e., from noticing patches of visual input that move independently of other patches. Probably the first patches of visual input to resolve into objects are the baby's own hands and feet as they move toward the face and mouth.

The data collected by young animals may not be labelled in the linguistic or set-theoretic sense, but they are clustered: by continuity over time, by similarity across instances, and by distinctness from other elements of the sensory input. I suspect that many children confuse cats and dogs when young, but quickly learn that the two categories of creature have distinct visual and behavioral signatures. Only later do they learn the labels.
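One toy way to picture this "cluster first, label later" process (the two-blob data, the choice of k-means, and the named examples are all illustrative assumptions, not a claim about how brains do it): cluster unlabelled "cat/dog" feature vectors by similarity alone, then attach a linguistic label to each discovered cluster from a couple of named examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabelled "sensory" data: two blobs standing in for cat-like and dog-like inputs.
cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
dogs = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))
X = np.vstack([cats, dogs])
rng.shuffle(X)  # the animal encounters them interleaved, with no labels attached

# Plain k-means with k=2: clustering purely by similarity, no labels involved.
centers = X[:2].copy()
for _ in range(20):
    assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in range(2)])

# Linguistic labels arrive later, attached to whole clusters from a few named
# examples (hypothetical: a parent points out one known cat and one known dog).
named = {"cat": np.array([0.0, 0.0]), "dog": np.array([3.0, 3.0])}
for name, x in named.items():
    k = int(np.argmin(((centers - x) ** 2).sum(-1)))
    print(f"cluster {k} -> {name} ({int((assign == k).sum())} remembered instances)")
```

Two pointed-out examples suffice to name roughly 400 remembered instances, which is the sense in which the data were "almost labelled" all along.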
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

June 13th, 2018, 8:41 am

On the other hand:
  • most of this data is not labelled
  • we don't actually know how much of this data is remembered and available for more than one pass of the "learning algorithm"
  • we don't know how detailed a representation is remembered
  • we know that a lot of what we think we "see" is actually our brain filling in the gaps in what the eyes send to it (this is where many optical illusions come from)
  • neural networks also rely on contextual information to classify objects, e.g. the presence of waves in the photo makes it more likely that a CNN will think it saw a whale in it
In general, there is so much we simply don't know about how biological brains work that building algorithms based on what we guess about how they work can be a road to nowhere. It's clear we need more than CNNs; it's not clear what exactly that is, and it's also not at all clear that we should closely replicate the messy result of evolution by natural selection. E.g., biological brains have a lot of redundancy in them because they have to be resistant to damage (getting whacked on the head). Artificial neural networks don't have to be.

You also seem to engage in a circular argument in the last two paragraphs of your post, by attempting to explain the superior vision abilities of animals by referring to data which are available to them thanks to the same vision or other cognitive abilities ("get data from objects" - how do they know they see objects? how do they combine points of light into objects? how do they represent an object in their memory for further processing?). If we train our CNNs to segment bitmaps into objects and remember them, half of our work will be done ;-)

It's a very tricky subject, and it's easy to fall into these traps. You should carefully define the terms you're using.
Nice.
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 10:18 am

There is much we don't know about how humans and animals learn to see. But it is clear that they fail to learn to see if exposed only to disembodied snapshots, the way current AI systems are. Even streaming imagery is insufficient. Interacting with the world, moving through it, and reaching out and touching things seem essential.
That's not true: at least primates can recognise photographs. Many animals also pass the infamous "mirror test".
 
User avatar
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 12:03 pm

There is much we don't know about how humans and animals learn to see. But it is clear that they fail to learn to see if exposed only to disembodied snapshots, the way current AI systems are. Even streaming imagery is insufficient. Interacting with the world, moving through it, and reaching out and touching things seem essential.
That's not true: at least primates can recognise photographs. Many animals also pass the infamous "mirror test".
Creatures only pass the mirror test after extensive life experience. Human babies, for example, can't pass the mirror test until somewhere between 13 and 24 months.
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

June 13th, 2018, 1:11 pm

There is much we don't know about how humans and animals learn to see. But it is clear that they fail to learn to see if exposed only to disembodied snapshots, the way current AI systems are. Even streaming imagery is insufficient. Interacting with the world, moving through it, and reaching out and touching things seem essential.
That's not true: at least primates can recognise photographs. Many animals also pass the infamous "mirror test".
What about bats?
 
User avatar
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 1:36 pm

There is much we don't know about how humans and animals learn to see. But it is clear that they fail to learn to see if exposed only to disembodied snapshots, the way current AI systems are. Even streaming imagery is insufficient. Interacting with the world, moving through it, and reaching out and touching things seem essential.
That's not true: at least primates can recognise photographs. Many animals also pass the infamous "mirror test".
What about bats?
They wing it.
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 3:25 pm

There is much we don't know about how humans and animals learn to see. But it is clear that they fail to learn to see if exposed only to disembodied snapshots, the way current AI systems are. Even streaming imagery is insufficient. Interacting with the world, moving through it, and reaching out and touching things seem essential.
That's not true: at least primates can recognise photographs. Many animals also pass the infamous "mirror test".
Creatures only pass the mirror test after extensive life experience. Human babies, for example, can't pass the mirror test until somewhere between 13 and 24 months.
OK, so you meant that all animals need multi-modal experience in order to develop vision. I can take that on faith, but I doubt anyone has tested it experimentally (and I wouldn't want any animal to be subjected to such an experiment).
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

June 13th, 2018, 3:50 pm

That's not true: at least primates can recognise photographs. Many animals also pass the infamous "mirror test".
What about bats?
They wing it.
What?
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

June 13th, 2018, 4:15 pm

That's not true: at least primates can recognise photographs. Many animals also pass the infamous "mirror test".
Creatures only pass the mirror test after extensive life experience. Human babies, for example, can't pass the mirror test until somewhere between 13 and 24 months.
OK, so you meant that all animals need multi-modal experience in order to develop vision. I can take that on faith, but I doubt anyone has tested it experimentally (and I wouldn't want any animal to be subjected to such an experiment).
BTW, based on fMRI studies, different regions of the brain are responsible for bodily and mental self-perception (elephants, humans, chimps, dolphins, and one of my cats are indeed confirmed to have the former, based on the mirror test). Still, that's something different from recognising an arbitrary object.
 
User avatar
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

June 13th, 2018, 4:42 pm

That's not true: at least primates can recognise photographs. Many animals also pass the infamous "mirror test".
Creatures only pass the mirror test after extensive life experience. Human babies, for example, can't pass the mirror test until somewhere between 13 and 24 months.
OK, so you meant that all animals need multi-modal experience in order to develop vision. I can take that on faith, but I doubt anyone has tested it experimentally (and I wouldn't want any animal to be subjected to such an experiment).
There's a long line of such research, but you and kata would be truly appalled by it. Perhaps the most interesting study (http://marom.net.technion.ac.il/files/2 ... d-1963.pdf) gave two kittens identical visual experiences, but only one kitten of the pair got to explore its environment. The kitten that only saw the world passively never developed visually guided depth perception, as judged by its failing the blink test, the visual-cliff test, and the paw-eye coordination test. Because that kitten never got to explore on its own, it never learned near and far (which would seem to require experiencing self-propelled motion toward and away from things), or that visual data gives any indication of distance.