The Deep Learning Revolution
Book by Terrence J. Sejnowski
Sharks and rays (which include skates) are able to sense very weak electrical fields; indeed, they can detect the signal from a 1.5-volt battery clear across the Atlantic Ocean. With this sixth sense, skates can navigate by the weak electrical signals produced by their motion through the earth's magnetic field, which generates microvolt signals in their electroreceptors.
Hermann von Helmholtz was a nineteenth-century physicist and physician who developed a mathematical theory and an experimental approach to vision that forms the basis for our current understanding of visual perception.
Vision is our most acute and also our most studied sense. With two frontal eyes, we have exquisite binocular depth perception, and half of our cortex is visual.
In one tenth of a second, ten billion neurons in our visual cortex, working together in parallel, can identify a cup in a cluttered scene, even though we may never have seen that particular cup before and even though it could appear in any location.
Inferotemporal Cortex of Monkeys
The visual system of the macaque is similar to ours, and we share the same stages of visual processing:
LGN: Lateral Geniculate Nucleus
V1: Primary Visual Cortex [simple visual forms: edges, corners]
V2: Secondary Visual Cortex
V4: Visual Area 4 [intermediate visual forms, feature groups, etc.]
PIT: Posterior Inferotemporal Cortex
AIT: Anterior Inferotemporal Cortex [high-level object descriptions: faces, objects]
PFC: Prefrontal Cortex [categorical judgements, decision making]
PMC: Premotor Cortex
MC: Motor Cortex [motor commands]
Retina (20-40 ms) --> LGN (30-50 ms) --> V1 (40-60 ms) --> V2 (50-70 ms) --> V4 (60-80 ms) --> PIT (70-90 ms) --> AIT (80-100 ms) --> PFC (100-130 ms) --> PMC (120-160 ms) --> MC (140-190 ms) --> Spinal Cord (160-220 ms) --> Finger Muscle (180-260 ms)
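These numbers make a simple arithmetic point. A quick Python tabulation (my own sketch, not code from the book) shows that each area adds only a few tens of milliseconds, which is consistent with rapid recognition being driven by a single fast feedforward sweep through the hierarchy.

```python
# The latency chain above, stored as data. Stage names and millisecond
# windows are copied from the figure; the loop prints how much each stage
# adds relative to the previous one (roughly 10-40 ms per step).
stages = [
    ("Retina",        (20, 40)),
    ("LGN",           (30, 50)),
    ("V1",            (40, 60)),
    ("V2",            (50, 70)),
    ("V4",            (60, 80)),
    ("PIT",           (70, 90)),
    ("AIT",           (80, 100)),
    ("PFC",           (100, 130)),
    ("PMC",           (120, 160)),
    ("MC",            (140, 190)),
    ("Spinal Cord",   (160, 220)),
    ("Finger Muscle", (180, 260)),
]

for (name, (lo, hi)), (_, (prev_lo, prev_hi)) in zip(stages[1:], stages):
    print(f"{name:13s} adds ~{lo - prev_lo}-{hi - prev_hi} ms")
```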
Vision from the Bottom Up
If we follow the signals generated by an image into the brain, we can see how they are transformed over and over again as they pass from one stage of processing to the next. Vision starts in the retina, where photoreceptors convert light into electrical signals. Two layers of neurons within the retina process the visual signals in space and time, ending with the ganglion cells, whose axons project out of the eye through the optic nerve.
In a classic 1953 experiment whose results hold for all mammals, Stephen Kuffler recorded from the output neurons of the retina of a living cat while stimulating them to fire spikes in response to spots of light. He reported that some output neurons responded to a spot of light in their center when it went on, and others responded to a spot of light in their center when it went off. But just outside the centers, the surrounding annulus had the opposite polarity: on-centers with off-surrounds and off-centers with on-surrounds. The responses of ganglion cells to patterns of light are called "receptive field" properties.
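The standard mathematical idealization of this on-center/off-surround organization is a difference of Gaussians. The sketch below is my own illustration (not code from the book, with all sizes and widths chosen arbitrarily); it shows that such a unit responds strongly to a small spot of light centered on its receptive field but hardly at all to uniform illumination, because the center and surround cancel.

```python
import numpy as np

def difference_of_gaussians(size=21, sigma_center=1.5, sigma_surround=4.0):
    """Idealized on-center/off-surround receptive field: a narrow excitatory
    Gaussian center minus a broad inhibitory Gaussian surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

def response(stimulus, rf):
    """Net drive to the model ganglion cell: pointwise product summed over space."""
    return float(np.sum(stimulus * rf))

rf = difference_of_gaussians()

# A small bright spot over the center excites the on-center cell strongly...
spot = np.zeros_like(rf)
spot[8:13, 8:13] = 1.0
print("small central spot:", response(spot, rf))    # clearly positive

# ...while uniform light over center and surround nearly cancels out.
print("uniform field:     ", response(np.ones_like(rf), rf))  # close to zero
```

Sliding this same kernel across an image is just a convolution, which is one reason center-surround filtering is often cited as a biological ancestor of the first layer of a convolutional network.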
Synaptic Plasticity
If one eye of a cat is closed for the first few months of its life, cortical neurons that normally would be driven by both eyes become monocular, exclusively driven by the open eye. Monocular deprivation drives changes in the strengths of synapses in the primary visual cortex, where neurons receive converging inputs from the two eyes for the first time. After the critical period of cortical plasticity in the primary visual cortex is over, the closed eye can no longer influence cortical neurons, resulting in a condition called "amblyopia." Although uncorrected eye misalignment, or strabismus, which is common in babies, will greatly reduce the number of cortical neurons that are binocular and preclude binocular depth perception, a timely operation to align the eyes within the critical period can rescue binocular neurons.
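One standard way to think about this competition is Hebbian plasticity: synapses that are active when the postsynaptic cell fires get stronger, and the two eyes' inputs compete for a limited total synaptic strength. The toy simulation below is my own illustration of that idea (not a model from the book, with arbitrary parameters); depriving one eye lets the open eye capture the cortical cell.

```python
import numpy as np

rng = np.random.default_rng(0)

def ocular_dominance(deprived_eye=None, steps=5000, lr=0.01):
    """Toy Hebbian competition between left- and right-eye inputs to one cell.

    Weights grow in proportion to correlated pre- and postsynaptic activity
    and are normalized so the two eyes compete for a fixed total strength.
    """
    w = np.array([0.5, 0.5])          # [left-eye weight, right-eye weight]
    for _ in range(steps):
        x = rng.random(2)             # visually driven activity from each eye
        if deprived_eye is not None:
            x[deprived_eye] *= 0.05   # monocular deprivation: very weak drive
        y = w @ x                     # postsynaptic response
        w += lr * y * x               # Hebbian update: fire together, wire together
        w = np.clip(w, 0.0, None)
        w /= w.sum()                  # competitive normalization
    return w

print("normal rearing:       ", ocular_dominance())               # stays roughly balanced
print("left eye (0) deprived:", ocular_dominance(deprived_eye=0))  # right eye takes over
```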
Even though most of the neurons in our brains are the same ones we had at birth, nearly every component of those neurons and the synapses that connect them turns over every day. Proteins are replaced as they wear out, and lipids in the membrane are renewed. With so much dynamic turnover, it is a mystery how our memories are maintained over our lifetimes.
There is another possible explanation for the apparent longevity of memories: they may be like scars on our bodies that have survived as markers of past events in our lives. The place to look for these markers is not inside neurons, where there is constant turnover, but outside, in the space between neurons, where the extracellular matrix, made from proteoglycans that are like the collagen in scar tissue, is a tough material that lasts many years. If this conjecture is ever proven, it means that our long-term memories are embedded in the brain's "exoskeleton," and we have been looking for them in the wrong places.
Shape from Shading
Steven Zucker was recently able to explain how we see folds in shaded images, based on the close relationship between the three-dimensional contours of a surface, as seen on contour maps of mountains, and the constant-intensity contours of images. The link is provided by the geometry of surfaces. This explains the mystery of why our perception of shape is so insensitive to differences in the lighting and the surface properties of objects. It may also explain why we are so good at reading contour maps, where the contours are made explicit, and why we need only a few well-placed internal lines to see the shapes of objects in cartoons.
Curvature from shading: our visual system can extract the shape of an object from the slowly varying changes in brightness across the image within its bounding contour. You see eggs or egg cartons (bumps or hollows) depending on the direction of the shading and your assumption about the direction of the lighting, which is usually assumed to come from overhead.
Ref: "Perception of shape from Shading" from VS Ramachandran.
The function of a neuron is determined not simply by how it responds to inputs, but also by the neurons it activates downstream, that is, by its "projective field." Until recently, the output of a neuron was much more difficult to determine than its inputs, but new genetic and anatomical techniques make it possible to track axonal projections downstream with great precision, and new optogenetic techniques make it possible to selectively stimulate specific neurons to probe their impact on perception and behavior. Even so, our small network could only identify the curvature of hills or bowls, and we still don't know how globally organized percepts, called "gestalts" in the psychology literature, are organized in the cortex.