A Deeper Look at Peripheral Vision
Often, when peripheral vision is explained, an image like the one on the right is used to show how only a small area around our point of focus is defined in high resolution, while the periphery is shown as blurry. While this gets the point across, I think it actually obscures the subtle nature of perception.
If I focus on some part of the image on the left, it is true that my visual experience of the other quadrants is diminished, but it is somehow less available experientially rather than degraded visually. At all times I can clearly tell the difference between the quality of the left image and the right image. If I focus on a part of the right-hand image, the unfocused portion does not blur further into a uniform grey, but retains the suggestion of separate fuzzy units.
If peripheral vision were truly a blur, I would also expect that when focusing on the left-hand image, the peripheral boxes would look more like the ones on the right, but they don’t. I can see that the peripherized blocks of the left image are not especially blurry. No matter how I squint or unfocus or push both images way into the periphery of my sight, I can easily tell that the two images are quite different. I can’t resolve detail, but I can see that there is potentially detail to be resolved. If I look directly at any part of the blurry image on the right, I can easily count the fuzzies, even though they are blurred. By contrast, with the image on the left, I can’t count the number of blocks or dots that are there, even though I can see that they are block-like. There is an attenuation of optical acuity, but not in a way which diminishes the richness of the visual textures. There is uncertainty, but only in a top-down way. We still have a clear picture of the image as a whole, but the parts which we aren’t looking at directly are seen as in a dream – distinct but generic, and psychologically slippery.
What I think this shows is that there are two different types of information-related entropy and two different categories of physics – one public and quantitative, and one private and qualitative or aesthetic. Peripheral vision is not a lossy compression in any aesthetic sense. If perception were really driven exclusively by bottom-up processing, we should be able to reproduce the effect of peripheral vision in an image literally, but we can’t. The best we can do is present this focused-in-the-center, blurry-everywhere-else kind of image, which suggests peripheral vision figuratively, but the aesthetic quality of the peripheral experience cannot be represented.
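For what it’s worth, here is roughly how that figurative stand-in gets made. This is a minimal sketch assuming Pillow and NumPy; the filenames, the fovea_radius, and the blur levels are arbitrary placeholders of mine, not parameters from any vision research.

```python
# A sketch of the "focused-in-the-center, blurry-everywhere-else" image
# described above: Gaussian blur that increases with distance from a
# simulated fixation point. All parameters are illustrative guesses.
import numpy as np
from PIL import Image, ImageFilter

def foveated_blur(img, fovea_radius=100, max_blur=8, steps=4):
    img = img.convert("RGB")
    w, h = img.size
    yy, xx = np.mgrid[0:h, 0:w]
    # Eccentricity: 0 inside the "fovea", rising to 1 at the far corner.
    dist = np.hypot(xx - w / 2, yy - h / 2)
    ecc = np.clip((dist - fovea_radius) / (dist.max() - fovea_radius), 0, 1)

    result = np.asarray(img, dtype=float)
    for i in range(1, steps + 1):
        # Blend in progressively stronger blur for each eccentricity band.
        blurred = img.filter(ImageFilter.GaussianBlur(max_blur * i / steps))
        band = np.clip(ecc * steps - (i - 1), 0, 1)[..., None]
        result = result * (1 - band) + np.asarray(blurred, dtype=float) * band
    return Image.fromarray(result.astype(np.uint8))

# Hypothetical filenames, for illustration only.
foveated_blur(Image.open("blocks.png")).save("blocks_peripheral.png")
```

Notice that everything the code produces is just a spatial gradient of low-pass filtering; there is no parameter anywhere that could encode the “present but uncountable” quality described above, which is exactly the point.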
I suggest that the capacity to see is more than a detection of optical information, and that it is not a projection of a digital simulation (otherwise we would be able to produce the experience literally in an image). Seeing is the visual quality of attention, not a quantity of data. It is not only a functional mechanism for acquiring data; it is, more importantly, an aesthetic experience.
Why Light Isn’t Made of Photons
1. Your rod and cone cells are constantly pumping glutamate into the synapse, until light hits the Vitamin A molecules stuck inside the opsin proteins that fill the rod’s outer segment.
2. The Vitamin A molecule changes from its bent 11-cis isomer to the longer all-trans shape, which pushes the protein around in whatever way it can.
“The molecule undergoes a series of shape changes to try and better fit the binding site. Therefore, a series of changes in the protein occurs to expel the trans-retinal from the protein.”(source)
3. The mechanical changes in the opsin protein cause the rod cell to change its electric charge.
4. Hyperpolarization of the rod cell stops it from releasing glutamate, which has the effect of simultaneously
5. Turning on (depolarizing) the on-center bipolar cells that stimulate each ganglion and turning off (hyperpolarizing) the surrounding off-center bipolar cells that lead to the ganglion. (YouTube)
6. It appears that the elongation of the retinal (Vitamin A isomer) molecule allows the rod cell as a whole to absorb more visible light – so that detecting light makes your eye become more sensitive to light. Sort of like your eyes are opening their eyes.
7. “The nerves reach the optic chiasm, where the nerve fibers from the inside half of each retina cross to the other side of the brain, but the nerve fibers from the outside half of the retina stay on the same side of the brain.” (Each side of your brain gets a complete stereoscopic image, one L+R and the other R+L. It’s really a stereo stereo image.)
This is just a casual overview. Feel free to correct me if I have it wrong.
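For anyone who wants the cascade in steps 1 through 5 laid out operationally, here is a toy model in Python. It is a caricature, not physiology: the saturation curve, the millivolt figures, and the linear bipolar responses are invented stand-ins for the real kinetics.

```python
# Toy model of steps 1-5: light isomerizes retinal, the rod hyperpolarizes
# and cuts its glutamate release, and the ON- and OFF-center bipolar cells
# respond in opposite directions. All numbers are invented illustrations.

def rod_response(light_intensity):
    # 1-2. Fraction of 11-cis-retinal isomerized to all-trans (saturating).
    isomerized = light_intensity / (light_intensity + 1.0)

    # 3. Membrane potential: roughly -40 mV in darkness (the "dark current"
    # keeps the cell depolarized), hyperpolarizing toward -70 mV in light.
    membrane_mv = -40.0 - 30.0 * isomerized

    # 4. Glutamate release tracks depolarization, so it falls in light.
    glutamate = 1.0 - isomerized

    # 5. ON-center bipolar cells are inhibited by glutamate, so less
    # glutamate activates them; OFF-center bipolar cells are excited by
    # glutamate, so they go quiet.
    return {"membrane_mv": membrane_mv,
            "glutamate": glutamate,
            "on_bipolar": 1.0 - glutamate,
            "off_bipolar": glutamate}

for light in (0.0, 1.0, 10.0):
    print(light, rod_response(light))
```

Notice that nothing called “light” ever appears inside the model; only a reciprocal bookkeeping of molecular shape, voltage, and neurotransmitter.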
The impression I get only makes me more convinced of my interpretation that photons cannot be considered light in any way. Photons are quorum-synchronized reciprocal changes among atoms.
What our visual cortex would ‘see’ is nothing more than interruptions in the flow of glutamate in bipolar cells, which in turn are nothing more than responses of a stack of protein sheets to the adjustments in the shape of Vitamin A molecules. What is tickling our nervous system is not photons, but orchestrations of symmetric changes in cellular biochemistry.
The claim that vision is a transduction of optical information is misleading. It implies that we are getting a directly anamorphic imprint of photon impacts when, in fact, our visual experience, even if it could be described in biochemical terms, is quite indirect. What the brain detects is like news coverage of an electoral college voting on an issue in another country.
Of course, none of this begins to address the hard problem. Photons, molecules, cells, and brains would have no way of producing seemingly non-molecular qualia like color, orientation, and beauty if they were the simple mechanical objects that we presume them to be. The brain does not need to make an image out of glutamate fluctuations to be functionally informed by them. The data is already there; what more would be required?
Light is not a representation of photons, or glutamate, or cell polarizations; it is an anthropological-scale sense of visual relation. Not a substance or an ‘energy’ but a sensitivity to objects being energized. As the so-called ‘dark current’ of our retinal cells suggests, it is the job of our eyes to silence the noise of our brain and to open the bidirectional pathways of sense and motive; of receptive understanding and projection of attention.
On Color Perception
Color Perception Is Not in the Eye of the Beholder: It’s in the Brain
Images of living human retinas showing the wide diversity in the number of cones sensitive to different colors. (Photo credit: University of Rochester)
First-ever images of living human retinas have yielded a surprise about how we perceive our world. Researchers at the University of Rochester have found that the number of color-sensitive cones in the human retina differs dramatically among people—by up to 40 times—yet people appear to perceive colors the same way. The findings, on the cover of this week’s Journal of Neuroscience, strongly suggest that our perception of color is controlled much more by our brains than by our eyes.
“We were able to precisely image and count the color-receptive cones in a living human eye for the first time, and we were astonished at the results,” says David Williams, Allyn Professor of Medical Optics and director of the Center for Visual Science. “We’ve shown that color perception goes far beyond the hardware of the eye, and that leads to a lot of interesting questions about how and why we perceive color.”
Williams and his research team, led by postdoctoral student Heidi Hofer, now an assistant professor at the University of Houston, used a laser-based system developed by Williams that maps out the topography of the inner eye in exquisite detail. The technology, known as adaptive optics, was originally used by astronomers in telescopes to compensate for the blurring of starlight caused by the atmosphere.
Williams turned the technique from the heavens back toward the eye to compensate for common aberrations. The technique allows researchers to study the living retina in ways that were never before possible. The pigment that allows each cone in the human eye to react to different colors is very fragile, and normal microscope light bleaches it away. This means that looking at the retina of a cadaver yields almost no information on the arrangement of its cones, and there is certainly no ability to test for color perception. Likewise, the amino acids that make up two of the three different-colored cones are so similar that there are no stains that can bind to some and not others, a process often used by researchers to differentiate cell types under a microscope.
Imaging the living retina allowed Williams to shine light directly into the eye to see what wavelengths each cone reflects and absorbs, and thus to which color each is responsive. In addition, the technique allows scientists to image more than a thousand cones at once, giving an unprecedented look at the composition and distribution of color cones in the eyes of living humans with varied retinal structure.
Each subject was asked to tune the color of a disk of light to produce a pure yellow light that was neither reddish yellow nor greenish yellow. Everyone selected nearly the same wavelength of yellow, showing an obvious consensus over what color they perceived yellow to be. Once Williams looked into their eyes, however, he was surprised to see that the number of long- and middle-wavelength cones—the cones that detect red, green, and yellow—were sometimes profusely scattered throughout the retina, and sometimes barely evident. The discrepancy was more than a 40:1 ratio, yet all the volunteers were apparently seeing the same color yellow.
“Those early experiments showed that everyone we tested has the same color experience despite this really profound difference in the front-end of their visual system,” says Hofer. “That points to some kind of normalization or auto-calibration mechanism—some kind of circuit in the brain that balances the colors for you no matter what the hardware is.”
In a related experiment, Williams and postdoctoral fellow Yasuki Yamauchi, working with other collaborators from the Medical College of Wisconsin, gave several people colored contacts to wear for four hours a day. While wearing the contacts, people tended to eventually feel as if they were not wearing the contacts, just as people who wear colored sunglasses tend to see colors “correctly” after a few minutes with the sunglasses. The volunteers’ normal color vision, however, began to shift after several weeks of contact use. Even when not wearing the contacts, they all began to select a pure yellow that was a different wavelength than they had before wearing the contacts.
“Over time, we were able to shift their natural perception of yellow in one direction, and then the other,” says Williams. “This is direct evidence for an internal, automatic calibrator of color perception. These experiments show that color is defined by our experience in the world, and since we all share the same world, we arrive at the same definition of colors.”
Williams’ team is now looking to identify the genetic basis for this large variation between retinas. Early tests on the original volunteers showed no simple connection between certain genes and the number and diversity of color cones, but Williams is continuing to search for the responsible combination of genes.
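One standard way to flesh out the “normalization or auto-calibration mechanism” Hofer describes is von Kries-style gain control, in which each cone class’s signal is divided by its own long-run average response. The sketch below is an illustration of that idea only, not the researchers’ actual model: the Gaussian sensitivities, peak wavelengths, and cone counts are all invented, and real unique-yellow settings sit near 578 nm rather than wherever these toy curves happen to cross.

```python
# Von Kries-style normalization: each cone channel is rescaled by its own
# average response to the shared environment, so the red/green balance
# point (unique yellow) stops depending on how many L vs. M cones an eye
# happens to contain. All sensitivities and counts are invented.
import numpy as np

wavelengths = np.arange(530, 601)  # nm, a band around yellow

def cone_sensitivity(peak_nm, width_nm=30.0):
    return np.exp(-((wavelengths - peak_nm) ** 2) / (2 * width_nm ** 2))

L = cone_sensitivity(565)  # long-wavelength cones (toy curve)
M = cone_sensitivity(535)  # middle-wavelength cones (toy curve)

def unique_yellow(l_count, m_count):
    # Raw signals scale with cone numbers...
    raw_l, raw_m = l_count * L, m_count * M
    # ...but each channel is divided by its own mean response, so the
    # count factors cancel exactly.
    norm_l, norm_m = raw_l / raw_l.mean(), raw_m / raw_m.mean()
    # Unique yellow: where the adapted L and M signals balance.
    return wavelengths[np.argmin(np.abs(norm_l - norm_m))]

# A 40:1 swing in the L:M cone ratio leaves the match point unmoved:
print(unique_yellow(l_count=4000, m_count=100))
print(unique_yellow(l_count=100, m_count=4000))
```

Because each channel is rescaled by its own average, the cone counts cancel out of the comparison entirely, which is what a 40:1 variation in retinal hardware with no variation in perceived yellow would require.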
I interpret this study as supporting multisense realism in the following two ways:
1) It opens the possibility that perception is not a machine that simulates an external factual reality but rather an interactive sensitivity on many levels of material organization.
2) It suggests that we see through our retina rather than the retina being responsible for what we see. Our cone cells, like antennae, faithfully amplify their photosensitivity for us, the way a radio antenna facilitates our access to radio programs, but they do not dictate the content of those programs.
While I don’t claim to know the origin of our color qualia, I have a conjecture that what we see is color of microbiological origin – specifically an inheritance from our earliest photosynthesizing single-celled ancestors. Our eyeballs seem to recapitulate in microcosm the warm saline marine environment of the Precambrian. Metal-centered pigments such as hemoglobin, chlorophyll, and hemocyanin (red, green, and blue respectively) can perhaps give us clues linking eukaryotic metabolism with the qualitative presentation of its sensitivity to oxygen, heat, and light.
For a billion years, life on Earth probably consisted of oceans full of blue-green algae, blooming and shrinking together in enormous communities. The photosynthetic impact of circadian rhythms and seasonal cycles over those hundreds of millions of years is a primordial heartbeat or alphabet of optical sensitivity. Chlorophyll, with its room-temperature quantum-mechanical properties, may very well have a sophisticated palette for light frequencies and incident angles which is passed on to the cell as a whole through DNA or microtubules or both.
This kind of scenario makes more sense to me than the rather disjointed story of visual perception we have now: colorless wavelengths of light magically turning into colors through a pinball machine of cells and signals; an arbitrary yet immutable palette of hues and hue combinations; qualia which represent, with a nothing-like something, that which is presented as a something-like nothing; a universe devoid of sense coming into sensation for no explainable purpose through no explainable mechanism.
I say that sooner or later, something has to sense something. Whether it is microtubules, neurons, retinal cells, or some larger clump of neural tissue, something has to be us having a visual sensory experience. It really makes no difference at what level this matter-to-mind transduction occurs, as it is equally improbable at any level. Sweeping it under the rug of microcosm or emergence only makes it more obvious to me that we are missing the big picture. The fact that we see means that matter sees. I don’t even know that matter sees light; I think it may be more accurate to say that matter sees itself feel things when it is separated by space, and that this ability to see is what we call light.