It is tempting to think that brains are incredibly precise machines. As we move about the world, what we see seems certain, unambiguous, and unchanging. So it's natural to assume that what goes on between our ears is just as precise. In reality, this may be completely wrong.
A lot of past brain research has treated the human brain as a computational monster, crunching numbers and using the powers of logic to represent the world around it, but that approach has proven difficult to connect with reality. Brains do carry out a lot of computation, yet trying to process every aspect of the world around you would simply be too much to handle. What the brain needs is a way of making things more efficient, more manageable. What the brain needs is statistics.
A growing body of scientific literature has emerged in the past decade that takes a slightly different approach to understanding what it is our brains are actually doing. Rather than treating the world as a black-and-white environment where certainty is the end goal, perhaps what the brain really deals in is probability and likelihood.
Here's an example of such an approach. It details a recent project of Ruth Rosenholtz, a vision scientist in the Department of Brain and Cognitive Sciences at MIT. She's got a new model of vision that uses the statistics of the visual field as a key component in the visual computations that the brain carries out.
In the model, the brain breaks the visual field down into small areas of focus. Within each area, information is gathered about the basic shapes and visual components that lie inside its boundaries, and a kind of summary is made of what the area as a whole contains. In the center of your vision, these areas are relatively small, allowing for fine-grained discrimination of your visual field. Toward the periphery, however, the areas grow larger, so cluttered scenes collapse into noisy summaries.
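To make the idea concrete, here is a toy sketch of that pooling scheme on a 1-D "visual field." The growth rule, parameter names, and summary statistics (mean and spread) are my own illustrative assumptions, not details of Rosenholtz's actual model:

```python
from statistics import mean, pstdev

def pooling_regions(field_len, fovea, min_width=2, growth=0.5):
    """Partition a 1-D visual field into pooling regions whose width
    grows with eccentricity (distance from the fovea).
    Toy sketch only; the linear growth rule is an assumption."""
    regions = []
    pos = fovea
    while pos < field_len:                     # walk rightward from the fovea
        width = max(min_width, int(min_width + growth * (pos - fovea)))
        regions.append((pos, min(pos + width, field_len)))
        pos += width
    pos = fovea
    while pos > 0:                             # walk leftward from the fovea
        width = max(min_width, int(min_width + growth * (fovea - pos)))
        start = max(pos - width, 0)
        regions.append((start, pos))
        pos = start
    return sorted(regions)

def summarize(field, regions):
    """Discard per-pixel detail: keep only each region's mean and spread."""
    return [(lo, hi, mean(field[lo:hi]), pstdev(field[lo:hi]))
            for lo, hi in regions]
```

Running `pooling_regions(40, 20)` produces narrow regions near the center and progressively wider ones toward the edges, so a fixed amount of peripheral clutter gets compressed into fewer, coarser summaries.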
The model makes some interesting predictions about visual discrimination that match well with behavioral data. For example, an "A" seen in your periphery will be relatively easy to spot if it stands alone; but if the "A" is surrounded by other letters (as in "TOAST"), the brain will have a much harder time picking it out. All of those letters fall within a single large peripheral region, and any individual member of the group is lost in the noise.
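The crowding effect falls straight out of summary-statistic pooling, as this toy demo shows. The letter "feature vectors" and the distance threshold are made-up illustrative values, not measurements from any real model:

```python
# Hypothetical 2-D feature vectors for a few letters (made-up values).
FEATURES = {"A": (1.0, 0.0), "T": (0.0, 1.0), "O": (0.5, 0.5), "S": (0.3, 0.7)}

def pooled(letters):
    """A peripheral pooling region keeps only the average feature vector."""
    xs, ys = zip(*(FEATURES[c] for c in letters))
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def detects_A(letters, tol=0.2):
    """Crude detector: is the pooled summary close to the 'A' template?"""
    sx, sy = pooled(letters)
    ax, ay = FEATURES["A"]
    return ((sx - ax) ** 2 + (sy - ay) ** 2) ** 0.5 < tol
```

An isolated "A" is the only thing in its region, so its summary matches the "A" template; surrounded by the rest of "TOAST", its features are averaged together with its neighbors' and the match disappears.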
Such an approach to vision seems to be quite fruitful, and the underlying statistical assumptions have a lot of interesting implications for other aspects of cognition as well. This is but one example of research going on in this field... I urge you to check others out as well!
via Science Daily