It is tempting to think that brains are incredibly precise machines. As we move about the world, it seems like what we see is certain, unambiguous, and unchanging. So it's only a natural extension of this to assume that what goes on between our ears is just as precise. In reality, this may be completely incorrect.
While a lot of past brain research has treated the human brain as a computational monster, crunching the numbers and using the powers of logic to represent the world around it, such an approach has proven to be difficult to connect with reality. While brains do carry out a lot of computation, the fact of the matter is that trying to process every aspect of the world around you would simply be too much to handle. What the brain needs is a way of making things more efficient, more manageable. What the brain needs is statistics.
A growing body of scientific literature has emerged in the past decade that takes a slightly different approach to understanding what it is our brains are actually doing. Rather than treating the world as a black-and-white environment where certainty is the end goal, this research embraces probability and likelihood.
Here's an example of such an approach. It details a recent project of Ruth Rosenholtz, a vision scientist in the Department of Brain and Cognitive Sciences at MIT. She's got a new model of vision that uses the statistics of the visual field as a key component in the visual computations that the brain carries out.
In the model, the brain breaks the visual field down into small pooling areas. In each area, information is gathered about the basic shapes and visual components that lie within its boundaries, and a kind of summary is computed of what the area as a whole contains. In the center of your vision, these areas are relatively small, allowing for fine-grained discrimination of your visual field. Towards the periphery, however, the areas grow larger, so the contents of a cluttered scene get lumped together into a single noisy summary.
The model makes some interesting predictions about visual discrimination that seem to match behavioral data well. For example, an "A" seen in your periphery will be relatively easy to spot if it stands alone; however, if the "A" is surrounded by other letters (such as in "TOAST"), the brain will struggle to pick it out. This is because all of these letters fall within the same large peripheral areas, and any individual member of the group is lost in the noise.
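To make the pooling idea concrete, here's a minimal sketch, and emphatically not Rosenholtz's actual model: assume the pooling area centered on a target grows linearly with the target's distance from fixation, and count how many letters end up summarized together with it. The linear growth, the constants, and the function names are all illustrative assumptions.

```python
def area_radius(eccentricity, k=0.25, floor=0.3):
    # Pooling areas grow with distance from the center of gaze
    # ("eccentricity", in degrees). The linear form and the constants
    # are illustrative assumptions, not fitted values.
    return floor + k * eccentricity

def letters_pooled(target_x, letter_xs, fixation_x=0.0):
    # Count how many letters fall inside the pooling area centered on
    # the target; more than one means the target's features get
    # summarized together with its neighbors and are hard to recover.
    radius = area_radius(abs(target_x - fixation_x))
    return sum(1 for x in letter_xs if abs(x - target_x) <= radius)

# A lone "A" 20 degrees into the periphery: pooled by itself, easy to spot.
print(letters_pooled(20.0, [20.0]))                          # -> 1

# The same "A" flanked by letters one degree apart, as in "TOAST":
print(letters_pooled(20.0, [18.0, 19.0, 20.0, 21.0, 22.0]))  # -> 5

# Near fixation the areas are small, so the target is isolated again:
print(letters_pooled(2.0, [0.0, 1.0, 2.0, 3.0, 4.0]))        # -> 1
```

The crowding prediction falls right out: the peripheral "A" shares its summary with four neighbors, while the same layout near fixation leaves the target alone in its area.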
Such an approach to vision seems to be quite fruitful, and the underlying assumptions of statistics have a lot of interesting implications for other aspects of cognition as well. This is but one example of research going on in this field...I urge you to check others out as well!
via Science Daily
A new year is upon us, and that’s always a great time to clean out the skeletons in your closet. So without further ado, let’s take a look at Jonah Lehrer’s explanation of “the decline effect” (published in The New Yorker last month). Lehrer describes this odd phenomenon whereby the statistical significance of previous scientific findings seems to decrease with age, as we get further and further away from the time they were initially reported in the literature.
As any scientist can tell you, the holy grail of an experiment is a low p-value: roughly, the probability that results at least as extreme as yours would turn up by chance alone if there were no real effect. This sounds fairly straightforward – of course we want to find things of actual importance, rather than being lulled into a false discovery by arbitrary data – but it turns out to be much hazier than a simple “yes” or “no.” P-values depend on a number of factors that can change the statistical outcome of your experiment. Things like experimental design, subject choice, even the time of day can have drastic effects on the results of an experiment.
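That "how often would chance alone do this" notion can be computed directly with a permutation test. This is my own hand-rolled sketch, not anything from Lehrer's article: shuffle the group labels many times and see how often the shuffled difference beats the observed one.

```python
import random

random.seed(0)

def permutation_p(a, b, trials=10_000):
    # Two-sided permutation test: how often does randomly shuffling
    # the group labels produce a difference in means at least as
    # large as the one actually observed?
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials

# Two groups drawn from the same distribution (no real effect):
a = [random.gauss(0, 1) for _ in range(20)]
b = [random.gauss(0, 1) for _ in range(20)]
print(permutation_p(a, b))  # typically large: nothing to see here

# Two clearly separated groups: shuffling almost never matches the gap.
print(permutation_p([0.0] * 10, [5.0] * 10))
```

Rerun it with a different seed, a different sample size, or noisier data and the first p-value bounces around, which is exactly the haziness described above.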
Scientists’ answer to such imperfections is to run the experiment over and over in a number of different environments. This is the beauty of scientific empiricism; at its best, it has the ability to extract truth from the noisy world around us. However, as Lehrer notes, there is one variable that we never change: the fact that people are the ones running these studies. This statement may seem annoyingly obvious, but it’s incredibly important to consider for any scientific study. While the empirical process is designed to provide an objective method of analyzing data, humans are inherently imperfect at being objective and unbiased, and this can manifest itself in the conclusions we take from our studies.
Suppose that you run a study with 90 subjects. The first 80 subjects show a fantastic result. You eagerly begin working on your forthcoming journal article, ready to share your findings with the world. However, upon running the final 10 subjects, you find that this result almost totally disappears. Bummer. An objective machine might say “maybe there isn’t anything here after all” and move on. But people aren’t objective, and they’ve got a stake in giving the world something that is deemed significant. So you decide to leave out those last few subjects, citing them as outliers and thus non-representative of the general population, and publish an article. A few years pass, and a number of researchers (less invested in your discovery) decide to take another look at that paper. They replicate your experimental design, but they fail to replicate your stellar results.
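The bias in that story is easy to simulate. Under the null hypothesis (no effect at all), a researcher who reports whichever of "all 90 subjects" or "the first 80 subjects" looks stronger will systematically overstate the effect, which is precisely the kind of result later replications fail to find. The sample sizes and noise level here are illustrative assumptions:

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

honest, cherry_picked = [], []
for _ in range(2000):
    # Every "subject" is pure noise: the true effect is exactly zero.
    data = [random.gauss(0, 1) for _ in range(90)]
    honest.append(abs(mean(data)))
    # The tempted researcher reports whichever looks stronger:
    # the full sample, or the sample with the last ten dropped.
    cherry_picked.append(max(abs(mean(data)), abs(mean(data[:80]))))

print(round(mean(honest), 3))
print(round(mean(cherry_picked), 3))  # always at least as large
```

Nobody here fabricated data; the inflation comes purely from choosing, after the fact, which version of the analysis to publish.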
I don’t mean to cast a shadow of doom and gloom over the scientific enterprise. I just want to remind everyone that as long as human beings carry out scientific work, human faults will continue to plague our results. If we hope to come to an understanding of the world around us, it is important that we accept and anticipate the flaws inherent in our system.
via The New Yorker
So this isn't technically about science, although it's about a field that is of extreme importance to science: statistics! I feel like stats is one of those fields that is, at first glance, incredibly boring...however, if you spend the time to delve in a bit deeper and figure out the many things that stats can tell you, it becomes fascinating.
And thank god that there are people like Hans Rosling around to do just that. You may have seen a number of his TED talks in which he uses historical data to illustrate the history of the world with beautiful clarity. Well, he's got another video out, and it's a doozy. It comes from BBC Four's special program "The Joy of Stats," in which Professor Rosling takes us on a statistical tour that covers all kinds of different topics.
One of the coolest things about stats (and number crunching in general) is that they allow you to reveal hidden patterns within seemingly random and jumbled data. The more you look at the world, the more you realize that it tends to be both predictable and noisy at the same time. Statistics are a way for us to make sense of this ambiguous universe, sketching out the underlying rules behind a seemingly uninterpretable world.
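As a toy illustration of pulling a pattern out of a jumble (my example, not Rosling's): bury a simple linear rule under heavy random noise, and an ordinary least-squares fit digs it back out anyway.

```python
import random

random.seed(2)

# A hidden rule, y = 3x + 1, buried under heavy Gaussian noise.
xs = [i / 10 for i in range(100)]
ys = [3 * x + 1 + random.gauss(0, 2) for x in xs]

# Ordinary least squares recovers the underlying slope and intercept.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # close to 3 and 1
```

Any single data point looks like chaos; averaged over a hundred of them, the rule reappears.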
It is rare these days to find a person who is knowledgeable and engaging enough to get people excited about stats, but take a look at the above video and you may find yourself a convert!
via BBC Four