A new study of human vision has come out of the fantastic labs at MIT, this time serving as a proof of concept for current predictions about how humans make sense of their visual world.
Researchers at the McGovern Institute for Brain Research developed computational algorithms for parsing a visual scene and marking "areas of interest" meant to mimic those a human would choose. To test these predictions, the researchers had the program predict which areas of a scene people would inspect first, then recorded the eye movements of actual people looking through the same scene.
They theorized that, rather than identifying each object in a visual field, people first mark out a coarse topography of what they are seeing, flagging certain areas as more important than others. By weighting certain kinds of features as more "important" and other features less so, the process of searching through a visual field becomes more efficient and more focused on the specific task at hand.
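That kind of coarse importance-weighting is in the spirit of classic center-surround saliency models. As a loose illustration only (this is a generic sketch in NumPy, not the McGovern group's actual algorithm, and the window sizes and contrast measure here are my own assumptions), here is what "marking a coarse topography" of a scene might look like: score each pixel by how much its local neighborhood differs from its wider surround, then read off the most salient locations.

```python
import numpy as np

def saliency_map(image, fine=3, coarse=9):
    """Crude center-surround contrast: how much each pixel's small
    neighborhood (fine) stands out from its wider surround (coarse).
    Illustrative only -- not the algorithm from the MIT study."""
    def box_mean(img, k):
        # k-by-k mean filter built from a padded 2D cumulative sum,
        # so the sketch needs only NumPy.
        pad = k // 2
        p = np.pad(img, pad, mode="edge")
        c = np.cumsum(np.cumsum(p, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

    center = box_mean(image, fine)
    surround = box_mean(image, coarse)
    s = np.abs(center - surround)          # contrast between the two scales
    return s / s.max() if s.max() > 0 else s

def top_regions(sal, n=3):
    """Coordinates (row, col) of the n most salient pixels -- a stand-in
    for the 'areas of interest' a viewer might fixate first."""
    flat = np.argsort(sal, axis=None)[::-1][:n]
    return [tuple(int(v) for v in np.unravel_index(i, sal.shape)) for i in flat]

# A blank scene with one bright spot: the model should "look" there first.
scene = np.zeros((20, 20))
scene[10, 10] = 1.0
peak = top_regions(saliency_map(scene), n=1)[0]
```

Run on that toy scene, the single most salient location lands on (or immediately next to) the bright spot, which is the intuition the study tested: a cheap, coarse pass over the scene is enough to decide where to look first.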
Ultimately, the program was highly successful in marking areas that people would look at first, suggesting that humans may be employing the same kinds of algorithms in deciding what to look at first. While it may not be a perfect match with how our brains are wired, it's an interesting twist on the old "what and where" dual-stream paradigm.
It seems to me that this research might suggest a third parallel process, something along the lines of "how important." Whether this is an integral part of the basic visual process, or a higher-order function that comes after the fact, remains to be seen.
from MIT News