This is Your Brain On Awesome
Thoughts on the world from a student of the mind

21 Jul 2010

Seeing double?

A new study of human vision has come out of the fantastic labs at MIT, this time serving as a proof of concept for current predictions about how humans make sense of their visual world.

Researchers at the McGovern Institute for Brain Research developed computational algorithms for parsing a visual scene and marking "areas of interest" meant to mimic those a human would choose. To test these predictions, the researchers had the program mark the areas it expected humans to inspect first in a visual scene, then recorded the eye movements of actual people viewing the same scene.

They theorized that, rather than identifying each object in a visual field, people first mark out a coarse topography of what they are seeing, flagging certain areas as more important than others. By weighting certain kinds of features as more "important" and others less so, the search through a visual field becomes more efficient and more focused on the specific task at hand.
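The post doesn't describe the researchers' actual model, but the general idea of combining weighted feature maps into a single "importance" map can be sketched in a few lines. The features below (local contrast and edge strength) and their weights are illustrative assumptions of mine, not the method from the study:

```python
import numpy as np

def saliency_map(image, weights=(0.5, 0.5)):
    """Combine two simple feature maps into one 'importance' map,
    as a loose analogy to the coarse topography described above.
    The choice of features and weights is arbitrary here."""
    # Feature 1: local contrast, measured as deviation from the scene mean
    contrast = np.abs(image - image.mean())
    # Feature 2: edge strength, measured as gradient magnitude
    gy, gx = np.gradient(image.astype(float))
    edges = np.hypot(gx, gy)

    # Normalize each feature map to [0, 1] before weighting,
    # so neither feature dominates by scale alone
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    return weights[0] * norm(contrast) + weights[1] * norm(edges)

def predicted_fixations(image, k=3):
    """Return the k most 'important' pixel coordinates, highest first,
    as a toy stand-in for predicting where people look first."""
    s = saliency_map(image)
    flat = np.argsort(s, axis=None)[::-1][:k]
    return [tuple(int(v) for v in np.unravel_index(i, s.shape)) for i in flat]

# A flat scene with one bright square: the square's borders carry
# the most contrast and edge energy, so fixations cluster there
scene = np.zeros((10, 10))
scene[4:7, 4:7] = 1.0
print(predicted_fixations(scene, k=3))
```

On this toy scene the top-ranked locations fall on the bright square rather than the uniform background, which is the flavor of prediction the study tested against real eye movements.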

Ultimately, the program was highly successful in marking the areas that people looked at first, suggesting that humans may employ similar algorithms when deciding where to look. While it may not be a perfect match with how our brains are wired, it's an interesting twist on the old "what and where" dual-stream paradigm.

It seems to me that this research might suggest a third parallel process - something along the lines of "how important."  Whether this is an integral part of the basic visual process, or a higher-order function that comes after the fact, remains to be seen.

from MIT News

Comments (2)
  1. that’s interesting. having not read the article or clicked the link you shared, i’m just going to ask – how would the algorithm do when presented with visual scenes where humans employ parallel versus serial search patterns? for example, if presented with a bookcase full of books and the task of locating a specific one, humans tend to do a fairly serial scan of each item. how would the algorithm deal with that? and what about how visual scan patterns change significantly in expert vs. novice? how do they account for that?

  2. I don’t know – that’s a good question…not to mention that the method of search people adopt depends on the environment in which they’re searching (e.g., if you’re looking for the one red book on a shelf of multicolored books, you’ll do a parallel search)

