Human beings are a wonderful species, indeed. We've got the ability to think critically in difficult situations, to be flexible in the face of great adversity and challenge, and to create systems that were previously unthinkable. Our brains seem to be nature's ultimate machine, a unique network of neurons in a storm of electrical activity. This fantastic assemblage of complex components has been the sole occupant of the throne of "consciousness" (whatever that is) for thousands of years now. However, our tenure as the known universe's only sentient beings may be coming to an end.
This concept was recently discussed in an article in The Atlantic. Written by Brian Christian, a bona fide flesh-and-blood human (honest), it covers one of the oldest questions facing humanity: what makes us special? Christian describes his experience in 2009 as a contestant in the famed "Turing test," in which computer programs attempt to fool judges into believing that they're conversing with an actual person rather than a machine. The annual competition is a gathering place for AI enthusiasts and critics alike, where the intricacies of modern machines are pitted against the "unique" human mind. The result is an eerie battleground where the lines between silicon and carbon are blurred.
While we are still a far cry from creating computers that are truly indistinguishable from people, the race is a heated one. Each year brings with it a new crop of artificial intelligence programs that are more flexible, more sophisticated, and more human than previous iterations. This allows us to dive deeper into the world of machines, better understanding their capabilities and limitations. These Turing tests also give us a refined glimpse into what it means to be a human. In fact, Christian aims for a different goal than beating out the machines: he competes for (and ultimately wins) the award of "the most human human."
What should we take from the fact that such an award even exists? Is it that technology has somehow "watered down" the pool of consciousness for some of us? Have the poignant sensations of being human become dulled after many hours in front of glowing screens and status updates? Or is it that machines are simply running a faster race than we are, zeroing in on some level of sentient perfection unhindered by the slow deliberateness of natural evolution?
Computers are becoming increasingly sophisticated, humanity's daily interaction with machines is constantly growing, and our interactions with one another seem to become defined more and more by our digital avatars rather than our flesh-and-blood selves. In such a world, it becomes difficult to predict the ramifications of advanced technology and our increasing reliance on it for work, entertainment, and sustenance. To illuminate the darkness of what it means to be human, we will have to follow the path of Brian Christian and utilize one of the most powerful and under-appreciated tools at our disposal: introspection.
Wow, many apologies for cutting off the (relatively) constant drip of science goodness I've had going on for the last few months. The past few weeks have been super hectic (I just got back from an interview in Seattle).
I've had to spend all of my writing time working on an article on memristors for the upcoming edition of the Berkeley Science Review. To that end, I thought I'd share this interesting lecture by one Leon Chua, the original mind behind memristors and a huge supporter of their adoption in the scientific world today.
I'm not going to go into a ton of detail (you'll have to wait for the article for that!), but this is a general lecture on the ways in which memristors might be applied to physical models of the human brain.
(for those of you who have no idea what I'm talking about, check out this post on a team that is creating software to be used with memristor circuits)
Essentially, Chua is arguing that brains are already made up of memristors (though obviously not in the same sense that our circuit boards are). He points to the well-known behavior of synapses as strengthening/weakening their connection depending on whether the two neurons involved fire at the same time. This is a process called Hebbian learning, and Chua suspects memristors are just right for this job.
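To make the Hebbian idea concrete, here is a minimal numerical sketch (not from Chua's lecture; the network size, learning rate, and activity patterns are all made up for illustration). Each weight plays the role Chua suggests a memristor could fill in hardware: its value is nothing but the accumulated history of correlated activity between two "neurons."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hebbian learning: "neurons that fire together wire together."
n_neurons = 4
weights = np.zeros((n_neurons, n_neurons))
learning_rate = 0.1

for _ in range(100):
    # Random binary activity pattern, except neurons 0 and 1 always co-fire
    activity = rng.integers(0, 2, n_neurons).astype(float)
    activity[0] = activity[1] = 1.0
    # Hebbian rule: the change in w_ij is proportional to pre * post activity
    weights += learning_rate * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)  # no self-connections

# Because neurons 0 and 1 fired together on every trial, their
# connection ends up the strongest in the network
print(weights[0, 1])          # strongest weight
print(weights[0, 1] == weights.max())
```

The point of the sketch is that the weight matrix is a memory written by the history of current (activity) flowing through each connection, which is exactly the input-history-dependent behavior that defines a memristor.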
It's a bit long, so feel free to skip around to the parts that seem most interesting to you, but it's well worth the watch if you like thinking about how other physical systems might do things similar to nature's method of biological computation.
Either way, I promise more regular posts from now on...that is, until my next interview period 🙂
via Memristor.org (detailing a conference on memristors last February)
At the heart of Artificial Intelligence lies the question of whether we might be able to create artificial systems that behave and compute in the same manner as human beings do. This would obviously be a mind-blowing breakthrough were it ever accomplished - it would give us a number of new applications for computers, change the nature of work in our society, and force us to redefine the very nature of being human.
Perhaps it is no surprise, then, that such a feat has proven to be incredibly difficult to achieve. Artificial Intelligence, while it has grown in complexity and scope, is still quite far from achieving any kind of accurate human resemblance.
However, this may change very soon.
Back in 2008, the world of science was abuzz with excitement over a new invention in electronics - the "memristor." This is an electrical component that behaves very similarly to a "resistor" in an electrical circuit, but with one key difference. Like resistors, memristors impede the flow of electricity - however, how much they do so depends on the current that has passed through them in the past.
Now, this might not seem like such a big deal...essentially, this just means that how much resistance a memristor has right now changes over time and as a result of its previous inputs. But think about the implications of this - essentially, such a piece of hardware has the ability to store some information about its previous input. It has the electrical equivalent of memory. With that in mind, let's venture back into the realm of cognitive science.
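Here is a rough sketch of that behavior in code, loosely modeled on the linear-drift description often used for memristors (the device constants below are illustrative assumptions, not measured values). The device's resistance is tied to an internal state variable that moves with the charge that has flowed through it - so the resistance after we apply a voltage is different from before, and it stays that way.

```python
# Minimal linear-drift memristor sketch; all parameter values are
# illustrative assumptions, not real device measurements.
R_ON, R_OFF = 100.0, 16000.0   # resistance when fully doped / undoped (ohms)
D = 10e-9                      # device thickness (m)
MU = 1e-14                     # dopant mobility (m^2 / (V*s))
DT = 1e-6                      # simulation time step (s)

w = 0.1 * D  # internal state: width of the doped region

def resistance(w):
    # Resistance interpolates between R_ON and R_OFF based on the state w
    return R_ON * (w / D) + R_OFF * (1 - w / D)

def step(w, voltage):
    # Current through the device at its present resistance
    i = voltage / resistance(w)
    # Linear drift: the state moves with the charge that flows through,
    # which is where the "memory" comes from
    w = w + MU * (R_ON / D) * i * DT
    return min(max(w, 0.0), D)  # the state is physically bounded

r_before = resistance(w)
for _ in range(1000):
    w = step(w, 1.0)  # apply a constant positive voltage
r_after = resistance(w)

# Driving current in one direction lowered the resistance, and the new
# value persists until more current flows - an electrical memory
print(r_before, r_after)
```

Sweeping the voltage back and forth in a model like this produces the pinched hysteresis loop that is the memristor's signature, but the one-directional drive above is enough to show the history dependence.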
The problem with traditional artificial intelligence is that it is based on a computer architecture that is inherently different from biological brains. Computers have a specific place where computations are carried out (the CPU), a specific place for short-term memory (RAM), and a specific place for long-term memory (the hard disk). What this means is that any time a particular bit of information needs to be altered, it has to pass through a number of bottlenecks that drastically reduce the efficiency and speed of the system.
For those of you who are familiar with brains, you know that they don't work in this fashion - there is no central processing unit embedded within your skull, there is no "hard drive" area that stores all of your memories. Instead, there are only billions of neurons in an interconnected and never-ending chorus of electrical activity.
Such a system does not need to separate its various functions into discrete locations because, broadly speaking, every location in the brain carries out every function that a normal computer would. The neurons (and possibly their neighboring glial cells) both carry out computation and store information about the past.
And so all of our efforts to simulate brains have hit this fundamental roadblock - it is incredibly difficult to create machines that act like brains without being built like them. This is where memristors come in.
By allowing memory to be embedded directly within artificial networks, we are one giant step closer to mimicking the way that biological neural networks compute and store information. Such a revolution in information technology will allow us to create systems that behave very differently from those currently in use, allowing us to perform tasks that most computers have a lot of difficulty with.
A number of different research programs have realized this and are in the process of doing some really interesting research into artificial intelligence, and I'll keep an eye out for any interesting information about what people are doing with this fascinating technology. For now, check out this article out of Boston University, written by a team working with HP Labs to create one of the first "neural" artificial computers.
And so with these new tools at hand, we can begin to create systems that not only behave like human brains but are built like them, too. We've only scratched the surface of what these powerful networks are capable of, and the future is a bright one indeed.
via IEEE Spectrum