Worm’s Mind in a Robot’s Body

Categories: Brains
Published on: December 18, 2014


I recently read this interesting article on IFLScience’s blog about researchers who meticulously mapped the brain structure of a small, millimeter-sized worm called Caenorhabditis elegans (C. elegans), then put that structure into a Lego Mindstorms EV3 robot. The concept is a little amusing on the surface, but there’s a serious consideration at play.

According to the post:

According to OpenWorm—an organization dedicated to creating the world’s first virtual organism in a computer—to understand the human brain, we must first be able to comprehend a simple worm.

There’s a lot of merit to this project, I think. At the root of the matter is the concept of humility in the face of understanding consciousness: you can’t start with the human brain. It’s clear to me that anything approaching a consciousness that can ponder itself is built upon layers of consciousness that can comprehend danger, comprehend food, comprehend opportunities.

We’re fortunate in many ways in that we live in a world populated by our ancestors. While the difference between a human being and a worm is pretty vast, that difference is not so vast as, say, the difference between a human being and a fungus. Or phytoplankton. Or a virus. If you consider all we know about life, based on this sole planet in a vast universe, a worm and a human aren’t really so far removed.

This is really a step forward, though, in how we weigh our own brand of consciousness against what’s all around us. We’re different from other animals, but how much so? I think that understanding our own minds starts with understanding a mind in general, and admitting we don’t yet have that understanding is a step away from the ego and towards learning. If we’re ever to understand the ego, we have to find a way to sever our ties with it.

What can we learn from this?

In the article, Kristy Hamilton remarks that the video of the robot in action isn’t much to look at, but what is interesting is how a robot can achieve certain basic functionality without programming and without learning. This is key, and part of the problem we’re faced with in understanding ourselves.

So far, perhaps the best way we’ve come up with to understand human reasoning and intelligence is computers: machines built on logic we understand, following the commands we give in the languages we invent. Our interaction with computers is both the advantage and the curse. Because we build them and instruct them, we can look at them as both “human” in their reasoning and “not human” in their inability to reason without us. This is certainly a step beyond trying to figure ourselves out by watching worms. However, because we are so deeply connected with the activity of computers and the robots guided by them, we’re also unable to see the objective reality of human intelligence. In other words, there’s too much “us” in them for them to explain us to us.

This achievement of mapping and studying worms, however, is different. We’re now learning to observe a logic system we did not create (a worm’s brain) inside a machine we did create. The simulation isn’t perfect, perhaps partly because we still need a language we invented to translate raw brain activity into a machine’s activity. But the real “win” in this achievement isn’t the ability to simulate an organic brain with code and machinery; it’s the understanding that we must find ways to cut ourselves out of the process to really observe. For us to observe consciousness, we need a creature that can exhibit consciousness without our dictation.
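To make that concrete, here is a minimal sketch (in Python) of the basic idea behind running a connectome on a robot: activity enters at a sensor neuron, propagates along fixed, pre-mapped connections, and whatever accumulates at the motor neurons becomes movement, with no task-specific programming in between. This is purely illustrative, not the OpenWorm code; the neuron names, weights, and threshold below are invented placeholders.

    # Illustrative toy connectome: neuron -> list of (target, weight).
    # These names and numbers are invented, not the real C. elegans wiring.
    CONNECTOME = {
        "sensor_nose": [("inter_1", 3), ("inter_2", 2)],
        "inter_1":     [("motor_left", 3)],
        "inter_2":     [("motor_right", 2)],
    }

    THRESHOLD = 3  # a neuron fires once its accumulated input reaches this

    def step(fired, levels):
        # Propagate one tick of activity along the fixed wiring.
        next_fired = []
        for neuron in fired:
            for target, weight in CONNECTOME.get(neuron, []):
                levels[target] = levels.get(target, 0) + weight
                if levels[target] >= THRESHOLD:
                    next_fired.append(target)
                    levels[target] = 0  # reset after firing
        return next_fired

    # "Touch" the nose and watch activity cascade toward the motors.
    levels = {}
    active = ["sensor_nose"]
    for tick in range(3):
        active = step(active, levels)
        print("tick", tick + 1, "fired:", active)

In a robot, a firing motor neuron would simply be mapped to a wheel command. The real C. elegans connectome has 302 neurons and several thousand connections rather than the handful sketched here, but the principle is the same: the behavior comes out of the wiring itself, not out of code written to produce that behavior.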

So what now?

You might think that the answer is to keep synthesizing more and more complicated brains into machines: an insect’s brain, then a frog’s brain, then maybe a bird’s brain, and a rat’s brain, and a monkey’s brain, and then, voilà: a fully sentient robot with a human brain. I disagree, though. It’s not just the warnings of Stephen Hawking or reruns of Terminator 2; it’s that the process misses the point. We aren’t just trying to make more of “us”; we’re trying to understand the “us” we already have. And to do that, we need lessons and concepts around consciousness: a new language to translate the apparent natural occurrence of consciousness into something we can understand. That starts with the ABCs we’ll learn from translating a worm’s brain into something we can understand.

I think this achievement is much more significant than it seems, and the lessons we take from observing a robot behaving under a worm’s cognitive structure will change the shape of consciousness study in the future.

Do you think this is a significant achievement? Or is it mere “playing around” with what we can do, with no real learning to be had? Leave me a comment below!

1 Comment
  1. David Fallon says:

    And then I just read this interesting article: http://www.sciencealert.com/computers-think-these-random-patterns-are-real-objects

    It’s a similar situation but in this case it is our vision, not our consciousness, that is being explored. But there are deep similarities in how we see and how we understand what we see. Maybe, if the researchers could use a mapped animal brain instead of an algorithm to identify the images, we might learn more.

