Deconstructing The Philosophies Of 'RoboCop'

Joel Kinnaman (left) as Alex Murphy and Gary Oldman as Dr. Dennett Norton in RoboCop. (Kerry Hayes/Columbia Pictures)

I went to see the new RoboCop the other day with my colleague Hubert Dreyfus. As it happens, the movie features a character named Hubert Dreyfus. The character in the movie isn't based on Professor Dreyfus; it is an homage to him.

Dreyfus-in-the-movie is a senator who's bent on protecting the people of America from the dangers posed by police robots. The real Prof. Dreyfus is famous for his criticism of artificial intelligence (AI). He is the author of What Computers Can't Do, as well as the equally influential What Computers Still Can't Do. Dreyfus believes that AI rests on a mistake. People are not robots, and our lives aren't grounded in rational computation. We live in a landscape of values; things matter to us, have saliences, capture our attention and our concern. We aren't detached the way we would be if we were purely rational.

We are not natural-born robots.

Sen. Dreyfus and the movie's creators don't do a very good job of presenting the philosopher's position. But they get the basic upshot right. We can't trust robots with life-and-death decisions. They lack the wisdom to make hard choices. They've got no feeling.

Prof. Dreyfus would have added: Powers of reasoning and online access to all the information in the world won't help. For there are no algorithms for dealing with the hard cases. This is something that wise people understand. To be a good judge is not to apply rules blindly. It is to make hard decisions in the absence of rules that tell us how to act.

If the character of Sen. Dreyfus stands for skepticism about the limits of AI, Dennett Norton, the film's brilliant engineer, represents both the hypothesis that human beings are just information-processing vessels and an optimistic commitment to the power of technology to make the world a better place. Like his namesake, Daniel Dennett, who has elaborated on and defended the prospects of artificial intelligence more profoundly than anyone else, Dennett Norton is a progressive, one of the good guys; he'd rather be enabling amputee guitarists to play again than designing weaponized drones.

It's difficult to find an argument for one side or the other in RoboCop. On the surface the movie seems to buy into an opposition between the rational mind and the emotional soul; the latter makes humans special, whereas the former is shared by man and machine. Neither Dennett nor Dreyfus would have much truck with that opposition, though.

But it is possible to read the movie as pulling the rug out from under that simple-minded opposition. Alex, the hero robot cop, comes out of the lab an unfeeling drone, acting automatically, according to program. But he gradually takes form as a person with values, memories, projects and feelings.

Now, you could read that as endorsing, in a somewhat mystical way, the idea that the human soul triumphs in the end; after all, Alex was once a healthy, living human being.

But there is a better way to understand the story. It offers a reconciliation of Dennett's and Dreyfus' views.

Dreyfus was right all along that you don't get a mind out of a computer program; insofar as Alex is just a computer, to that extent he's just a machine.

But Dennett is right, too. A robot isn't just a computer. Alex has a body, and faces problems; he is thrown into the world. His internal states have meaning not only in virtue of the programming, but also in virtue of the way the robot and the world get tied together.

This is a paradoxical upshot, but a surprisingly plausible one.

It does justice to the idea that even if we are just machines, we aren't just machines. A person is a locus of engagement with the world. And after all, if there's one thing we know, it's that, well, we are just machines, but we aren't just machines!

This means that artificial intelligence, even if it is successful, doesn't solve the mind-body problem. Just because we make it doesn't mean we understand it.

Perhaps the film's real focus, like that of the original of which it is a remake, is the criticism of corporate capitalism. OmniCorp, the corporation behind the weaponized drones, isn't interested in doing good or keeping the peace. It seeks market domination. And so the movie makes the argument that there are dangers attached to technology, whatever stance we take on the more philosophical problems, especially in the setting of capitalism.

This movie's clear villain is Raymond Sellars, the leader of OmniCorp and the one behind its inhuman handling of militarized AI. He's evil and he's a crook.

Now here's our question, mine and Dreyfus'.

Why is the baddie named Sellars?

Wilfrid Sellars is one of the giants of 20th-century philosophy.

Can it be a coincidence that Dreyfus and Dennett face off in RoboCop against a bad guy bearing the name of yet another noted philosopher?

Yet neither the real Bert Dreyfus, nor I, philosophy professors both, nor the friends and students who joined us that day at the movies, could come up with a plausible link between the villainous movie Sellars and the work of the great philosopher of the same name.

Any ideas?


You can keep up with more of what Alva Noë is thinking on Facebook and on Twitter: @alvanoe

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Alva Noë is a contributor to the NPR blog 13.7: Cosmos and Culture. He is a writer and a philosopher who works on the nature of mind and human experience.