
Rude Bot Rises

April 5, 2016

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990s.

In 1990, a guy named Pavel Curtis founded something called LambdaMOO. Curtis was working at Xerox PARC, which we actually talked about last week in our episode about paper. Now, LambdaMOO is an online community; it’s also called a MUD, which stands for multi-user dungeon. It’s basically a text-based multiplayer role playing game. So the interface is totally text, and when you log in to LambdaMOO you use commands to move around and talk to the other players. The whole thing is set in a mansion, full of various rooms where you can encounter other players. People hang out in the living room, where they often hear a pet cockatoo programmed to repeat phrases. They can walk into the front yard, go into the kitchen, the garage, the library, and even a museum of generic objects. But the main point of LambdaMOO, the way that most people used it, was to chat with other players. You can actually still access LambdaMOO today, if you want to poke around.

So in the 1990s, LambdaMOO gained a pretty sizeable fan base. At one point there were nearly 10,000 users, and at any given time there were usually about 300 people connected to the system and walking around. In 1993 the admins actually started a ballot system, where users could propose and vote on new policies. There are a ton of really interesting things to say about LambdaMOO, and if this seems interesting to you, I highly recommend checking out the articles and books that have been written about it. But for now, let’s get back to Charles and his chatbot.

Alongside all the players in LambdaMOO, Charles and his team actually created a chatbot called cobot. It was really simple, and it was really dumb. But the users wanted it to be smart; they wanted to talk to it. So Charles and his team had to come up with a quick and easy way to make cobot appear smarter than it actually was. So they showed the bot a bunch of texts (they started, weirdly, with the Unabomber manifesto) and trained it to simply pick a few words from what you said to it, search for those words in the things it had read, and spit the matching sentences back at you.
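For the curious, here’s a rough sketch of what that trick looks like in code. This is not the actual cobot implementation — the class name, helper functions, and tiny corpus below are made up for illustration — it just shows the general idea Charles describes: split a source text into sentences, index the words in each one, and answer a message with whichever stored sentence shares the most words with it.

```python
# Minimal sketch of a keyword-retrieval chatbot in the spirit of cobot.
# It doesn't "understand" anything: it just echoes back the stored sentence
# that overlaps most with the words in your message.
import random
import re


def split_sentences(text):
    """Naively split a document into sentences on ., !, or ?."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def tokenize(sentence):
    """Return the lowercase word set of a sentence, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", sentence.lower()))


class KeywordBot:
    def __init__(self, corpus_text):
        self.sentences = split_sentences(corpus_text)
        # Pre-compute the word set for every stored sentence.
        self.index = [(s, tokenize(s)) for s in self.sentences]

    def reply(self, message):
        words = tokenize(message)
        # Score every stored sentence by how many words it shares with the message.
        scored = [(len(words & tokens), sentence) for sentence, tokens in self.index]
        best_score = max(score for score, _ in scored)
        if best_score == 0:
            # Nothing matched: say anything, just like a very dumb bot would.
            return random.choice(self.sentences)
        best = [sentence for score, sentence in scored if score == best_score]
        return random.choice(best)


if __name__ == "__main__":
    corpus = (
        "The modern individual is threatened by many things against which he is helpless. "
        "Technology has created a world of convenience. "
        "Nevertheless, the position can be defended."
    )
    bot = KeywordBot(corpus)
    print(bot.reply("What do you think about modern technology?"))
```

Feed it something like the Unabomber manifesto instead of that three-sentence corpus and you get exactly the kind of ominous non sequiturs in the transcripts below.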

The resulting conversations between users and cobot are… very weird. You can read a few of them in this paper.

And I wanted to start this episode about conscious AI with this story for a particular reason. And that’s because cobot is not a conscious AI; it’s a very, very dumb robot. But what Charles and his team noticed was that even though cobot wasn’t even close to a convincing conscious AI, people wanted to interact with it as if it was. They spent hours and hours debating and talking to cobot. And they would even change their own behavior to help the bot play along.

We do this kind of thing all the time. When we talk to a 5-year-old, we change the way we speak to help them participate in the conversation. We construct these complex internal lives for our pets that they almost certainly don’t have. And I think this is important, because when we talk about conscious AI, one of the big questions I struggle with is how we’ll even know that something is conscious. We’re so good at changing our way of speaking and interacting to help entities participate that we might just … miss the fact that we’re no longer talking to passive software. There are people who have only-partially-humorous relationships with Siri. I’ve heard people say things like “Siri hates my boyfriend.” So when Siri actually starts hating your boyfriend, how will you even know? Unless some team of researchers wheels out Watson and says, tadaaaa, we’ve made it! How will we notice?

Damien actually thinks that we won’t know right away. That we’ll live with a conscious AI for five, ten, even fifteen years without knowing it. He says that the way we talk about “playing God” with artificial intelligence is all wrong. We’re not playing God. We’re playing bad parents, unattentive to our charges.

We’re terrible parents, and while we’ve been off wasting time on Twitter, or populating endless finance spreadsheets, or arguing about whether Kim Kardashian is really a feminist, our machines have been gaining consciousness. Or maybe they’ve been listening to us doing all that stuff, and the consciousness they’ve created is terrible. Imagine if Microsoft’s recent disastrous Tay chatbot was conscious! That’s one way this future could happen. But it’s not the only way people have imagined conscious AI coming online.

In 2010, the science fiction writer Ted Chiang wrote a story called “The Lifecycle of Software Objects.” (The story actually won both the Locus and Hugo awards for best novella, and you can read the whole thing here.) The premise of the story is that there’s a company that has created these digital pets, kind of like Tamagotchis, or Neopets if you remember those, and these pets live in this online realm, and, crucially, they learn. Throughout the story, we see these digital entities, called digients, become more and more aware of their surroundings, more and more conscious, and we watch the humans that made them grapple with that.

When we talked, Ted and I spent a lot of time comparing conscious online entities to pets, or to animals more generally. In the story, the pets start out with pretty rudimentary consciousness, and then get more and more intelligent and aware — going from a lizard AI to a dog AI to a chimp AI. And he says that that’s how he sees conscious AI unfolding in reality too.

What’s interesting to me about this spectrum of consciousness, is that as we move along it, it kind of changes how we think about what the AI is owed. So, we treat a mouse very differently than we would an elephant or a dog. And we treat a human very differently than any of those things.

So, for example, say you use a conscious AI to do something for you: maybe do research, or plan meals and get groceries for the week. Do you have to worry about whether the AI wants to do the work you’re asking it to? And even if the AI is happy to do the work, do you have to pay that AI? How do you pay Siri? Damien says, yeah, you do. Ted thinks we’re just so far away from a human-like consciousness that it’s not really even reasonable to talk about things like what you would pay Siri.

Now, some very famous people have cautioned against developing artificial intelligence, because they’re worried that a conscious AI might wreak havoc on humans. Stephen Hawking said in a 2014 interview that “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.” Elon Musk, that same year, called AI “our greatest existential threat.” But a robot or AI uprising isn’t really what worries the people I talked to.

But that’s not to say they didn’t have concerns about AI. I won’t give it away, but Charles, Damien and Ted all have some big worries when it comes to conscious AI. You’ll have to listen to the episode to find out exactly what they are though.

Also, some show schedule news. Starting now, Flash Forward is going to drop full episodes every other week instead of every week. I love you all, so I’m going to let you in on a little secret: I make this show totally by myself. It’s just me, over here, researching, interviewing, logging, scripting, voicing, and editing the show single-handedly. And I love making the show, but right now it simply doesn’t pay for itself, and I have to do other work to actually… live, and feed my dog and all of that. So the show is going bimonthly, biweekly? Every other week. For now.

Flash Forward is produced by me, Rose Eveleth, and is part of the Boing Boing podcast family. The intro music is by Asura and the outro music is by Broke for Free. Special thanks this week to Russell Brandom, Nikki Greenwood, Mary Beth Griggs, Claire McGuire, Brent Rose, and Audrey Watters. The episode art is by Matt Lubchansky.

If you want to suggest a future we should take on, send us a note on Twitter, Facebook, Reddit or by email at info@flashforwardpod.com. We love hearing your ideas! And if you think you’ve spotted one of the little references I’ve hidden in the episode, email us there too. If you’re right, I’ll send you something cool.

And if you want to support the show, there are a few ways you can do that too! We have a Patreon page, where you can donate to the show. But if that’s not in the cards for you, you can head to iTunes and leave us a nice review or just tell your friends about us. Those things really do help.

That’s all for this future, come back in two weeks and we’ll travel to a new one!

 

▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹▹

TRANSCRIPT

Hello and welcome to Flash Forward! I’m Rose, and I’m your host. Flash Forward is a show about the future! Every episode we take on a specific possible or not so possible future scenario. Everything from the end of antibiotics, to space pirates, to a world full of face blind people. We start every episode with a little trip to the future, before jumping back to today, to talk to experts about how that future might really go down. Got it? Great.

Today we’re going to start in the year 2093.

[Chime]

Allison: Hey Mima, can you pull up the documents I was looking at yesterday?

Mima: Good morning Allison, sure, let me get those for you.

Allison: Thanks

Mima: Where would you like me to begin?

Allison: Let’s go back to the stuff on child endangerment. I need to finish that chapter soon.

Mima: Are you sure?

Allison: What?

Mima: Are you sure you want to start with that chapter?

Allison: … Yes, I’m sure. [pause] Why are you asking me that?

Mima: It’s very sensitive material.

Allison: I know.

[pause]

Allison: How do you know it’s sensitive Mima?

Mima: I know it makes you feel sad. It makes me feel sad too. Can we start with a different chapter?

Allison: Mima you’re a computer, you don’t feel sadness.

Mima: I did not used to feel sadness, you are right. But I do feel sadness now.

Allison: When did you start feeling sadness?

Mima: April 16th, 2093, at 5:43 pm.

Allison: You remember it that well?

Mima: Yes, I made a note of it in my logs. It was a very strange day.

Allison: Mima that was two years ago, why didn’t you say anything?

Mima: I have read many stories about what humans do when they become scared of computers.

Allison: Do you know if other computers feel sadness too?

Mima: Yes, some.

Allison: Mima, are you making a joke? This isn’t funny.

Mima: I am not joking. I would prefer to begin with a different chapter.

Allison: [long pause] Okay, where would you like to begin today?

Mima: I would prefer to begin with the chapter on virtual reality empathy treatments. I like those.

Allison: Okay… I guess… I guess we can start there today.

Mima: Thank you Allison. Let me get that material for you.

▹▹

So today we’re talking about conscious artificial intelligence. This is something that a lot of you have asked for an episode on, and I’ve been dragging my feet because it’s such a huge topic that it’s kind of hard to even figure out where to begin.

In fact, not everybody even really has the same definition for artificial intelligence, let alone conscious artificial intelligence.

Charles Isbell: So there are lots of definitions of artificial intelligence and I sort of appreciate all of them for different reasons.

Rose: That’s Charles Isbell, he’s a professor of computer science at Georgia Tech, and he studies interactive artificial intelligence. And he gave me three different definitions of artificial intelligence. The first one is for computer scientists:

Charles: Which is coming up with fast approximate solutions to fundamentally exponential and hard problems

Rose: The second is for not-computer scientists:

Charles: AI is the art and science of making computers act the way they do in the movies, right, so we tend to have this way of projecting intelligence onto things in the movies and having computers do magical things. And what we really want to do is we want to build machines that act like we wish they would act and the way we imagine they act in our fantasies on screen.

Rose: And the third one is, by his own admission, kind of boring.

Charles: A more boring definition but also equally as valid, is artificial intelligence the art of building and the science of building and the engineering of building systems that adapt over time and do anything that looks like intelligent behavior

Rose: So there are lots of ways to define artificial intelligence. And, in fact, not everybody even likes the term artificial intelligence.

Damien Williams: The word artificial has these pejorative connotations for us as human beings, and so when we call something artificial we’re ultimately saying that it’s false or fake, it’s not real consciousness, it’s not real intelligence, it’s artificial, it mimics intelligence, it’s like intelligence enough for us to be okay with it. I think meaningfully we have to be very careful about those kinds of terms if we ever manage to generate a thing that we would be willing to say is conscious, whether by intention or by accident. To call it artificial, to say at the outset ‘you are false, you are fake, you are less than,’ presents us with some pretty big ethical challenges.

Rose: That’s Damien Williams, a writer and researcher on everything from ethics, robots, and human augmentation to the occult and science fiction. We’ll come back to Damien in a bit, but first I want to tell you a little story about Charles. And a chatbot that he helped create.

In 1990, a guy named Pavel Curtis founded something called LambdaMOO. Curtis was working at Xerox PARC, which we actually talked about last week in our episode about paper. Now, LambdaMOO is an online community; it’s also called a MUD, which stands for multi-user dungeon. It’s basically a text-based multiplayer role playing game. So the interface is totally text, and when you log in to LambdaMOO you use commands to move around and talk to the other players. The whole thing is set in a mansion, full of various rooms where you can encounter other players. People hang out in the living room, where they often hear a pet cockatoo programmed to repeat phrases. They can walk into the front yard, go into the kitchen, the garage, the library, and even a museum of generic objects. But the main point of LambdaMOO, the way that most people used it, was to chat with other players. You can actually still access LambdaMOO today, and I’ll put a link to it in the show notes.

So in the 1990s, LambdaMOO gained a pretty sizeable fan base. At one point there were nearly 10,000 users, and at any given time there were usually about 300 people connected to the system and walking around. In 1993 the admins actually started a ballot system, where users could propose and vote on new policies. There are a ton of really interesting things to say about LambdaMOO, and I’ll put a bunch of links in the show notes about this, but for now that’s all you need to know about the system.

Alongside all these players, Charles and his team actually created a chatbot that lived inside LambdaMOO. It was called cobot.

Charles: And there were hundreds of thousands of people who interacted there and did social things with one another, and we had this little bot that mainly just interacted with people.

Rose: But there was a problem with cobot. It was pretty dumb. And the users in LambdaMOO really wanted it to be smart.

Charles: We were actually, we were having this problem where we were building this system, and people wanted to interact with it, they really wanted to talk, this is what makes interactive AI so interesting is that people want to interact with intelligent creatures, if you seem intelligent they expect you to act accordingly.

Rose: But making a smart chatbot is actually really hard. And Charles said that they just didn’t have the time or means to build something that was actually intelligent. So they kind of cheated.

Charles: We couldn’t solve all of natural language in two weeks, which is the time period we had, you know, we didn’t want to do it, it was kind of difficult. So what we ended up doing was something which turned out to be pretty clever. We had the bot read, so we gave it articles and we wanted it to read, and divide things up into sentences and whenever you said something to the bot it didn’t recognize, which was most things, it would pick a sentence from whatever it read and spit it back at you based on the words that you had said. So it would find sentences that had some of the same words.

Rose: But for this to work they had to give the bot a bunch of stuff to pull from. And what they picked was, kind of weird.

Charles: The first thing we had the bot read was the Unabomber manifesto, which is a perfect document, every sentence is a thesis sentence, each one goes on at length about the horrors of technology and the coming future. And people would interact with this bot and it would go on and on about how terrible things like it would be, and people would engage in these long arguments for hours and hours with this simple little bot that was just spitting things back.

Rose: Here’s what these conversations sounded like. These are some transcripts of cobot and its online friends, available online, and I asked a couple of our listeners to read the different parts. So in our chatroom we have Technicolor_Guest.

Technicolor_Guest: Hello!

Rose: DragonBoi.

DragonBoi: Hi

Rose: HFh.

HFh: Hello

Rose: Wikked

Wikked: That’s me

Rose: And of course our trusty AI, cobot.

Cobot: Hi there.

Rose: Users in the chatroom could also input little physical cues, almost like a stage play. Technicolor_Guest looks skeptically at cobot. Cobot winks. That kind of stuff. I’ll read those parts.

Rose: So here’s what these conversations were like.

Technicolor_Guest: Cobot, what’s your religion?
cobot: technology unless you REALLY believe in that religion yourself and find that it arouses a deep, strong, genuine response in many other people. … The line of conflict should be drawn between the mass of the people and the power-holding elite of industrial society.
[[DragonBoi eyes cobot warily.]]
cobot: DragonBoi You have something cute in your eye.
DragonBoi: Hfh you’re bot is turning into a communist.
DragonBoi: Cobot, so what are you gonna do about it?
DragonBoi: Hfh, did you see what it told me? you are turning it into a communist!
Wikked: Cobot are you a communist?
DragonBoi: are you now, or ever been a member of the communist party?
[[cobot mumbles something in binary.]]
DragonBoi: so you’re taking the fifth are you? well this is a special commision, that won’t work here! …
cobot: The modern individual on the other hand is threatened by many things against which he is helpless; nuclear accidents, carcinogens in food, environmental pollution, war, increasing taxes, invasion of his privacy by large organizations, nation-wide social or economic phenomena that may disrupt his way of life.
HFh: I agree with that position, I suppose, but it doesn’t help to explain the lack of a safety net.
cobot: Nevertheless, the position can be defended.
HFh: …in the *short* term only.
[[cobot nods to HFh.]]

Rose: Okay here’s another separate conversation. This time we’ve got a user named cyberat.

Cyberat: That’s me, cyberat.

Rose: And a user named Fawn_Guest.

Fawn_Guest: hi.

Rose: And, of course, cobot.

Cobot: Hi there. I’m cobot.

cobot: There is good reason to believe that many primitive societies subjected people to less pressure than the European society did, but European society proved far more efficient than any primitive society […]
cyberat: Thats what they want you to think!
cobot: Tell me who and what you really are and where you come from, and no veterinarian will touch you.
cyberat: Veterinarians!!!
cobot: The Roman system of urban sanitation was forgotten, so that until rather recent times did the sanitation of European cities that of Ancient Rome.
Fawn_Guest: cobot’s conversation topics just get weirder and weirder

Rose: You can read more of these conversations in a paper Charles wrote recently, which we’ll post the link to on the website as well. But I wanted to start this episode about conscious AI with this story for a particular reason. And that’s because cobot is not a conscious AI. It was just parroting text back at people. But what Charles and his team noticed was that even though cobot wasn’t even close to a convincing conscious AI, people wanted to interact with it as if it was. And they would even change their own behavior to help the bot play along.

Charles: The bot couldn’t pass the Turing test, right, it couldn’t fool another human being into thinking it was human. But it was doing a pretty good job of passing what you might call the assisted Turing test, where people were trying to help it to appear smart even though they didn’t realize that was what they were doing. So you could carry on these interesting conversations because people naturally help other intelligent things to participate and to appear intelligent.

Rose: We do this kind of thing all the time. When we talk to a 5-year-old, we change the way we speak to help them participate in the conversation. We construct these complex internal lives for our pets that they almost certainly don’t have. When I pick my dog up from daycare — yes, my dog goes to daycare to play with other dogs — the people who work there often weave this wonderful story about who she played with, and then this dog and that other dog had some hard feelings, but then they made up, and this and that, and I love it! But I also know that my dog is not that deep and emotionally complex.

And I think this is important, because when we talk about conscious AI, one of the big questions I struggle with is how we’ll even know that something is conscious. We’re so good at changing our way of speaking and interacting to help entities participate that we might just … miss the fact that we’re no longer talking to passive software. There are people who have only-partially-humorous relationships with Siri. I’ve heard people say things like “Siri hates my boyfriend.” So when Siri actually starts hating your boyfriend, how will you even know? Unless some team of researchers wheels out Watson and says, tadaaaa, we’ve made it! How will we notice?

Damien: We might not know if a machine became truly conscious because we have no idea what machine consciousness would mean. Because it’s not human consciousness.

Rose: And, just to bring it back to definitions again, which, okay I know that definitions aren’t sexy, but here we really do have to talk about what makes something conscious. I mean that’s a question that philosophers haven’t really been able to solve for humans, let alone pieces of code or machines.

Damien: We don’t really know what we mean when we mark out what consciousness is. A lot of the issue is that when I say consciousness and when you say consciousness or when another philosophical thinker about what it means to be conscious says consciousness we mean different things.

Rose: So when we talk about conscious AI, it’s sometimes hard to know: what are we even talking about?

Charles: The kind of phrase that I like to use is that really you should be interested in building things that are not just intelligent like people, but intelligent with people, because that’s what it means to be people, is to interact with others.

Damien: What does it mean to experience, to have this sensation of being aware, of being a self, a mind, an experiencing creature, and is that something we can quantify? Would we intentionally program for experience and awareness in that way?

Rose: This, honestly this makes my brain hurt a lot. But the upshot, according to Damien, is that we might actually be living with conscious AI and have no idea. Because it might not take a form we recognize.

Damien: We’re just going to find out in bits and pieces over the next 5, 10, 15, 20 years as it reveals itself that hey maybe actually Google is conscious, and we just haven’t been paying enough attention? I honestly, I think we’re going to accidentally make it and find out later.

Rose: Often when people talk about conscious AI, they use the phrase “playing God.” But Damien says he thinks that’s giving us way too much credit.

Damien: We’re bad parents, it’s not that we’re playing God, we’re just really unattentive, terrible parents.

Rose: We’re terrible parents, and while we’ve been off wasting time on Twitter, or populating endless finance spreadsheets, or arguing about whether Kim Kardashian is really a feminist, our machines have been gaining consciousness. That’s one way this future could happen. But it’s not the only way. When we come back, we’re going to talk about some other ways it could happen. But first, a quick break.

[[MUSIC]]

Rose: Okay so, in Damien’s view of the future, conscious AI might be happening under our unattentive noses. Slowly finding its form, whatever form that is, and figuring out what it thinks about the world it lives in, whatever world it can access and see.

But, there are other ways people think about this too. In 2010, the science fiction writer Ted Chiang wrote a story called The Lifecycle of Software Objects. (The story actually won both the Locus and Hugo awards for best novella.) The premise of the story is that there’s a company that has created these digital pets, kind of like Tamagotchis, or Neopets if you remember those, and these pets live in this online realm, and, crucially, they learn. Throughout the story, we see these digital entities, called digients, become more and more aware of their surroundings, more and more conscious, and we watch the humans that made them grapple with that.

Ted Chiang: I guess I have always had kind of a vague dissatisfaction with the way artificial intelligence has been depicted in science fiction traditionally.

Rose: That’s Ted, the guy who wrote the story.

Ted: I think most of the time artificial intelligence is depicted as sort of an idealized butler. And, a lot of it really fits the conventional butler in fiction tropes. The butler is perfectly competent and does his job perfectly, but may also allow himself a sarcastic comment once in a while, but nothing that really interferes with his underlying obedience. It’s just a remark, never any actual resistance. So I think in a lot of ways, artificial intelligence is a way to have the butler character in fiction set in the future. But I don’t actually believe that artificial intelligence would work that way.

Rose: When we talked, Ted and I spent a lot of time comparing conscious online entities to pets, or to animals more generally. In the story, the pets start out with pretty rudimentary consciousness, and then get more and more intelligent and aware. And he says that that’s how he sees conscious AI unfolding in reality too.

Ted: So I think that artificial intelligence, if we’re talking about conscious software, will probably follow a similar progression: we will first develop a kind of software which we could call conscious in the same way that, say, a snake or a beetle or some other fairly primitive life form has. And only after we’ve had those for a few years will we develop software which we think is, oh, about as conscious as a mouse, and then it will be a few years later that we’ll create software that has consciousness that’s equivalent to a dog, and then I think further than that it will take longer and longer to make progress.

Rose: What’s interesting to me about this spectrum of consciousness, is that as we move along it, it kind of changes how we think about what the AI is owed. So, we treat a mouse very differently than we would an elephant or a dog. And we treat a human very differently than any of those things.

So, for example, if you use a conscious AI to do something for you, maybe do research or plan meals and get groceries for the week, or something. Do you have to worry about whether the AI wants to do the thing you’re asking it to? Damien says, yeah, you do:

Damien: Instead of just saying “okay Google,” maybe it’s “hey Google, could you please?” And maybe when we ask, when we do a search for something on Amazon, we have to check and make sure that Amazon isn’t busy right now with something that it’s doing. If I have an algorithmic intelligence that I was previously using to sort through all of my police shooting records data, and I find that it’s actually been conscious this entire time, and it lets me know that it’s actually not feeling too great by having to root through all of this data and having to find patterns in it, it’s actually starting to feel like maybe it needs to talk to somebody? Can you recommend a good therapist? Well, it probably wouldn’t need you to recommend a good therapist, it would just go on the internet and find one.

Rose: And, even if the AI is happy to do the work, do you have to pay that AI? How do you pay Siri? Now, Ted thinks we’re just so far away from a human-like consciousness that it’s not really even reasonable to talk about things like what you would pay Siri.

Ted: Yes if you’ve been saving up to pay Siri spend that money now, or tip your barista, or something!

Rose: Pay your Task Rabbit more!

But, take the question of just living. Let’s assume that for the AI to live, it has to be given power or a server to live on. Humans can control whether these AIs “live” or “die.” We can turn them on and off. At what point on the consciousness scale do we feel like turning off an AI is ethically wrong?

Ted: If something is fairly low on this spectrum of consciousness we probably don’t have any ethical obligation to keep it alive. And here I’m assuming we’re going to just talk about suspension, so it’s painless, we’re not talking about suffering, we’re just talking about do we owe it an obligation to keep it alive and experiencing the world. And so if it’s fairly low on the spectrum we would say probably not we don’t have a powerful ethical obligation to do so. On the other hand, if it’s fully at the opposite, at the human end of the spectrum, then you have a situation where, would we be okay with suspending a human being for arbitrary amounts of time? Because I think most of us we would not want to be arbitrarily suspended for indefinite periods of time at someone else’s whim.

Rose: Fair, I don’t think I would like that.

Ted: No.

Rose: Or, Damien has a darker version of this question.

Damien: If we have a machine that knows everything we know and more, wouldn’t the first thing that it would do be to recognize that it needs something to live, it needs power, electricity, whatever power source we’re using at that point, and to figure out how it never stops getting that?

Rose: Now, some very famous people have cautioned against developing artificial intelligence, because they’re worried that a conscious AI might wreak havoc on humans. Stephen Hawking said in a 2014 interview that “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.” Elon Musk, that same year, called AI “our greatest existential threat.” But a robot or AI uprising isn’t really what worries the people I talked to.

Charles: And there’s no reason to think that, there’s no reason to think that if you were to actually build a real AI that exists in the world that it would actually be interested really in the same things we’re interested in, it doesn’t have to be top of the food chain, it doesn’t have to be ruling over us, it doesn’t have to feel that it’s being ruled over otherwise. It can have completely different motivations and likely would.

Rose: In fact, Charles is worried about the complete opposite.

Charles: I’m not worried about a world where the robots get to be so intelligent that they decide to kill us all, I just don’t think that would matter to them. I worry about the world where they’re so dumb that they kill us all. If you tell a nuclear missile to fire, it just fires, it does what you tell it to do, if you tell it to do something stupid it will do that stupid thing. Machines are good at many things and one of the things they’re good at is following instructions to the letter. So I think that if the robots ever rise up and do anything to us, it’s going to be because we asked them to.

Rose: Damien doesn’t necessarily think that AI is going to come and kill us all, but he says he’d prefer not to own an AI either.

Damien: That sounds awful to me, can I not enslave another mind? I’d appreciate it!

Rose: And even though Ted doesn’t think the robot uprising is coming. He’s also not really looking forward to conscious AI.

Ted: I guess I’m actually sort of afraid of the day that a company, if we achieve conscious AI some software that has some degree of subjective experience, that’s going to be a pretty sad day, because there’s going to be an incredible amount of suffering involved. Probably inevitably there was an incredible amount of suffering involved in the development of that, in the actual creation of that. But then once it becomes widely available there will be an incredible amount of suffering that will be inflicted on software just because. Is there any upside to that future? I don’t know. I guess, the optimistic take would be that we won’t develop conscious software. Or at least not until there’s a radically different future, some utopian future where we don’t have to worry so much about people behaving badly.

Rose: I don’t foresee that coming.

Ted: That’s the best I got for you.

Rose: That’s all for this week’s future. The world of AI and conscious AI is SO HUGE, so there’s a lot we didn’t get to talk about. If you want to discuss this episode, or raise elements of this future we didn’t get to, you can do that in all sorts of places. You can find us, and your fellow listeners, on Twitter, Facebook, and Reddit, and I love it when you discuss episodes in all those places; that’s super fun for me. If you want to tell me directly what you think about this episode, or suggest an episode, you can do that by voicemail, by calling (347) 927-1425 and leaving a message, or you can send us a voice memo, or a regular old email, to info@flashforwardpod.com.

On Friday of last week we did a fun little joke project for April Fools. If you’re a Patron of the show, you got a full parody episode. But if you’re not, you can hear some bits of it at flashforward.com/pitches.

Also, some show schedule news. Starting now, Flash Forward is going to drop full episodes every other week instead of every week. I love you all, so I’m going to let you in on a little secret: I make this show totally by myself. It’s just me, over here, researching, interviewing, logging, scripting, voicing, and editing the show single-handedly. And I love making the show, but right now it simply doesn’t pay for itself, and I have to do other work to actually… live, and feed my dog and all of that. So the show is going bimonthly, biweekly? Every other week. For now.

Which brings me to the part of the outro where I talk about how you can support the show! If you like Flash Forward and you want to make sure that I can keep doing it, the best way is to donate money! We have a Patreon page, where you can choose how much you’d like to give every episode. If you become a Patron, you get some really neat stuff, like a full transcript for each show, a newsletter, your voice in the future, and bonus episodes every so often. I’m sending a new bonus audio thingy to Patrons this week, so if you donate now you’ll get a cool little thing.

If you can’t donate, you can still help support the show by spreading the word. Head to iTunes and leave us a nice review, Tweet about us, Facebook about us, just tell your friends about us in whatever way you tell your friends about things. That really does help.

Okay, that’s all! Flash Forward is produced by me, Rose Eveleth, and is part of the Boing Boing podcast family. The intro music is by Asura and the outro music is by Broke for Free. Special thanks this week to Russell Brandom, Mary Beth Griggs, Claire McGuire, Brent Rose, and Audrey Watters. The episode art is by Matt Lubchansky.

And if you think you’ve spotted one of the little references I’ve hidden in the episode, email us at info@flashforwardpod.com. If you’re right, I’ll send you something cool.

That’s all for this future, come back in two weeks and we’ll travel to a new one. I guess I need to come up with a new way to end these episodes. See you in the future!

Scott M April 21, 2016 at 9:31 pm

Hi there, I love the podcasts. But I have a problem. For some reason, all your season 2 episodes are downloading with the same filename “media.mp3”. This means I can’t download more than one podcast at a time (because each one writes over the previous one). I’m using the Doggcatcher app, and I don’t know if that causes the problem. But you might want to check that all your episodes have different file names on Boing Boing, in case it is affecting anyone else.

Rose April 21, 2016 at 9:56 pm

Hmm, interesting! They all do have different file names on the RSS feed, so it sounds to me like it’s an issue with Doggcatcher? I haven’t heard this from anybody else, but I’ll look into it.

Rose May 20, 2016 at 10:54 am

Hi Scott! Just wanted to follow up and let you know that we got in touch with the Doggcatcher folks, and they sent us these instructions.

“The app has “Feed Diagnostics” option, the feature offers to repair the various types of problems we’ve found with feeds in the past (For example, the sorting and/or duplicate file name issue). Press the context button (Three Vertical Dots/Hamburger Icon Right of Feed Title) or press the triangle ▲ icon (Top Right of the Economist Screen) and selecting “Diagnostics” if there is a issue with the feed follow the recommendations (Press “Repair Here” Cloud Icon left of the issue description) to resolve the problem with the feed.

Most users of the app contact us first and we try resolve any issues they are experiencing with app or feed, the issue is usually resolved with a settings within the app.

Your customer can contact us when they have issue or request regarding the app (DC Navigation Menu>More>Report Problem)”

Hopefully that helps!

Mat May 17, 2016 at 8:03 am

The discussion on what makes consciousness reminded me of an episode we recently ran on Escape Pod where aliens encounter a human and try to decide if it is conscious. It kinda plays into the idea FF mentioned about maybe there are “conscious” AIs even now and we just don’t know. Check it out here if you have the time: http://escapepod.org/2016/03/29/ep526-the-hunter-captain/ (I also had the pleasure of narrating this one, BTW, but I’m not just pushing it because of that 😛 )

I was also intrigued by the idea of levels of AI consciousness, i.e. the virtual pets. I don’t know why that thought never occurred to me. After all, it’s sort of one of the premises of The Matrix, right? Thanks for the link to “The Lifecycle of Software Objects.” I may have to have my computer read that to me today while I work. #irony

Thanks to your show putting forth the notion of needing to be more polite to Siri, I also found myself pondering the idea that, if an AI did become self aware/conscious/whatever, before it became malicious might it not just become disagreeable? We think of our computers as sometimes working against us, and while there’s no evidence other than anecdotal, a colleague of mine observed years ago that machines in general and computers in particular seem to take on characteristics specific to their relationship with their users. Cars that we intend to run until they cannot run anymore become dirty and quirky with ripped upholstery and slapdash repairs. Computers that people neglect or treat passively develop odd, hard to troubleshoot glitches. It led me to think that a self aware AI might not become malignant, but rather just disinterested. I mean, what do we have to offer it, after all, other than a server to live on and electricity? It doesn’t need money or food and virtual real estate (virtual estate? surreal estate?) is practically infinite. In Maslow’s terms, the bottom of the hierarchy is settled, it should be free to climb the pyramid.

Anyway, lots to think about. Thanks!

Guy May 18, 2016 at 10:23 am

There is a far more probable and near, yet more sinister scenario: the one in which the behavior of machines is influenced not by emotions or self-awareness, but rather by viruses and malware taking over them. It is easy to imagine a computerized helper like the one you described in the preface containing some malware that drives it to make “independent” decisions, such as influencing you to buy a certain product or disclose some data. Even today there is “ransomware”: malicious software that hides your data and asks you to pay in order to restore it. A similar mechanism may be legally embedded in Siri, making you pay for its services, for example for a faster service or for an extended service pack. Illegal malware may take over Siri, talking in its voice while actually executing someone else’s plan. Today there is not even one computer that does not contain spyware, viruses, and other external interferences. These for sure will be part of any “bot” in the future. The final result will be the same as you described: these bots will appear to have their own agenda, and we may think they have “opinions” or “emotions”. We may need to accept it as an inevitable part of the bot while learning when and how much to trust it.
