
CRIME: Can You Sue An Algorithm?

August 27, 2019

Today we travel to a future where algorithms can be put on trial. 

Guests:

Further Reading:

Actors:

  • Evan Johnson as Mr. Morton
  • David Romero as David
  • Ash Greenberg as Ash
  • Santos Flores as Santos
  • Charlie Chalmers as Charlie
  • Grace Nelligan as Grace
  • Ava Ausman as Ava
  • Sidney Perry-Thistle as Sidney
  • Arthur Allison as Arthur

Flash Forward is produced by me, Rose Eveleth. The intro music is by Asura and the outtro music is by Hussalonia. The episode art is by Matt Lubchansky. Special thanks this episode to Evan Johnson who coordinated all the teens you’re going to hear in this season, and who plays our intrepid debate club teacher this season. Special thanks also to Veronica Simonetti and Erin Laetz at the Women’s Audio Mission, where all the intro scenes were recorded this season. Check out their work and mission at womensaudiomission.org.

If you want to suggest a future we should take on, send us a note on Twitter, Facebook or by email at info@flashforwardpod.com. We love hearing your ideas! And if you think you’ve spotted one of the little references I’ve hidden in the episode, email us there too. If you’re right, I’ll send you something cool. 

And if you want to support the show, there are a few ways you can do that too! Head to www.flashforwardpod.com/support for more about how to give. But if that’s not in the cards for you, you can head to iTunes and leave us a nice review or just tell your friends about us. Those things really do help. 

That’s all for this future, come back next time and we’ll travel to a new one. 

FULL TRANSCRIPT BELOW

▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹

Hello and welcome to Flash Forward! I’m Rose and I’m your host. Flash Forward is a show about the future. Every episode we take on a specific possible… or not so possible future scenario. We always start with a little field trip to the future, to check out what’s going on, and then we teleport back to today to talk to experts about how that world we just heard might really go down. Got it? Great!

This episode we’re starting in the year 2060. 

***

[students milling about, chatting]

David: Oh my god, it’s so long! If that was in my house my mom would kill it. 

Mr. Morton [slightly out of breath]: Right, okay, sorry I’m late everybody. There was an incident in the cafeteria. We’re not going to get into it.

[Students chattering] Please, please, please. All right. The rumor mill is already active. We don’t need to add fuel to the fire. So we have some debating to do. You guys ready?

Students: Yeah!

Mr. Morton: Let me find my little list here. Oh. Santos and Ava. You guys are up. All right. 

Arthur: Different height squad!

Mr. Morton: Places please. 

Arthur: That was a power move. 

Mr. Morton: So today, you two are arguing about whether we can put bots in jail. [laughing] Right, I’m sort of kidding. The case in question, real case by the way, that really happened back in 2023, I think this was Connecticut. There was a woman. Yeah. Who was diagnosed by an algorithmic triage system at a hospital, and she was therefore sent into a low priority group. And that meant, and then she died a few hours later. So when the hospital looked into why she died, they found out that it wasn’t that the algorithm improperly diagnosed her. Well, it actually diagnosed her accurately and calculated that the chances of her surviving were so low that it was not worth committing hospital resources to her cause, or to her case rather. Her family decided not to sue the hospital, but instead to sue the company that created the algorithm that made the decision. Yeah. Whoa. So Santos, we’re going to start with you. Will you defend…

Santos: Hello. 

Mr. Morton: You can defend please, these poor coders. Please defend them for us.  

Santos: Okay. I’m going to defend the creators of algorithms and explain why they kind of cannot be held responsible for what their code does. We know that a woman was diagnosed by an algorithmic triage system at a hospital. Yes. The system placed her in a low priority group because they calculated, accurately might I add, that her condition was already so critical that there was no point in trying to save her. 

Now I know it seems easy to blame the people who created the system, the same system that this family seemed to perceive as her ultimate cause of death. But there are many many other factors that we need to consider here. 

Let’s start with the hospital. The hospital decided that trying to save this woman wouldn’t be worth the risk because of her terrible condition, that it would most likely be a waste of time and resources had the doctors made it a high priority to save her. We do definitely have to think about all the other patients at the hospital, the ones who did require extra medical care but still had a chance at living, or more, more than her.

Let’s say they put her in a high, well, let’s say they put her in a high priority group and saved her which would be a long taxing and risky procedure. But let’s say that they succeeded. All those medical resources could have gone to the other patients who needed that care. It could have been spread out more evenly to the high priority patients. And um, some patients could have died. Many patients. 

But that is not what happened. The algorithm made the decision to put her in low priority, to put her aside, so that the hospital would be able to function more properly, and that it would benefit the other patients and save them. 

Let’s let’s just talk about algorithms in general. They’re everywhere. They’re in our devices in our government. They’re used to make important decisions and try to achieve the best possible outcome. So like I said, you can’t just blame the creators of the algorithm because there are many other factors that influence the final outcome and choice. So… yeah.  

Mr. Morton: All right, now, great. Good job. Ava, why is Santos wrong?

Students: ooohh [laughter]

Ava: That’s a long list. 

[laughter]

Ava: I’m just kidding, I’m sorry Santos. 

So the creator of the algorithm should be held responsible, because it doesn’t matter if her chances of surviving were slim; a slim chance of surviving is better than no chance at all, which is the condition that the algorithm placed her in. Basically when she came in, they said, “well, you could survive, but we’re going to decide not to let you. We’re gonna let you die.”

Currently the creators of the algorithm are putting money before a woman’s life. They… if you create something like that you should stand behind your work. And if it’s misused then you should be able to jump in and take action. And let’s… and if they don’t get charged, then that could be a sign to other companies that they could do the exact same thing. Letting people die while having, like, risky programs and still gaining money off of it. 

Yes, it might have saved the family money in the end, but that doesn’t really take away the fact that the woman died and the programmers are still trying to justify her death.  

Mr. Morton: OK. All right. Great great response, or great argument I should say rather. Time for questions, Santos, go ahead ask Ava your question.

Santos: All right. Well, what do you think should be done differently at the hospital, like, what changes would you have made to the algorithm?

Ava: I think they should have replaced the algorithm with something that would instead, like, present all the different outcomes. Instead of just saying she should go in a low priority group, they should have made it more like, she should go in a low priority group because she has a risky chance of surviving. If you just give something without any justification, then a number of possibilities could happen.

Mr. Morton: Okay Ava time for you to ask Santos your question. 

Ava: Is it OK for the creators of the algorithm to… dammit I didn’t make a question I forgot to make a question

Sidney: Wait, can I add something? 

Ava: Yeah. 

Sidney: Isn’t it technically all of our fault, because everyone in this party is guilty of this woman’s death? Because the creator created this program. The hospital allowed it in, to be in a position of power, and that gave it power over someone’s life or death. 

Santos: Right. That’s my point. 

Sidney: So isn’t everyone at fault here? Shouldn’t everyone get punished for this?

Santos: Sure. So I believe that they did what they could. Yeah. 

Ava: Okay, so in that case is it okay to continue to use this algorithm?

Santos: If they are able to find a better way to manage the lives of these patients, I think yes. But until then, I think that if this is what, if this is how, like, the hospital succeeds, and, like, how most of the patients get better, then I think they should continue until they find a better solution. 

Mr. Morton: David you remember…

David: I remember what I was going to say! What I was going to say is, like, one thing that, like, you have to remember, it was a point that you brought up earlier, Ava. When you said, like, what it should have said is, “what it should say is this, because of this,” and then a human moderator could, like, choose. Like, “oh, should we really put it there, should we agree with the algorithm?” But I think that, like, from what I can tell from this case, the reason that the algorithm’s there is because it’s a very busy hospital. Like, all the workers, the people that would be the human moderators, are all doing stuff, they’re all busy, they’re doing this operation. They’re saving these lives. And so this algorithm is there so that a human doesn’t have to be. So if you had someone there to, like, look over what all the algorithms said, it would sort of defeat the purpose of having that algorithm there in the first place.

Ava: If you don’t have time to save someone’s life I think you should be transferred to another hospital. 

David: No but that’s the thing is because they’re, they’re, they’re not having time to save someone’s life because they’re saving more other people, they’re saving more people’s lives like in the other room they’re saving more lives instead of using their time to check the algorithm.  

Ava: So you’re saying it’s OK to put all our trust into machines? 

Santos: Well a lot of people do that. 

David: I mean, yeah, a lot of people do put their trust in machines. If this machine has been well coded, and if it knows about these things, then yes, it is OK to put our trust in these machines.  

Ava: Well, I don’t think we should put our trust in these machines, especially if it’s such a big factor, like life or death. 

Claire: Wouldn’t it be better if the algorithm, instead of putting her in a low priority group if the hospital was full, wouldn’t it be better if they just transferred her to another hospital?

Arthur: But also they chose to go to that hospital.

[crosstalk]

David: But they probably didn’t know about the hospital. Also it says that she died within a few hours. I don’t know if they could have transferred her. I mean, yes, the algorithm probably needs some reworking, maybe they should have transferred her, but also they did say she died quickly, like, I don’t know if a transfer would have been possible. If it is, then yeah, the algorithm probably needs some tweaking, but I don’t think it’s a reason to completely, like, devalue the entire, like, the entire thing of algorithms, just because you should probably tweak this to give more information.

Arthur: If you’re going to go to a hospital though for something that dangerous you should probably know stuff about the hospital. Also you’re going into the hospital going to one you trust, that you’ve been there before.

[crosstalk]

Ava: But we’re not holding the hospital responsible we’re holding the coders. 

Arthur: Yeah, I know that, but you should know that… you should know the algorithm about the hospital.

Sidney: If you’re going to a hospital, you expect treatment. So what this, what I think should have happened, is that this person went to the hospital because, well, they were in pain, or they knew they had a serious medical condition, and they thought that they were going to get help, and they did not get help. Which means that the hospital failed in their duty because of this algorithm.  

Mr. Morton: Wow, wow, wow. This has been a very, very lively debate. You got the entire class involved.  

Santos: I’m evil.

Charlie: It’s not evil if it’s debate.

Mr. Morton:  I’m looking around I’m very proud of you all I see future lawyers in my midst. Thank you so much for your wonderful smart debates. And please get home safely everyone. It’s pouring rain out there. Grab your umbrellas.

[clapping]

*** 

Rose: Okay! Algorithms! You hear about them all the time. But you might not realize that they’re all around you. And they’re only becoming more common, and more controversial. Schools are using them to grade students’ essays, the US Department of Housing and Urban Development just proposed new rules that would let landlords and mortgage lenders discriminate using algorithms, and warehouses are considering installing algorithms to detect risky movements by their workers.

Algorithms already touch so many pieces of our lives, and their domain is only growing.

So today’s big question is this: who is responsible for all these algorithms? 

As usual, we’re going to start with a case. But before we talk about why the state of Arkansas was sued over an algorithm, let’s first define what the heck an algorithm actually is. Because it’s one of those words that people use all the time, but when you ask them to define it… it can be kind of tricky. 

Rumman Chowdury: So, people talk a lot about algorithms in the media, and, actually, to a practitioner there’s two things. There’s an algorithm and there’s a model. And it seems like splitting hairs, but it’s a really important definition.

Rose: This is Rumman Chowdury, a data scientist and the head of Responsible AI at a company called Accenture. 

Rumman: So, an algorithm is basically math translated into code. It’s essentially some sort of a statistical, probabilistic, formula that figures out the likelihood of something to happen. And then we translate that into code. 

Rose: So an algorithm is basically just a set of rules, or steps, written down in code. You can think of algorithms kind of like recipes. You have a list of ingredients and a list of steps, and then you follow those steps and you are likely to get an output that looks like what you’re trying to make. As long as you’re not me, who is a terrible cook, and somehow cannot for the life of me follow a recipe. But you can’t eat a recipe written down on a piece of paper.

Rumman: That in and of itself is just lines of code. It doesn’t have an application.

Rose: What most people mean, myself included, when we talk about algorithms, is actually what experts call models. 

Rumman: So, models are when you take an algorithm; you put data into it; and you get some sort of a prediction. 
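To make that distinction concrete, here is a minimal sketch in Python, a toy illustration with made-up numbers rather than any system discussed in this episode. The “algorithm” is just the code for logistic regression; it only becomes a “model” once data has been run through it and it can spit out predictions.

```python
# Toy illustration of "algorithm" vs. "model" -- made-up data, not a real system.
from sklearn.linear_model import LogisticRegression

# The ALGORITHM: math translated into code. By itself it predicts nothing.
algorithm = LogisticRegression()

# Made-up training data: [age, systolic blood pressure] -> was this patient high risk?
X_train = [[34, 120], [71, 160], [45, 130], [80, 175], [29, 110], [66, 150]]
y_train = [0, 1, 0, 1, 0, 1]

# The MODEL: the algorithm plus data, fitted so it can make predictions.
model = algorithm.fit(X_train, y_train)

print(model.predict([[58, 145]]))  # e.g. [1] -- a prediction about a new patient
```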

Rose: And these models are everywhere. They help Spotify predict what you might want to listen to. They help mapping applications figure out the fastest route from where you are to where you want to be. They help Netflix figure out what you might want to watch when you Netflix and Chill. And they help Facebook figure out which babies you see in your feed, and which babies you do not. Models are behind Google search results, they’re in the software that guides airplanes, they run travel websites, and ATMs, and pretty much every advertisement you see online. So that one thing; that one product that is just following you around, on every single platform; that’s because of a model. But what happens when they get something wrong? 

That question brings us back to Arkansas.

Animaniacs: Little Rock in Arkansas, Iowa’s got Des Moines. 

Rose: In 2016, Kevin de Liban started getting distressed phone calls from people in this one particular program. 

Kevin de Liban: Prior to 2016, we had fewer than a dozen calls, even involving this program, in probably the four or five years I had been here at that time. And then, in 2016 we start getting floods of calls.

Rose: Kevin is an attorney who works at an organization called Legal Aid of Arkansas. 

Kevin: We represent low income Arkansans in all kinds of civil legal matters. So, anything that affects the lives of low income folks.

Rose: And these calls he was getting, they were all the same. 

Kevin: The state is cutting my care, and I haven’t gotten any better.  

Rose: The callers were all part of a Medicaid program that helps people live independently. 

Kevin: Basically, the state will provide for a care aide to come in and help somebody with the key activities of daily living. Bathing, eating, dressing, getting out of bed. All the things that you would need to live independently. And the thinking behind this is that it’s cheaper for the state to have somebody at home in the community, instead of in a nursing home. And, of course, it’s better for somebody’s dignity to just be close to home, where there might be family, or friends, or at least memories, or neighbors that they are connected to. 

Rose: People in this program were evaluated by the state every year, to figure out how many hours of at home aid they needed. And what that usually meant was that a nurse would come to their house, ask them a whole bunch of questions, and then make a recommendation for how many hours they should get. And that is how the program worked for the last fifteen years. But in 2016, people who had been enrolled in this program – for years and years, always getting the same hours – suddenly were told: actually, you don’t need that much help. We’re downgrading you.

Kevin: These are folks with cerebral palsy, or quadriplegia, or multiple sclerosis, or various other chronic diseases, that haven’t gotten any better. And for no reason that they can identify, they’re having their care cut. Sometimes by as little as 20 percent, which is still a lot for somebody who depends on every minute of care to live independently, all the way up to something like 60 percent.

Rose: And the results of this were devastating for some people.

Kevin: People were getting bed sores from not being able to be turned. Were skipping meals. Would get dehydrated because they wouldn’t drink water after a certain time of day because that meant they’d have to sit in their own urine for longer. 

Rose: And none of these people could figure out why their hours had been cut, when nothing about their lives, or their condition had changed. Even the nurses who showed up, to give the questionnaire and deliver the news, couldn’t explain what was going on.

Kevin: The only thing the nurses could tell them is that the computer did it. So, “the computer did it.” That was the key phrase. 

Rose: The nurses mostly didn’t know it was a model at work here, the application of an algorithm. In May of 2016, Legal Aid sued the state of Arkansas.

Kevin: So, in law, there’s a basic principle of due process, right? The idea is that the government can’t take anything that’s life, liberty, or property, without giving you a fair chance to contest the taking of it, right? And so that means they’ve got to tell you why they’re taking it. They’ve got to give you an opportunity to prove them wrong. And what we argued is that if the state was using this kind of black box process to take away people’s benefits, they were depriving them of fundamental due process. 

Rose: Nobody involved in the program could explain to those affected how the algorithm had come to the decisions that it had. Which meant that people who had their hours cut, couldn’t really contest the decision making process, since they weren’t privy to it in the first place. 

And the Arkansas algorithm wasn’t just a black box, it was a black box with a wonky lid and a whole bunch of cracks in it. 

Kevin: So, the assessment itself involves 286 questions. The algorithm only looks at 60 of those. Which means that 226 questions are completely irrelevant to the number of hours that you’re going to get. Completely. 

Rose: Beyond just being kind of annoying, to spend all this time asking all these questions if only 60 of them matter, leaving out those 226 questions often meant leaving out information that was kind of important.

Kevin: So the algorithm didn’t take into account how well you could walk, for example. Or how much time you needed to bathe, or how much bathing assistance you needed. How much assistance with chores. Whether or not you could be left alone, because you might be a choking hazard. Or might not be mobile, and able to get out of the house if there was something happening to it.

Rose: And there were weird, like, loopholes almost. In one case, the assessor correctly noted that a person didn’t have any foot problems… because they were an amputee. They didn’t have feet. And that made the system think that this person needed less help. 

Kevin: The algorithm didn’t automatically sort you into the category of the most hours that you would qualify for. So let’s say you qualified for a category that gave you five hours a day, and you qualified for a category that gave you five and a half hours a day of care. The algorithm would sort you into the one that gave you five. 

Rose: In some cases, entire conditions were just programmed incorrectly. Diabetes for example, just wasn’t counted as a condition at all. And cerebral palsy wasn’t coded correctly in the algorithm.

Kevin: And as a result, nearly 200 people with cerebral palsy were denied about an hour a day of care, on average, that they otherwise should have received for a span of almost two years.  
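None of the code behind the Arkansas system is public, so the following is a purely hypothetical sketch of how the kind of tier-assignment bug Kevin describes can creep in: if the logic returns the first tier a person qualifies for, and the tiers happen to be listed from fewest hours to most, people land in the lower allocation even when they also qualify for the higher one. The tier names, scores, and hours below are invented.

```python
# Purely hypothetical sketch -- the real Arkansas system is proprietary and not public.
# It illustrates the *kind* of bug described above, not the actual implementation.

# Tiers listed from fewest to most daily hours of care (invented numbers).
TIERS = [
    {"name": "Tier C", "hours": 5.0, "min_score": 20},
    {"name": "Tier B", "hours": 5.5, "min_score": 25},
    {"name": "Tier A", "hours": 8.0, "min_score": 40},
]

def assign_tier_buggy(score):
    # Bug: returns the FIRST tier the person qualifies for, i.e. the fewest hours.
    for tier in TIERS:
        if score >= tier["min_score"]:
            return tier
    return None

def assign_tier_fixed(score):
    # Fix: return the highest-hours tier the person qualifies for.
    eligible = [t for t in TIERS if score >= t["min_score"]]
    return max(eligible, key=lambda t: t["hours"]) if eligible else None

print(assign_tier_buggy(30)["hours"])  # 5.0 -- qualifies for 5.5, but gets 5.0
print(assign_tier_fixed(30)["hours"])  # 5.5
```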

Rose: In court, the state argued that these were tiny problems, marginal, that should not prevent them from using this algorithm to help make more “objective” decisions about who got what kind of care. Kevin, and Legal Aid, argued that even if these mistakes didn’t exist, the fact that nobody could explain or defend the choices of the model to the people impacted by it, was itself a problem. And there was one moment, during the proceedings, that might have been the nail in the coffin for the state’s model. 

Kevin: This is, I think, a once in a career movie moment. 

Rose: Kevin had a client whose hours were cut. And Kevin asked one of the state’s witnesses, Dr. Brant Fries, the inventor of this algorithm, if he could hand score this client, just to confirm that the algorithm “got it right” and to basically explain how the decision came to be.

Kevin: So he did that, and it came out that she should have been in a group with more hours attached to it. And so, that came out. It was shocking. The state asked for a recess to go try to figure out what happened, and then the state came back and tried to plead with the judge that, you know, it was just a small error, and it was something that they would be sure to get on and fix right away.  

Rose: Ultimately, the judge sided with Legal Aid. So they won the federal lawsuit on due process grounds. 

Kevin: And what that meant is that the state had to go back, and come up with a way to better explain why your hours got cut. And that took the state several months to do. So, for several months, cuts for thousands of people were on hold. So that was great. 

Rose: And Legal Aid had hoped that this would be the end of the model… but, of course it wasn’t. The state said they were going to keep using the algorithm, once they could figure out how to explain it better. Which meant that, ultimately, a lot of these people would still wind up with dramatically reduced hours, based on an algorithm’s decision. So Legal Aid sued again.

Kevin: And there, we attempted to invalidate the algorithm directly, and we ultimately prevailed.

Rose: So this particular model, in Arkansas, might be gone. But the state is still tinkering with automating these kinds of decisions.

Kevin: The algorithm has been gone, and the state has since switched to a new system. They won’t use the word algorithm to describe the new system. It’s like it’s taboo now. Nobody says the A word. What they have now instead is what they call “tiering logic.” So, you know, you play the game of euphemism. “We don’t have algorithms, we only have tiering logic.” Which somehow makes it better. And they still don’t understand…

Rose: And it’s not just Arkansas. Colorado, California, Idaho, they have all implemented similar models with similar results, people wind up getting less care. And sometimes, that’s the point. 

Kevin: The state wanted to cut Medicaid spending, and they wanted to cut it from the home care program here. And the algorithm issue was just an easy and convenient way to do that that comes in this veneer of being objective, and rational, and not subject to any sort of human biases that a nurse might have.  

Rose: In Arkansas, the state defended the need for a model by arguing that nurses were biased, and inconsistent. Here’s what a spokesperson for the Department of Health Services said at the time: “We wanted to take the subjectivity out of the system so that decisions about level of care were objective, consistent, based on science and based on real data from real Arkansans.” The catch here was that … Arkansas actually didn’t have any data about whether or not nurses were in fact delivering vastly different decisions in different places. 

But you see this argument a lot, right? This idea that an algorithm, a model, can help save us from our pesky human biases. 

Shobita Parthasarathy: We, as human beings, have forever wanted to find ways to simplify human judgment and complex decision making. 

Rose: This is Shobita Parthasarathy, the Director of the Science, Technology, and Public Policy Program, at the University of Michigan.

Shobita: I mean, I think there’s some kind of allure in objectivity, and a distrust of ourselves and our judgment. And so you can think about classification schemes, phrenology….

Rose: Today, instead of measuring skulls, we’re now plugging vast amounts of data into elaborate AI systems to try and get the “right” answer about everything from whether someone should be promoted, get a loan, get a job, or be offered parole. 

Shobita: One of the most hot areas, I guess, when it comes to criminal justice, is in using algorithms to set decisions about bail. And the proponents of these algorithms suggest that it actually has a kind of progressive, or liberal, orientation.  That is that, historically judges have made decisions about whether or not someone should be let out on bail, and that the judge could be biased. But if we base the decisions on a variety of different kinds of data then, you know, it’s based on something that’s a little bit more seemingly objective.

Rose: It turns out, surprise, surprise, that these algorithms have their own bias baked in too — they’re more likely to deny parole to black people than white people who commit similar crimes. This promise of unbiased, objective decision making is… basically like a Cadmean vixen, a fox that can never be caught. 

Shobita: As they say, the robots are us, right? The algorithms are as biased as we are. And by we, I don’t just mean we as individuals, I mean we as societies. 

Rose: But how does a model, which is just a pile of data and math, wind up being biased? And what does it mean when people call them black boxes? But first, a quick break.

[[BREAK]]

So, the most common reaction that I get, when I talk to people about biased algorithms, goes something like this: Wait a minute. Algorithms are just… math. Math can’t be biased. 

Rumman: Some people will say, “Most algorithms are just super advanced statistics, this isn’t like this magical thing.” And that’s all correct.

Rose: That’s Rumman Chowdury again.

Rumman: But, technology is not neutral. Even when we think about how data is collected, and stored, and how we measure things, even that in and of itself has some sort of a bias. 

Rose: Now, if you’ve been a long time listener of Flash Forward, you might already know this. But, let’s review this. There are actually two kinds of bias that creep into these systems. 

Rumman: So when a technical person, a data scientist, thinks of bias, they’re often thinking about a quantifiable value. So that can come from your data, and maybe your data is missing some information. And maybe that missingness is something systemic. Like you don’t have enough information about people from a particular zip code, or people who are low income, or people who don’t own a cell phone. And then that systemic bias translates into the algorithmic output, because you just don’t know about these people. 
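Here is a tiny, made-up example of what that kind of systematic “missingness” does. The numbers are invented, but the mechanism is the one Rumman is describing: whoever is missing from the data is missing from the model’s picture of the world.

```python
# Made-up numbers: systematically missing data skews what a model "knows."
import statistics

# Say we estimate "typical commute time" from a survey that was only offered
# online, so households without reliable internet are barely represented.
responses = (
    [25] * 500   # well-represented group: short commutes
    + [70] * 10  # underrepresented group: long commutes
)

print(round(statistics.mean(responses), 1))  # 25.9 minutes

# Anything trained on this data will treat ~26 minutes as "normal" and
# systematically misjudge the people it never really saw.
```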

Rose: In other cases, the way you ask the question on the survey that helps you train your model could be flawed. Remember the parole algorithm we talked about earlier? The one that could guess whether or not someone was going to recommit, and would then recommend a parole decision based on that data? The whole system was based on a survey given to the prisoner. 

Rumman: And the survey asks questions like, “did your mother have a job? Are you from a broken home?” And it would ask these questions that, frankly, to me, raised some question marks.  Why would someone’s socioeconomic status growing up be a factor in whether or not they should get parole? Or why should the fact that they’re from a broken home or not be somehow an indicator of these things.

Rose: In data science, there’s a term for this. 

Rumman: GIGO. Garbage in garbage out. The data went in problematic. So what the output is is going to be problematic to begin with.  

Rose: In other cases, it’s not that the data going in is bad, it’s that it only really applies to certain situations. In many cases, algorithms are trained in one context, and then they are deployed in a completely different context. Take medical models, for example. 

Nicholson Price: Lots of conversations about how do we develop algorithms and artificial intelligence in medicine take place in the context of pretty fancy medical environments. 

Rose: This is Nicholson Price, a professor of law at the University of Michigan. 

Nicholson: Because those places tend to have the resources to collect data, and tend to have the resources to say, “okay, we’re going to make these data available for further use, and for secondary use and for trying to advance medical science.” That’s great. 

Rose: But not everybody gets treated at a fancy hospital. 

Nicholson: It’s entirely possible, and I think quite likely, that in many ways learning what to do in a high resource, fancy, medical environment won’t give you the right suggestions when you move out of that environment. 

Rose: So let’s say someone comes in and has cancer. And, let’s say there are two drugs that doctors could treat this person with. 

Nicholson: One of them is really strong, and likely to really be very effective against the cancer, but it also has nasty side effects that sometimes occur. And if those nasty side effects sometimes occur, you’ve got to intervene pretty quickly, or really bad things are going to happen to the patient. A second drug is less strong, less good at knocking out the cancer, but it’s a lot safer and you don’t have to worry so much about those side effects. They don’t happen as much, they’re more easily managed.

Rose: Which drug you should pick has a lot to do with where you are. 

Nicholson: If I am at a world class cancer center, and I’ve got a team of crack oncology nurses available 24/7 to manage patients, that first drug might be a great choice. 

Rose: But, if you’re at a rural hospital that’s strapped for resources, that first drug probably isn’t the right choice. In fact, that choice could be deadly.

Nicholson: And if the algorithm just learns at fancy places, it might think that the better option is always the first drug, and not the second drug. And that’s a problem.
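As a back-of-the-envelope sketch, with invented numbers standing in for Nicholson’s two drugs, you can see how a rule learned in one environment stops being the right rule in another:

```python
# Hypothetical numbers, just to illustrate the context problem described above.

def expected_benefit(drug, can_manage_side_effects):
    # Drug A: very effective, but its side effects are dangerous
    # unless a crack team can intervene quickly.
    if drug == "A":
        return 0.9 if can_manage_side_effects else 0.4
    # Drug B: less effective, but safe pretty much anywhere.
    return 0.7

# A model trained ONLY on data from a fancy cancer center effectively
# learns the rule "Drug A is better" ...
print(expected_benefit("A", True) > expected_benefit("B", True))    # True

# ... and keeps recommending Drug A after it's exported to a rural,
# resource-strapped hospital, where that rule no longer holds.
print(expected_benefit("A", False) > expected_benefit("B", False))  # False
```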

Rose: And this is not some far fetched scenario that I’m making up because I’m very paranoid. In a lot of cases, these diagnostic models developed in fancy hospitals are actually meant to be exported to places with fewer resources. That’s the whole idea. 

Nicholson: Not everybody can get treated at the University of Michigan, but maybe if we trained an algorithm to suggest what the treatment was like, that algorithm could be easily duplicated, put around the world and voila. Lots of people are getting care. So this, I think, is a hope of black box medicine, of artificial intelligence in medicine. 

Rose: But it’s a hope that might… not come to fruition. Because context matters, and algorithms are very bad at context. 

All of these different factors can bias the data going in, and the way that data is processed. And one of the challenges in rooting out this bias, is that as models get more and more complicated, they become less and less understandable. This is what Nicholson meant by that phrase “black box medicine.” 

Nicholson: We can also have algorithms that say, “this is what I think the right dose of a drug is for this person.” And the software can’t tell you why that is. It can’t say, “oh, it’s because they weigh 75 kg, and they’re male, and they’re 5 foot 8.” It, instead, has some very large number of variables that put together say, “well, for patients like this, this kind of dosage has worked best in the past. But I really can’t tell you why.”

Rose: It’s not just like, “oh it’s hard to understand because it’s complicated math.” It’s that the algorithm itself cannot tell you how it came to a decision. Literally nobody knows, not even the system itself. And this makes it even harder to root out bias in the data, or the system, because you don’t actually know how it works. You don’t know which pieces of information it used to come to a conclusion, and which ones it didn’t. 
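A minimal sketch of what that opacity looks like in practice, using a generic off-the-shelf model and random, made-up data (nothing here is a real dosing system): the model will happily produce a number, but there is no single, human-readable rule behind any individual prediction.

```python
# Minimal sketch of the "black box" problem -- toy data, not a real dosing model.
from sklearn.ensemble import RandomForestRegressor
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                        # 40 anonymous patient variables
y = X @ rng.normal(size=40) + rng.normal(size=500)    # some tangled relationship

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

patient = rng.normal(size=(1, 40))
print(model.predict(patient))  # "give this dose" ...

# ... but there is no single rule to point to: the number is an average over
# 200 trees, each splitting on dozens of variables. The model can output a
# dose; it can't give the doctor a reason for this particular patient.
```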

Plus, a lot of these algorithms are made by private companies. They’re proprietary, which means that the public has no way of knowing how they’re built. 

Shobita: Invariably, we don’t actually know, a lot of the times, what are the pieces of data that go into these algorithms that are presented as objective, because they’re produced by proprietary companies and they’re opaque to us. They’re purchased kind of lock, stock, and barrel by states, or by counties. 

Rose: And this makes questions of rooting out bias almost impossible. 

But there’s another kind of bias at play here too. 

Rumman: Then there’s a second kind of bias, where it’s a societal bias. So you can have perfect data, and a nicely specified model. But actually, the real world is an unfair place, and we know this. People are discriminated against systematically. People don’t get jobs because of how they look, or their skin color, or their gender. So we know the world is an unfair place. 

Rose: Even if you have all the data in the world, your model is going to reflect a world that is not fair. Just last week, on Thursday, August 22nd, it was “Equal Pay Day for black women” — the date in 2019 that black women have to work to, to earn as much money as white men did, if their work year ended December 31st, 2018. In other words, black women are paid 61 cents for every dollar a white man in the United States makes. That’s $23,000 less every year, and it adds up to $900,000 over the course of a 40-year career. You could almost buy a house in the Bay Area for that amount of money. Almost. And, perhaps most depressingly, the gap is actually widening. Equal Pay Day for black women was on July 31st, in 2017. 

Rumman: So we can have perfect data, but we live in an imperfect world. So that imperfection is reflected in a model’s output, even if you’ve corrected for all the quantifiable bias you can think about. 

Rose: And this kind of bias is actually way more challenging for people to try and account for in their models. When bias is about data collection, or survey design, there are well established ways of stamping it out, and improving the data collection. When bias is baked into the world around you, accounting for that is no longer a question of just methodology, it’s a question of ethics and morals. One thing people try to do is exclude certain factors from the data, like gender or race. Which sounds like a good idea, right? If the model doesn’t know anybody’s race it can’t discriminate based on race. In practice, that… doesn’t really work. 

Rumman: There are latent variables, or proxy variables. And what that means is sometimes a variable is actually a representation of another variable. And the most obvious example in the United States is zip code and race, and zip code and socioeconomic status. And it’s pretty obvious when I say, “yes someone’s ZIP code is a really good indicator for how much money they make.” Absolutely. So, race also. Someone’s ZIP code is often a very good indicator for what race they’re likely to be. And we’ve seen this happen where people build into algorithms zip code as a way of identifying geography, for whatever reason that might be relevant. But what that does, is it picks up race, it picks up socioeconomic status. And this is where bias can creep in in a way that we hadn’t thought about. 
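A made-up illustration of a proxy variable: the zip codes and group labels below are fictional, but the arithmetic shows how much of a “dropped” attribute a proxy can still carry.

```python
# Fictional data: even with "group" dropped from the inputs, zip code
# can carry most of the same information (a proxy variable).
from collections import Counter

records = (
    [{"zip": "11111", "group": "X"}] * 90 + [{"zip": "11111", "group": "Y"}] * 10 +
    [{"zip": "22222", "group": "Y"}] * 85 + [{"zip": "22222", "group": "X"}] * 15
)

def guess_group(zip_code, data):
    # "Predict" group membership from zip code alone, by majority vote.
    counts = Counter(r["group"] for r in data if r["zip"] == zip_code)
    return counts.most_common(1)[0][0]

hits = sum(guess_group(r["zip"], records) == r["group"] for r in records)
print(hits / len(records))  # 0.875 -- 87.5% recovered without ever seeing "group"
```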

Rose: So, what you’re left with is a situation in which no matter what you do, your model has to grapple with the reality of the world. And this is something that makes many data scientists very uncomfortable. Because correcting for this kind of bias, means putting your finger on the scale in the name of marginalized people. Once you do that, you can no longer claim that it’s “just math,” even though… that defense never really was legitimate in the first place. 

Rumman: Increasingly, we’re gonna be thinking about what I call gray area questions. Should, for example, a search engine algorithm, when I input CEO, should it show me the real world, or should it show me an ideal world where there are more female CEOs and more CEOs of color?

And that’s a debatable question, right? I think most people would agree that someone should not be denied a job based on things like race, gender, etc.. In fact, we have laws to protect people about these things. But what about the things that are not necessarily protected by law, that are this grey area? Like the CEO question.

Rose: Tech companies do not want to seem like they are fiddling with the results of their models to advantage certain people, even though they are, all the time, no matter what. But you can see places like Facebook and Twitter and YouTube really struggle with the idea that perhaps they should show less hate speech, or homophobia, or conspiracy theories. Because once they admit that they are pushing on their results in one way or another, whichever group is on the losing end always screams bloody murder. 

And Rumman says that often, when these conversations come up, we like to blame the engineers at the company. They’re the ones who build these models, after all, shouldn’t they be held responsible for their results? 

Rumman: I think that data scientists get this unfair amount of blame put on them. Yes, you are the ones picking the data, training the model, etc. But you are part of a larger team, a larger corporation, etc. So, it’s not a unilateral decision that’s made by data scientists, and often they’re actually not even really trained to understand how this bias might manifest itself.  

Rose: These models are really complicated, and they exist as a piece of a broader company’s strategy. But Rumman says that we, you and me, are also a key piece of the solution here. The more we know about how these systems work, the smarter we can be about asking questions when they’re deployed around us. 

Rumman: I know there’s a lot of data scientist bashing; to say that people should learn ethics – and I totally agree with that, by the way. I absolutely do think that data scientists should be better trained in social sciences. But on the other hand, it absolutely does help for non-technical folks to understand algorithms, and the limitations of algorithms, so it doesn’t look like magic to them.

Rose: Models are not magic! 

So if the data scientists aren’t fully to blame here — who is? If an algorithm does, say, tell a doctor to give you that super risky cancer drug and things go badly… what recourse do you have? When we come back, I finally get to ask the question I’ve been waiting to ask this whole episode:

[on the phone]: This gets me to my real question: can you sue an algorithm?

But first, a quick break. 

[[BREAK]]

Okay, so algorithms can, and do, make bad decisions sometimes. Algorithms, they’re just like us! But when you or I make a choice that causes harm, we are generally held accountable for it. How do you hold an algorithm, or a model, accountable? In Arkansas, they sued the state and got them to stop using the model, but that won’t always happen or work.

Let’s stick with that hospital example that Nicholson gave earlier. So, you go into the hospital, you have cancer, there are these two drugs; the risky one, the safe one. Now let’s say that you are at this more resource strapped hospital, where the second drug is the better choice, because it’s less risky. Your doctor consults the algorithm, which was trained at a fancy hospital, and the algorithm says “give the more effective, but riskier drug.” And the doctor does, and you don’t do so well. 

Now, remember, the algorithm might not even be able to tell the doctor how it decided that first drug was the better choice. It’s a black box, it just says… do this. So, before we even get into whether the outcome is good or bad, that has implications for something called informed consent, this idea that you should know what you’re agreeing to, as a patient. 

Rose [on the phone]: And when it comes to a patient; you’re in the room with the doctor, and the doctor says, “okay, my computer says to do this.” Can a patient give informed consent when the doctor doesn’t even know why this treatment is the one that’s being recommended?

Nicholson: That’s a fascinating question, actually. I literally read a draft from a friend of mine just this morning on exactly that question. And I think we don’t have an answer yet. A kind of blithe answer would be yes. Doctors often don’t know why they recommend what they recommend, and they don’t know how what they’re using works. And patients don’t know anyway, and so informed consent is lots of assumptions about us knowing things that we don’t really know. And so why worry if there is one more set of things that we actually don’t know how they work?

Rose [on the phone]: That’s not very encouraging.  

Nicholson: No, that’s not the way we imagine the system should work.

I think it’s going to depend on how patients feel about the new technology, how doctors feel about using it. It’s not a very satisfying answer, either.  And I’m honestly not sure how doctors, patients, or frankly the courts are going to sort out the issue of informed consent, in particular.  

Rose: But even if the courts decide that informed consent isn’t really at play here, malpractice could be. Right now, in order to win a malpractice suit, you have to show that the doctor or nurse or whoever didn’t meet a minimum standard of care. 

Nicholson: Hey, this is what we expect of physicians and you have to do at least that much. And if you fall below that standard of care; if you provide care that was inadequate, and under that standard of care, and that results in an injury, then the patient should be able to recover and sue you for medical malpractice.

Rose [on the phone]: In that case, couldn’t you just be like, “well, I’m in a fancy hospital, and using an algorithm is even above the standard level of care. So, I can’t possibly be held in malpractice”?

Nicholson: Yes, a fascinating question is how are algorithms actually going to fit into the standard of care. 

Rose: It’s kind of like, this double edged future sword. You might be damned if you do, and damned if you don’t. So if you don’t use the algorithm…

Nicholson: Can you say, “oh, you didn’t consult this particular A.I. product when you were figuring out the diagnosis, or when you were figuring out the right dosage for a patient. And not consulting that is itself below the standard of care”?

Rose: But if you do use the algorithm, and get something wrong, the patient could argue that you shouldn’t have listened to the digital doctor in the first place. 

Nicholson: Does the advice of an algorithm to do something, or not to do something, change the reasonableness of the physician doing, or not doing, that particular thing? 

Rose: And what this means is that, at least for now, algorithms probably won’t actually do much to change the kind or level of care you get at a hospital.

Nicholson: So, I think, at least for now, the safest thing to do is to do exactly what you would have done before, assuming that you’re reasonably competent. Which is a little bit disheartening. 

Rose: This kind of defeats the purpose of these systems right? These medical models are supposed to revolutionize medicine, and make doctors faster, and smarter! They’re supposed to find surprising and rare cases that a doctor might have missed, and uncover counterintuitive diagnoses. But none of that is going to happen, at least for now, because taking a risk just based on an algorithm that can’t explain why it thinks that a patient actually has lupus, when you think they have smallpox, is a recipe for a malpractice suit! 

[clip from House]

Dr. House: Okay, from now on, no one says anything, unless no one’s said it before. 

Rose: And obviously doctors should be careful, because these algorithms are new, and could absolutely be wrong. And people’s lives are on the line.

But what happens when a doctor throws this caution to the wind, and decides to put their faith completely in this black box Dr. House?

[on the phone]: This gets me to my real question: can you sue an algorithm? 

Nicholson: Ha. So, the easy answer – and again, this is a law professor stock answer – is: it depends. It’s totally unclear. The law actually isn’t there yet. 

Rose: Now, you probably can’t actually name an algorithm as a defendant, since it’s not a person. There will be no jail for computer servers, where they all sit around humming and trading torrented music illegally, or whatever computer servers might do in jail… 

But who actually is responsible in this case?

Nicholson: You could potentially go after the physician for malpractice. You can imagine the health system being liable, the hospital for implementing an algorithm in some sort of irresponsible or unreasonable way. We could even imagine payers potentially being liable. Insurers, depending on how their reimbursement policies, or their decision policies, or their guidances interact with what it is that the algorithms are doing, and how their physicians are, or the providers are reacting to that algorithm care.

Rose: And we won’t really know what the legal framework will be here, until it happens. Until somebody sues. And the courts, and the juries, and the people like you and me, will have to figure out how we feel about these cases. 

Nicholson:  It’s also possible that states could pass laws, right? We could have state legislatures saying, “hey, no physician is liable for following the recommendation of an algorithm.” 

Rose: The US has this system setup for vaccines, already. If you get a vaccine, and something bad happens, which is rare but does occur, you can’t sue your doctor or the manufacturer for giving the vaccine. Instead you go through a special court and you get compensated through a special fund. If you could sue doctors or vaccine manufacturers for those injuries, they wouldn’t give or make vaccines, and we’d all be dying of polio and measles. 

In some cases the FDA might get involved. Earlier this year the administration put out a white paper about how they might regulate artificial intelligence in medicine. But even if the FDA were to implement new rules about these models, it wouldn’t apply to all of them.

Nicholson: When a hospital develops its own set of artificial intelligence protocols for doing recommendations, or for identifying patients who are about to go into sepsis, or for whatever else, and then it just uses them within its walls; FDA doesn’t touch that.

Rose: Some people have argued for basically an FDA for AI.

Shobita: That is, some kind of centralized regulatory infrastructure where the algorithms are, essentially, approved by regulators.

Rose: That’s Shobita again.

Shobita:  I think that’s an interesting idea.  I think it’s probably unlikely, at least in the short term. But I think that it’s important for regulators, for the government, to be actively involved with people, companies, who are developing the algorithms. 

Rose: And she says that the big problem with regulation of algorithms, right now, is that people keep expecting the engineers and technicians to have the answers. 

Shobita: They trust the technical expert to answer questions about the social and ethical implications of the technology. But of course, the technical expert is not an expert in the social and ethical dimensions of the technology. Their expertise is in the technical dimensions of the technology.

Rose: Right now, these models are governed in an elaborate game of policy whack a mole — as they pop up, lawmakers react and try to figure out how to make sure they’re actually serving the public, instead of hurting them. 

So what the heck does all this mean? What should people who don’t have a hand in building or deploying algorithms… do? Well the first thing to realize is that you actually do have a say here.

Shobita: Invariably, we, as citizens, have more of an understanding of our community values than we realize. And it’s simply a matter of having the courage to ask those questions of the technical experts as much as we ask them about the water utility folks, or the sidewalk or road people, the social service people; we need to be asking those questions, too. And we can.

Rose: So, go to your local community meetings, get involved, ask questions about what exactly these systems do, what data they were trained on, and how they’re being used. Don’t let anybody tell you, “oh, it’s complicated math.” No! You deserve to know how these decisions that affect you are being made. So ask the questions, and if they can’t explain the answers to you, keep asking. That’s a red flag. Remember: algorithms aren’t magic, and you should never let anybody saw you in half before understanding how the trick works. 

[music up]

Flash Forward is produced by me, Rose Eveleth. The intro music is by Asura and the outtro music is by Hussalonia. The episode art is by Matt Lubchansky. Special thanks to Veronica Simonetti and Erin Laetz at the Women’s Audio Mission, where all the intro scenes were recorded this season. Special thanks also to Evan Johnson who played Mr. Morton and who also coordinated the actors of the Junior Acting Troupe who play the students in the intros this season. Today’s debaters were played by Santos Flores and Ava Ausman. If you want to hear the students debate this topic further, you can hear the full cut of their conversation by becoming a Patron at $5/episode or more, which gets you access to the Bonus Podcast.

Flash Forward is mainly supported by Patrons! If you like this show, and you want it to continue, the very best way to make that happen is by becoming a Patron. Even a dollar an episode really helps. You can find out more about that at flashforwardpod.com/support. If financial giving isn’t in the cards for you, the other great way to support the show is by heading to Apple Podcasts and leaving us a nice review, or just telling your friends about us. The more people who listen, the easier it will be for me to get sponsors and grants to keep the show going.

If you want to suggest a future that I should take on, send me a note on Twitter, Facebook or by email at info@flashforwardpod.com. I love hearing your ideas! If you want to discuss this episode, or just the future in general, with other listeners, you can join the Flash Forward FB group! Just search Facebook for Flash Forward Podcast and ask to join. And if you think you’ve spotted one of the little references I’ve hidden in the episode, email us there too. If you’re right, I’ll send you something cool. 

Okay, that’s all for this episode. Come back next time and we’ll travel to a new one.

