
Boss Bot

September 29, 2020

Today we travel to a future where you’re guaranteed a job, but there’s a catch: your job is assigned to you by an algorithm. 

Guests:

  • Dr. Pavlina Tcherneva, professor of economics at Bard College and researcher at the Levy Economics Institute
  • Dr. Kira Lussier, historian at the University of Toronto
  • Manish Raghavan, PhD candidate at Cornell University
  • Dr. Chris Gilliard, privacy researcher and professor of English

Voice acting:

  • Elena Fernández Collins as Echo, the job placement guru from the future

Further Reading:

  • The Case for a Job Guarantee by Pavlina Tcherneva
  • Pavlina Tcherneva’s job guarantee FAQ: http://pavlina-tcherneva.net/job-guarantee-faq/
  • “How to Cheat on Personality Tests,” the appendix to William H. Whyte’s 1956 book The Organization Man

Episode Sponsors:

  • PNAS Science Sessions: short, in-depth conversations with the world’s top scientific researchers. Subscribe wherever you get your podcasts. 
  • Shaker & Spoon: A subscription cocktail service that helps you learn how to make hand-crafted cocktails right at home. Get $20 off your first box at shakerandspoon.com/ffwd.
  • Tab for a Cause: A browser extension that lets you raise money for charity while doing your thing online. Whenever you open a new tab, you’ll see a beautiful photo and a small ad. Part of that ad money goes toward a charity of your choice! Join team Advice For and From the Future by signing up at tabforacause.org/flashforward.
  • Tavour: Tavour is THE app for fans of beer, craft brews, and trying new and exciting labels. You sign up in the app and can choose the beers you’re interested in (including two new ones DAILY), adding to your own personalized crate. Use code: flashforward for $10 off after your first order of $25 or more. 
  • Purple Carrot: Purple Carrot is THE plant-based subscription meal kit that makes it easy to cook irresistible meals to fuel your body. Each week, choose from an expansive and delicious menu of dinners, lunches, breakfasts, and snacks! Get $30 off your first box by going to www.purplecarrot.com and entering code FLASH at checkout today! Purple Carrot, the easiest way to eat more plants! 

Flash Forward is hosted by Rose Eveleth and produced by Julia Llinas Goodman. The intro music is by Asura and the outro music is by Hussalonia. The episode art is by Matt Lubchansky. Transcription by Emily White.

If you want to suggest a future we should take on, send us a note on Twitter, Facebook or by email at info@flashforwardpod.com. We love hearing your ideas! And if you think you’ve spotted one of the little references I’ve hidden in the episode, email us there too. If you’re right, I’ll send you something cool. 

And if you want to support the show, there are a few ways you can do that too! Head to www.flashforwardpod.com/support for more about how to give. But if that’s not in the cards for you, you can head to iTunes and leave us a nice review or just tell your friends about us. Those things really do help. 

That’s all for this future, come back next time and we’ll travel to a new one. 

FULL TRANSCRIPT BELOW

▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹ ▹▹

FLASH FORWARD

S6E13 – “Boss Bot”

[Flash Forward intro music – “Whispering Through” by Asura, an electronic, rhythm-heavy piece]

ROSE EVELETH:
Hello and welcome to Flash Forward! I’m Rose and I’m your host. Flash Forward is a show about the future. Every episode we take on a specific possible… or not so possible future scenario. Every episode begins with a little field trip into the future to check out what is going on, and then we all collectively teleport back to today to talk to experts about how the world that we just heard from might really go down. Got it? Great!

This episode we are starting in the year 2035.

FICTION SKETCH BEGINS

SOOTHING COMPUTER VOICE:
Hello Michelle, and welcome to your Career Placement Pod. Please, have a seat and get comfortable. My name is Echo, and I’ll be walking you through your results.

Please place your finger on the touchpad on your left, and we can enter your E2J portal. I’m sure you’re quite familiar with it by now.

[ding]

At E2J, we’re proud to have supported students like you through their entire academic career. And now, on into the workforce. We have been with you from second grade, it looks like, which is great. This will make your results that much more accurate. At E2J, we strive to gather as much data as possible on your strengths, weaknesses, potential, and personality. Every click of the mouse is a potential data point. Each eye flick, each web search, each volleyball dig. It all comes together to this moment.

Now, a bit of legalese, okay? E2J is a contractor with the United States Department of Labor. Our role is to place you in a job, as guaranteed by the America Works Act of 2029. All data shared with the Department of Labor is confidential and tied to your social security number. Your job placement is legally binding for five years, after which you can apply for re-placement. The United States Department of Labor and E2J are equal opportunity employers. We do not discriminate based on race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age (40 or older), disability, or genetic information. In fact, E2J never stores or transmits any of that information to its algorithms or servers. Please press your finger to the pad to confirm that you’ve heard and understand.

[ding]

Excellent. We just have a few final questions for you, before your final results can be compiled. Please press the answer that you feel best represents your deepest truth. Remember, I won’t see your answers, so be honest.

You enjoy vibrant social events with lots of people.

[user input feedback “zip”]

You often spend time exploring unrealistic yet intriguing ideas.

[zip]

You often think about what you should have said in a conversation long after it has taken place.

[zip]

People can rarely upset you.

[zip]

You are still bothered by the mistakes you made a long time ago.

[zip]

Okay, that will do for now. In just a moment we’ll be getting your results. From there I’ll generate an employment contract for you.

[chime]

Ah, here we are. All right, Michelle. Are you ready? We have found your perfect job. Congratulations, and welcome to the team at the National Parks Service. You’re now a forester. On the screen in front of you, you’ll find your contract. Please take your time reading it, and if you have any questions, let me know.

FICTION SKETCH END

ROSE:
Okay! So today’s episode is about the future of work. What if we had a job guarantee – if everybody was assured that they would have a job? And just to up the ante a little bit and combine two often-requested futures, what if that job was assigned not by your family, or your position in life, or your income, or your education, but instead by an algorithm that looked at your history and your personality and chose the perfect position for you?

We are going to take these two ideas one by one, and then we are going to combine them. So let’s start with the idea of a jobs guarantee.

DR. PAVLINA TCHERNEVA:
If you can’t find a wage-paying job, you are unemployed. And if that is also one of those crucial tickets to your well-being, your standard of living, then isn’t it upon us, isn’t it our responsibility to redesign the system in a way that we actually will provide the employment opportunities when people seek it?

ROSE:
This is Dr. Pavlina Tcherneva, a professor of economics at Bard College and a researcher at the Levy Economics Institute. Pavlina has been thinking about and researching the idea of a jobs guarantee for over 20 years, but the idea goes back way further than that.

PAVLINA:
The idea has been there since the eighteen hundreds, at least since the rise of the industrial era.

ROSE:
Then, during the Great Depression, when so many people lost their jobs, the New Deal stepped in and provided employment much like a jobs guarantee would. The idea came up again during the civil rights movement of the 1960s in the United States, when activists pointed out that unemployment was concentrated in the Black community and asked the government to step in.

PAVLINA:
Even at the international level, the United Nations Declaration of Human Rights lists the right, the right to a job as one of the basic human rights.

ROSE:
As the coronavirus pandemic has swept across the country, and unemployment has skyrocketed, the idea of a jobs guarantee has started getting more attention once again. Which is both good, and kind of annoying to someone like Pavlina who has worked on this for a really long time.

PAVLINA:
It is always, maybe, a little bit disappointing that people pay attention in the depths of a major crisis like the Great Depression. So what we are seeing now is, in the depths of a great recession, tens of millions of people out of work, now we have a much bigger problem. And I’m glad that the job guarantee is getting attention, and it should, and maybe this is the moment to put it in place as a critical infrastructure for policy purposes. But it’s obviously going to be a taller task to employ, you know, 30 million people than, you know, five million people.

You want to implement the job guarantee when times are good. You want to put it in place when there are few people that are unemployed, when we can actually put the structure in place, make it workable. And when the recession comes in, you’re ready to absorb the unemployed.

ROSE:
Pavlina argues that this should be the way we save ourselves from horrible spikes in unemployment.

PAVLINA:
The job guarantee is what economists call an automatic macroeconomic stabilizer. It just automatically kicks in when there’s unemployment. Government spends money on employing the unemployed. That very expenditure is what stabilizes the economy.

ROSE:
So let’s talk about what, exactly, Pavlina is proposing here. I mentioned the New Deal earlier, but her proposal is kind of different from that. It’s not a program where the federal government shows up in a town and says, “Let’s build a bridge so we can put these people to work!” That is not how her version of a jobs guarantee functions. To illustrate her proposal, let’s say that we live in this world with this jobs guarantee, and you are unemployed, and you want a job.

PAVLINA:
In the US, across the country there are these unemployment offices, they are now nicely called American Job Centers. So you go to the job center, you’re looking for a job and you can’t find it. Well, let’s just make them into jobs banks.

ROSE:
These jobs banks will have already gathered up a list of jobs that the community has identified as necessary.

PAVLINA:
And then they deposit them. They basically file them with the jobs office and say, “Hey, look. In this community we have a lot of environmental work that we have to do. There’s a lot of flooding that happens on a regular basis. We just need to do this, address these problems on an ongoing basis.” Many communities throughout the United States suffer from something we call a food desert, which just means that folks don’t have access to healthy, high-quality food. And so what if we were to do actually some small community farms and try to address that problem?

ROSE:
So when you don’t have a job, you go to this job bank and you’re matched with a job working on a project that the community has come up with and wants to see happen. Things like urban farming, environmental cleanup, historical archiving of old maps, maintaining the hiking trails in a city; whatever it is that the public needs and that the private sector isn’t working on.

PAVLINA:
The private sector creates the vast majority of jobs, but not for everyone. Also, it creates a lot of work for commercial return, right? For profit. But then there is a lot of work that just doesn’t generate profit like environmental work, and that still has to be done. It’s not done at a large enough scale. So, we are basically matching two problems; the problem of joblessness with the problem of some specific activities that the public sector has to undertake anyway.

ROSE:
The jobs that these job centers would provide would come with benefits and pay $15 an hour.

PAVLINA:
Which also means that whoever is looking for another job, the employer has to, at a minimum, pay a living wage, the living wage that the public program pays. So we also have a firm floor. We’ve lifted the well-being of everyone at the bottom.

ROSE:
The idea is that these job centers always exist for those who might need them and can grow and change as economic conditions do. When there are lots of private-sector jobs, people can transition out of the jobs program and into those jobs. When there aren’t lots of those jobs, they can come back. The community gets things done that need doing, and people have jobs they can rely on. Plus, once the economy recovers and there are more jobs, it’s way easier to get one of those private-sector jobs when you are currently employed.

PAVLINA:
Employers don’t like to hire the unemployed. They tend to hire folks who either have a job or have been out of work for a short period of time.

ROSE:
In the United States in particular, there’s this notion that if someone is unemployed it’s their own fault somehow.

PAVLINA:
The idea is that if somebody doesn’t have a job, it’s not because the economy works in particular ways that always creates unemployment, but that unemployment is a personal, individual, and maybe even moral failure.

ROSE:
This idea compounds the challenges that people of color face when trying to find jobs. Hiring managers are already biased against Black people, for example, and then if you don’t have a job it’s harder to get one.

PAVLINA:
Unemployment is concentrated in communities of color, African American communities, especially black youths. And so black youths live in a permanent, like depression level, unemployment rate.

ROSE:
Obviously, a jobs guarantee is not going to end discrimination in hiring, or structural racism, but it could help bridge some of these gaps for some people.

Okay, so that’s the pitch. That’s the idea. Now to the questions you are probably asking in your head. And I’m guessing that the number one question is this: Who is going to pay for this?

PAVLINA:
You know, there’s still a fair amount of confusion out there. It’s a public policy and it’s a government federally funded policy. Congress gets together, passes a budget, and then when the checks start coming in, when Congress starts spending on contracts or paying them wages, who clears those checks? It’s always the Treasury and the Federal Reserve.

ROSE:
So this would be a federally funded program. Every year, Congress would appropriate a certain amount of money earmarked for these jobs, and that funding would change year to year based on projections and what is needed. Pavlina points to the ways in which disaster and emergency relief money is currently allocated: every year, officials estimate what will be required, but if they need more, they allocate more funding as necessary. Same idea. And she argues that, in the long run, this spending is worth it because it will, on the whole, help the economy.

To understand why, it helps to talk a little bit about what unemployment actually does; not just to individual people, but to communities and economies. On the individual level, we know that unemployment has physical and psychological impacts.

PAVLINA:
People get sicker. Physically and mentally, they are sicker. You know, they take more medications. They’re stressed out. I mean, suicide rates are correlated; deaths of despair with mass unemployment. And it’s not just a problem for the unemployed. It’s also for their families and their children.

ROSE:
Broadening out from the individual, unemployment moves through communities much like viruses and contagions do.

PAVLINA:
You know, when somebody loses their job, they can’t go to the local restaurant, and they can’t go to the movies, they can’t renovate. And then local businesses and neighboring businesses start losing their businesses and they lay off workers. And so that’s a ripple effect. Very much like epidemiologists that model viruses, they see the same sort of behavior.

ROSE:
When more and more people lose their jobs, they also lose their spending money, which means that restaurants, and movie theatres, and barbershops all go out of business too. All that adds up and can really impact the economy in ways that we are kind of seeing right now in the US.

PAVLINA:
It’s not just a metaphor, but it’s also a way to think about how to address it. Because with pandemics, what we do is we try to prepare ahead of time. We try not to allow this huge outburst of contagion. And of course, we failed in this particular case. But with unemployment, imagine if we had a job guarantee and then there were mass layoffs in one community. Well, folks can get jobs and they can still go to the restaurant, and they can still go to the movie theater, and maybe carry on with that renovation project. So you stop this contagion effect of other people losing their jobs just because there were layoffs in one firm.

ROSE:
And remember, you’re not spending that money for nothing. These jobs are real jobs, doing things in communities that people want done. And that is the big difference between this idea and the idea of a universal basic income, which we have talked about on the show before. With a UBI, you just give people money. They may or may not have a job; that’s not really part of it.

PAVLINA:
So very often we hear the argument, you know, “Robots are marching in. They’re taking away people’s jobs. Forget about giving people jobs. Just give them income.” My perspective is, you know, robots are not the enemy. We should welcome automation. We can automate away some terrible jobs. We don’t want a society where people are toiling away in meatpacking plants and dangerous conditions. But does that mean that we still can’t create new employment for people if they want it? So I see the job guarantee as this policy that, yes, embraces technology, yes, reduces the working week, then creates socially useful employment projects from the ground up, improving the public well-being, the common good.

ROSE:
And this might be why the idea of a job guarantee can sometimes poll better than other proposals for social safety nets out there. In 2018, an organization called Data for Progress ran a poll asking about a handful of progressive ideas including a job guarantee. They found that 54% of eligible voters were in favor of the idea.

PAVLINA:
I’ve talked to folks in coal mining communities that, you know, love the idea because they see nothing on the horizon. And it’s been polled in other places. And so I think that it’s not as radical… In fact, when polling has compared the job guarantee to other policies, the job guarantee has the closest bipartisan support and the highest approval rating to any of the other… whether it’s Medicare for all, even basic income.

ROSE:
Of course, there are lots of policies that are very popular with the general public but seem politically impossible. Still, some politicians have indeed taken up this call, like Kirsten Gillibrand, Cory Booker, and Bernie Sanders. And Pavlina also argues that this could be wrapped into the Green New Deal. There is all this environmental work that needs doing, and we’re going to need people to do it even if it doesn’t always make money for a private company.

PAVLINA:
It is kind of an inherently green proposal because it enhances our environment, it enhances our communities, and you know, individual well-being.

ROSE:
And in case you’re still really, really skeptical of the implementation of this idea, perhaps it would help to know that it has actually been tried before. And it worked pretty well.

PAVLINA:
In 2001, Argentina went through this devastating economic crisis and people took to the streets. I was actually in Argentina when those protests were happening, just visiting friends. It was very interesting because they were asking for one thing mainly, and that was jobs. And so you would have like, you know, middle-income class, lower-income folks; they were all together marching for jobs.

ROSE:
In response, the Argentinian government crafted, basically, a jobs guarantee, in part based on Pavlina’s work. And once they put this program into place, she was actually able to visit and see how it went down in practice.

PAVLINA:
Here’s what we wanted to know: First, is it impossibly difficult to implement a program like this? Because everybody says, “Oh, we don’t have the administrative capacity. It’s so hard to do jobs.” And it turned out it wasn’t. And they did it exactly in this kind of bottom-up approach where municipalities and localities, they propose the projects and it went up the approval process, up the chain.

The second question that we wanted to find out is, were these like no-good jobs? Like digging holes, or painting rocks, or anything like that? And absolutely not. People proposed things that they needed. There were food kitchens, for example, in very impoverished communities. There were community gardens. The city of Buenos Aires had trouble dealing with their trash. And this program sorted out the trash. They recycle. They separated plastic from paper. They helped deal with that. So there were actually a lot of projects that made a material difference.

The third question we wanted to know is, are people going to be stuck? Is the government just going to always be employing a whole bunch of people? Will they be able to transition? And as soon as the economy recovered, a lot of folks transitioned to other work. The private sector recovered and so they were able to transition.

ROSE:
Now, not everything went perfectly, obviously. This is a huge government program; nothing is ever going to go perfectly. But it did wind up having a pretty big impact on the economy and on stability. And it also had an impact that Pavlina did not expect. The policy was written to provide the “head of household” with a job. And the assumption by the policymakers was that that person would probably be a man.

PAVLINA:
But families decided to designate the woman as the head of the household, so a lot of women showed up in the program. And when we interviewed women, we asked them, you know, “What do you like about it or what do you not like about it?” And folks like some of the smallest things, you know, just being recognized. Just bring a check home. And most importantly, when we asked them, “Would you prefer just to get, like, income assistance?” In our experience, every woman said no.

ROSE:
The government even tried offering women a basic income guarantee instead of jobs. But even though the income guarantee offered more money than women could get paid under the jobs plan, most of them actually chose to keep their jobs.

Eventually, the economy in Argentina recovered and the program was phased out. But for Pavlina, who by her own admission is a “big picture, wonky kind of thinker,” it was really interesting to see these things actually happen.

PAVLINA:
So it was very much like an academic, you know, bird’s-eye-view kind of exercise. But when I went and actually saw some of the difficulties of implementation, some of the political problems, that was very eye opening. And also the impact that it had on folks.

ROSE:
Of course, Argentina is not the United States, and if we wanted to do this here it would take political will of a kind that, frankly, feels sparse these days. But maybe, the economic crisis delivered to us by our colossal mismanagement of the pandemic will be the thing that pushes this idea further into the true mainstream.

PAVLINA:
It’s anyone’s guess, but my Argentinian experience, at least, taught me that sometimes these things happen by accident. Sometimes it takes just one… you know, a president in the midst of a severe crisis just to pull it off the shelf and say, “Here it is. Let’s go with it.”

ROSE:
If you want to learn more about the jobs guarantee idea, Pavlina has a book out now, conveniently called The Case for a Job Guarantee. I will link to it in the show notes along with some other resources and papers about this idea.

So that’s part one: the jobs guarantee. But that is only part one. This future has guaranteed jobs, BUT you are assigned to them by an algorithm. And when we come back, we are going to talk about those algorithms and where they might come from.

Okay, jobs guarantee: check. Now let’s talk about part two of our future this week. An algorithm that assigns you a job. It might seem a bit sci-fi, but it’s actually a pretty old idea. The original personality tests were, at least in part, meant to function as job placement tests.

DR. KIRA LUSSIER:
During the Great Depression, they’re like, “Okay, we have a scarcity of jobs. And so it’s even more important to get the right people in place.”

ROSE:
This is Dr. Kira Lussier, a historian at the University of Toronto who studies the history of personality tests. And one of the earliest versions of these tests was called the Humm-Wadsworth Temperament Scale, published in 1935. The test asked each person 318 questions, and one of the things I found really interesting about it is that, honestly, these are the same kinds of questions you would see on modern personality tests. Stuff like:

  • Do you like to meet people and make new friends?
  • Do you find it difficult to maintain your opinions?
  • Do you like variety in your work?
  • Have you sometimes had the feeling that people were talking about you behind your back?

After analyzing your answers, the test would then give hiring managers and bosses an interpretation of your personality.

KIRA:
They developed this test and basically sold it to companies as a way to, they claim, screen out people who might be emotionally imbalanced in the workplace. And the same kind of language about, I think, fitness for jobs that we often see with some of the current tests was there. This promise that you can, kind of, match the worker to the job and that the worker themselves will then be more fulfilled because they can find the appropriate job. Then for the company, you can screen out for people who might have, say, an undesirable personality trait.

ROSE:
And in 1935, this was about not just finding the right jobs for people, it was also about hiring people with personalities that bosses liked, personalities that wouldn’t cause them… trouble.

KIRA:
“We have this union concern, and so we need to make sure we don’t have troublemakers,” aka union agitators, aka labor activists. And they told companies like, “If you use this test, you can screen out people who might be likely to join a union.” And some of the questions, they were about, like, “As a kid, did you ever fake sick to get out of school?” And it’s very obvious how that could map onto, like, concerns about malingering and faking sick to get out of work.

ROSE:
When World War II rolls around, these tests really kick into high gear. Along with a more standard job application process, you would be asked to take a test like the Humm-Wadsworth or, more likely by then, one of its descendants, like the Minnesota Multiphasic Personality Inventory, or one that you’ve probably heard of: the Myers-Briggs.

KIRA:
The Myers-Briggs is a fascinating one, both because of its popularity and now backlash. I always have a hard time answering questions about it because often people either will be, like, big fans of it and expect me to be a big fan, or the opposite; they think because I study it I think it’s totally bunk.

ROSE:
The Myers-Briggs test was created by a mother-daughter duo, and its history is worth a whole entire episode of a different podcast, but the basic gist is that they had no training in psychology. And at first, the test was judged pretty harshly by professionals in the field.

So the Myers-Briggs test actually wasn’t all that popular when it was first published. It wasn’t until the 1970s that it really took off. But from the beginning, Isabel Myers wanted the test to help people in the workplace.

KIRA:
She saw it as a way to kind of help fit people to jobs. These various personality tests, including the Myers-Briggs, might give you insights into what kind of work is more or less suited.

ROSE:
And one of the ways people tried to match personality types to jobs was by studying the personality types of people who had those jobs.

KIRA:
So for example, at the Institute for Personality Assessment Research at Berkeley, they would bring in a bunch of people like architects, or poets, or writers and try to give them a battery of tests and try to understand: what are the features of creativity, or what are the features of, like, creative genius and understand distributions of personalities. They did this at Berkeley in an old fraternity house. I always find this funny.

ROSE:
And from all of this, they would say, “Okay, this is the personality type of this job or that job, so we can now match you in this particular way.” Now, we are not going to get into whether any of these tests “work” because that is a really complicated question to answer and is not the point of this episode. But what we can say is that these kinds of tests were, and are, used by businesses to make decisions about hiring, promotion, and management.

And if you don’t totally love the idea of that, you are not alone. Basically, as soon as these tests showed up, people started asking questions about them.

KIRA:
In the 1950s, you get a huge backlash. There’s a whole bunch of social critics who basically start writing these texts about how personality tests are, like, invading your mind and basically promoting a particular kind of corporate homogeneity and blandness.

ROSE:
In his 1956 book The Organization Man, the social critic William Whyte included an appendix called “How to Cheat On Personality Tests.” It’s totally worth reading. I will link to it in the show notes because it’s very funny. It includes the following advice: “You should try to answer as if you were like everybody else is supposed to be.”

Unions also, obviously, didn’t like some of the ways that employers were trying to use personality tests. And in the 1960s, there were Congressional hearings about these tests and whether they constituted an invasion of privacy.

KIRA:
A lot of the questions were seen, rightly so, as invasive because they asked about your medical history, your childhood, these kinds of… what were seen as very personal questions. Including things like religion, which became an object of concern in the context of hiring or workplaces because, you know, what right does an employer have to access your mind or your religious beliefs? And in fact, legally, there are laws that say they don’t.

ROSE:
The psychologists developing and using these tests argued that they didn’t actually care about the individual answers but rather the pattern of the response. And you hear this again today with these same kinds of tests. People say, “Your specific answers aren’t important, it’s the pattern they reveal.” And yet…

KIRA:
If you’re about to be hired, or you’re sitting in an office taking a test that asks you these questions and is being given to your employer, of course you’re going to be concerned.

ROSE:
And even as early as the 1960s, people were asking questions about whether these tests were biased. In 1971, a group of Black employees at a power plant in North Carolina took their employer to court over the ways in which employment and promotion were being handled, including certain tests they were asked to take.

KIRA:
So it includes the more personality ones, but it includes, you know, intelligence tests, includes more task-based tests. There’s a whole suite of them.

ROSE:
This power plant, which was owned by Duke Power, had a long-standing policy that Black employees could only work in the coal handling and labor departments, which meant that even the highest-paid Black employee could never make more than the lowest-paid worker in any other department. When the Civil Rights Act took effect, basically making that policy illegal, rather than allowing Black employees to move into these other jobs, Duke’s power plant implemented a battery of employment tests that workers had to take in order to ask for a transfer to these higher-paying departments.

The tests were pretty clearly meant to justify continued separation and discrimination, but now the company could just say, “Well, you didn’t pass the test.” So these men took their employer to court, in a case that went all the way up to the Supreme Court.

KIRA:
And it was a Supreme Court ruling that said that any sort of testing program used in hiring had to be, A: clearly related to the job at hand; and B: it had to not adversely impact protected classes, aka racial minorities, women.

So if a test is shown to consistently produce either very different or worse scores for, say, Black Americans, then that test cannot be used in hiring.

ROSE:
These testing companies, who were selling their wares as a way to hire people, suddenly had to figure out how to prove that their tests weren’t biased.

KIRA:
So that led a lot of companies and testing programs to scramble. And so it in some ways led to a bit of a waning, but then also a bit of a revival later on once they did this.

ROSE:
The stuff that I’ve read about this case, and the ways in which it changed psychology and the ways we think about personality, and objectivity, and science, are really striking because it honestly feels like I’m reading stuff that has been written now about algorithms today. Here’s a line from a paper by Roderick D. Buchanan about these tests in the 1960s and the backlash: “Assisted by those on Capitol Hill, psychologists were able to defend their science in a manner that avoided imposed forms of public accountability. Social questions were reformulated as technical problems.”

It really feels like those two sentences could describe any number of stories you might have seen in the headlines recently, like, about Facebook, for example. There are so many ways that we are turning social questions into technical problems today using algorithms. Crime? Let’s have an algorithm predict it! Cities? We can have algorithms make civic choices too! Education? Who needs teachers, let’s measure success via surveillance and data!

We see this over and over again. And when it comes to personality tests in hiring, this question of bias is still very, very present. And when we come back we are going to talk about how these algorithms are used today and what this future world might look like if we were to put all of this together. But first, a quick break.

ADVERTISEMENT: THE LISTENER

Not every podcaster feels this way, but I actually love recommending podcasts to people. I don’t really watch TV or movies, so I’m useless when it comes to recommending things like that. But I do listen to a lot of podcasts, and I actually really enjoy trying to figure out which shows someone might like and why. And I also like doing this because it can be really hard to find podcasts that you actually like and connect with. So many of the “Best of” podcasts lists just, kind of, name the same shows over and over again; the shows that you’ve probably already heard of. No shade to those shows, they are good, but you’ve probably already tried them. But what about us smaller, indie shows? Shows created and hosted by more diverse voices? Shows that are not based in the United States? There is so much more out there.

A great way to find new shows is by subscribing to The Listener. The Listener is a daily podcast recommendation newsletter, sending three superb episodes to your inbox every weekday. It introduces you to outstanding and diverse audio beyond the usual bubble of big publishers, uncovering gems from creators around the world. Shows like Answerable Questions with Questionable Answers, which recently did an episode called “What gives you hope?”

As a paying subscriber, as well as with the email newsletter, you will also get access to a personal feed that can deliver the recommendations straight into your favorite podcasting app.

The Listener is written by Caroline Crampton, a podcast industry expert who listens to dozens of hours of podcasts in order to filter out the very best to surprise and delight you. Listeners of Flash Forward can get two extra months for free at TheListener.co using the code FLASHFORWARD20. That’s TheListener.co and use the code FLASHFORWARD20 for two months of this great daily podcast recommendation newsletter.

Happy listening!

ADVERTISEMENT END

ROSE:
Okay, so let’s assemble all the pieces now. We live in a world with a jobs guarantee, but – and this is a pretty big but – your job is determined by an algorithm. What would that be like?

First, it might help to know that algorithms are already being used in hiring today.

MANISH RAGHAVAN:
One of the first places they get used, actually, is how do you even get people to apply for your job? This is what’s known as sourcing. And so you might think… like traditionally you would’ve put a job ad in a newspaper or something, right? Well, now you’re putting an ad out on Google, or Facebook, or something. And there’s an algorithm involved in determining who actually ends up seeing that ad. So even before you as a candidate get involved, you’re already subject to algorithms in some way.

ROSE:
This is Manish Raghavan, a PhD candidate at Cornell who studies algorithmic bias in hiring. So before you even apply for a job, an algorithm is deciding which jobs you’ll even see. Then, when you do apply to a job that you have seen, there are a bunch of companies that make systems that purportedly help hiring managers figure out who the best candidate is.

MANISH:
You perform some task; maybe you record a video interview, you play some game, maybe you take some personality assessment. An algorithm somewhere now takes the results and turns that into an evaluation of you as a candidate.

ROSE:
And then, once you’re hired, companies also use algorithms to figure out who to promote.

MANISH:
So throughout the lifecycle of an employee, basically, from before you even know that you’re a candidate to while you’re an employee, you are being evaluated, or you may be evaluated, by some of these algorithms.

ROSE:
These algorithms are developed and sold by private companies, which makes it hard to say very much about exactly how they work. Which variables are they looking for? What are they measuring about your video, exactly? It’s really hard to say.

This makes the work Manish does slightly trickier, because trying to understand exactly how these systems work and who they might be favoring is really hard when you don’t actually have access to them. It used to be that when researchers wanted to study bias in hiring, they would do things like send identical résumés with different names on them, or different racial signifiers. So you might send two identical résumés to a company, one with the name Brittany on it and one with the name Ben on it. Or, one with the name DeShawn on it and one with the name Doug.

And over and over, and over, these studies showed that white men were the most likely to get a call back, even when everything else about the résumé was exactly the same. And similarly, that white people in general fared better than Black applicants. But you can’t really do that with these hiring algorithms.

ROSE (on call):
And now I’m thinking of using, like, a deepfake to do the video interview and try to change things. Is that something you guys are talking about?

MANISH:
Yeah. These are all proposals, because fundamentally what you’re trying to do is, like, control for everything else except for, like, race, or gender, or something like that. In a world where things are entirely text, it makes a little bit more sense what that controlling might do. It’s very difficult… even without deepfakes you might try to be like, “Here are two actors. I’ll have them say the exact same script, try to do the same facial expressions, and so on, and see what the algorithm says.” But let’s say you get some results that indicate that there are racial disparities. Someone’s just going to come back and say, “Well, that’s because the acting wasn’t perfect. You didn’t control for everything properly.” And it’s really hard to respond to that objection.

ROSE (Mono):
But it’s not really a question of whether these algorithms are biased. It’s really a question of how, right? And the companies themselves know this. Some of them spend time trying to de-bias their algorithms, which basically means that if they test the system and find out that it is favoring one type of person over another, they spend some time trying to fix that.

MANISH:
So it’s basically like, you know, see if it creates what’s called a disparate impact. If it does, throw away information until it no longer does.
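(A minimal sketch, in Python, of the kind of check Manish is describing, assuming the “four-fifths rule” of thumb from US employment guidelines: a selection rate for any group below 80% of the highest group’s rate flags adverse impact. The group labels and numbers here are hypothetical.)

from collections import Counter

def selection_rates(candidates):
    # candidates: iterable of (group, was_selected) pairs
    totals, selected = Counter(), Counter()
    for group, picked in candidates:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flags_disparate_impact(candidates, threshold=0.8):
    # True if any group's selection rate falls below `threshold`
    # times the highest group's rate (the four-fifths rule)
    rates = selection_rates(candidates)
    highest = max(rates.values())
    return any(rate < threshold * highest for rate in rates.values())

# Hypothetical pool: group A is selected 40% of the time, group B 20%
pool = ([("A", True)] * 40 + [("A", False)] * 60
        + [("B", True)] * 20 + [("B", False)] * 80)
print(selection_rates(pool))         # {'A': 0.4, 'B': 0.2}
print(flags_disparate_impact(pool))  # True, since 0.2 < 0.8 * 0.4

The “throw away information” loop Manish describes would wrap a check like this one: retrain with less information until the flag clears.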

ROSE:
Now, this is not necessarily the best way to fix an algorithm.

MANISH:
Let’s say I build some system that is very good at picking out the top-performing men and picks randomly among women. Now, I can still pick them at equal rates. But ultimately, if I’m picking really high performing men and a random sample of women, you’re going to notice downstream disparities later on in the line.
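(To make that concrete, here’s a tiny hypothetical simulation of the policy Manish describes: two groups drawn from the same score distribution and selected at equal rates, but one selected by top score and the other at random. The parity metric passes even though the quality of selection differs wildly.)

import random
from statistics import mean

random.seed(0)
# Hypothetical "performance scores," identically distributed across groups
men = [random.gauss(0, 1) for _ in range(1000)]
women = [random.gauss(0, 1) for _ in range(1000)]

k = 100  # select 10% of each group, so selection rates are equal
top_men = sorted(men, reverse=True)[:k]  # the highest-scoring men
random_women = random.sample(women, k)   # a random sample of women

print(round(mean(top_men), 2))       # roughly 1.75: far above average
print(round(mean(random_women), 2))  # roughly 0.0: just average

Equal selection rates, but the people selected from one group dramatically outscore the people selected from the other on the very measure the system used, which is exactly the downstream disparity Manish is pointing at.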

ROSE:
But it’s one way that companies have tried to correct bias that might crop up in these systems. And in fact, these companies often present themselves as antidotes to bias in hiring.

MANISH:
Essentially, their spin on things is something like, “Humans have been doing hiring for a very long time and have been biased for a very long time. So now we’re going to come in and say we’ve fixed it.” Like, “We can be objective with these algorithms. We know exactly what they’re doing. And we’re de-biasing them to make sure that they don’t suffer the same problems that humans have.”

ROSE:
And in some ways, they’re not totally wrong. You cannot reasonably argue that humans haven’t been making biased hiring decisions for forever.

MANISH:
The thing that they’re less clear on is, how would we even know if these algorithms are just as bad or if they’re worse in some ways?

ROSE:
And this is the big, hard question here, and it’s something I’m really interested in. If you’ve listened to the show before, you know that we’ve talked about algorithmic bias a lot. It comes up over, and over, and over again. In some cases, the obvious answer is to stop using these algorithms. But in other cases, I think it’s actually a little bit harder of a question. Humans make biased decisions in hiring all the time. We know that.

We also know, through research, that most diversity and equity trainings that you might give to a manager to fix that bias… They don’t work. They don’t have an impact on the bias that matters. So while these algorithms are certainly going to have some bias, the real question is: Is that a better kind of bias than the human kind? Can they be slightly better? Neither side is ever going to be perfect, but is there a better answer? And that’s a really hard question.

MANISH:
As we begin to wrestle with these types of questions, I think it’s worth figuring out, like, what are the different things that we care about and how can we evaluate according to those criteria? So for instance, with an algorithm, I know it’s going to make the same decision consistently. And so now I can test it ahead of time to see what it’s going to do. I can go back and see why it made a particular decision in a way that I can’t do with a human. Somebody made a decision somewhere; I can’t look into their brain. I can’t get them to even remember what they were thinking that particular day. So maybe that’s a plus for algorithms.

On the other hand, perhaps there’s a particular bias that’s encoded into your system and now it’s being automatically shipped out to, like, thousands of people across the country. You can’t make mistakes on that scale with humans.

ROSE:
I don’t actually know what the answer is here, I should say. I think that we should be more careful with these algorithms and we shouldn’t just be deploying them without really understanding what they might be doing. But I also think that the idea that humans are always “better” at things like this isn’t totally true either. So I don’t know. Turns out, the future is complicated.

MANISH:
Personally, I think this runs into a lot of, just, real-world problems that we’ve been discussing. I can see the appeal in the long term of making this better and getting it right, and I guess my worry is that in the short term, while we get there, we don’t really know how to protect people.

ROSE:
There is some science fiction that posits algorithmically assigned jobs as some kind of utopia, arguing that it would break down the legacies perpetuated by wealthy, mostly white families whose kids all become doctors, or lawyers, or whatever it is.

KIRA:
We could have a better way of allocating jobs. It’s not just, like, you grew up in an upper-class family and so you’re going to have an upper-class job. To get around that seems desirable in some way.

ROSE:
That’s Kira Lussier again, our historian. But in reality, that probably wouldn’t happen.

ROSE (on call):
When people make that kind of argument that these pieces of technology are going to remove inequality, how do you respond to that?

DR. CHRIS GILLIARD:
Well, first, I laugh, and then I catch my breath. I mean, it’s absurd.

ROSE (Mono):
This is Dr. Chris Gilliard, a privacy researcher and a professor of English.

CHRIS:
I think the important thing to note is that most of the systems are trained on… let’s take Amazon, for instance. You know, they had an algo that they say they never put into place. It turns out, the algo was saying that men would be the ideal candidate. And the problem was that they had trained the algo on the people who historically had worked there, who were mostly men.

ROSE:
The algorithm Chris is referring to was trained on ten years of Amazon hiring data. And it learned that any time it saw the word “women” – so if someone said they liked ‘women’s sports’ for example – it would rank that applicant lower. And Amazon engineers actually weren’t even sure that they could fix this bias, so they actually scrapped the project. Other hiring algorithms have been shown to be most excited about any candidate named Jared and men who played high school lacrosse.

And this is the fear of this future, right? That these algorithms that are assigning us our jobs are going to just replicate all of the biases already present in hiring.

ROSE (on call):
So let’s say that we’re, like, in this world where you’re guaranteed to have a job, but you take a battery of personality tests and then you’re assigned a job. What do you think that would be like?

CHRIS:
It sounds like dystopia. I always kind of draw on my own experiences to some extent. My father was a washing machine and dishwasher repair person, and my mother was a cook. And you know, the algorithm wouldn’t have said that I would be a Harvard fellow. That’s not what it would have said. When I hear things like that, I cringe because our society is so deeply racist, and classist, and misogynistic, and the systems that we build, unfortunately, kind of reproduce that. And so it would be really hard, I would say impossible, to produce a system that could somehow strain that out of its processes and do, like, some kind of equitable job of assigning people. I actually don’t think it’s possible.

ROSE (Mono):
Chris argues that these algorithms contribute to something he calls digital redlining.

CHRIS:
So I define it as the creation and maintenance of tech practices, policies, and pedagogies, and investment decisions that enforce class boundaries and discriminate against specific groups. And so what I mean by that is, basically, that I encourage people to think about the ways that tech policies and decisions negatively influence – and harm, for that matter – often, black communities, marginalized populations of people who are already under some kind of scrutiny.

ROSE:
Redlining, as a term, was coined in the 1960s to describe the way that a federal agency called the Home Owners’ Loan Corporation literally color-coded maps in red to mark the areas considered a “risky investment” because of the demographics of those communities. And the thing about redlining is that it has these really long-lasting impacts.

CHRIS:
I draw a lot on my background as coming from Detroit. I mean, if you know a city well and you got a look at the Home Owners’ Loan Corporation maps from that area, you can see the vestiges of redlining where you live. Detroit’s a place where that’s super clear, so there are certain streets like Mack or 8 Mile that delineate kind of “Detroit” and “not Detroit.”

ROSE:
Digital redlining is the same kind of thing: the systematic investment, or lack of investment, in the digital tools that certain communities have. That could be access to high-speed internet, or access to the internet at all. It could be increased surveillance in some communities, or the ways that algorithms discriminate against certain kinds of people.

Right now, as students return to school in the era of coronavirus, some students have easy access to computers and reliable internet, while others do not. And Chris prefers to call these things ‘digital redlining’, rather than ‘the digital divide’, which is something that you may have heard about before.

CHRIS:
Digital redlining is a verb. So, how I encourage people to think about it is that it’s not an accident. It’s not a force of nature. It’s not something that just happens. It’s not bad choices on the part of a Black kid in Detroit. It is policy and investment decisions. You know, it’s when Facebook decides that only certain people will see a job ad or a housing ad. It’s all these things that are not accidents. They are design decisions, and policy decisions, and coding decisions.

ROSE:
A lot of Chris’s work looks at the ways that algorithms and surveillance technologies impact students, and particularly underserved students.

ROSE (on call):
When you have students who may come to you, or if you ever advise students on career stuff, how do you approach that?

CHRIS:
You know, I usually ask them what they’re good at. I usually don’t ask them what they love because, I mean, I haven’t found a way to get paid for the things I love. But I also am pretty confident that if I had to do those for a living, that that love might turn to hate. So I usually just ask them what they’re good at. This is actually a question that not a lot of my students… not a lot of people have asked them before.

ROSE (Mono):
One of the things that’s interesting to me about this future is that it is kind of based on this baseline capitalist fantasy about work.

KIRA:
This fantasy that work is this, you know, space of fulfillment and that we can always have the possibility of a match, that you have something that matches, is itself obviously very problematic and has all these weird class dimensions of it that often get left out.

ROSE:
Modern American work culture likes to talk about passion and to blur the boundaries between work and personal identity. I will be totally honest; this is something that I really struggle with personally. In many ways, my work, this show, the stuff I do, is me. It is my identity in a way that is probably not super healthy. But I also own this show. I own everything I do, whereas employees at big companies are being asked to give their whole selves for… what exactly? A paycheck, yes. But your job is not your family. It’s a job.

We talked about this recently on the Advice For and From the Future episode about workplace surveillance. So many companies ask so much of their workers, and a lot of people from privileged backgrounds sort of expect to find work that they love, that is personally fulfilling. But what if your job was just a job? And that’s fine? This future also takes away a key element of the human experience: agency.

MANISH:
I don’t think that the future that you’re envisioning has a lot of that. What is my recourse if it gives me a job that I really don’t like and I think I’d be better at something else? I think the things that I would care about are something like, how much agency do people have and to what extent can they express their preferences; what is the recourse for when mistakes are made?

We sort of already have the technology to do a bad job of this. And that’s actually what worries me, because we know how to predict from data, we know how to manage these constraints and sort people into whatever occupations. I think the issue is we don’t know how to do these things well. And in particular, predictions are hard, especially when we make predictions about people. So I think that part of it is, in my mind, the most likely to go wrong in the future you’re envisioning.

ROSE:
In making this episode, I’ve been thinking a lot about my own experiences in job placement. I went to an upper-middle-class high school in NJ, and we did actually have to take a career quiz, which told me that I should be a cinematographer, which was a word that I did not know and had to ask my teacher to define. And to end this episode I decided that Julia, our lovely producer, and I should take some tests to see what careers we might be assigned in this possible future.

ROSE (in person):
You took a test like this in high school?

JULIA:
Yeah, I think actually it was when I was in college. For some reason I was like, “I just need to, like, figure out, am I doing the right thing with my life?” And so I looked one up because I never had to take one in high school. Mine told me that I should be a doctor, which was, like, just not… I mean, like, I didn’t want to go into science. I don’t like blood or injuries. I’m very grossed out by that stuff. So it was just like, “No, this is wrong.” 

ROSE:
Okay, I’m going to send you… Let’s start with this one, which is called the Jung Typology Test. This is the one you’ve probably heard of, where it’s like I-N-T-P, like the four letters that you get at the end.

JULIA:
Oh, okay.

ROSE:
I know I’ve taken this and I have no idea what I am. So, let’s take this one first.

[a few minutes later]

JULIA:
All right, I’m done.

ROSE:
All right. What did you get?

JULIA:
Okay, I got INFJ.

ROSE:
Ooh! Okay, I got INFP. That’s Introvert, Intuitive, Feeling, and then I am Perceiving. And are you Judging? Is that…

JULIA:
Yeah. Mine says… So, I have a moderate preference for Introvert and Intuitive but only a slight preference for Feeling and Judging.

ROSE:
Similar, except I’m flipped for Perceiving over Judging. Oh and then on the bottom it says, “Career Choices for Your Type.” Let’s click on that. We probably have some similar ones because we have mostly similar things. What does yours say?

JULIA:
Mine says… Okay, social services, so social work, education, librarian, healthcare is here again. Although this time it says early childhood education and psychology, which is a little different from, like, a surgeon maybe. And then arts and humanities; graphic design. It’s arts and humanities. That seems kind of broad, but… What does yours say?

ROSE:
Similar stuff. Although the examples underneath… So social services, but my examples they give are counseling, religious education, and just education. And then I also got health care but I got psychology and psychotherapy as my two, like, examples. And then arts and humanities, I have musician, archeology, and anthropology, which is interesting.

Okay, I’m going to now send you a different one. And let’s see how these match up, because this’ll be fun. This one’s called 16Personalities. There’s a button on the upper right that says take the test.

ROSE (taking the test):
Some of these questions… “usually prefer to get your revenge rather than forgive” is, like, a very intense question. [laughs] 

JULIA:
Yeah. Oh my gosh. [laughs]

ROSE:
There are a couple that are like, “You tend to focus on present realities rather than future possibilities.”

JULIA:
It’s like, do you know this podcast? [laughs]

ROSE:
We should leave feedback, like, “You should really listen to this podcast called Flash Forward.” [laughs]

ROSE (Mono):
Okay, so we took a bunch of these tests and, honestly, none of them felt all that helpful. And some of them made me personally feel like a bit of a sociopath. One of them had this big pie chart with traits and the main one, this big red half of the pie just said DOMINANCE. And then it said I should be a lawyer or a police officer which… no. Definitely not.

And yet, these tests are kind of appealing because they do offer the promise of some easy answers to a really hard question: What should you do with your life? Wouldn’t it be nice if a 28-question quiz could actually tell you that? Could it find this mythical perfect fit? And, in this future world, you could just get that job no problem?

In reality, work is messy. It’s the site of conflict, always. It can be fulfilling and soul crushing. No algorithm can really fix that. We should definitely push to decouple our self-worth from our jobs. I know that I need to take my own advice there, maybe more than any of you. But that doesn’t happen with a technology fix.

I keep thinking about that sentence that I read to you earlier from the paper about the 1960s conflicts over personality tests. “Social questions were reformulated as technical problems.” The question of how to structure our economy, how to value people, how to feel fulfilled. Those are not technical questions; they’re social ones. And I hope we figure out these questions. I don’t know what the answers are. Maybe they do include a jobs guarantee! But I do know one thing: An algorithm cannot solve this for us.

[Flash Forward closing music begins – a snapping, synthy piece]

Flash Forward is hosted by me, Rose Eveleth, and produced by Julia Llinas Goodman. The intro music is by Asura and the outro music is by Hussalonia. The episode art is by Matt Lubchansky. Our lovely job placement guru from the future was played by Elena Fernández Collins.

If you want to suggest a future that we should take on, send us a note on Twitter, Facebook, or, best of all, by email at Info@FlashForwardPod.com. We love hearing your ideas! Genuinely, I always smile when I get a new idea. And if you think you’ve spotted one of the little references that I’ve hidden in the episode, email me there too. Info@FlashForwardPod.com. If you are right, I will send you something cool. If you want to discuss this episode, some other episode, or just the future in general, you can join the Flash Forward Facebook group! Just search Flash Forward Podcast and ask to join. You have to answer one question. If you don’t answer it, I will not let you in. If you do answer it, I will let you in. It’s very simple.

And if you want to support the show, there are a couple of ways you can do that too. Head to https://www.flashforwardpod.com/Support for more about how to give. If financial giving is not in the cards for you, you can always head to Apple Podcasts and leave us a nice review or just tell your friends about the show. That really does help.

That’s all for this future. Come back next time and we’ll travel to a new one.

[music fades out]

5 comments

John October 4, 2020 at 7:16 pm

Argentina is a terrible example. I’m in Australia, and all my labourers come from Argentina. They’re here because they have no opportunities at home, and the poverty is terrible.

J October 21, 2020 at 8:41 pm

UBI doesn’t leave people in the dust who are disabled or caring for a disabled loved one.
I once dated a guy who used the Myers-Briggs to discriminate against hiring introverts. It didn’t last long. I’ve also taken jobs from extroverts who were good at passing interviews and then couldn’t do the job.
IDK, I get so mad listening to these sometimes, but I also really like them, so ?

Rose October 21, 2020 at 8:50 pm

You can read a bit about what the jobs guarantee says re: disabled folks. http://pavlina-tcherneva.net/job-guarantee-faq/ (click on #8) The basic gist is that a) the jobs that communities create should take into account access and disability and make work for folks of all kinds and b) one of the jobs that they might actually create is caring for those disabled loved ones. So you’d be paid for that, because it’s a need in the community.

Salma November 30, 2020 at 3:35 am

Love this podcast.
Do you have a transcript for this by any chance? :)

Home Sweet Home | Flash Forward February 15, 2021 at 11:09 pm

[…] you can’t get a job to pay your bills, it’s really hard to stay in housing. And as we talked about a couple of months ago, it’s really hard to get a job if you don’t already have a job. And […]

