Summer 2024 Bulletin

Understanding Implicit Bias and How to Combat It

 

A police officer bends to face a driver during a traffic stop. The officer has pale skin, dark hair, and a light beard. The driver has dark brown skin and short black hair.
Photo by iStock.com/Ivan Pantic.

2125th Stated Meeting | April 30, 2024 | Virtual Event | A Morton L. Mandel Conversation
 

Implicit bias is the residue of stereotyped associations and social patterns that exist outside our conscious awareness but reinforce inequality in the world. The implications of implicit bias are present in every field, from law enforcement to courts, education, medicine, and employment. 

Scientific inquiry has advanced our understanding of implicit bias in recent decades. It has also illuminated the limitations of certain cognitive measures and commonplace interventions, including some forms of diversity or implicit bias training used by corporations, universities, and other organizations. How can we improve our knowledge base on effective strategies to counteract bias and its negative impacts on our nation? What changes to organizational policies, procedures, and decision-making structures have shown promise? 

On April 30, 2024, the Academy hosted a virtual event that featured four contributors to the Dædalus volume on “Understanding Implicit Bias: Insights & Innovations”—guest editors Goodwin Liu (California Supreme Court) and Camara Phyllis Jones (King’s College London) and authors Jennifer Eberhardt (Stanford University) and Frank Dobbin (Harvard University)—who discussed some of the strategies and solutions to understand and combat implicit bias. The program included welcoming remarks from Academy President David W. Oxtoby. An edited transcript of the event follows. 

Camara Phyllis Jones

Camara Phyllis Jones is a Commissioner on the three-year O’Neill Lancet Commission on Racism, Structural Discrimination, and Global Health and a Visiting Professor in Global Health and Social Medicine at King’s College London. Elected to the American Academy in 2022, she is the guest editor (with Goodwin Liu) of the Dædalus volume on “Understanding Implicit Bias: Insights & Innovations.”
 

A person with dark brown skin and curly black and gray hair, wearing a brightly colored striped blouse and dark blazer, smiles at the viewer.
Photo courtesy of Camara Phyllis Jones.

I am delighted to be invited to share some framing comments. Today’s conversation is the evolution of a workshop convened by the Committee on Science, Technology, and Law of the National Academies of Sciences, Engineering, and Medicine (NASEM) three years ago that focused on the science of implicit bias and the implications of that science for law and policy. Goodwin Liu and I cochaired that workshop.

I was initially surprised when the organizers reached out to me to be involved in an effort focused on the science of implicit bias. I have always characterized my work as focusing on naming, measuring, and addressing the impacts of racism on the health and well-being of our nation. And it has been my observation that when people talk about implicit bias, they sometimes use that term to avoid saying the word “racism.” To my delight, many of the participants in that initial NASEM workshop explicitly contextualized their work on implicit bias within the broader context of the reality of structural racism in our country today.

Fast-forward three years. Many of the papers that were presented at that NASEM workshop, as well as several new ones that we commissioned, have been published in the Winter 2024 issue of Dædalus. It’s an amazing volume. I do hope that you will access the essays online. 

To frame today’s conversation about implicit bias, I would like to talk about racism. That is because implicit bias is both a reflection of and a contributor to racism.

When I say the word racism, I am clear that I am talking about a system, not an individual character flaw, or a personal moral failing, or even a psychiatric illness as some people have suggested. Yes, racism does manifest in all of those ways, but in its essence, racism is a system of power that structures opportunity and assigns value based on the social interpretation of how one looks (which is what we call “race”).

The impacts of this system are threefold. It unfairly disadvantages some individuals and communities. But every unfair disadvantage has its reciprocal unfair advantage, so it also unfairly advantages other individuals and communities. And whether an individual or community is unfairly disadvantaged or unfairly advantaged, racism is sapping the strength of the whole society through the waste of human resources.

But where is implicit bias in that? As Keith Payne and others have said, implicit bias is just the manifestation in the mind of what’s going on in the environment. Implicit bias contributes to, and is a reflection of, structural racism. More on that in a minute. Many people have trouble saying the word “racism” and find it easier to say the words “implicit bias.” But we must name racism because we must name any problem in order to get started on the solution.

There are four key messages that we need to communicate when naming racism: 1) racism exists; 2) racism is a system; 3) racism saps the strength of the whole society; and 4) we can act to dismantle racism. In my teaching on issues of race and racism, I share many allegories to communicate these four key messages. Today I want to share the allegory that I call “Cement Dust in Our Lungs” to help us understand that racism is a system.

Recognizing that racism is a system helps us understand how to move forward and address implicit bias and all of the other manifestations of racism, which occur at structural, interpersonal, and internalized levels. As I talk about cement dust in this allegory, I want you to think cement dust, but I also want you to think racism.

Imagine that there’s a cement factory and it is spewing cement dust. The cement dust fills the air, and for any of us who are near that factory for any amount of time, we are going to develop cement dust in our lungs. Having cement dust in our lungs is problematic for all of us, even though it might affect different ones of us differently. Cement dust in my lungs might make me feel “less than” (internalized racism), while cement dust in somebody else’s lungs might make him feel that he can, with equanimity, crush the life out of another human being with his knee for 9 minutes and 29 seconds.

Cement dust in our lungs is bad for all of us, even for people who don’t recognize or acknowledge that they have cement dust in their lungs. So if it’s a problem, what should we do? Should we focus on the individual because there’s cement dust in our individual lungs? I’m going to share two ideas of interventions that focus on the individual.

The first one is a screening program. We can screen to see how much cement dust different people have in their lungs, and if somebody has too much, an alarm goes off.

That is a very good strategy for people who don’t believe they have cement dust in their lungs. But how much is too much dust? And what do you do with the people when the alarm goes off? We can’t vote them off the planet.

One of the byproducts of this kind of approach is that it makes people not want to talk about or think about cement dust. (Indeed, people do not want to talk about racism because they think that if they say the word, other people are going to be peering deeply into their souls to figure out “Exactly how racist are you?”) But it is a useful strategy because it enables us to at least acknowledge that we do indeed have cement dust in our lungs.

The second intervention that focuses on the individual is to set up a cleansing spa. People who know that they have cement dust in their lungs and want it out could volunteer to go into the cleansing spa, while others might go into the cleansing spa because it is important to their employers. Inside, maybe they will start reading widely and especially reading history. Maybe they will start talking to strangers or venturing across town to experience our common humanity in very different contexts. And maybe after these and other experiences they will come back out as good as new. But if they come back out into that cloud of cement dust, the cement dust will just reaccumulate in their lungs. So it’s not a very permanent solution unless they spend their whole lives inside the cleansing spa rather than out in the world.

Aha! That gives us an insight. Since the cleansing from our short-term efforts gets undone when people come back out into the cloud, maybe our intervention should be about acknowledging the cloud. What does that look like? Well, if I acknowledge the cloud, then at least I know that I wasn’t born with cement dust in my lungs. I also recognize that I need to do something to stop accumulating more and more cement dust in my lungs. So maybe I put on a gas mask. I start my own individual anti-cement dust journey, recognizing that the gas mask won’t in and of itself extract the dust already lodged in my lungs, but praying that it will at least protect me from more dust accumulating there.

In fact, I understand that my individual anti-cement dust journey is a 24/7 commitment. I walk around wearing my gas mask without embarrassment, even proudly, understanding that if I take it off for even a moment, more harmful dust will accumulate in my lungs. When I see myself reflected in a glass window or a mirror, I am reassured to see that the mask is still in place, understanding the importance of remaining steadfast in my individual anti-cement dust journey.

And when you see me, you may ask me, “Dr. Jones, why are you wearing a gas mask?” This is my opportunity to describe the cloud of harmful cement dust that we’re in, one that most of us do not even see. I can name the cloud and point out its dangers to all of us. And then I can ask, “Do you want to keep breathing cement dust?” And more and more people will say “No!” and will put on their own gas masks and start their own individual anti-cement dust journeys. So is that the answer? Maybe what we need to do in this country is just have 330 million gas masks? Little baby gas masks, old people gas masks, all kinds of gas masks?

Well, it’s a start, but it is insufficient because if at any minute we take off our gas masks, all bets are off.

What we really need to do is dismantle the factory!

Many people will ask, “What factory?” Indeed, the cement factory has been so obscured from view by all of the cement dust that it has been spewing into the air that many people, even those who are aware of the dust in the air, don’t realize that a factory is operating in the middle of it all. Those of us who recognize that we’re living in a cloud of harmful cement dust have to not only call out the dust in the air (White supremacist culture) but also call out the factory (systemic/structural racism).

We who can see more clearly and breathe more easily through our gas masks need to resolutely approach the factory. Next, we need to ask “How is this factory operating here?”, examining structures, policies, practices, norms, and values. And finally, we need to organize and strategize to act, working collectively to dismantle the factory and put in its place a system in which all people can know and develop to their full potentials.

In this allegory, we have considered interventions at different levels of focus to address implicit bias as the cement dust in our lungs. At the level of the individual, a screening program like the implicit association test or a cleansing spa like DEI trainings are good interventions, but they can backfire or have only short-term impacts. At the level of acknowledging the cloud, naming racism in our history and in our culture, we are motivated to start our own individual antiracism journeys, which can include studying history, talking to strangers, going across town to experience our common humanity. But if we really want to set things right, then we must address the existence and operation of the factory itself—the structures, policies, practices, norms, and values that are the mechanisms of systemic racism.

We need to understand that yes, there is harmful cement dust in our lungs, but we were not born that way. We need to recognize that the dust in our lungs comes from the dust in the air, and that the dust in the air comes from the cement factory. When we are willing to name the cement factory, examine how it is operating, and organize to dismantle it, we will improve the health and well-being of all of us for generations to come.

Let me now turn things over to my wonderful colleague, Goodwin Liu, who will start our conversation about understanding implicit bias and how to combat it. I hope that my “Cement Dust in Our Lungs” allegory has given us some intuition about the extent to which we have to move if we want to combat implicit bias as well as all the other manifestations of racism.

Goodwin Liu

Goodwin Liu is an Associate Justice of the California Supreme Court. A member of the American Academy since 2019, he serves as the Chair of the Board of Directors at the Academy. He is the guest editor, with Camara Phyllis Jones, of the Dædalus volume on “Understanding Implicit Bias: Insights & Innovations.”
 

Headshot of Goodwin Liu, who has tan skin and short black hair. He wears a gray suit, blue tie, and faces the viewer smiling.
Photo by Martha Stewart Photography.

Thank you, Camara, and thank you all for joining this webinar. It is a pleasure to be with all of you today. I want to begin by acknowledging what an honor and privilege it has been to work with Camara on the Dædalus volume. Let me also note that this project was originally conceived by the Committee on Science, Technology, and Law at the National Academies of Sciences, Engineering, and Medicine. I want to acknowledge our collaborators at the National Academies, including Anne-Marie Mazza and others there who have been so dedicated in supporting this work, intellectually and otherwise, over the years.

Also, tremendous kudos to Phyllis Bendell and the team at the Academy plus all the authors in the Dædalus volume for pulling together what we hope is one of the most up-to-date and comprehensive reviews of the state of knowledge about implicit bias and some of the interventions that are emerging. 

In addition to Camara, we have two wonderful experts to help us understand these issues: Frank Dobbin and Jennifer Eberhardt, both of whom are authors in our volume. 

I am going to start with Jennifer and pick up right where I think Camara took us. I think many in the audience probably have a fair intuition about what implicit bias is and have seen some of the evidence behind it. Jennifer, a lot of the evidence has been gathered through your own work over the last few decades. Your early work in behavioral and social psychology unlocked some of the interesting cognitive aspects of how people can be primed with images and how that affects their behaviors with regard to bias. In recent years, you have been thinking more about structures and practices.

I want to read an interesting quote from your Dædalus essay. You and your coauthors say:

Conceptualizing implicit racial bias as merely a byproduct of human cognition overlooks the critical scientific insight that racial bias exists not only in the head, but also in the world. Implicit bias is the residue that an unequal world leaves on an individual’s mind and brain, residue that has been created and built into institutional policies and practices and socialized into patterns of behavior over hundreds of years through the workings of culture.1

You go on to describe what you call a “sociocultural approach to racial bias.” Could you elaborate on the evolution from a purely cognitive understanding of implicit bias to this sociocultural approach, which is an interesting development in the scientific study of this field?

Endnotes

1. Rebecca C. Hetey, MarYam G. Hamedani, Hazel Rose Markus, and Jennifer L. Eberhardt, “‘When the Cruiser Lights Come On’: Using the Science of Bias & Culture to Combat Racial Disparities in Policing,” Dædalus 153 (1) (Winter 2024): 124–125.

Jennifer Eberhardt

Jennifer Eberhardt is the William R. Kimball Professor of Organizational Behavior at the Stanford Graduate School of Business, Professor of Psychology, and Cofounder and Codirector of Stanford SPARQ at Stanford University. She was elected a member of the American Academy in 2016.
 

A person with brown skin and brown hair, wearing a purple top, smiles at the viewer.
Photo by Nana Kofi Nti.

Thank you, Goodwin. Yes, I can elaborate on the evolution of the field. Let me start with Gordon Allport. This sociocultural approach to bias is something that he wrote about in the 1950s in his landmark book, The Nature of Prejudice. It is an approach that James Jones later highlighted with a particular focus on institutional racism. Yet from the 1980s until very recently, the dominant model of bias has been one that has focused on individual cognition. 

Bias has been treated as an offshoot of this basic categorization process that our brains rely on to function, so for the past several decades there’s been little focus on history. There’s been little focus on the broader cultural environment. The thinking has been that much of what we need to understand about bias could be found in the head by simply studying the basic mechanics of cognition. But recently, that has all begun to change. There’s been a rebirth, not only of this sociocultural approach to bias, but there is now an understanding that there are multiple levels at which bias can function.

In the model of the culture cycle, which we highlight in our Dædalus essay, we discuss bias as operating at four different levels: at the level of the individual, at the level of interactions between individuals and groups, at the level of institutions, and at the level of ideas that we pass on across generations. We believe that rather than studying bias one level at a time or from one perspective at a time, we should examine multiple levels at once. We need to better understand how these levels affect one another.

LIU: I want to ask you about your work with the Oakland Police Department. I am a resident of Oakland, so I’m particularly interested in what you are doing there. You have been working with the Oakland Police for a number of years now to help mitigate racial bias in policing. I think many members of our audience are interested in effective interventions in their own institutional settings, so let’s choose this one as a starting point. How did you get involved with the Oakland Police, and how are you using science and data to help spur change?

EBERHARDT: To answer that question we need to go back over twenty years. There was a gang within the Oakland Police Department, and they called themselves “the Riders.” They harassed community members. They assaulted, planted evidence on, and filed false reports against their victims, who collectively served over forty years behind bars for crimes they did not commit.

There were two civil rights lawyers in Oakland who filed a class action lawsuit on behalf of 119 people, 118 of whom were African American, who claimed to suffer harm at the hands of members of the Oakland Police Department. The city and the department entered into a negotiated settlement agreement that outlined over fifty steps the Oakland Police Department would need to take to reform itself. One of those steps required the department to begin to track all of their pedestrian and car stops by race. I was brought on as a subject matter expert to analyze those stop data, and I enlisted a whole team of researchers from Stanford in this work.

Many people in the Oakland community suspected that there were huge racial disparities in who police officers stopped, who they searched, and who they arrested, but for a long time, the department hadn’t kept data on any of those actions as a function of race. My team and I were the first to rigorously analyze their police stops in this way. And it was the first time the department had the opportunity to have an independent assessment of their data to answer allegations about racial profiling. There was a lot of interest in our work and in the findings from the community members in Oakland and from the police department, the mayor, the city council, the plaintiffs’ attorneys, and the federal monitor, who is still in place today.

I think everyone saw the possibility that data could be used to spur change. We thought we would be working with the Oakland Police Department for a couple of years, but ten years later we are still there.

LIU: What did you learn from the data? 

EBERHARDT: We analyzed the data and produced a report, which found that Black people were significantly more likely to be stopped, they were more likely to be searched and handcuffed, and they were more likely to be arrested than White people. Just to provide a little texture here: at the time we analyzed the data, only 28 percent of the Oakland population was Black, but roughly 60 percent of the stops that officers made were of Black people. Black people were disproportionately stopped even when we controlled for over two dozen variables that could have explained those disparities.

I think perhaps the most jaw-dropping finding we uncovered was related to handcuffing. Nearly one in four Black men were handcuffed in the course of vehicle stops, even though the vast majority of those stops were for minor infractions. It’s jaw-dropping not just because of the numbers, but also because at the time it wasn’t a standard variable that analysts examined. They would focus primarily on stops, searches, and arrests. 

When we first went to Oakland we held focus groups to try to understand from the community members’ perspective what was going on and what they cared about the most. Handcuffing was one of the things Black men in particular talked about. They would be stopped for these minor infractions and then pulled out of the car and handcuffed, which they found humiliating. They felt like they were being treated as criminals from the start. 

On the form that police officers completed during a stop, there was a checkbox for handcuffing, but our team didn’t focus on it because it was not a standard variable that analysts were using at the time. Talking to community members opened our eyes to this other variable, which ended up playing a huge role in terms of the experiences that people were having in Oakland, and it was indeed the experience that had the most dramatic outcomes for people—where the racial disparities were greatest.

LIU: How do you use this information to make improvements? And could you describe one of your other findings, which concerned the words used at the beginning of a stop. How does the predictive nature of the words used by police officers lead to escalation or not? 

EBERHARDT: Yes. However, we have to fast-forward ten years. We have worked with a number of police departments since then. For the data set you are describing, we were looking at Black drivers only. That is because they were the drivers who experienced the most escalated stops. Using large language models, we were trying to predict which stops would end with the driver being handcuffed, searched, or arrested. We found that there was a linguistic signature to these escalated stops.

We could tell by the first forty-five words that an officer uttered during a stop, roughly the first twenty-seven seconds of a stop, how that stop would end. There were two elements to that linguistic signature. One was that the officer started the stop with an order, and the second was that they did not explain the reason for the stop. If both of those were present, the stop was likely to end with an escalated outcome.

LIU: These are striking findings. How do you work with police departments, in general, to use data to make improvements in their practices?

EBERHARDT: Let’s take the Oakland Police Department as an example. When we released our report in 2016, many members of the department didn’t like anything about the report. They didn’t even like the title of the report, which was Data for Change, because they did not believe there was a need for change. They felt that the racial disparities we found could be explained by the simple fact that Black people just commit more crime than other people. 

The deputy chief at the time, however, did see the need for change, and he organized a small task force of about fifteen people in the department to discuss the findings with us. He was careful to choose people who were critical of the report and others who were not so we had both groups sitting at the table. He also chose people who played different roles, from line officers to sergeants to the assistant chief of the department. Some had recently arrived in the department; others were veterans. It was a real mix of people sitting around the table. 

Those who were most critical of our report and the findings claimed that a lot of the Black people they chose to stop were already on their radar. In other words, they had intelligence on these people. They were upset that we didn’t take this into account and because of this, our analysis of the racial disparities made it look like they were simply profiling while they claimed they were making reasoned decisions—based on prior information—about who to stop.

At the time, the department did not keep track of intelligence-led vehicle stops. So there wasn’t any data to analyze. Our first task therefore was to decide on what counted as an intelligence-led stop. We ended up defining it as a stop in which the officer has prior evidence to tie the person in the car to a specific crime. We asked the people sitting around the table to tell us how many of the stops that officers are making in the department are intel-based stops.

We heard numbers like 85 percent, 90 percent, even 99 percent of the stops are intelligence-led. We decided as a group to begin to track these kinds of stops for the first time. And we did this by simply adding a question to the form officers complete during a stop: Is this stop intelligence-led, yes or no? If the officers decided that the stop was intel-led, they had to state the source of that intelligence. So that was the intervention.

It sounds pretty simple to just add a question to the form, but there are a lot of social psychological principles baked into it. We know that the potential for bias increases as people make quick decisions. So we were intentionally slowing officers down with this new checkbox. We were pushing them to use concrete information in place of their intuition about who to stop. We also know that the potential for bias increases when there’s a lack of accountability. This new metric served as an accountability tool.

We got everyone in the department to agree on the same definition for an intel-led stop. And the police leadership then trained officers on how to spot it. The leadership also encouraged these kinds of stops because they were more evidence-based. In fact, the leadership changed the norms for what good policing looked like. We know that norms can also influence the probability that our biases will be triggered. When we first added the intel-led question to the form, we found that about 20 percent of their stops were intel-led, not 99 percent, as they initially reported. 

We also found that as the number of intel-led stops rose, the total number of drivers stopped decreased because officers were now stopping drivers only when they had evidence of wrongdoing. There was less stopping based on intuition. In fact, the stops of African Americans dropped by over 43 percent in that first year alone when we began tracking the data. This drop occurred while the crime rate was also going down, so stopping fewer Black people did not make the city any more dangerous, as many people feared it would.

We accomplished a number of things during this process and with this task force. We looked at practices, we changed policies, and we got the police department to appreciate the act of data collection—the idea that you can develop a metric, analyze it, and use it to bring about change. It started with a simple checkbox, but then that checkbox led people in the department to have open conversations about the potential for racial bias in their decision-making for the very first time since we arrived. 

LIU: Thank you, Jennifer. I hope your comments stimulate some thoughts in our audience about their own organizational settings and how simple interventions can convey some very powerful signals.

Let me now turn to Frank Dobbin. Frank, your essay in Dædalus with your coauthor, Alexandra Kalev, focuses on different organizational settings, namely, corporations and firms. You describe your work studying antibias or diversity trainings in these organizational contexts. Could you tell us a little bit about your data set and about some of your findings about diversity trainings?

Frank Dobbin

Frank Dobbin is the Henry Ford II Professor of the Social Sciences and Chair of the Sociology Department at Harvard University.
 

A person with pale skin and light brown hair, wearing business attire, looks out into the distance.
Photo courtesy of Frank Dobbin.

First, let me thank you and Camara for the invitation to contribute to this issue of Dædalus. It is a fantastic collection of essays, with the latest thinking and research on implicit bias. 

Alexandra and I, with our research teams, have been looking at data from over eight hundred companies across more than forty years. These companies have over eight million workers. The bigger data set comes from the Equal Employment Opportunity Commission’s workplace census of private sector employers, which covers all employers with at least one hundred workers. That census goes back to 1966. We used a sample of eight hundred employers to look at the adoption of a range of different diversity programs over time. We are looking at diversity training in the context of lots of other diversity programming and other changes that companies have been making to their hiring and promotion processes.

One advantage of our data set is that we can look over time and take a snapshot each year to see the effects on a company’s management diversity when they introduce diversity training. We can compare the effects of particular practices or particular changes on the actual workforce in the years after companies adopt these changes, such as trainings. We are looking at a very long period of time, 1971 to 2015. 

The question for us is which things should we be working on? Within an organization, it’s hard to think about changing societal ideas, or the dust cloud that Camara talked about, but we can certainly change the institutions that careers are organized through. What we find when we look at antibias training should not come as a surprise to anybody who has been following social science research for any time. There have been hundreds of studies showing that antibias training doesn’t permanently and significantly reduce bias.

The first review of these studies was by Cornell sociologist Robin Williams in 1947. We knew in 1947 that training exercises couldn’t really change individual-level bias, certainly not significantly and permanently. Recent meta-analyses that look at hundreds of studies at once show the same thing. There can be small positive effects of antibias training; that is, antibias training can reduce bias, but only briefly, and the effect sizes are very small.

As Jennifer was saying, biases are based on stereotypes and experience with the world. These things are built up over decades and they are very hard to change. As Camara described, if you go to a cleansing spa and then come back, you are still going to be exposed to the same stereotypes at a societal level, so it would be very difficult to produce sustained change even if you could brainwash everybody.

So why isn’t training promising? I think the best evidence that it isn’t promising as a strategy doesn’t just come from the fact that it’s hard to change what’s in people’s brains. You could in principle—and this is sometimes the argument that trainers make—make people aware that they are biased and then cause them to intervene in their own decision processes.

I think that may be the basis of Jennifer’s amazing intervention with the Oakland police. They are aware of what is going on. But when we look at the workplace, we see that the most common kinds of trainings, which are legalistic, actually lead to reductions in workforce diversity because they spark backlash against the training itself. People leave trainings feeling angry; they feel they have been blamed for something that they think is not their fault. They think inequality is an institutional problem or a problem of ideas at the societal level. To me, it’s very discouraging that corporations, universities, nonprofits, and governments spend so much of their money trying to reduce bias at the individual level through training when there is really no good evidence that it works either to reduce cognitive bias or to change what goes on in the workplace.

LIU: That is a very sobering conclusion, backed by social science. You draw a distinction between what you call “legal-compliance trainings”—and we have all done a lot of these—which you say are less effective, and “cultural-inclusion” models of training, which can be effective. Would you elaborate on the differences between these trainings and what people should be focusing on for trainings that are more promising?

DOBBIN: For trainings that are legalistic in orientation, people get an implicit bias component so that they understand that bias is widespread, and then the rest of the training has to do with what the law specifically forbids in the workplace. Managers are trained in what they shouldn’t be doing and what they shouldn’t be saying, and they come away feeling that they’ve been accused of being racist. Nobody thinks they’re racist. So it is not very useful to try to convince people that they are racist. The threat that “the law” will sanction their company for their own behavior often leaves them angry.

Training focused on cultural inclusion, rather than the law, can be effective. But if we look at all of the trainings done in corporations and by other employers, we see that a very small minority of training sessions take this form, and eschew discussion of the law or potential penalties. What works is training that focuses on cultural-inclusion skills, like how to listen to other people, how to take the position of another person, and how to negotiate between people who are having problems with one another in the workplace. In our studies, we look at how these trainings affect the diversity of the management workforce, because that is the part of the workforce that is the hardest to change.

We see some positive effects on the diversity of the management workforce, and no negative effects on historically underrepresented groups. I find that extremely promising, but the problem remains that companies want to put almost all of their eggs in the basket of diversity training. While cultural-inclusion training can have some positive effects, other simple structural or institutional changes in the career system can have much larger effects. Most companies aren’t adopting those, in part because they are being told that training will really move the needle, and in part because at some companies, leaders are happy to go along with legalistic training because they don’t really want to see the workforce change. 

LIU: What are some examples of structural changes that have yielded dividends?

DOBBIN: Based on the work of Gordon Allport, we know that a better way to reduce bias is by asking groups that are normally isolated from one another at work, and that hold racial animus, to work together. Some of the first studies of this were done in the European theater during World War II by Allport’s colleague at Harvard, sociologist Samuel Stouffer and his team.

We see that changes to career systems that are easy to implement, but not as common as diversity training, can have quite rapid and significant effects on what the workforce looks like. For example, sending a company’s managers to recruit at historically Black colleges and universities, Hispanic-serving institutions, and women’s colleges, as well as the usual historically White colleges, like Stanford and Harvard where Jennifer and I teach, democratizes the recruitment process, so that candidates from schools with large concentrations of people from underrepresented groups are more likely to be interviewed.

It also exposes managers to new pools of potential recruits. If you’re going to historically White colleges, why wouldn’t you go to historically Black colleges? The effects of doing that are much bigger than the effects of the best of diversity trainings.

The second thing that we find to be very effective is formal mentoring programs. Though many organizations have a lot of mentoring going on, we found that when mentoring is left to informal processes, Black, Hispanic, Asian American, and women workers are often left out because the potential mentors in most organizations are White men. And they choose to mentor people like themselves. So a formal program can ensure that people from these other groups get mentors. 

These rarely come under the rubric of DEI programs, but they have very strong positive effects because mentors are key to retention. If you have a mentor, you know somebody is looking out for you. While these formal mentoring programs are open to everybody, they are disproportionately used by women and by Black, Asian American, and Hispanic workers.

Cross-training programs are another way to put people into contact with people unlike themselves, and that’s because departments in organizations are often segregated by race and gender. 

Moving people around to different departments increases contact among people from different groups and can help break down silos that may cluster women in HR and White and Asian American men in finance. In our analyses, we found that cross-training programs reduce inequality between groups in management.

And then self-managed teams, which eliminate the supervisor and have people from different departments work together to manage their projects themselves, also have positive effects, particularly on the representation of women in management. That’s partly because women are often in “un-promotable” departments, and on these self-managed teams they show they can be leaders. These are just four of the institutional and systemic changes that can really move the needle in terms of changing what the managerial workforce looks like. And collectively they have a larger impact than even the best kind of diversity training.

LIU: Thank you, Frank. Before we turn to audience questions, let me ask each of you about your future directions for research. You both have done amazing work that has helped us understand more about interventions. Jennifer, you are now involved in studying body-camera footage. Could you describe that work for us? 

EBERHARDT: Our work using body-camera footage is very promising. The footage from these cameras allows us to look for patterns across many interactions at once. For the first time, we have been able to test the extent to which there are differences in the respect that officers communicate to Black and White drivers. Through SPARQ, the center that I codirect at Stanford, we put together an interdisciplinary group of researchers (which includes computer scientists and computational linguists) to systematically examine this footage at scale. 

We conducted an initial study in Oakland, where we examined nearly one thousand traffic stops. We used machine-learning techniques to comb through the words officers used during those stops. We found that even when officers were behaving professionally, they demonstrated less respect toward Black drivers compared to White drivers. They expressed more concern for the safety of White drivers. They apologized more and offered more reassurance to White drivers. In fact, using large language models, we could predict whether an officer was talking to a Black person or a White person. We published a paper on these findings in 2017.

When members of the Oakland community became aware of the findings, they pushed the police department to address the findings in some way. The department came to us and asked us to develop a training module, and we helped them to do that. They wanted us not only to describe our findings, but to offer clear takeaways on what officers could do to communicate in a respectful manner. We developed that training module, the department used it, but we didn’t stop there. We decided to look at officers’ camera footage weeks before they were trained and weeks after they were trained to evaluate whether the training was effective.

This is significant because most trainings like this, at least in the policing industry, are not evaluated. Trainings on procedural justice and implicit bias that were happening all over the country were rarely evaluated. For the first time, we used footage as a way to understand whether this kind of training was effective. We found that after the training, officers were more likely to explicitly state the reason for the stop. They were more likely to express concern for the safety of Black drivers. They were more likely to offer reassurance to Black drivers. We found even when there’s a long history of distrust, even in places where people feel like they can have their dignity taken from them, especially during these police stops, that change is possible. There’s a lot we can do with the footage, but unfortunately the vast majority of the footage is just never examined. This is something that our group at SPARQ aims to change. It’s an industry-wide change, not just in Oakland, that we feel is badly needed.

The federal government has incentivized the use of these cameras for years, and they’ve done that because of the potential for them to be used as an accountability tool. But to really get the full benefit of these cameras, we feel that the federal government should also consider incentivizing footage analysis. So that is where we are on body-cam footage.

LIU: Thank you, Jennifer. Frank, I want to ask you about the future directions of your research, but I am going to add in some questions from the audience. Several people are wondering if you have done any work in the university setting, and if there are things in an academic setting that can be implemented along the same lines as in the corporate world to improve upon DEI trainings as they exist today?

DOBBIN: We are now doing research with a similar data set on universities, using data from 1993 to 2016, and we are looking at the introduction of all kinds of programs to diversify the faculty, although we are seeing some spillover effects for graduate students as well.

The pattern that I have described for corporations is very similar to what we see as being effective in universities. That is, individual-level bias training has become very popular in universities in the last two decades. It’s not very effective, and there are conditions under which it can backfire. Unfortunately, it is not really moving the needle in a positive way. But special recruitment programs to try to find Black and brown faculty, and women faculty in the sciences, are having solid positive effects on women and faculty of color. So are formal mentoring programs that make sure that everybody gets a mentor. Also very effective are work-life support programs, which have been particularly effective for women, but which can also be effective for non-White men. This is partly because work-life support programs, such as family and medical leaves, tenure-clock extensions, dual-career programs, and childcare supports, signal that the university is accepting of people who have work-life challenges, and that the university is open to trying to work out solutions with them. 

When we talk to people about how the tenure clock extension helped them, or how the parental leave program helped them, or how the special childcare programs for parents who are at a conference or who have a child who’s home sick helped them, they say it is partly for the substantive help they provide but it is also about changing the idea of what a professor is. These programs send the signal that it’s OK to be a professor and have a busy family life. 

LIU: Let me pose a question to Camara, Jennifer, and Frank that has been bubbling through several of the questions and comments from the audience. I think it is fair to say that we have seen a backlash to DEI efforts building over the last few years. Some people might say, as Frank has detailed, that some of these programs have been ineffective: They have cost a lot in terms of resources and people’s time, and they haven’t moved the needle on the outcomes that we desire.

How has the change in context in terms of people’s attitudes about DEI affected the direction of your work? You can answer that at whatever level you think appropriate. You are all amazingly articulate experts who have done a lot of scientific research on this. Many people in organizations would love to have you work with them. What should people do in their own organizational context? How can this data-driven approach be generalized and adapted? 

JONES: The anti-DEI sentiment, the anti-critical race theory vitriol, the laws that say you can’t talk about DEI or racism or history, the Supreme Court decision banning affirmative action in higher education, the book bans spreading across the nation are all signals of massive racism denial. And they help us to realize what antiracism interventions we need. 

Because racism is a system of structuring opportunity and assigning value, we need to dismantle the oppressive opportunity structures and we need to nullify the dehumanizing values. But I have come to recognize that even if we had successful interventions on those fronts, if we were doing reparations, if we had massive Marshall Plan investments in communities of color, if we were supporting all children and their families, the racism denial that has characterized our society for a long time and that seems to be amplifying right now would stand in our way.

I understand now that there are four things that an antiracism or antibias approach needs to have. I have already described the first two: We need to address the differential opportunity structures and we need to address the differential value assignments. The third: we need to recognize our current context and confront racism denial. 

How do you confront racism denial? I guess the first thing would be to have people read history, or have people go across town, break through their bubbles of experience to recognize that just across town there are people who are just as kind, funny, generous, hardworking, and smart as they are who are living in very different circumstances. Frank, it is an expansion of what you were talking about in terms of group contact within the work experience. We need group contact in the whole of society. 

The fourth thing we need to do is anticipate and prepare for pushback. We shouldn’t be surprised that after the election of a Black president, there would be the election of a Make America White Again president. These experiences have expanded my understanding of the barriers, the context in which we are operating, and the future preparation that is necessary. We can’t simply address what racism does and try to undo that. We need to deal with bigger contextual factors.

LIU: Frank, would you like to comment on that?

DOBBIN: I agree with everything you say, Camara. I think we are at a time when there is virulent opposition that is well organized. It’s not a new thing. The anti-DEI forces have been working behind the scenes for years. What I find particularly discouraging is the number of people who are trying to hide what they are doing on the DEI front. They are trying to keep it quiet. I don’t know what the solution is.

Where I see a ray of hope, and where we have been successful, is in the workplace where we can create group contact that undermines racism. Despite efforts at desegregation in housing and in schools, those areas still remain highly segregated. Most workplaces are much more integrated now than they were ten, thirty, or fifty years ago. That’s not really true of schools and neighborhoods. But we’re still fighting segregation by department within the workplace, and the programs that are most effective help to break down that segregation by creating contact across groups and departments. 

In the current moment, there’s a real backlash against diversity programming. But there is a silver lining to this cloudy day. The things that are most high profile as DEI efforts—like grievance procedures and diversity trainings in the workplace—don’t work anyway, and they annoy many people. 

But there are things that work well without eliciting backlash: mentoring programs, self-managed teams, job rotation, and recruiting at all kinds of colleges and universities. It’s also important to keep in mind that work-life programs are DEI programs, but under the cover of night. People consider them to be programs that White women fought for and that benefit White women. But in the corporate world, work-life programs are very helpful to men of color for the simple reason that those men tend to be in two-parent households where both parents have to work full-time. Anything that can help them keep their job so they can move up is good for their careers. The silver lining of the cloud is that we can do a lot of positive work in this interregnum where we’re stuck with a pretty unpleasant political situation.

LIU: Before I turn to Jennifer, who will have the last word here, I want to make two quick observations. First, Frank, many of the things that you have mentioned are not implicated by the Supreme Court’s ruling on affirmative action. And second, you helpfully pointed out some of the ways in which these programs are labeled. We tend to think of DEI as one thing and then work-life as another, but language and categories matter. If we think more flexibly, we might be able to go farther from an evidence-based standpoint. Jennifer, would you like to add anything? 

EBERHARDT: Among trainings, I think we need more information about which specific aspects work and which don’t work. Unfortunately, many organizations, such as workplaces, schools, and police departments, haven’t rigorously evaluated the trainings—perhaps because you can’t get credit for a training that you’re delivering or paying for if you have evidence that it doesn’t work.

It feels like we are in this perverse system in some way, but Frank is right that there is a silver lining. The problem is that everything is being thrown out—the good with the bad. There is this feeling that focusing on racial bias and inequality isn’t important, that it doesn’t matter, that it simply aggravates things. Talking about race makes people uncomfortable, and so it’s a way to go back to not having to do that, to not talking about race anymore.

When I think about the moment that we’re in now and what it means for me personally and for my work, as a social psychologist I’m focused always on the social environment. Now we’re faced with a different environment from what existed two years ago. To have the opportunity to understand that environment is really important because, in all likelihood, it’s going to come back again. We want to have tools ready to be able to understand it, address it, and find ways to keep moving forward across these differing and challenging spaces and environments. 

LIU: Thank you. That’s a great place to end. I want to thank all of you for being on this panel today and for giving us a sense of what it means to really engage in a disciplined study of these questions. They are not easy questions, and part of what we try to do in the Dædalus volume is to bring the benefit of science and rigorous study to bear on separating the effective from the ineffective and to illuminate a path forward that is based on evidence. 


© 2024 by Camara Phyllis Jones, Goodwin Liu, Jennifer Eberhardt, and Frank Dobbin, respectively
 

To view or listen to the presentation, please visit the Academy’s website.
