Everyday Erinyes #229

 Posted at 10:00 am  Politics
Aug 22, 2020
 

Experts in autocracies have pointed out that it is, unfortunately, easy to slip into normalizing the tyrant, hence it is important to hang on to outrage. These incidents which seem to call for the efforts of the Greek Furies (Erinyes) to come and deal with them will, I hope, help with that. As a reminder, though no one really knows how many there were supposed to be, the three names we have are Alecto, Megaera, and Tisiphone. These roughly translate as “unceasing,” “grudging,” and “vengeful destruction.”

Like TC and others, I can say that this time of action for racial justice is not my first rodeo. And I know, because I remember, that we made mistakes last time around. Most, maybe all, fell under the umbrella assumption that winning was winning (at least we “wypipo” assumed that; I don’t think black or brown people were ever really fooled).

Now is a time when we are seeing a light at the end of a tunnel (and we hope it won’t be a train). We won’t know for a while what kind of light it is, but in the hope that it will be the blessed light of day, I welcome advice on how to avoid previous mistakes (which are still being made, actually) and find ways to truly advance.
================================================================

Diversity pledges alone won’t change corporate workplaces – here’s what will

Words alone won’t make corporate America more diverse. Robyn Beck/AFP via Getty Images

Kimberly A. Houser, University of North Texas

Dozens of companies, from Apple to Zappos, have reacted to George Floyd’s killing and the protests that followed by pledging to make their workforces more diverse.

While commendable, to me it feels a bit like deja vu. Back in 2014, a host of tech companies made similar commitments to diversify their ranks. Their latest reports – which they release annually – show they’ve made little progress.

Why have their efforts largely failed? Were they just empty promises?

As a gender diversity scholar, I explored these questions in my recent paper published in the Stanford Technology Law Review. The problem is not a lack of commitment but what social scientists call “unconscious bias.”

Big tech, little progress

Today’s efforts to promote diversity are certainly more specific than the tech industry’s vague promises in 2014.

In 2020, sports apparel maker Adidas pledged to fill at least 30% of all open positions with Black or Latino candidates. Cosmetics company Estée Lauder promised to make sure the share of Black people it employs mirrors their percentage of the U.S. population within five years. And Facebook vowed to double its number of Black and Latino employees within three years.

Companies have also committed at least US$1 billion in money and resources to fight the broader societal scourge of racism and to support Black Americans and other people of color.

Unfortunately, if past experience is any indication, good intentions and public pledges will not be enough to tackle the problem of the underrepresentation of women and people of color in most companies.

In 2014, Google, Facebook, Apple and other tech companies began publishing diversity reports after software engineer Tracy Chou, investor Ellen Pao and others called attention to Silicon Valley’s white male-dominated, misogynistic culture. The numbers weren’t pretty, and so one by one, they all made public commitments to diversity with promises of money, partnerships, training and mentorship programs.

Yet, half a decade later, their latest reports reveal, in embarrassing detail, how little things have changed, especially for underrepresented minorities. For example, at Apple, the share of women in tech jobs rose from 20% in 2014 to 23% in 2018, while the percentage of Black workers in those roles remained flat at 6%. Google managed to increase the share of women in such jobs to 24% in 2020 from 17% in 2014, yet only 2.4% of these tech roles are filled by Black workers, up from 1.5% in 2014. Even companies that have made more progress, such as Twitter, still have far to go to achieve meaningful representation.

I believe one of the reasons for the lack of progress is that two of their main methods, diversity training and mentoring, were flawed. Training can actually harm workplace relationships, while mentoring places the burden of changing the system on those disadvantaged by it and with the least influence over it.

More importantly, however, you cannot solve the problem of diversity – no matter how much money you throw at it – without a thorough understanding of its source: faulty human decision-making.

A problem of bias

My research, which relies on the behavioral work of Nobel Prize winner Daniel Kahneman, explains that because humans are unaware of their unconscious biases, most underestimate their impact on the decisions they make.

People tend to believe they make hiring or other business decisions based on facts or merit alone, despite loads of evidence showing that decisions tend to be subjective, inconsistent and subject to mental shortcuts, known to psychologists as heuristics.

Male-dominated industries, such as tech, finance and engineering, tend to keep hiring the same types of employees and promoting the same types of workers due to their preference for applicants who match the stereotype of who belongs in these roles – a phenomenon known as representative bias. This perpetuates the status quo that keeps men in prime positions and prevents women and underrepresented minorities from gaining a foothold.

This problem is amplified by confirmation bias and the validity illusion, which lead us to be overconfident in our predictions and decisions – despite ample research demonstrating how poor humans are at forecasting events.

By failing to make objective decisions in the hiring process, the system just repeats itself over and over.

How AI can overcome bias

Advances in artificial intelligence, however, offer a way to overcome these biases by making hiring decisions more objective and consistent.

One way is by anonymizing the interview process.

Studies have found that simply replacing a woman’s name with a man’s name on a resume improves her odds of being hired by 61%. AI could help ensure an applicant isn’t culled early in the vetting process due to gender or race in a number of ways. For example, code could be written that removes certain identifying features from resumes, as in the sketch below. Or a company could use neuroscience games – which help match candidate skills and cognitive traits to the needs of jobs – as an unbiased gatekeeper.
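To make the resume-anonymization idea concrete, here is a minimal sketch in Python. It is illustrative only – the placeholder tokens, the tiny list of gendered terms and the function name anonymize_resume are assumptions for this example, not any vendor’s actual tooling; a production system would rely on much richer name dictionaries and named-entity recognition.

    import re

    # Tiny illustrative list of gendered terms; a real system would use far
    # richer dictionaries plus named-entity recognition to catch names.
    GENDERED_TERMS = ("he", "she", "him", "her", "his", "hers", "mr", "mrs", "ms")

    def anonymize_resume(text: str, candidate_name: str) -> str:
        """Strip the candidate's name and common gendered terms from resume text."""
        # Replace the candidate's own name wherever it appears, case-insensitively.
        text = re.sub(re.escape(candidate_name), "[CANDIDATE]", text, flags=re.IGNORECASE)
        # Replace gendered pronouns and honorifics with a neutral placeholder.
        pattern = r"\b(" + "|".join(GENDERED_TERMS) + r")\b\.?"
        return re.sub(pattern, "[REDACTED]", text, flags=re.IGNORECASE)

    print(anonymize_resume("Ms. Jane Doe led her team; she shipped three releases.", "Jane Doe"))
    # -> "[REDACTED] [CANDIDATE] led [REDACTED] team; [REDACTED] shipped three releases."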

Another roadblock is job descriptions, which can be worded in a way that results in fewer applicants from diverse backgrounds. AI is able to identify and remove biased language before the ad is even posted.
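In the same spirit, here is a sketch of how biased wording in a job ad might be flagged before posting. The word list is a small illustrative sample, loosely inspired by research on gender-coded language in job ads, not a validated lexicon, and flag_biased_language is a hypothetical helper.

    # Minimal sketch: flag potentially gender-coded words in a job ad.
    # The word list is a small illustrative sample, not a validated lexicon.
    MASCULINE_CODED = {"aggressive", "dominant", "rockstar", "ninja", "competitive", "fearless"}

    def flag_biased_language(job_ad: str) -> list:
        """Return the potentially gender-coded words found in the ad text."""
        words = {w.strip(".,;:!?()").lower() for w in job_ad.split()}
        return sorted(words & MASCULINE_CODED)

    ad = "We want an aggressive, competitive rockstar developer."
    print(flag_biased_language(ad))   # ['aggressive', 'competitive', 'rockstar']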

Some companies have already made strides hiring women and underrepresented minorities this way. For example, Unilever has had fantastic success improving the diversity of its workforce by employing a number of AI technologies in the recruitment process, including using a chatbot to carry on automated “conversations” with applicants. Earlier this year, the maker of Ben & Jerry’s ice cream and Vaseline jelly said it achieved perfect parity between women and men in management positions, up from 38% a decade earlier.

Accenture, which ranked number one in 2019 among more than 7,000 companies around the world on an index of diversity and inclusion, utilizes AI in its online assessments of job applicants. Women now make up 38% of its U.S. workforce, up from 36% in 2015, while African Americans rose to 9.3% from 7.6%.

Garbage in, garbage out

Of course, AI is only as good as the data and design that go into it.

We know that biases can be introduced in the choices programmers make when creating an algorithm, how information is labeled and even in the very data sets that AI relies upon. A 2018 study found that a poorly designed facial recognition algorithm had an error rate as high as 34% for identifying darker-skinned women, compared with 1% for light-skinned men.

[Deep knowledge, daily. Sign up for The Conversation’s newsletter.]

Fortunately, bias in AI can be mitigated – and remedied when problems are discovered – through its responsible use, which requires balanced and inclusive data sets, the ability to peer inside its “black box” and the recruitment of a diverse group of programmers to build these programs. Additionally, algorithmic outcomes can be monitored and audited for bias and accuracy.
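One common form such an audit can take is a selection-rate comparison across demographic groups – the “four-fifths rule” used in U.S. employment-discrimination analysis. The sketch below assumes a simple list of (group, selected) outcomes and hypothetical helper names; it is meant only to show the shape of the check, not a real system’s schema.

    from collections import defaultdict

    def selection_rates(records):
        """Compute the share of candidates the model recommended, per group."""
        totals, picked = defaultdict(int), defaultdict(int)
        for group, selected in records:
            totals[group] += 1
            picked[group] += int(selected)
        return {g: picked[g] / totals[g] for g in totals}

    def four_fifths_check(rates, threshold=0.8):
        """Flag groups whose selection rate is below 80% of the best group's rate."""
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False), ("B", False)]
    rates = selection_rates(outcomes)      # roughly {'A': 0.67, 'B': 0.25}
    print(four_fifths_check(rates))        # {'B': 0.375} – group B falls below the threshold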

But that really is the point. You can take the bias out of AI – but you can’t remove it from humans.

Kimberly A. Houser, Assistant Clinical Professor, Business and Tech Law, University of North Texas

This article is republished from The Conversation under a Creative Commons license. Read the original article.

================================================================
Alecto, Megaera, and Tisiphone, “you can take the bias out of AI – but you can’t remove it from humans.” So true. Many of us are worried that technology – including but not limited to AI – will dehumanize us. But with creativity, and good will, there is no reason we can’t use it to make ourselves more human – in the good senses of compassionate and creative, not in the negative senses of flawed and unpredictable.

Gary Larson – whom you will remember as the creator of The Far Side, and who has been terribly missed since he retired – credits his return to active cartooning entirely to the discovery that drawing digitally is fun; cartooning is fun for him again. I hope we can learn something from that as we pursue greater fairness and diversity as well.

The Furies and I will be back.

  9 Responses to “Everyday Erinyes #229”

  1. Eliminate bias!

    • Sadly, eliminating it is not going to happen.  It exists in all of us because it once had great, and still has some, survival value.  But what we can do is work around it.  If we, as a society, want to.

  2. Excellent article. 
    Compassion and empathy, love & understanding others, are important to me. 
    I sure do wish we could eliminate bias, Mitch. It would certainly be a better place without it, for sure. 

    Thanks, Joanne (and Furies) for a great post. 

  3. Harvard did some quality research showing how women would be rated far higher on many desired characteristics if the raters of both genders did not know they were women.
    Here is a research article on “culture fit,” the late-in-the-process killer of hiring anyone who isn’t like existing workers: https://www.huffpost.com/entry/culture-fit-failed-idea-in-hiring_l_5f1f2319c5b69fd47310363e
    And even if someone is hired, those biases will affect mentorship, promotions, assignments, etc.  There is tons of disparity in all those areas even when entry-level hiring data is diverse and representative.

  4. While very informative, I can’t help but believe it’s putting the cart before the horse.

    A more urgent problem is addressing the lack of women and minorities even available to fill those positions. 

    This is easily understood when looking at first:

    Where the Jobs of the Future Lie: STEM (Science, Technology, Engineering, Math) and the US Economy

    Mathematicians: 16%
    Computer System Analysts: 22%
    Systems Software Developers: 32%
    Medical Scientists: 36%
    Biomedical Engineers: 62%

    And second:

    FEMALE PARTICIPATION IN STEM STUDIES AT THE COLLEGIATE LEVEL
    Computer Science: 18.2%
    Engineering: 19.2%
    Physics: 19.1%

    MINORITY PARTICIPATION IN STEM STUDIES AT THE COLLEGIATE LEVEL
    Engineering: 3.1%
    Physical Sciences: 6.5%
    Mathematics: 5.4%
    Computer Science: 4.8%

    https://www.onlinecolleges.net/for-students/women-and-minorities-stem/

    https://eab.com/insights/daily-briefing/student-success/a-third-of-minority-students-leave-stem-majors-heres-why/

    Filling positions in jobs now and in the future with women and minorities will be difficult if the pool to draw from is not plentiful with women and minorities trained to occupy those positions.

    And that will not happen unless we can direct more women and minorities into STEM courses.

    • Also true.  And I know that you know that the interest is there – but they are not getting the encouragement.

      I kind of think both issues need to be worked on simultaneously.  Increased hiring will give more role models to members of minority groups, and that in turn will increase participation in the education.

  5. Thanks for a very insightful article, Joanne.

    We can’t get rid of bias because it is part of who we are as humans through evolution, helping us to make snap decisions and making us part of our social group, starting with a bias towards the skin-colour of our parents when we are infants. To an extent, we still need some of these biases to survive in society.

    Though we can’t eliminate it, we can lessen its negative effect by acknowledging that we all have biases, starting with the bias that only other people are biased. We all are, and sometimes it gets the better of us. If AI can help us to get a better insight into our biases, we should be open, i.e. unbiased 😉, to it.

    But for that, people need to be unbiased towards science and with some this bias is ingrained very deeply.

  6. Very well said, JD.

    I think there is nothing we can ever do to purge the racism, sexism, bigotry, homophobia, etc. from between Republican ears.  To stop them from acting on it requires a carrot for those who actually diversify and a BIG stick for those who don’t.

  7. Great post, Joanne
    I, like others, wish we could do away with bias, but I do understand your point about it.
    We need to do more especially now with the election that is coming up. I honestly hope we can get the people out there to vote to clean up the stinking dirt we do have in office. 
    Thanks, Joanne
