September 8, 2021

Bias in the World of Artificial Intelligence

Softensity Leadership Series: A conversation with Michelle Yi, Senior Director of Applied Artificial Intelligence at RelationalAI.

By Softensity

Michelle Yi is probably smarter than you. After graduating high school at age 13, and college at 16, Yi went on to work for IBM’s Watson Group, which is focused on artificial intelligence. She has also been a violinist for the New York Philharmonic, and speaks six languages — among a host of equally impressive talents. Today, Yi is a Senior Director of Applied Artificial Intelligence at RelationalAI, and we’re honored to feature her in our latest Leadership Series interview.

Softensity’s Monika Mueller sat down with Yi to talk about bias in the world of artificial intelligence. We all know about bias on a human level, but when bias is embedded in artificial intelligence, it is replicated at enormous scale. The two thought leaders explored the impact of bias in artificial intelligence on society, jobs and beyond, and what organizations can do about it.

How Hidden Biases Creep Into Algorithms

As a society, we’re still early in our journey with AI. Yi points out that it remains a human-driven effort in many ways, depending a great deal on the data we collect, the processes we have, and who is developing the AI. Bias can easily creep into that data in situations such as using AI to help decide whether someone should be approved for a loan.

While a massive amount of data is gathered, no one is necessarily asking whether the right kind of data is being collected. Consequently, algorithms can easily learn the hidden biases within that data. For example, if a loan officer has consistently denied loans to applicants from a specific minority group, the algorithm will learn to recommend “no” for those applicants. It’s “garbage data in, garbage outcome out,” says Yi, “and sometimes we’re not necessarily thinking about this because it’s so nice to have that automation and scale.”
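To make the “garbage data in, garbage outcome out” point concrete, here is a minimal sketch in Python. The data and the binary “group” feature are entirely invented for illustration; it simply shows how a model trained on biased historical decisions learns to reproduce them:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: income (in $10k units) and a hypothetical
# protected attribute; "group" says nothing about creditworthiness.
income = rng.normal(5.0, 1.5, n)
group = rng.integers(0, 2, n)

# Historical labels: approval tracks income, except group-1 applicants
# were usually denied regardless of income -- the hidden bias in the data.
biased_denial = (group == 1) & (rng.random(n) < 0.8)
approved = ((income > 4.5) & ~biased_denial).astype(int)

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

# Two applicants identical in every respect except group membership:
print(model.predict_proba([[6.0, 0], [6.0, 1]])[:, 1])
# The group-1 applicant gets a sharply lower approval probability --
# the model has learned the historical bias, not creditworthiness.
```

Nothing in the model is malicious; it has simply found the strongest pattern in the labels it was given, which is exactly how hidden bias scales once a decision is automated.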

Mueller and Yi also discuss the fact that Alexa, Siri and Google Assistant all default to female voices — “servant robots with female voices,” as Mueller notes. Could this be another example of a subconscious bias? Yi points out that according to both research and her own personal experience, voice recognition technology tends to understand men’s voices better than women’s voices.

Societal Concerns About AI, Bias and Beyond

Should society be concerned about artificial intelligence and bias? What are the risks? And should we be worried about our jobs? According to Yi, Covid has accelerated the need to reconsider what the future of work will look like, including AI’s ever-expanding role. On the positive side, Yi notes that AI can minimize laborious manual tasks, be it inputting data into spreadsheets or pulling facts from different documents. “These are things we probably don’t want to spend 80% of our time on,” she says.

“AI will really challenge us to think about how we work going forward,” Yi continues. “If we don’t regulate certain things or perhaps don’t take into consideration where AI may not be as beneficial because of its tendency, we can develop it in a silo.” She feels it’s crucial to bring in the business, the research, thought leaders and diverse groups of people; otherwise, “we risk leaving behind certain types of groups and people.”

And what about those jobs that may be automated by AI? Mueller points out that as these changes come about, companies share a responsibility to retrain or re-skill employees whose jobs may be replaced by AI, moving them into different roles. These may be higher-value positions that pay a bit more and can ultimately help employees achieve a higher quality of life.

Yi agrees, noting “I think there’s a huge opportunity, but we have to focus on: How do we help people and enable people to live in this post-AI, post-digital Covid world? Because there is a big impact.”

What Can Leaders Do to Avoid Falling into the Bias Trap?

From starting up Data Ethics offices to rethinking the way they develop AI within their software practices, Yi thinks there are many steps companies can take to avoid bias in AI. It may start simply, by including input from more diverse teams beyond the development team and making sure other stakeholders are involved in the process, all the way down to very tactical tasks. “It can go all the way from the top down to a process-level improvement, and there are a lot of ways that organizations can focus on helping to address this issue,” says Yi.

Watch the full interview to learn more about bias in the world of AI, including the steps organizations can take to avoid falling into the bias trap. 
