Katica Roy

How To Mitigate Bias In AI: Three Recommendations

Updated: Dec 14, 2020

 

Welcome to my weekly Q&A roundup. (Scroll down to find the Q&A.)


If this is your first time here, welcome. I spend a fair amount of time speaking at events and conferences. At the end of my presentations, I leave space for audience members to ask questions—tough questions, brave questions, you name it.


The level of candor and curiosity always inspires me, and I want to share that sentiment with you. So each week, I pick one question that I believe others would find most instructive and publish my response to it here.


The purpose of this weekly tradition is transparency and inclusivity.


Transparency: a behind-the-scenes look at my day-to-day.


Inclusivity: bringing others along in the journey.


Be Brave™

 

Three Ways To Address Bias In AI

Question:


Considering the amount of digital acceleration we’ve seen this year, how can we ensure that we aren’t compromising speed for ethics when it comes to Big Data and AI?


Answer:


The events of 2020 have only heightened the urgency of ethical AI, so thank you for asking about it. Already, in a span of about two months, consumers and businesses have leaped forward five years in rates of digital adoption.


As businesses rush to digitally future-proof their systems and procedures, we cannot let the integrity of AI fall to the periphery.


By 2022, an estimated 85% of AI projects will deliver erroneous results as a result of algorithmic bias. No single piece of legislation or industry-wide pledge will sufficiently address this complex issue.


After all, how can we agree on a uniform, ethical approach to turning philosophical notions of fairness into mathematical expressions—at scale? We need a multi-pronged approach.
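To make the difficulty concrete, here is a minimal sketch of one common way "fairness" gets translated into a mathematical expression: demographic parity, which compares a model's positive-outcome rates across groups. The data and function names below are hypothetical, for illustration only, and demographic parity is just one of several competing fairness definitions.

```python
# Minimal sketch: demographic parity compares a model's positive-outcome
# (selection) rates across two groups. All data here is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical hiring-model decisions (1 = advance, 0 = reject)
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.40
```

Even this tiny example hides hard choices: which groups to compare, what threshold counts as "unfair," and whether equal selection rates are even the right goal for a given context.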


Here are three prongs to ensure the ethical development of AI:


1. Close the intersectional gender gap in AI


Closing this gap will help mitigate biases that arise when teams of engineers ideate, write, and train algorithms.


Currently, women make up only 22% of the world’s AI practitioners, despite being half of the population. Among the tech giants:

  • Facebook: women hold 24.1% of tech roles

  • Google: women hold 23.6% of tech roles

  • Apple: women hold 23% of tech roles

  • Microsoft: women hold 21.4% of tech roles

Moreover, Black employees hold just 3.7% of technical roles at Google, and Facebook’s Black workforce has risen by a mere 0.8 percentage points in the past six years (from 3% to 3.8%).


Interventions to close the intersectional gender AI gap include:

  1. Standardize digital literacy in all K-12 education

  2. Remove barriers for women and people of color in tech

  3. Keep women and people of color in tech by ensuring equitable opportunities for advancement in the workplace

(See here for more information on each of these interventions.)


2. Develop AI auditing standards


We need to focus on making AI auditable rather than explainable. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, captures the “why” behind auditable AI well:


“How do we explain conclusions derived from a weighted, nonlinear combination of thousands of inputs, each contributing a microscopic percentage point toward the overall judgment?”


AI auditing standards need to include provisions for neutral third-party institutions to conduct the audits, since an independent auditor provides better oversight than the algorithm’s creator.
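As a concrete illustration of what one piece of an audit might look like, here is a minimal sketch of the "four-fifths rule" from US employment-selection guidelines, which flags any group whose selection rate falls below 80% of the highest group's rate. The group labels and rates are hypothetical, and a real audit would cover far more than this single check.

```python
# Minimal sketch of one check a third-party auditor might run: the
# "four-fifths rule," which flags any group whose selection rate falls
# below 80% of the highest group's rate. All rates here are hypothetical.

def audit_four_fifths(rates_by_group):
    """Return the groups whose selection rate is below 80% of the top rate."""
    top = max(rates_by_group.values())
    return {g: r for g, r in rates_by_group.items() if r < 0.8 * top}

rates = {"group_a": 0.70, "group_b": 0.30, "group_c": 0.60}
flagged = audit_four_fifths(rates)
print(flagged)  # prints {'group_b': 0.3}
```

The point of standardized checks like this is that an outside auditor can run them against a deployed system's outcomes without needing to explain the model's internal weights, which is exactly why auditability is a more tractable goal than explainability.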


3. Reinstate the Office of Technology Assessment


Reinstating the Office of Technology Assessment (it was dismantled in 1995) would further strengthen the institutional fabric of our country. The OTA would provide Congress with a separate body of technical expertise to inform better legislative action.


The OTA would also ensure that technology briefings, studies, and audits are conducted through a gender and racial lens.


Final thoughts:


The conversation around AI ethics will only continue. It’s our job (not to mention our legal liability) as both creators and consumers of AI to ensure our technology builds a better world, not a biased one.

 

These Q&A roundups can be delivered directly to you—a week before I publish them here.


Interested?

(All you need is an email address.)
