
How Can We Build Better Systems Out of AI Corrupted by Human Biases?

Important decisions, from hiring to policing to governance, are being turned over to AI, making bias a serious concern.

Highlights
  • If humans can't figure out sexism, then AI isn't going to do it for us
  • Dr. Ming favours auditing AI systems over regulating them
  • IBM Research India released a range of tools to help mitigate bias in AI

There is a huge risk of human bias creeping into AI systems and amplifying the mistakes that people make

Photo Credit: Pixabay/ Gerd Altmann

Artificial intelligence (AI) systems now shape everything from social media moderation to hiring decisions to policy and governance. Companies are building AI systems to predict where COVID-19 will strike next and to make decisions about healthcare. But in creating these systems, and in choosing the data that informs their decisions, there is a huge risk of human bias creeping in and amplifying the mistakes that people have been making.

To better understand how one can build trust in AI systems, we caught up with IBM Research India's Director, Gargi Dasgupta, and Distinguished Engineer, Sameep Mehta, as well as Dr. Vivienne Ming, AI expert and founder of Socos Labs, a California-based AI incubator, to find some answers.

How does bias seep into an AI system in the first place?

Dr. Ming explained that bias becomes a problem when an AI is trained on data that is already biased. A neuroscientist by training, she works through Socos Labs to find solutions to messy human problems via the application of AI. “As an academic, I have had the chance to do lots of collaborative work with Google and Amazon and others,” she explained.

“If you actually want to build systems that can solve problems, it is important that you first look at where the problem exists. A huge amount of data and a bad understanding of the problem is virtually guaranteed to create issues.”

IBM's Dasgupta added, “Special tools and techniques are needed to make sure that we don't have biases. We need to be sure and take extra caution that we remove the bias, so that our biases don't inherently transmit into the models.”

Since machine learning is built on past data, it is all too easy for an algorithm to find a correlation and read it as causation. Noise and random fluctuations can be mistaken for core patterns by the model, so when new data arrives without those same fluctuations, the model concludes it does not match what it has learned to look for.
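To make this concrete, here is a minimal, hypothetical sketch (not from the interview) of a model latching onto a spurious correlation: an irrelevant feature happens to track the label in the training sample, the model treats it as signal, and accuracy collapses on new data where the fluke disappears.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_data(n, spurious):
    signal = rng.normal(size=n)
    # the label genuinely depends on "signal", but only imperfectly
    label = (signal + rng.normal(scale=1.0, size=n) > 0).astype(int)
    if spurious:
        # in this sample, an unrelated feature happens to track the label closely
        shortcut = label + rng.normal(scale=0.1, size=n)
    else:
        # in fresh data the same feature is pure noise
        shortcut = rng.normal(size=n)
    return np.column_stack([signal, shortcut]), label

X_train, y_train = make_data(200, spurious=True)
X_new, y_new = make_data(200, spurious=False)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on the biased training sample:", model.score(X_train, y_train))
print("accuracy on new data:                  ", model.score(X_new, y_new))
```

The model scores near-perfectly on the data it was trained on and far worse on new data, because the "shortcut" it learned was never a cause of the outcome.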

“How do we make an AI for hiring that isn't biased against women? Amazon wanted me to build exactly such a thing and I told them the way they were doing it wouldn't work,” explained Dr. Ming. “They were just training AI on their massive hiring history. They have a huge dataset of past employees. But I don't think it is surprising to any of us that almost all of its hiring history is biased in favour of men, for a lot of reasons.”

“It isn't just that they are bad people; they are not bad people, but AI isn't magic. If humans can't figure out sexism or racism or casteism then AI isn't going to do it for us.”

What can be done to remove bias and build trust in an AI system?

Dr. Ming favours auditing AI systems over regulating them. “I'm not a big advocate of regulation. Companies, from big to small, need to embrace auditing of their AI, algorithms, and data in exactly the same way they do for the financial industry,” she said.

“If we want AI systems in hiring to be unbiased, then we need to be able to see what ‘causes’ someone to be a great employee and not what ‘correlates’ with past great employees,” Dr. Ming explained.

“What correlates is easy – elite schools, certain gender, certain race – at least in some parts of the world, these are already part of the hiring process. When you apply causal analysis, going to an elite school is no longer an indicator of why people are good at their job. A vast number of people who didn't go to elite schools are just as good at their jobs as those who went to one. In our data sets of about 122 million people, we generally found ten and in some cases about a hundred times as many equally qualified people who didn't attend elite universities.”
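As one concrete illustration of the kind of audit Dr. Ming describes, a company could routinely compare the selection rates its model produces for different groups. The sketch below is hypothetical; the data and the 0.8 threshold (the common "four-fifths rule" used in employment auditing) are illustrative, not a description of any specific company's process.

```python
import pandas as pd

# hypothetical audit log of a hiring model's decisions
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

# selection rate per group, and the ratio between the worst- and best-treated group
rates = decisions.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"selection-rate ratio: {ratio:.2f} (values below 0.8 are a common red flag)")
```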

Solving this problem involves two steps: first, understanding whether and how an AI model is biased, and second, applying algorithms to remove those biases.

According to Mehta, “There are two parts of the story - one is to understand if an AI model is biased. If so, the next step is to provide algorithms to remove such biases.”

The IBM Research team released a range of tools with the aim of addressing and mitigating bias in AI. IBM's AI Fairness 360 Toolkit is one such tool. It is an open-source library of metrics to check for unwanted bias in datasets and machine learning models, offering around 70 different methods to compute bias in AI.
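AI Fairness 360 is distributed as the open-source aif360 Python package. The sketch below follows the package's published usage pattern, but the toy data, column names, and group encoding are invented purely for illustration, not taken from any real hiring dataset.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# made-up toy data: "sex" is the protected attribute, "hired" is the outcome
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = privileged group in this toy example
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# ratio and difference of favourable outcomes between the two groups
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```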

Dasgupta said there have been multiple cases where a system was biased and the IBM team was able to detect it. “After we predict the bias, it is in the hands of the customers how they integrate it into their remediation process.”

The IBM Research team has also developed the AI Explainability 360 Toolkit, a collection of algorithms that support the explainability of machine learning models. This allows customers to understand, improve, and iterate upon their systems, Dasgupta explained.
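IBM's toolkit ships its own algorithms; as a generic stand-in for the same idea (not the toolkit's actual API), the sketch below uses scikit-learn's permutation importance to measure how strongly each input feature drives a model's predictions, which is the kind of signal a human reviewer could inspect.

```python
# Generic post-hoc explanation sketch (illustrative only, not AI Explainability 360).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# how much shuffling each feature hurts accuracy: a rough measure of its influence
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```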

Part of this is a system that IBM calls FactSheets — much like nutrition labels, or the App Privacy labels that Apple introduced recently.

FactSheets include questions like ‘Why was this AI built?’, ‘How was it trained?’, ‘What are the characteristics of the training data?’, ‘Is the model fair?’, and ‘Is the model explainable?’. This standardisation also helps compare two AI models against each other.
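A FactSheet is ultimately structured documentation that travels with the model. As a purely hypothetical sketch of what such a record might capture (the field names and values below are invented for illustration, not IBM's actual schema):

```python
import json

# Hypothetical FactSheet-style record; fields and values are illustrative only.
factsheet = {
    "purpose": "Shortlist job applications for a first-round interview",
    "training_data": "Anonymised applications from 2015-2020, group balance documented",
    "fairness": {"disparate_impact": 0.91, "acceptable_range": [0.8, 1.25]},
    "explainability": "Per-decision feature attributions available to reviewers",
    "intended_use": "Decision support only; a human reviews every rejection",
}

print(json.dumps(factsheet, indent=2))
```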

IBM has also recently added new capabilities to its Watson AI platform. Mehta said that IBM's AI Fairness 360 Toolkit and Watson OpenScale have been deployed at multiple sites to help customers with their decisions.

