
Are we Failing to Address Bias in AI? Examining the Realities and Solutions

Robert Kiruta-Kigozi
Consultant - Business Strategist

For artificial intelligence (AI) to play a fair role in our society, it is necessary to minimise bias in its algorithms. In this blog, Consultant Business Strategist Robert Kiruta-Kigozi sheds light on the issue of bias in AI and what can be done to reduce it.

The reality of bias in AI

Have you ever heard of the 2020 Netflix documentary “Coded Bias”? If not, I highly recommend you give it a watch. It’s an eye-opening film that brings to light the issue of bias in AI and its potential consequences for communities. It got me thinking: how often do we stop and consider the potential impact of AI on our daily lives?

The truth is, AI has brought numerous benefits to our society, such as improved efficiency and automation in various industries. However, as the use of AI in decision-making processes increases, so does the risk of bias, and that bias can have a significant impact on individuals and on society as a whole.

It’s not just a hypothetical issue; it’s happening in real-world scenarios.

Let me give you a few examples:

In the criminal justice system, biased algorithms are being used to predict the likelihood of reoffending, influencing bail and sentencing decisions. One such algorithm, the COMPAS system, was found to be nearly twice as likely to incorrectly label black defendants as high-risk as white defendants, leading to more severe bail and sentencing recommendations for black defendants.

Back in 2018, tech giant Amazon decided to ditch its AI recruiting engine when it was revealed that the algorithm was inherently biased against women. The problem lay in how the algorithm was trained: it studied resumes submitted to Amazon over the previous ten years. Because the majority of those resumes came from men, reflecting male dominance across the tech industry, the model effectively taught itself that male candidates were preferable. It was found to be penalising resumes containing the word “women’s” and downgrading the credentials of graduates from all-women colleges.

In healthcare, biased algorithms are being used to support medical diagnoses and treatment recommendations. A study in the US found that commercially available skin analysis algorithms were less accurate on darker skin tones, particularly for women, creating a risk of misdiagnosis and unequal access to treatment.

In 2020, the COVID-19 pandemic led to social distancing measures, causing A-level exams to be cancelled. To determine student grades, Ofqual (the Office of Qualifications and Examinations Regulation) devised an algorithm to produce results that were, on a national level, comparable to previous years. According to The Guardian, this was largely achieved, with overall results even slightly higher than in previous years. However, the algorithm lowered thousands of grades below teacher predictions: almost 40% of results were downgraded, with 35.6% adjusted down by one grade and 3.3% by two grades. This was devastating news for students who needed their predicted grades to secure a place at their preferred university.

Additionally, the data shows that fee-paying private schools disproportionately benefited from the algorithm, with a 4.7 percentage-point increase in grades A and above, compared with 2 percentage points for state-funded comprehensive schools. The algorithm’s reliance on a school’s historical performance meant that high-achieving students at underperforming schools were unfairly marked down, leading to accusations of bias against students from disadvantaged, and disproportionately BAME, backgrounds.

Working to reduce AI-induced bias

It’s not all bad news, however. I am happy to share that there are some success stories of technology companies working to address bias in AI and ensure that the technology is used ethically and responsibly. Perhaps we can all learn a thing or two from them?

IBM’s AI Fairness 360

IBM is actively working to address AI bias through its Fairness, Transparency, and Accountability (FTA) program. The program aims to develop AI systems that are fair, transparent, and accountable to all users, regardless of their backgrounds or characteristics. IBM has also released several tools and resources, such as the AI Fairness 360 toolkit, which allows developers to check for bias in their AI models.
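
To give a flavour of how the toolkit is used, here is a minimal sketch of a bias check with the open-source aif360 Python package. The toy hiring data, the column names, and the choice of “sex” as the protected attribute are my own illustrative assumptions, not an IBM example.

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring data: 'sex' is the protected attribute (1 = male, 0 = female),
# 'hired' is the favourable outcome we want to audit for bias.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.7, 0.4, 0.9, 0.6, 0.5, 0.3],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favourable-outcome rates (1.0 = parity).
# Statistical parity difference: gap between those rates (0.0 = parity).
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A disparate impact well below 1.0 (0.8 is a common rule-of-thumb threshold) flags that the unprivileged group is receiving the favourable outcome far less often than the privileged group.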

Google’s What-If Tool

Google is taking steps to tackle bias in AI through its AI Fairness and Responsible AI initiatives. These focus on developing AI systems that are fair, transparent, and accountable to all users, and on ensuring that the company’s AI products are used ethically and responsibly. One example is Google’s What-If Tool, which lets users probe a trained model’s behaviour, editing individual datapoints, comparing counterfactuals, and examining performance across different groups of people, to identify potential biases in their models.
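
The What-If Tool itself is an interactive browser widget, but the counterfactual question at its heart, “would the model’s decision change if only a protected attribute were different?”, is easy to sketch in plain Python. The synthetic data and model below are my own illustrative assumptions, not part of Google’s tool.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: column 0 is a protected attribute (0/1), column 1 a
# legitimate feature. The labels deliberately leak the protected attribute.
X = np.column_stack([rng.integers(0, 2, 500), rng.normal(0, 1, 500)])
y = ((X[:, 1] + 0.8 * X[:, 0] + rng.normal(0, 0.5, 500)) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# The what-if question: flip only the protected attribute and count how
# often the model's decision changes for otherwise identical individuals.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]
changed = model.predict(X) != model.predict(X_flipped)
print(f"Decisions that flip with the protected attribute: {changed.mean():.1%}")
```

If a noticeable share of decisions flip when nothing but the protected attribute changes, the model is leaning on that attribute (or a proxy for it) and warrants closer inspection.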

Microsoft’s AI for Good initiative

Microsoft is also working to address bias in AI through its AI for Good initiative. This initiative focuses on using AI to solve some of the world’s most pressing problems, such as poverty, hunger, and disease. Microsoft has also released a number of tools and resources, such as the Fairlearn toolkit, which allows developers to check for bias in their AI models and take steps to mitigate it.
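
Fairlearn is an open-source Python package, so a check like the one described above can be sketched in a few lines. The labels, predictions, and sensitive feature below are made-up illustrative values, not a Microsoft example.

```python
# pip install fairlearn scikit-learn numpy
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Illustrative ground truth, model predictions, and a sensitive feature.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])
sex    = np.array(["M", "M", "M", "M", "F", "F", "F", "F"])

# MetricFrame slices any metric by group, exposing gaps at a glance.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)      # per-group accuracy and selection rate
print(mf.difference())  # largest between-group gap for each metric
```

Once MetricFrame makes a gap visible, Fairlearn’s mitigation algorithms can be applied to shrink it.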

So what role can we play in addressing bias?

For us, it’s all about embedding ethical and responsible practices when developing and deploying solutions for our clients.

Here are some ways we can combat bias in AI:

  • Utilise data from diverse sources to ensure that the training data for our models is representative of the population it will be used on.
  • Incorporate transparency and explainability into our models to ensure that decision-making processes can be audited for bias.
  • Actively seek out and include diverse perspectives in developing and implementing our AI solutions, including working with underrepresented communities and ensuring that our teams are diverse in terms of race, gender, and background.
  • Continuously monitor and evaluate the performance of our models to identify and address any potential biases.
  • Partner with organisations that specialise in addressing bias in AI to stay up-to-date on the latest best practices and research.
  • Implement a fairness-aware machine learning (fair-ML) framework so that checks for biased or unrepresentative data are built into the pipeline rather than bolted on at the end.
  • Use techniques like adversarial training, debiasing algorithms, and regularisation to further reduce bias in AI models – a simple debiasing example is sketched below.
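
As an example of one of these techniques, here is a minimal sketch of reweighing, a simple pre-processing debiasing method (the idea behind Kamiran and Calders’ approach, also available in aif360) that weights each (group, label) combination so the protected attribute and the outcome look statistically independent to the learner. The data is synthetic and purely illustrative.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic training set: 'group' is a protected attribute (0/1) and the
# labels are deliberately correlated with it, mimicking biased historical data.
group = rng.integers(0, 2, 1000)
score = rng.normal(0, 1, 1000)
X = np.column_stack([group, score])
y = ((score + 0.7 * group) > 0.3).astype(int)

# Reweighing: weight each (group, label) cell by expected / observed
# frequency, so group and label appear independent during training.
weights = np.ones(len(y))
for g in (0, 1):
    for lbl in (0, 1):
        cell = (group == g) & (y == lbl)
        if cell.any():
            expected = (group == g).mean() * (y == lbl).mean()
            weights[cell] = expected / cell.mean()

model = LogisticRegression().fit(X, y, sample_weight=weights)
pred = model.predict(X)
print("Positive rate, group 0:", pred[group == 0].mean())
print("Positive rate, group 1:", pred[group == 1].mean())
```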

“AI has the potential to be the greatest force for good in the history of humanity or the greatest force for destruction.”

– Kai-Fu Lee, AI Expert

In conclusion, bias in AI is a real and pressing issue that affects individuals and society as a whole. As AI becomes more prevalent in our daily lives, we must consider the potential impact of bias and take responsibility for ensuring that the technology is developed and implemented in a fair and just way for all members of society. By working together, we can make sure that AI is a force for good rather than a source of harm.

For all you Netflix fans, I encourage you to watch the Netflix documentary “Coded Bias” to gain a deeper understanding of the issue of bias in AI!
