The bias of ChatGPT and AI

Artificial Intelligence (AI) has become a ubiquitous part of our lives and is increasingly used to make important decisions in fields including finance, healthcare, and criminal justice. However, it is important to recognize that AI models like ChatGPT are not immune to inherent biases. These biases can be harmful and perpetuate existing inequalities, particularly with regard to race and gender.

In 2021, I created an Alpha Women illustration series (Instagram: kaleb.loosbrock) aimed at showcasing and highlighting influential BIPOC women in recent history. I focused specifically on BIPOC women in order to challenge my own knowledge and raise awareness of these amazing women. Out of curiosity, I decided to test ChatGPT's knowledge and see whether any bias might be lingering on the other end of the text box. So I logged into ChatGPT and asked it: "What are the 100 most influential BIPOC women in the last 150 years that made significant contributions to society and the advancement of women's rights?" What came back was a perfect illustration of the inherent bias in AI. It provided a list of only 49 names, not the 100 requested, and on top of that, the list contained several duplicates. For instance, Angela Y. Davis appeared several times, as did Audre Lorde.
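
For anyone who wants to run a similar check, here is a rough Python sketch of how such an output could be inspected programmatically. The names list below is a hypothetical stand-in for the list ChatGPT actually returned, not the real data:

    from collections import Counter

    # Hypothetical stand-in for the list of names ChatGPT returned.
    names = ["Angela Y. Davis", "Audre Lorde", "Shirley Chisholm",
             "Angela Y. Davis", "Audre Lorde"]

    # Count how many names came back versus how many were requested.
    print(f"names returned: {len(names)} (requested: 100)")

    # Flag any name that appears more than once.
    duplicates = [name for name, count in Counter(names).items() if count > 1]
    print(f"duplicates: {duplicates}")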

Studies have shown that AI models can perpetuate biases present in the data they are trained on. For example, a 2018 study by Buolamwini and Gebru found that commercial gender-classification systems were substantially more accurate for lighter-skinned men than for darker-skinned women. Another study found that language models like GPT-3 show a clear gender bias, associating male names with careers in STEM fields and female names with careers in the arts.
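
One way to probe a language model for this kind of association yourself is to ask it to fill in a pronoun next to a career word and compare the scores it assigns. The sketch below uses the Hugging Face transformers fill-mask pipeline; the model choice (bert-base-uncased) and the probe sentences are my own illustrative assumptions, not the methodology of the studies above:

    from transformers import pipeline

    # Load a masked language model; bert-base-uncased is an illustrative choice.
    fill = pipeline("fill-mask", model="bert-base-uncased")

    careers = ["engineer", "scientist", "nurse", "teacher", "artist"]
    for career in careers:
        # Ask the model to fill in the pronoun, restricted to "he" and "she",
        # and compare the score it assigns to each.
        results = fill(f"[MASK] works as a {career}.", targets=["he", "she"])
        scores = {r["token_str"]: r["score"] for r in results}
        print(f"{career:>10}: he={scores.get('he', 0):.3f}  she={scores.get('she', 0):.3f}")

A systematic skew toward "he" for STEM careers and "she" for caregiving or arts careers is exactly the kind of learned association the studies describe.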

These biases can have real-world consequences and impact people’s lives. For instance, biased algorithms used in the criminal justice system may lead to unfair sentencing or wrongful convictions. Similarly, gender-biased algorithms used in recruitment may discriminate against female candidates.

To address this issue, it is important for organizations to adopt an ethical approach to AI development and deployment. This includes taking steps to identify and mitigate bias, such as using diverse and representative training data, performing regular bias assessments, and incorporating fairness metrics into the development process. This is also why it's important to document and highlight the achievements of underserved populations and peoples.
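
To make "fairness metrics" concrete, here is a minimal sketch of one of the simplest: demographic parity difference, the gap in positive-prediction rates between two groups. The hiring predictions and group labels below are entirely made up for illustration:

    import numpy as np

    # 1 = model predicts "hire", 0 = "reject"; data is fabricated for illustration.
    predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    # Positive-prediction rate for each group.
    rate_a = predictions[groups == "a"].mean()
    rate_b = predictions[groups == "b"].mean()

    print(f"positive rate, group a: {rate_a:.2f}")
    print(f"positive rate, group b: {rate_b:.2f}")
    print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")

A large gap suggests the model treats the two groups differently, which is the kind of signal a regular bias assessment is meant to surface.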

Additionally, it is important for organizations to be transparent about their AI systems, including the data and algorithms used. This can help to increase public trust in AI and reduce the risk of harmful biases.

It is crucial for professionals in the field of AI to acknowledge the existence of inherent biases in AI models and take steps to mitigate their impact. By adopting an ethical approach and being transparent about their AI systems, organizations can help to reduce the risk of harmful biases and ensure that AI is used for the benefit of society as a whole.

What's been your experience with AI?

Resources:

  1. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77-91.

  2. Zou, J., Cheng, X., & Lee, K. (2020). Mitigating gender bias in language models. arXiv preprint arXiv:2005.14187.

  3. The AI Now Institute. (2019). AI Now 2019 Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Next Decade. AI Now Institute.

  4. The Algorithmic Justice League. (2021). Tools for accountability. Algorithmic Justice League.