AI, Equity, Diversity, and Inclusion

Another critical issue relating to the use of AI in higher education (and in other fields) is that human bias has infiltrated artificial intelligence. Human biases, revealed through tests and experiments, are well documented, as is their significant impact on outcomes. As more companies implement AI, it is essential to recognize and address promptly the ways biases can infiltrate AI systems in order to mitigate negative consequences (Manyika, Silberg, and Presten, 2019).

AI models like ChatGPT and Bard learn from vast amounts of data that can contain biases. If the training data is biased or unrepresentative, the AI system can perpetuate and amplify existing biases, leading to discriminatory outcomes. For example, if the training data predominantly represents a particular gender or racial group, the AI system may exhibit biases in its responses. The teams developing AI systems may also lack diversity, which compounds these biases: the perspectives and experiences of underrepresented groups may be overlooked, narrowing the scope of the systems those teams build.
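To make this mechanism concrete for readers with some programming background, the following sketch trains a simple classifier on a synthetic dataset in which one group supplies only 5% of the examples; the resulting model is noticeably less accurate for that group. This is a minimal illustration only (all data, groups, and numbers are invented, and scikit-learn is assumed), not a depiction of how ChatGPT or Bard are actually trained.

```python
# A minimal, hypothetical sketch of the mechanism described above: a model
# trained on data in which one group is heavily underrepresented performs
# much worse for that group. All data and group labels here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic two-feature data; `shift` moves this group's distribution,
    # so the two groups need different decision boundaries.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# The training set is 95% majority group, 5% underrepresented group.
X_maj, y_maj = make_group(950, shift=0.0)
X_min, y_min = make_group(50, shift=2.0)
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluated on balanced held-out samples, the single learned boundary fits
# the majority group well and the underrepresented group poorly.
for name, shift in [("majority", 0.0), ("minority", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"{name} group accuracy: {accuracy:.2f}")
```

Running the sketch shows high accuracy for the majority group and near-chance accuracy for the underrepresented one, even though the model "performs well" on average, which is exactly how such disparities can go unnoticed.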

With all of this in mind, scholars from across the disciplines have discussed the negative ramifications of such biases for several years. Writing about the use of AI in healthcare, for example, Agarwal et al. (2023) explained that “algorithms are trained on data that may reflect existing biases in diagnosis, treatment, and provision of services to marginalized populations” and that “they pose the danger of automating and further exacerbating that bias through subsequent learning cycles” (1). Similarly, a study from the National Institute of Standards and Technology sought to identify potential sources of bias in AI face recognition systems, noting that “algorithm capability varies considerably by developer” (15).

AI systems like ChatGPT may inadvertently discriminate against individuals from marginalized communities. For example, if a system fails to understand and respond to certain accents, dialects, or speech patterns, it can exclude or marginalize individuals with diverse linguistic backgrounds. News reports and scholarly studies have also discussed how AI facial recognition can increase the risk of police unjustly apprehending people of color. As the ACLU reported, Robert Williams, a Black man living in a Detroit suburb, was arrested by Detroit Police after face recognition software owned by the Michigan State Police suggested he was a suspect; as the ACLU article notes, “facial recognition can’t tell Black people apart.” Similarly, a study from the University of California, Berkeley found that mortgage algorithms have systematically charged Black and Latino borrowers higher interest rates.

Furthermore, AI systems may present barriers to access and usability for individuals with disabilities. For instance, if a ChatGPT interface relies heavily on visual input without providing alternative modes of interaction, it may exclude people with visual impairments.

The use of AI systems like ChatGPT can also pose challenges to accountability and transparency. If the decision-making processes of AI algorithms are not explainable or auditable, it becomes difficult to identify and address discriminatory outcomes, making it harder to ensure fairness and justice.

Addressing these issues requires a multi-faceted approach involving diverse and inclusive data collection, diverse development teams, robust testing and evaluation procedures, regulatory frameworks, and ongoing dialogue with affected communities. Instructors across the disciplines must discuss such challenges with their students and map out possible ways of challenging and removing such bias; as noted above, AI technologies are already in use in a plethora of modes across numerous fields.
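As one concrete illustration of what “robust testing and evaluation” can look like in practice, the sketch below audits a model’s predictions for disparity between groups. It is a minimal, hypothetical example: the data and group labels are invented, and the metric shown (the gap in positive-prediction rates, often called a demographic parity gap) is only one of many possible fairness measures.

```python
# A minimal, hypothetical disparity audit: compare how often a model issues a
# positive prediction (e.g., "approve") for each group. Data here is invented.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return each group's share of positive (1) predictions."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

# Invented example: loan-approval predictions (1 = approve) for two groups.
preds  = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'A': 0.8, 'B': 0.2}
print(f"parity gap: {gap:.2f}")   # a large gap flags the model for review
```

An audit like this does not by itself prove discrimination, but a large gap is a signal that the model and its training data deserve the kind of scrutiny described above.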

References

Agarwal, R., Bjarnadottir, M., Rhue, L., Dugas, M., Crowley, K., Clark, J., & Gao, G. (2023). Addressing algorithmic bias and the perpetuation of health inequities: An AI bias aware framework. Health Policy and Technology, 12(1), 100702.

Manyika, J., Silberg, J., & Presten, B. (2019). What do we do about the biases in AI? Harvard Business Review, 25.

Additional Resources about Bias in AI

Johnson, A. (2023, May 25). Racism and AI: Here’s how it’s been criticized for amplifying bias. Forbes. https://www.forbes.com/sites/ariannajohnson/2023/05/25/racism-and-ai-heres-how-its-been-criticized-for-amplifying-bias/?sh=26a751b8269d

As Johnson notes, “Although AI has become popular in recent months for its ability to perform advanced tasks and make life easier, there’s also increasing concern it can be used negatively, creating racial biases in the fields of healthcare, law enforcement and technology, among others.” The article provides a series of examples that detail the biases that have been demonstrated in AI tech.

Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence. NIST Special Publication 1270, 1-77. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf

This special report from the National Institute of Standards and Technology (NIST) begins by explaining that AI contains latent human and systemic biases in addition to the more overt statistical and computational biases. Although the report’s intended audience is those who design and develop AI systems, one section provides context and terminology regarding categories of AI bias that would be helpful to any colleague interested in learning more about this subject.

Singleton, M. (2023, January 21). Clear bias behind this AI art algorithm. LinkedIn. https://www.linkedin.com/pulse/clear-bias-behind-ai-art-algorithms-malik-singleton/

In this column, Singleton tests Midjourney, an AI that generates pictures and images based on user prompts, to see whether the algorithm shows racial bias; based on the series of prompts fed into the AI, it did. Singleton notes that AI is only as “socially or culturally or politically intelligent as the people who develop them,” pointing out that the AI’s images of “beautiful” women were those with light skin.