Who can artificial intelligence see, and whose faces and stories remain invisible? Joy Buolamwini, poet, computer scientist, and co-founder of the Algorithmic Justice League, asks this question in her research about how racial and gender biases influence the software and search algorithms that shape our everyday lives. In this provocative spoken-word video, Buolamwini demonstrates the shortcomings of facial recognition software and argues that the developers of AI technology “often forget[] to deal with race, gender, and class” (1:03).
Joy Buolamwini, "AI, Ain't I A Woman?" YouTube.com, 18 June 2018
- On their website, the Algorithmic Justice League states their mission: “We want the world to remember that who codes matters, how we code matters, and that we can code a better future." Check out their research. Where do biases in AI come from? Why is it important to be deliberate about who is coding AI and the processes used to create AI algorithms? What companies, according to Buolamwini, are “making bets on artificial intelligence” (0:49)? Why is it important that these companies in particular pay attention to how they develop AI?
- Buolamwini asks, “Can machines ever see our queens as I view them? Can machines ever see our grandmothers as we knew them?” (1:15). She then gives several examples of how iconic black women are misidentified through AI technology. What is the most common way black women are misclassified by facial recognition software? Which two of Buolamwini’s examples do you find most persuasive in supporting her argument?
- Buolamwini makes her argument through an unconventional genre: spoken word poetry meshed with music, video, and images. As explained in chapter 9, writers make their point not just through what they say, but also how they say it. What was your reaction to how Buolamwini makes her argument? How is Buolamwini’s central argument enhanced by her choices to use spoken word, images, video, and music?
- Buolamwini’s title references Sojourner Truth’s famous speech, “Ain’t I a Woman?” (0:37). Read the transcript of Truth’s speech. Why do you think Buolamwini chose to use this speech as a touchstone in her argument? What connections do you see between Buolamwini’s and Truth’s arguments? Remember that authors often use their titles as metacommentary, hinting at the bigger “so what?” and “who cares?” aspects of their argument. How does Buolamwini’s title hint at the broader stakes of her argument?
According to Joy Buolamwini's video, artificial intelligence (AI) often fails to recognize people accurately across race, gender, skin color, and class. She critiques facial recognition software and advocates for people of color. Her piece, titled "AI, Ain't I a Woman?", shows how society and technology discriminate against women of color.
Posted by: Noshad Azad | 12/29/2022 at 12:19 PM
Based on what I have watched and learned, I agree that there is a real problem with artificial intelligence (AI) when it comes to identifying and assigning someone's gender. The danger of AI's biases is that these systems are not designed to be inclusive or diverse. Joy Buolamwini, the speaker in the video, explains how AI can perpetuate existing biases and discrimination, and how it has the potential to harm individuals and communities, asking whether machines will ever see her queens or grandmothers as she views them. She also describes her experience with facial recognition software and the ways it failed to recognize her face because of her skin color and gender, a widespread problem that affects many people of color, especially women.

The video also stresses the importance of diversity in the development of AI. Buolamwini makes the case that diverse teams are better equipped to create inclusive and unbiased AI systems, and that this is essential if the world wants a future that is fair and just for all. Because AI is not yet inclusive, development teams need diverse perspectives to ensure the technology works for everyone. The video reminds us that AI is not neutral: it can discriminate by race and gender, with negative consequences for individuals and communities. People should push for greater diversity and insist that AI be developed in an inclusive, unbiased way that benefits everyone.
Posted by: Van Lai | 06/05/2023 at 04:34 AM