AI for Teaching and Learning

Five Things to Know About Generative AI & Critical AI Literacy

Author: Amanda Leary

The rapid rise of generative artificial intelligence technologies and their increasingly pervasive use have, for many, surfaced deeply troubling moral and ethical issues. As a Catholic institution, we remember Pope Francis’s call to cast a critical eye on technology:

“We have to accept that technological products are not neutral, for they create a framework which ends up conditioning lifestyles and shaping social possibilities along the lines dictated by the interests of certain powerful groups. Decisions which may seem purely instrumental are in reality decisions about the kind of society we want to build.” (Laudato Si’: On Care For Our Common Home, Encyclical, 24 May 2015)

The future is undeniably AI-driven; learning how to use these technologies is important to keep pace in industry, education, research, and more. As educators, we have a responsibility to prepare ourselves and our students for this future by learning and teaching with AI. But, as Pope Francis reminds us, technology—including artificial intelligence—is not neutral. Alongside the technical literacy needed to succeed, there is also an obligation to develop critical literacy—knowing not just how to use AI, but when, and when not, to use it.

This critical AI literacy—“[t]he active awareness of affordances and limitations of AI technologies … an extension of existing critical thinking and digital literacies that seeks to help students develop a critical awareness of generative AI models, how they work, why their content should not be treated as a single source of truth and what their social, intellectual and environmental implications might be” (Anna Verges Bausili and Maria O’Hara, “How can we develop students’ critical AI literacy?”, FutureLearn)—is essential to being an ethical and responsible user of AI.

Here are five things we think you should know.

  1. The relationship between humans and technology is changing. AI can accomplish tasks that previously required human intervention, raising questions about the nature of creativity, knowledge, decision-making, and the role of human work. As AI technologies become more advanced and autonomous, it is important to be mindful of what work we are asking AI to do and why, as well as the potential consequences of offloading certain tasks to artificial intelligence.
  2. AI systems aren’t neutral. AI is the product of human design and decision-making, trained on human-generated data representing a range of perspectives, values, and ideologies. Newer models even have live access to the internet, retrieving content in response to prompts. Because these systems work by identifying patterns and have no understanding of truth, their outputs can amplify the biases and assumptions present (or missing) in the data and, in turn, perpetuate racial, ethnic, religious, linguistic, gender, class, and other social biases, or spread false or incomplete information. Relying on AI as a single, authoritative source can lead to discriminatory outcomes.
  3. Using AI has a cost. Building and maintaining AI models requires significant energy and resources, from physical infrastructure to carbon emissions. By some estimates, a single ChatGPT query uses almost 10 times more energy than a Google search. But the impact of artificial intelligence is not limited to the model itself; applications of AI, from fossil fuel extraction to fast fashion marketing, have consequences for the environment. These systems also require vast amounts of human labor to develop. Data annotators and content moderators work through sensitive content in order to make AI systems safer, yet this work is often invisible and underpaid.
  4. There is an intellectual property problem. This problem extends in two directions. On the one hand, who owns AI-generated content—the user, the company, or the AI itself? Is the output of AI copyrightable? Patentable? These are ongoing questions with case-by-case answers. On the other hand, a large amount of copyrighted material appears in AI training data. When AI output infringes on the intellectual property of artists and writers by mimicking their style or technique to produce “derivative” works, there are few legal protections in place for the original creators. AI companies are facing lawsuits and public scrutiny over these concerns, including copyright infringement, fair use, and rights of publicity.
  5. AI presents equity solutions—and also equity concerns. With its ability to identify patterns in vast quantities of data, AI has the power to improve outcomes in areas such as social and economic equity, accessibility, criminal justice, the environment, education, and healthcare. However, these applications are not evenly distributed; they still rely on human data, interpretation, and action, and they have the potential to amplify systemic inequities. Unequal access to AI, and unequal ability to use it ethically and effectively, could further disadvantage marginalized communities by exacerbating the digital divide. Understanding how AI systems are developed and deployed, and the implications of their use, is an essential part of knowing when and how to use them ethically and equitably.

Learn More:

  - AI for Teaching and Learning Videos
  - “Building Critical AI Literacy” Resource Article