AI for Teaching and Learning

Building Critical AI Literacy

“Our students can’t lead if they don’t understand how the world works technologically ... You want to engage, but you want to be careful. You want to guard against hubris, but you also want to explore ... If you don’t explore, the world is going to go on exploring without you.”
—John Behrens1

Learning to use AI responsibly will be a key skill for students moving forward. Just as acceptable use varies across our individual classrooms, it will vary across students’ future workplaces. Incorporating AI into the classroom gives students the opportunity to practice with it and to develop the critical thinking skills, such as transfer and evaluation, they will need to decide when and how to use generative AI. The classroom is also an important space for helping students develop critical AI literacy: an understanding of AI’s principles, limitations, ethical considerations, and social impact that allows them to use it responsibly. The best way to do that? Encourage its use, but expose its limitations.

Accuracy: AI chatbots, even those with a real-time connection to the internet, get things wrong. This is due, in part, to how they work: rather than consulting a store of verified facts, they generate responses by predicting the most statistically likely sequence of words based on patterns in their training data. Generative AI can’t think, and it can’t distinguish between fact and fiction, right and wrong. Sometimes, it just makes things up. Part of developing critical AI literacy is adopting a hermeneutic of suspicion: don’t believe everything you read. Have students fact-check or critique generative AI’s output to demonstrate its inaccuracies.

Bias: The output generated by AI is only as good as the data it was trained on. This means that platforms like ChatGPT and Stable Diffusion reproduce the biases, stereotypes, and misinformation present in their training corpora. And because AI cannot distinguish between right and wrong, it may not recognize its own bias, either. Worryingly, new research suggests that these biases can carry over to the people who use generative AI. Have students prompt a chatbot to present both sides of an argument, or ask it explicitly to point out where its output may reflect bias.

Intellectual Property and Data Privacy: There is no standard data-management policy across AI systems. Prior to March 1, 2023, anything submitted to OpenAI’s platforms was incorporated into the models’ training data. Google Gemini stores chatbot activity for 18 months by default, and conversations may undergo human review. It is important for you and your students to be aware of how your data will be stored and used. AI also has a serious intellectual property problem. When AI output infringes upon the intellectual property of artists and writers by mimicking their style or technique to produce “derivative” works, the original creators have few legal protections. Equally unclear is who owns generative AI output. Instructors and students alike should be mindful of these privacy and ownership concerns whenever they use AI.

You can see how ChatGPT and Google Gemini themselves describe their knowledge limitations on their respective sites:

ChatGPT Knowledge Limitations | Google Gemini Knowledge Limitations

Bearing these limitations in mind, you can still leverage generative AI to help students develop critical AI literacy and engage in higher-order thinking and deep learning.

Use AI to help students practice disciplinary ways of thinking. You can have students annotate and edit output from generative AI tools to identify fallacies or gaps in its subject-area expertise. Try re-running the same prompt multiple times and have students look for patterns (see the sketch below). Then, show them how an expert in the field might craft a response and give students the opportunity to practice writing, comparing their attempts to the AI output. Or, have them edit the output to better reflect disciplinary norms and to include more specific content, examples, or structures. You can do these activities with text or image generators.
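If you want to generate those repeated responses ahead of class rather than live in a chat window, a few lines of script will do it. This is a minimal sketch, assuming the openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and an illustrative model name and prompt; any chat model and any disciplinary prompt would work the same way.

```python
# A minimal sketch: re-run one prompt several times and collect the outputs
# so students can compare them for patterns. Assumes the openai package (v1+)
# is installed and OPENAI_API_KEY is set; the model name and prompt are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Summarize the main causes of the French Revolution in one paragraph."

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # swap in whichever model you use
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,       # nonzero temperature keeps the runs varied
    )
    print(f"--- Run {run + 1} ---")
    print(response.choices[0].message.content)
```

Printing the runs side by side makes it easier for students to notice what stays stable across generations (often structure and stock phrasing) and what changes (often the specifics).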

You might also have students test the limits of the AI’s subject-area expertise by fact-checking the output. Have students prompt a chatbot to respond as a notable figure from your discipline (or even just as an expert) to see what it does and does not know. If you are preparing such an exercise in advance, you can also pin the persona once rather than restating it in every prompt, as in the sketch below.
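Chat APIs let you fix the persona in a system message so that every student question is answered in character. Another minimal sketch, under the same assumptions as above; the persona and question are placeholders:

```python
# A minimal sketch: pin an expert persona in the system message, then let
# students probe what the model does and does not know. Same assumptions as
# above; the persona and question are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": ("You are Marie Curie. Answer as she would, and say so "
                     "plainly when a question falls outside your expertise.")},
        {"role": "user",
         "content": "What do you make of modern radiation-safety standards?"},
    ],
)
print(response.choices[0].message.content)
```

Deliberately anachronistic or out-of-scope questions, like the one above, work well here: they give students a concrete way to catch the model answering confidently beyond what the persona could plausibly know.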

Check it out: You can find other assignments and activities that use generative AI and teach AI literacy here.


1 Quoted in Margaret Fosmoe, “Students, Faculty Cautiously Embrace AI as a Supplementary Learning Tool,” Notre Dame Magazine (Winter 2023–24).