AI for Teaching and Learning

Five Things to Know About Generative AI & Technical Literacy

Author: Amanda Leary

Artificial intelligence technologies are pervasive in our daily lives, powering everything from our Netflix recommendations to our Instagram filters and bank fraud alerts. But generative models, and large language models (such as ChatGPT) more specifically, have captured the attention of higher education over the last couple of years. LLMs and other generative AI tools are transforming the landscape of education, with incredible potential—and limitations—for classroom use. With so many tools and systems, it can be overwhelming to figure out even where to start.

Here are five things we think you should know about using generative AI.

  1. The field is rapidly and constantly evolving. New tools are created daily, each with its own strengths and weaknesses, and existing tools are continuously improving. In the past 18 months, generative AI has become more sophisticated and better at mimicking human language, image generation tools have grown increasingly photorealistic, and even video generation, such as OpenAI’s Sora, has evolved by leaps and bounds. Staying informed, whether through newsletters, workshops and events, research, or experimenting on your own, will help you keep pace with the challenges and opportunities of generative AI in your personal and professional life.

  2. LLMs don’t think; they guess based on data. Understanding how generative AI technologies work is key to using them effectively, efficiently, and ethically, but you don’t need a degree in computer science to use ChatGPT. Essentially, LLMs are very good guessers: they draw on the corpus of data they were trained on to predict which words a human is statistically most likely to use next in a sequence. AI isn’t human; it isn’t thinking, and it has no model of truth. This is why LLMs sometimes make things up. Left to itself, an LLM reproduces the narratives and patterns in its human-made training data, so a fluent, statistically plausible answer can still be false. (The toy sketch after this list makes the guessing idea concrete.)

  3. No two generative AI models are alike. Take LLMs, for example: Claude, ChatGPT, Copilot, Gemini, and Llama are all generative language models, but they are built on different underlying models and training data, which makes them suitable for different tasks. Copilot, Gemini, and ChatGPT are especially well-suited to creative and mixed-media compositions, whereas Llama and Claude may be better suited to more sensitive tasks and formal, academic writing. And depending on which version of a system you use, you may be working with a different language model. OpenAI recently rolled out limited access to its newest model, GPT-4o, to free users, but before that, the free platform ran on GPT-3.5, leaving the more advanced model to paid users. You may encounter paywalls for more sophisticated tools, so pay close attention to which version of a model you’re using; outputs will vary in accuracy, human likeness, and privacy protections.

  4. Garbage in, garbage out. Prompts matter! The quality of the input you give a generative tool is a strong indicator of the quality of the output you’ll receive. To get the most out of your interaction, remember:

        Context: relevant background information or situational factors
        Role: the persona you want the AI to adopt or act as
        Action: the action or output you want
        Format: the desired structure or style of the output
        Target: the intended audience or purpose of the output

    Example: You are an experienced instructor [Role] at a top-20 university [Context]. Generate [Action] a list [Format] of 10 icebreakers [Action] with instructions [Format] that I can use in my lab on the first day of class [Target].

    The more detailed your prompt, the more nuanced your output will be, and the closer it will come to what you wanted. But generative AI models are iterative; they invite conversation (the chat in ChatGPT). Think of the first output as a draft: you can continue to refine the resulting text (or image) with additional prompting. (A sketch after this list shows one way to assemble these prompt elements when calling a model from code.)

  5. Data privacy standards vary across tools and paywalls. It’s important to know how the information you put into a generative AI platform—through a created profile and in your chats with the LLM—will be protected and retained, if at all. You are responsible for whatever output is generated in response to your prompts, so be mindful of how you use AI; familiarize yourself with the privacy policy of any tool you’re planning to use and be a good steward of your own data. Most systems have the option to opt out of having your data collected for model training purposes.
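
To make the guessing in point 2 concrete, here is a purely illustrative toy sketch in Python. It is not how any real model is built: the words and probabilities below are invented, and an actual LLM works over tokens and billions of learned parameters. It simply shows the core mechanic of picking the next word according to learned probabilities.

    # Toy illustration of next-word prediction; not a real language model.
    import random

    # Invented probabilities for what might follow "The students turned in their ..."
    next_word_probs = {
        "homework": 0.55,
        "essays": 0.25,
        "projects": 0.15,
        "spaceships": 0.05,  # unlikely, but never impossible
    }

    def predict_next_word(probs):
        # Sample one word in proportion to its probability,
        # much as an LLM samples its next token.
        words = list(probs.keys())
        weights = list(probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    print("The students turned in their", predict_next_word(next_word_probs))

Because the choice is probabilistic, a fluent but wrong continuation ("spaceships") is always possible; hallucinations are this same mechanism playing out at a much larger scale.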
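
The Context, Role, Action, Format, Target structure from point 4 can also be used when calling a model from code rather than a chat window. The minimal sketch below assumes OpenAI's Python library (pip install openai) and an OPENAI_API_KEY environment variable; the model name is only an example, and other providers' libraries will differ.

    # Minimal sketch: assemble a structured prompt and send it to a model.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt_parts = [
        "You are an experienced instructor",                   # Role
        "at a top-20 university.",                              # Context
        "Generate a list of 10 icebreakers",                    # Action
        "with instructions",                                    # Format
        "that I can use in my lab on the first day of class.",  # Target
    ]
    prompt = " ".join(prompt_parts)

    response = client.chat.completions.create(
        model="gpt-4o",  # substitute whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Because these models are iterative, you can keep appending follow-up messages to the same conversation to refine the first draft, just as you would in the chat interface.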

Learn More:

AI for Teaching and Learning Videos
“AI Overview and Definitions” Resource Article