As Andrew Moore (1997) explains, “AI is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.” GW’s Lorena Barba suggests that AI is somewhat of a misnomer: computers are not smart and cannot think, act, or learn like humans. Yet even without human-like intelligence, these machines mimic or reproduce certain human cognitive abilities by following an algorithm. The machine does not learn, but it does something that, if a human did it, could be characterized as learning.
AI tools are already broadly deployed in our society in ways many of us use every day--for example, computer navigation can give us directions while we travel, and auto-complete helps us enter text on smartphones. As Barba highlights, there are two features that characterize the newly available tools this resource discusses:
Large Language Models (LLMs) are neural networks trained on huge amounts of data that can not only analyze text but also generate natural language outputs. They can be fine-tuned on smaller data sets focused on particular domains.
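To make the idea of a model that is “trained on huge amounts of data” and can “generate natural language outputs” concrete, here is a deliberately toy sketch: a bigram model that counts which word follows which in a tiny training text, then samples continuations. This is a drastic simplification for illustration only; real LLMs are neural networks with billions of parameters, and the training text and function names below are invented for this example. The core task, however, is the same: predict a plausible next token.

```python
import random

# Toy stand-in for a "language model": learn word-to-next-word
# statistics from a training text, then generate by sampling.

training_text = (
    "the machine does not learn but the machine mimics learning "
    "the machine follows an algorithm"
)

def train_bigrams(text):
    """Record, for each word, the list of words observed to follow it."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Produce a short continuation by repeatedly sampling a next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

model = train_bigrams(training_text)
print(generate(model, "the"))  # e.g. "the machine ..." continued plausibly
```

The output reads like the training text without being a verbatim copy of it, which is the sense in which such a system “does something that, if a human did it, could be characterized as learning.”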
For overviews that describe how these tools work and what they are capable of, see:
The introduction to Adapting College Writing for the Age of Large Language Models
A GWU Conversation on OpenAI’s New ChatGPT
To experiment, sign up for a free account.
Keep in mind that ChatGPT, the most popular of these tools, is currently experiencing high traffic. You may get a message saying that it is at capacity right now. If so, you can opt to receive an email notification when the service is available or check back--it tends to be easier to access after 5 PM.
These recommended resources provide succinct suggestions for different kinds of responses, from strategies for preventing the use of AI tools to pedagogically valuable ways to incorporate them.
This guide from the University of Central Florida highlights 3 categories of responses depending on how much you wish to engage with the technology.
This resource from Critical AI focuses specifically on college writing and includes several recommendations.
GW’s Ryan Watkins offers several ways to update your syllabus policies, course objectives, and assignments.
These readings help contextualize AI tools within higher education environments:
Design assignments that are difficult to generate using AI tools
Design engaging assignments that matter to students and will motivate them to do their own work
Run your assignments through an AI tool to see what it generates, and use the information to decide whether to modify your assignment prompts
Focus on process: build students’ metacognitive reflection skills to encourage critical thinking about their own work
Ask students to submit a “disclosure of learning” with assignments where they describe which tools, resources, and people they used to help produce their work
Clarify academic integrity expectations, including whether tools are permitted, permitted with citation, or prohibited. GWU’s Student Rights & Responsibilities office provides a helpful guide.
See Classroom Policies for AI Tools and Some Example Language, which collect statements from several universities and organizations. These policies cover the spectrum from allowing these tools to attempting to prohibit them.
Accuracy: AI-generated writing can sound plausible and professional, but the examples it produces are not always accurate or truthful. For example, it can generate real-sounding citations that do not refer to any actual published work.
Privacy: Companies behind AI generators, such as OpenAI, do not protect user data, and their terms of service allow them to track personal information (including keystrokes) and sell data to third parties. Students who are concerned about this may not wish to open accounts or may wish to use an untraceable email address.
Ethics: It takes a great deal of human labor to develop generative AI tools, and that labor is often exploited or invisible.
Generators are trained on language and images from human creators who often did not give permission for their work to be used in this manner and are not able to profit from its use. Getty Images, for example, is suing Stability AI over unlicensed use of its images.
The work of fine-tuning AI tools is often outsourced to low-paid human workers--for example, Kenyan workers paid less than $2/hour reviewed auto-generated material containing violent, hateful, and abusive speech.
Bias: AI tools reproduce existing biases in the data sets they are trained on.
These tools will get better over time and with additional use, but they may eventually be monetized and become less available to students. ChatGPT is currently operating as a research beta but may, in the future, choose profitability over access.
SafeAssign, TurnItIn, and other plagiarism tools feed the beast: they create huge datasets of student writing that can be sold to help train AI software. One reason that tools like ChatGPT are so good at generating plausible student writing is that they have been trained on a large corpus of work from college students.
The cat is already out of the bag: it’s too late to stop these tools from growing in capability, but small individual acts of protest can help you feel like you are not contributing to the problem.
Systemic change is possible: if faculty make it known to their institutions that they will not use these expensive tools, they can dissuade institutions from subscribing to them.
As plagiarism-detection software companies attempt to incorporate AI detection, we must consider whether the solution to these challenges should involve more tools.
Eventually, higher education’s relationship to AI tools like ChatGPT might look more like its relationship to Wikipedia: something to consider and set parameters around, but not necessarily a fundamental threat to what we do.
Even if ChatGPT ends up not being powerful enough to affect work in your courses, or if you develop successful workarounds, it can feel like one more thing to manage and worry about at a time when many instructors are already exhausted and burnt out
What are you choosing to design for? Just as any plagiarism policy will not stop all students who intend to cheat, there is always the potential that some students will use tools like ChatGPT in ways you do not support. Countermeasures that might prevent some students from using AI tools might create barriers for other students or sow a climate of distrust.
Assessments that circumvent AI tools, like oral exams or in-class handwriting, can pose barriers for students with disabilities or those who experience test anxiety; multimodal assignments are pedagogically effective but must be designed with accessibility in mind.
In other words, ask yourself: are the solutions more damaging than the problems?
A math instructor learning about AI tools writes, “I asked ChatGPT for proofs of two theorems. The first proof was correct. But it is a well-known result that a student could look up in a textbook or, more likely, Google for it. But the second proof was wrong. The second theorem is also well known, but it occurred to me that I could end up in debates with students who don't know any better and believe ChatGPT. This is going to make giving out-of-class assignments very difficult. If we respond by assuming students have access to ChatGPT, and correspondingly make assignments more challenging so that they need ChatGPT's assistance (much like students need to use calculators for number-crunching assignments), it will put students who do not access ChatGPT at a disadvantage.”
How would you respond to this instructor if they wish to prevent student access to AI tools?
How would you respond to this instructor if they are open to using AI tools with students?
An instructor teaches undergraduate students in classes of 80-120, sometimes without TA support. Their go-to strategies for student engagement and accountability include short, auto-graded quizzes.
They want to stay ahead of the curve, adjust to new challenges, and innovate, but their workload is substantial. What would you suggest?
What could this instructor do if they want to attempt to prevent student use of AI tools?
How might this instructor use AI tools if they were open to doing so?