
Responding to Generative Artificial Intelligence (AI) Tools

What are generative AI tools like ChatGPT, and how can I use them in my teaching? This guide defines these tools and points to a variety of strategies to help you adjust your teaching in response.

What are generative AI tools?

As Andrew Moore (1997) explains, “AI is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.” GW’s Lorena Barba suggests that “AI” is something of a misnomer: computers are not smart and cannot think, act, or learn like humans. Without being human-like in their intelligence, however, these machines mimic or reproduce certain human cognitive abilities by following an algorithm. The machine does not learn, but it does something that, if a human did it, would be characterized as learning.

AI tools are already broadly deployed in our society in ways many of us use every day--for example, computer navigation gives us directions while we travel, and auto-complete helps us enter text on smartphones. As Barba highlights, two features characterize the newly available tools this resource discusses:

  • Generative AI is a branch of AI focused on creating new content, such as text (via tools like ChatGPT) and images (via tools like DALL-E). 

  • Large Language Models (LLMs) are neural networks trained on huge amounts of data that can not only analyze texts but also generate natural-language outputs. They can be fine-tuned on smaller data sets that focus on particular domains.
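
For instructors who want a concrete sense of what “generating text from a prompt” means, the sketch below shows one way to query an LLM programmatically. It is a minimal illustration, not an official recipe: it assumes the openai Python package (version 1.x) and an account with an API key, and the model name and prompt are placeholders.

    # Minimal sketch: ask an LLM to generate text from a prompt.
    # Assumes the `openai` package is installed and the OPENAI_API_KEY
    # environment variable is set; "gpt-3.5-turbo" is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarize the causes of World War I in two sentences."}],
    )
    print(response.choices[0].message.content)

The same interaction is available, without any code, through the ChatGPT web interface described below.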

 

For overviews that describe how these tools work and what they are capable of, see:

How can I access AI tools to try them out?

To experiment, sign up for a free account.

Keep in mind that ChatGPT, the most popular of these tools, is currently experiencing high traffic. You may get a message saying that it is at capacity right now. If so, you can opt to receive an email notification when the service is available or check back--it tends to be easier to access after 5 PM. 


 

I’m interested in learning more about the use (or abuse) of AI tools. What are some resources to help me explore?

These recommended resources provide succinct suggestions for different kinds of responses, from strategies for preventing the use of AI tools to pedagogically valuable ways to incorporate them:

These readings help contextualize AI tools within higher education environments:

What pedagogical strategies can help me respond?

Design assignments that are difficult to generate using AI tools

Design engaging assignments that matter to students and will motivate them to do their own work

Run your assignments through an AI tool to see what it generates, and use that information to decide whether to modify your assignment prompts (see the sketch below)
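
If you would rather test several assignment prompts at once than paste them one by one into a chat window, a short script can batch the work. This is a hypothetical sketch under the same assumptions as the earlier example (the openai Python package and an API key); the prompts shown are placeholders for your own assignment text.

    # Hypothetical batch test: run each assignment prompt through the
    # model and print what it generates, so you can judge the output.
    from openai import OpenAI

    client = OpenAI()
    prompts = [
        "Write a 500-word essay on the causes of the French Revolution.",
        "Explain the difference between mitosis and meiosis.",
    ]
    for i, prompt in enumerate(prompts, start=1):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- Prompt {i} ---")
        print(response.choices[0].message.content)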

Focus on process: build students’ metacognitive reflection skills to encourage critical thinking about their own work

  • Scaffold writing assignments so that you can see students’ work develop over the course of the semester. This can familiarize you with your students’ voices, which will help you notice departures from their typical work.
  • Ask students to explain their reasoning, not just to provide answers--e.g., “showing your work” when solving a math or science problem, or describing the choices they made when drafting or revising.

Ask students to submit a “disclosure of learning” with assignments, in which they describe which tools, resources, and people they used to help produce their work.

How can I talk about AI tools with my students?

Clarify academic integrity expectations, including whether AI tools are permitted, permitted with citation, or prohibited. GW’s Student Rights & Responsibilities office provides a helpful guide to setting these expectations.

See Classroom Policies for AI Tools and Some Example Language, which collect statements from several universities and organizations. These policies cover the spectrum from allowing these tools to attempting to prohibit them.

What other issues might I consider?

General implications

  • Accuracy: AI-generated writing can sound plausible and professional, but the content it produces is not always accurate or truthful. For example, it can generate real-sounding citations that do not refer to any actually published work.

  • Privacy: companies that offer AI generators, such as OpenAI, do not protect user data, and their terms of service allow them to track personal information (including keystrokes) and sell data to third parties. Students who are concerned about this may prefer not to open accounts or may wish to use an untraceable email address.

  • Ethics: developing generative AI tools takes a great deal of human labor, and that labor is often exploited or invisible.

    • Generators are trained on language and images from human creators who often did not give permission for their work to be used in this manner and cannot profit from its use. Getty Images, for example, is suing Stability AI over the unlicensed use of its photographs.

    • The work of fine-tuning AI tools is often outsourced to low-paid human workers--for example, Kenyan workers paid less than $2/hour reviewed auto-generated material containing violent, hateful, and abusive speech.

  • Bias: AI tools reproduce existing biases in the data sets they are trained on.

Implications for teaching

  • These tools will get better over time and with additional use, but they will eventually be monetized and become less available to students. ChatGPT is currently operating as a research beta but may, in the future, choose profitability over access.

  • SafeAssign, Turnitin, and other plagiarism-detection tools feed the beast: they create huge datasets of student writing that can be sold to help train AI software. One reason that tools like ChatGPT are so good at generating plausible student writing is that they have been trained on a large corpus of work from college students.

    • The cat is already out of the bag: it’s too late to stop these tools from growing in capability, but small individual acts of protest can help you feel that you are not contributing to the problem.

    • Systemic change is possible: if faculty make it known to their institutions that they will not use these expensive tools, they can dissuade institutions from subscribing to them.

    • As plagiarism-detection software companies attempt to incorporate AI detection, we must consider whether the solution to these challenges should involve more tools.

  • Eventually, higher education’s relationship to AI tools like ChatGPT might look more like its relationship to Wikipedia: something to consider and set parameters around, but not necessarily a fundamental threat to what we do.

  • Even if ChatGPT ends up not being powerful enough to affect work in your courses, or if you develop successful workarounds, it can feel like one more thing to manage and worry about at a time when many instructors are already exhausted and burnt out.

  • What are you choosing to design for? Just as no plagiarism policy will stop every student who intends to cheat, there is always the potential that some students will use tools like ChatGPT in ways you do not support. Countermeasures that prevent some students from using AI tools may create barriers for other students or sow a climate of distrust.

    • Assessments designed to circumvent AI tools, such as oral exams or in-class handwritten work, can pose barriers for students with disabilities or test anxiety; multimodal assignments are pedagogically effective but must be designed with accessibility in mind.

    • In other words, ask yourself: are the solutions more damaging than the problems?

What scenarios can I use to practice responding to AI tools and to discuss these tools with my colleagues?

A math instructor learning about AI tools writes, “I asked ChatGPT for proofs of two theorems. The first proof was correct, but it proves a well-known result that a student could look up in a textbook or, more likely, find via Google. The second proof was wrong. The second theorem is also well known, but it occurred to me that I could end up in debates with students who don’t know any better and believe ChatGPT. This is going to make giving out-of-class assignments very difficult. If we respond by assuming students have access to ChatGPT, and correspondingly make assignments more challenging so that they need ChatGPT’s assistance (much as students need calculators for number-crunching assignments), it will put students who do not have access to ChatGPT at a disadvantage.”

  • How would you respond to this instructor if they wish to prevent student access to AI tools?

  • How would you respond to this instructor if they are open to using AI tools with students? 

An instructor teaches undergraduate students in classes of 80-120, sometimes without TA support. Their go-to strategies for student engagement and accountability include short, auto-graded quizzes. 

They want to stay ahead of the curve, adjust to new challenges, and innovate, but their workload is substantial. What would you suggest?

  • What could this instructor do if they want to attempt to prevent student use of AI tools?

  • How might this instructor use AI tools if they were open to doing so?
