Responding to Generative Artificial Intelligence (AI) Tools

What are generative AI tools like ChatGPT, and how can I use them in my teaching? This guide defines these tools and points to a variety of strategies to help you adjust your teaching in response.

What are generative AI tools?

As Andrew Moore (2017) explains, “AI is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.” GW’s Lorena Barba suggests that “AI” is something of a misnomer: computers are not smart and cannot think, act, or learn like humans. Without being human-like in their intelligence, however, these machines mimic or reproduce certain human cognitive abilities by following an algorithm. The machine does not learn, but it does something that, if humans did it, we would characterize as learning.

AI tools are already broadly deployed in our society, and many of us use them every day--for example, navigation apps give us directions while we travel, and auto-complete helps us enter text on smartphones. As Barba highlights, two features characterize the newly available tools this resource discusses:

  • Generative AI is a branch of AI focused on creating new content, such as text (via tools like ChatGPT) and images (via tools like DALL-E). 

  • Large Language Models (LLMs) are neural networks trained on huge amounts of data that can not only analyze text but also generate natural-language output. They can be fine-tuned on smaller data sets focused on specific domains.

For overviews that describe how these tools work and what they are capable of, see:

How can I access AI tools to try them out?

To experiment, sign up for a free OpenAI account, then log in at chat.openai.com.

This comprehensive video tutorial shows both beginning and advanced ways to use ChatGPT. For more ideas, see The Practical Guide to Using AI to Do Stuff.

Keep in mind that ChatGPT, the most popular of these tools, is currently experiencing high traffic. You may get a message saying that it is at capacity right now. If so, you can opt to receive an email notification when the service is available or check back--it tends to be easier to access after 5 PM.
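
If you would rather experiment programmatically than through the web interface, the same models are available through OpenAI’s API. Here is a minimal sketch, assuming the openai Python package (version 1.x) and an API key stored in the OPENAI_API_KEY environment variable; the library’s interface changes over time, so consult OpenAI’s current documentation:

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Send a single prompt to a chat model and print the reply.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model behind the free ChatGPT tier
        messages=[
            {"role": "user", "content": "Explain photosynthesis in two sentences."}
        ],
    )
    print(response.choices[0].message.content)

Note that API usage is metered and billed separately from the free ChatGPT web account.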

I’m interested in learning more about the use (or abuse) of AI tools. What are some resources to help me explore?

These recommended resources provide succinct suggestions for different kinds of responses, from strategies for preventing the use of AI tools to pedagogically valuable ways to incorporate them:

These readings help contextualize AI tools within higher education environments:

What pedagogical strategies can help me respond?

  • Design assignments that are difficult to generate using AI tools

  • Design engaging assignments that matter to students and will motivate them to do their own work

  • Run your assignments through an AI tool to see what it generates, and use the information to decide whether to modify your assignment prompts (for one way to batch this, see the sketch after this list)

  • Focus on process: build students’ metacognitive reflection skills to encourage critical thinking about their own work

    • Scaffold writing assignments so that you can see students’ work develop over the course of the semester. This can familiarize you with your students’ voices, which will help you detect departures from their typical work

    • Ask students to explain their reasoning, not just to provide answers--e.g., “showing your work” when solving a math or science problem or describing the choices they made when drafting or revising 

    • Try a format like the I-Search Paper, which focuses on the process of finding information 

  • Ask students to submit a “disclosure of learning” with assignments where they describe which tools, resources, and people they used to help produce their work 

  • Develop assignments that ask students to use AI tools in thoughtful, creative ways
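
If you have many prompts to test-run (as suggested above), you can batch the process with a short script. The sketch below is illustrative only: it assumes the openai Python package (version 1.x), an API key in the OPENAI_API_KEY environment variable, and made-up assignment prompts and file names; pasting prompts into the ChatGPT web interface works just as well:

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Hypothetical assignment prompts; substitute your own.
    assignment_prompts = [
        "Write a 500-word essay on two causes of World War I.",
        "Explain the difference between mitosis and meiosis.",
    ]

    for i, prompt in enumerate(assignment_prompts, start=1):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # use whichever model your students can access
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        # Save each prompt/response pair for later review.
        with open(f"ai_response_{i}.txt", "w", encoding="utf-8") as f:
            f.write(f"PROMPT:\n{prompt}\n\nRESPONSE:\n{answer}\n")

Reviewing the saved outputs can tell you which prompts a model handles easily and which still require students’ own analysis.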

How can I talk about AI tools with my students?

What other issues might I consider?

General implications

  • Ethics: it takes a great deal of human labor to develop generative AI tools, and that labor is often exploited or invisible

    • Generators are trained on language and images from human creators who often did not give permission for their work to be used in this manner and are not able to profit from that use. Getty Images, for example, is suing Stability AI for unlicensed use of its photographs.

    • The work of fine-tuning AI tools is often outsourced to low-paid human workers--for example, Kenyan workers paid less than $2/hour reviewed auto-generated material containing violent, hateful, and abusive speech.

  • Bias: AI tools reproduce existing biases in the data sets they are trained on

  • Accuracy: AI-generated writing can sound plausible and professional, but the examples it produces are not always accurate or truthful. For example, it can generate real-sounding citations that do not refer to actually published work. As Jordan S. Carroll puts it, ChatGPT is “fluent, if not also correct.”

  • Privacy: AI generators, such as OpenAI’s tools, do not protect user data, and their terms of service allow them to track personal information (including keystrokes) and sell data to third parties. Students who are concerned about this may not wish to open accounts, or may wish to use an untraceable email address or a Google Voice phone number. You might also consider creating dummy accounts for your class with a password you can share; students could check these out the way one would sign out a library book.

  • Labor: high teaching loads, a reliance on contingent faculty, and overall instructor stress and burnout can create conditions where instructors are unable to get to know individual students as writers and/or do not have the space to devote to writing instruction and feedback. These conditions can make it more difficult to detect unauthorized use of AI tools and can distance students from the value of writing, which can drive them towards these tools. Advocating for smaller classes, better compensation, and other improved working conditions is part of disincentivizing unwanted use of AI generators.

Implications for teaching

  • These tools will get better over time and with additional use, but they may eventually be monetized and become less available to students. ChatGPT is currently operating as a free research preview but may, in the future, choose profitability over access.

  • SafeAssign, TurnItIn, and other plagiarism tools feed the beast: they create huge datasets of student writing that can be sold to help train AI software. One reason that tools like ChatGPT are so good at generating plausible student writing is that they have been trained on a large corpus of work from college students. 

    • The cat is already out of the bag: it’s too late to stop these tools from growing in capability, but small individual acts of protest can help you feel like you are not contributing to the problem

    • Systemic change is possible: if faculty make it known to their institutions that they will not use these expensive tools, they can dissuade institutions from subscribing to them 

    • As plagiarism-detection software companies attempt to incorporate AI detection, we must consider whether the solution to these challenges should involve more tools

  • Eventually, higher education’s relationship to AI tools like ChatGPT might look more like the relationship to Wikipedia: something to consider and set parameters around, but not necessarily a fundamental threat to what we do 

  • Even if ChatGPT ends up not being powerful enough to affect work in your courses, or if you develop successful workarounds, it can feel like one more thing to manage and worry about at a time when many instructors are already exhausted and burnt out

  • What are you choosing to design for? Just as no plagiarism policy will stop all students who intend to cheat, there is always the potential that some students will use tools like ChatGPT in ways you do not support. Countermeasures that prevent some students from using AI tools may make learning more difficult for other students or sow a climate of distrust

    • Assessments that circumvent AI tools, like oral exams or in-class handwriting, can pose barriers for students with disabilities or those who experience test anxiety.

      •  As our colleagues at George Mason’s Stearns Center put it, “avoid actions that over-prioritize ‘catching/preventing cheating’ while placing undue stresses on equity, accessibility, and innovative thinking.”

    • Multimodal assignments are pedagogically effective but must be designed with accessibility in mind

    • In other words, ask yourself: are the solutions more damaging than the problems? Will your relationship with students be negatively impacted by your response to AI tools?

  • Whether you plan to embrace AI tools, are concerned about them, or a little bit of both, consider: do you want to reorganize your teaching around the availability of these tools? It is okay not to make any adjustments at all!

What assignments have colleagues used to successfully and critically engage students with AI tools?

Consider assignment structures that ask students to: 1) generate a prompt for an AI tool using a template you provide; 2) critically engage with the tool’s output by annotating and evaluating its effectiveness; 3) transform the AI output by making it their own. 

  • Emily Pitts Donahoe’s example highlights the need to provide students guidance and feedback as they learn to use these new tools.

Consider assignments that ask students to refine and analyze the prompts they are using to generate AI outputs. Ethan Mollick outlines a series of approaches for refining prompts.
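
For illustration, here is a small sketch of what refinement might look like, with hypothetical prompts (not drawn from Mollick’s guide) in which the refined version adds a persona, an audience, and constraints; it assumes the openai Python package (version 1.x) and an OPENAI_API_KEY environment variable:

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Hypothetical prompts: the refined version adds a persona, an audience,
    # a length limit, and an explicit task structure.
    draft = "Write about the causes of the French Revolution."
    refined = (
        "You are a historian writing for first-year undergraduates. In about "
        "300 words, explain two economic causes of the French Revolution, and "
        "end with a question that invites class discussion."
    )

    for label, prompt in [("draft", draft), ("refined", refined)]:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} prompt ---")
        print(reply.choices[0].message.content)

Students could annotate the two outputs side by side and explain which changes to the prompt produced which changes in the response.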

What scenarios can I use to practice responding to AI tools and to discuss these tools with my colleagues?

A math instructor learning about AI tools writes, “I asked ChatGPT for proofs of two theorems. The first proof was correct. But it is a well-known result that a student could look up in a textbook or, more likely, Google for it. But the second proof was wrong. The second theorem is also well known, but it occurred to me that I could end up in debates with students who don't know any better and believe ChatGPT. This is going to make giving out-of-class assignments very difficult. If we respond by assuming students have access to ChatGPT, and correspondingly make assignments more challenging so that they need ChatGPT's assistance (much like students need to use calculators for number-crunching assignments), it will put students who do not access ChatGPT at a disadvantage.”

  • How would you respond to this instructor if they wish to prevent student access to AI tools?

  • How would you respond to this instructor if they are open to using AI tools with students? 

An instructor teaches undergraduate students in classes of 80-120, sometimes without TA support. Their go-to strategies for student engagement and accountability include short, auto-graded quizzes. 

They want to stay ahead of the curve, adjust to new challenges, and innovate, but their workload is substantial. What would you suggest?

  • What could this instructor do if they want to attempt to prevent student use of AI tools?

  • How might this instructor use AI tools if they were open to doing so?

“I am particularly concerned about how we teach writing to undergraduates. How do we ensure that students learn from the AI tools rather than using them as a crutch or a cheat? And how do I fairly evaluate students' written product? Teaching writing is one of the most important things we need to do. Many students write poorly which also means they think less clearly than they should or could. Asking students to write about their personal experiences does not allow me to assess whether they have mastered core bodies of knowledge in my field (international politics). Asking them how they feel about a topic does not tell me whether they understand the substance and logic of various theories of war in political science.”

  • What could this instructor do if they want to attempt to prevent student use of AI tools?

  • How might this instructor use AI tools if they were open to doing so?


A student whose first language is not English writes a science essay in their native language. They then use Google Translate to translate it into English. Finally, they use an AI tool like ChatGPT to smooth the language and ensure that the essay is written in an academic tone. They make additional edits and revisions before submitting the work, and they do not acknowledge in their submission that they used Google Translate or ChatGPT. (Via Martin Compton)

  • Would you consider this cheating or plagiarism?

  • How would you respond?
