A Professor’s Critically Engaged AI Course Policy

As AI-assisted writing tools such as OpenAI’s ChatGPT have proliferated, some faculty have responded by developing their own AI course policies. A well-written policy, included on a syllabus or provided to students as a standalone document, can help clarify expectations and consequences around students’ use of AI.

SUNY Geneseo’s Teaching and Learning Center offers a number of templates faculty can adopt or modify in developing a policy, ranging from “the prohibitive statement” barring all use of generative artificial intelligence to the “open-use statement” encouraging experimentation with AI as long as it’s accompanied by full disclosure.

The templates prohibiting or restricting students’ use of AI tend to justify themselves, understandably, by invoking the potential of automated writing tools to undermine students’ development as critical and independent thinkers—precisely what (we hope) they’ve enrolled in college to become.

In two of her environmental studies courses, Maywa Montenegro, currently an assistant professor of agroecology and critical technology studies at UC Santa Cruz, offers her students a more comprehensive and detailed justification for her own policy, which boils down to the following: “In this class, I ask that you complete your work without using AI-generated sources to brainstorm, augment, think through, or write your assignments.”

Montenegro, too, is worried about the potential impact of AI on her students’ critical thinking skills: “Last but hardly least,” she tells them, “your own learning is paramount. You have invested time, energy, and money in an undergraduate education. I imagine this investment is meaningful for you, and it saddens me to see that students may be getting shortchanged of an authentic education because ChatGPT appears to be a magical way to succeed.”

But what sets her policy apart from the standard templates is its invitation to students to engage with the context in which new AI tools have emerged: Who created ChatGPT? Who funds it? Why do some people express concern that AI is racist? Her document contains links to nearly two dozen resources students can consult to educate themselves further about AI. It’s not so much a policy, in the end, as an effort to empower. Rather than regarding universal AI adoption as something outside our control, we can “ask questions, demand accountability from AI developers and regulators, and advance technologies that work on behalf of racially minoritized communities, protect environmental health, and safeguard workers’ rights. We can talk about anticolonial and antiracist possibilities given the overwhelming demands for labor, water, energy, compute power, development frameworks, and models required by AI.”

Even faculty more AI-tolerant than Montenegro may wish to borrow from this document, or point their students to it, as a way to get them thinking about more than just their own intellectual development (important as that is) when weighing their choices around AI. Equally worth reading is this interview with Montenegro in The Markup, conducted by journalist Thomas Apodaca, who has also written there about his efforts to use generative AI ethically as a “journalism engineer.”

Image credit: “it's a big machine” by gin_able, CC BY-NC 2.0.