At a Glance
As AI tools like ChatGPT gain popularity on campus, instructors face new questions around academic integrity. Some worry that they could inadvertently give higher grades to students who use AI for coursework than to those who don’t. Others are concerned that reliance on AI tools could hinder students’ development of critical thinking skills. Whether or not you integrate these technologies into your courses, it’s important to reflect on how you’ll address them with students. How can you foster academic honesty and critical thinking when every student has easy access to generative AI?
In response to these concerns, some companies have developed “AI detection” software that aims to flag AI-generated content in student work. However, AI detection software is far from foolproof: it has high error rates and can lead instructors to falsely accuse students of misconduct (Edwards, 2023; Fowler, 2023). OpenAI, the company behind ChatGPT, even shut down its own AI detection tool because of poor accuracy (Nelson, 2023).
In this guide, we’ll go beyond AI detection software. We’ll discuss how clear guidelines, open dialogue with students, creative assignment design, and other strategies can promote academic honesty and critical thinking in an AI-enabled world.
Set Clear Policies and Expectations
It’s important to be clear with your students about if, when, and how they should use AI in your courses (Eberly Center, n.d.; Schmidli et al., 2023). Here are some potential strategies:
- Announce your policies on AI use both in person and in writing. Discuss these policies with your students in class at the beginning of the semester, and include them in your syllabus and course site (as recommended in MIT Sloan’s Generative AI Guiding Principles) so students can easily refer back to your expectations (Teaching + Learning Lab, n.d.-b).
- Provide definitions of key terms like plagiarism and cheating in the context of generative AI tools.
- Share clear examples of appropriate versus inappropriate AI applications for specific tasks (Eberly Center, n.d.; Columbia Center for Teaching and Learning, n.d.). For example, you might allow students to use ChatGPT to brainstorm ideas or review grammar, but not to generate significant portions of essay content.
Setting clear expectations from the start can help you guide appropriate use of generative AI tools. Furthermore, aligning your policies and practices with MIT Sloan’s Values can foster a culture of academic honesty and ethical leadership even as new technologies emerge.
Promote Transparency and Dialogue
In addition to transparent policies, you can support academic integrity through open conversations with your students. Consider these possible approaches:
- Hold class discussions where students can ask questions and share their perspectives about AI tools (Stanford Center for Teaching and Learning, 2023).
- Explain the rationale behind your AI policies so students understand that the goal is to facilitate meaningful learning, not just to enforce compliance (Teaching + Learning Lab, n.d.-a).
- If your students will be using generative AI tools, establish clear expectations around how they’ll acknowledge and cite their use of these technologies (McAdoo, 2023). Note that OpenAI’s terms of use state that users may not “Represent that Output was human-generated when it was not.”