At a Glance

This guide offers practical strategies to help MIT Sloan faculty responsibly integrate generative AI tools into their teaching. You’ll find specific recommendations on safeguarding student privacy, addressing AI limitations in the classroom, encouraging critical thinking about AI, and developing your own AI literacy. Whether you’re an AI novice or expert, our goal is to provide tailored guidance so you can mitigate AI’s risks while harnessing its power to create more dynamic, hands-on learning experiences for students.

Teaching responsibly with generative AI means protecting your own data, your students’ data, and MIT’s data when you’re using publicly available AI tools. To learn more, see Navigating Data Privacy.

1. Be Transparent with Students

If you’re thinking of using an AI tool in your course, it’s important to let students know in advance. That’s especially important if they’ll need to share any personal information (such as a phone number to create an account) to access the tool. Here are some steps you can take to set the scene:

  • Provide context. Before introducing an AI tool, offer a brief overview of how it works and why it will benefit students’ learning.
  • Highlight the importance of data privacy. Educate your students on generative AI data privacy practices. Distribute readings or resources that delve into data privacy in AI. Consider sharing articles from reputable business journals or case studies that discuss real-world implications of data privacy breaches.
  • Offer alternatives. Always provide students with an alternative if they’re uncomfortable with sharing their data. This could be another tool, a different assignment, or a manual approach to achieve the same learning outcome.

By taking these steps, you’re not just asking students to use a tool; you’re preparing them to make informed decisions as future business leaders.

Microsoft Copilot provides the MIT Sloan community with data-protected access to the GPT-4 and DALL-E 3 models. Chat data is not shared with Microsoft or used to train their AI models. Access Microsoft Copilot by logging in with your MIT Kerberos account at https://copilot.microsoft.com/. To learn more, see What is Microsoft Copilot (AI Chat)?

2. Mitigate Known Limitations

Generative AI tools have limitations that can cause problems if not addressed proactively. One challenge is that AI can produce content that seems accurate but isn’t – we call this a “hallucination” (O’Brien, 2023). For example, ChatGPT might provide a compelling yet incorrect explanation of a business model. To address hallucinations, cross-reference AI-generated content with trusted sources, such as the MIT Libraries’ resources, and treat all AI-generated content as unverified until you’ve validated it.

AI can also produce output that reflects the harmful biases present in its training data. This can lead to skewed representations. For example, when an MIT student provided her photo to AI image creator PlaygroundAI and asked it to generate a “professional LinkedIn profile photo,” the AI returned an image of her with lighter skin and blue eyes (Buell, 2023). Racial and other biases are a recurring issue with AI tools (Sher & Benchlouch, 2023). To mitigate the impact of biased AI output, consider these approaches:

  • Use inclusive content. Actively seek out teaching materials that represent a wide range of perspectives. For example, consider integrating resources from Harvard Business Publishing (HBP)’s Diversity, Equity, and Inclusion: Resources for Educators.
  • Foster open dialogue. Create a classroom environment where students feel empowered to discuss and challenge any biases they observe in AI outputs.

Proactively addressing AI’s limitations can help you responsibly harness its potential to support your teaching. For a deeper understanding of the biases, misinformation, and errors associated with generative AI tools, see our article When AI Gets It Wrong: Addressing AI Hallucinations and Bias.

3. Guide Student Engagement

Consider these strategies to help maximize AI’s positive impact on your course and mitigate potential challenges.

  • Collaborate with students on AI decisions. Students are our most important stakeholders. Consider involving your students in the decision-making process about how and when to use AI for teaching. Collect student feedback throughout the term so you can hear their perspectives on the AI tools they’ve been using.
  • Try before you teach. Before integrating a new tool into your teaching, take the time to explore its features in-depth. If you’ll encourage students to use this tool, make sure you’ve tested its outputs and functionalities comprehensively enough to guide them effectively.
  • Ensure accessibility. Make sure any tool that you’re encouraging students to use is compatible with screen readers, voice commands, and other assistive technologies. Prioritize free AI tools so costs don’t become a barrier to learning. Avoid assignments that will disproportionately benefit students who can pay for access to expensive AI tools.
  • Support academic integrity. Review our recommendations for maintaining academic integrity in the age of AI.
  • Encourage critical thinking. Encourage students to think critically about AI’s limitations. Highlight the danger of hallucinations and offer students resources and methods for fact checking. Discuss AI’s potential to reproduce harmful biases. Emphasize the importance of carefully reviewing AI-generated content.

By taking these steps, we can help students engage with AI thoughtfully, responsibly, and effectively.

4. Develop AI Literacy

AI literacy is “a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace” (Long & Magerko, 2020, p. 2).

Generative AI is becoming particularly relevant in higher education. Consider the rise of AI-driven tools that can simulate business scenarios or generate financial models based on a set of input parameters. Such tools can be invaluable in a classroom setting, allowing students to explore a multitude of business situations without manually crafting each one. By integrating these tools into their curriculum, faculty can offer students hands-on experiences that were previously hard to achieve.
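To make this concrete, here is a minimal sketch (in Python, with hypothetical function and parameter names) of the kind of parameterized financial model such a tool might generate from a short prompt. It is an illustration of the idea, not the output of any particular AI tool:

```python
def project_revenue(initial_revenue: float, growth_rate: float, years: int) -> list[float]:
    """Project annual revenue under a constant growth-rate assumption.

    Returns one projected figure per year, rounded to cents.
    """
    return [
        round(initial_revenue * (1 + growth_rate) ** t, 2)
        for t in range(1, years + 1)
    ]

# Example: $1M starting revenue, 10% annual growth, 3-year horizon
projection = project_revenue(1_000_000, 0.10, 3)
# projection == [1100000.0, 1210000.0, 1331000.0]
```

A student could vary the growth-rate or horizon parameters to explore many business scenarios in minutes, which is exactly the kind of hands-on experimentation these tools make easy.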

Many different resources can help you develop AI literacy and become a savvy user. For example, you can have conversations about AI with your colleagues and students. You can join workshops or courses focused on AI. You can read expert articles. You can also explore relevant courses on platforms like LinkedIn Learning. The more you know, the more effectively you can use AI yourself and guide your students.

Conclusion

ChatGPT and generative AI tools create new opportunities and risks in education. As these technologies advance, MIT Sloan faculty can lead the way in teaching responsibly with generative AI. This will require understanding AI tools’ ethical implications as well as their limitations. It may also mean intentionally guiding students’ engagement with these new technologies.

Whether or not you integrate AI tools into your course, you can help students critically evaluate AI and its potential impacts. MIT Sloan students will enter a world in which they may encounter generative AI tools and their outputs every day. By discussing or modeling responsible AI use in the classroom, you can help students build the skills they will need for ethical leadership. Together, let’s shape an MIT Sloan education where advanced technologies complement—rather than compromise—human priorities.

References

Buell, S. (2023, July 19). An MIT student asked AI to make her headshot more ‘professional.’ It gave her lighter skin and blue eyes. The Boston Globe. https://www.bostonglobe.com/2023/07/19/business/an-mit-student-asked-ai-make-her-headshot-more-professional-it-gave-her-lighter-skin-blue-eyes

Center for Teaching and Learning. (2023). Ethical and privacy concerns. Brandeis University. https://www.brandeis.edu/teaching/chatgpt-ai/ethical-concerns.html

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, USA, 1-16. https://doi.org/10.1145/3313831.3376727

O’Brien, M. (2023, August 1). Chatbots sometimes make things up. Is AI’s hallucination problem fixable? AP News. https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4

Sher, G., & Benchlouch, A. (2023, July 21). Unmasking AI bias: A collaborative effort. Reuters. https://www.reuters.com/legal/legalindustry/unmasking-ai-bias-collaborative-effort-2023-07-21