At a Glance

Generative AI is reshaping what’s possible in the classroom—and it’s easier to use than ever before. This guide will help you understand what generative AI is, how it works, and what it can (and can’t) do. You’ll explore practical use cases, try out tools, and learn how to bring AI into your teaching in a thoughtful, responsible way. Along the way, we’ll cover key ethical considerations, show you how to support student engagement and data privacy, and point you to deeper resources for building AI literacy. Whether you’re new to AI or looking to deepen your approach, this guide is your starting point.

The Basics

Generative AI is a subset of artificial intelligence that learns from data to produce new, original outputs at scale, ranging from educational content to software code. At its core are foundation models trained on massive datasets. Generative AI models are essentially prediction tools: they generate text, images, and code by predicting likely sequences based on the data they’ve been trained on.
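As a deliberately simplified illustration of “generating by predicting,” the short Python sketch below builds a toy word-level model from a few example sentences and then produces new text by repeatedly predicting a likely next word. Real large language models use neural networks trained on vastly more data, but the underlying idea of sequence prediction is the same; the tiny corpus and function names here are invented for illustration only.

```python
import random
from collections import defaultdict

# A tiny training "corpus" -- real models learn from billions of words.
corpus = (
    "students ask questions and the model predicts the next word . "
    "the model predicts the next word based on patterns in its training data . "
    "students explore ideas and the model generates text ."
)

# Count how often each word follows each other word (a simple bigram model).
counts = defaultdict(lambda: defaultdict(int))
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def generate(start_word: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    output = [start_word]
    for _ in range(length):
        followers = counts.get(output[-1])
        if not followers:  # no observed continuation; stop early
            break
        choices, weights = zip(*followers.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("the"))
```

Running the sketch produces sentences stitched together from the patterns in the toy corpus, which is the same basic mechanism, scaled up enormously, behind the fluent output of tools like ChatGPT.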

There’s a lot of jargon involved in discussing generative AI systems. Learn more about generative AI terminology in the AI Glossary.

The following video is the first in Wharton Interactive’s five-part course on Practical AI for Instructors and Students. In these videos, MIT Sloan alum and Wharton Associate Professor Ethan Mollick, along with Lilach Mollick, Director of Pedagogy at Wharton Interactive, provide an accessible overview of large language models and their potential for enhancing teaching and learning.

In this first video, you can learn about the following:

  • Why AI is now accessible to everyone and how students are using it
  • What we mean by AI, specifically large language models and generative AI
  • How models like ChatGPT work and their surprising capabilities
  • The potentially outsized impact of AI on educators and creative professionals
  • Ethical considerations and risks related to generative AI

You can watch the other four videos in the Mollicks’ Practical AI for Instructors and Students Course to learn more about large language models, prompting AI, using AI to enhance your teaching, and how students can use AI to support their learning.

Generative AI Tools

We encourage you to spend some time exploring generative AI tools, including the ones in this resource hub. It’s important to get a sense of any technology’s capabilities and limitations before you integrate it into your teaching. Trying these technologies yourself can also help you understand how your students are using generative AI.

Before you start using AI tools in your teaching, make sure to review MIT Sloan’s Guiding Principles for the Use of Generative AI in Courses.

Today’s AI tools are increasingly versatile: a single platform can support a wide range of tasks, from drafting syllabi, streamlining writing, and automating feedback to generating graphics, summarizing content, and assisting with data interpretation. As these capabilities converge, it’s less about finding a different tool for each task and more about using AI flexibly to support your instructional goals.

Browse the AI Tools to explore how each tool can support your teaching, from content creation and feedback automation to multimedia design and productivity enhancements. As you explore each platform’s potential, monitor its output closely for quality, bias, and responsible use.

Core Foundations for Responsible Use

Before incorporating AI into your course, it’s worth taking a step back to think through a few key responsibilities. The practices below are designed to help you create a supportive and transparent learning environment—one where students understand what’s being asked of them, where their data is protected, and where everyone can engage meaningfully and critically with these evolving tools.

1. Be Transparent with Students

If you’re thinking of using an AI tool in your course, let students know in advance. That’s especially important if they’ll need to share any personal information (such as a phone number to create an account) to access the tool. Here are some steps you can take to set the scene:

  • Provide context. Before introducing an AI tool, offer a brief overview of how it works and why it will benefit students’ learning.
  • Highlight the importance of data privacy. Educate your students on generative AI data privacy practices. Distribute readings or resources that delve into data privacy in AI. Consider sharing articles from reputable business journals or case studies that discuss real-world implications of data privacy breaches.
  • Offer alternatives. Always provide students with an alternative if they’re uncomfortable with sharing their data. This could be another tool, a different assignment, or a manual approach to achieve the same learning outcome.

By taking these steps, you’re not just asking students to use a tool; you’re preparing them to make informed decisions as future business leaders.

2. Mitigate Known Limitations

Generative AI tools have limitations that can cause problems if not addressed proactively. One challenge is that AI can produce content that seems accurate but isn’t, which we call a “hallucination” (O’Brien, 2023). For example, ChatGPT might provide a compelling yet incorrect explanation of a business model. Make sure to cross-reference AI-generated content with trusted sources, such as the MIT Libraries’ resources, and treat any AI content as needing validation.

AI can also produce output that reflects the harmful biases present in its training data. This can lead to skewed representations. For example, when an MIT student provided her photo to AI image creator PlaygroundAI and asked it to generate a “professional LinkedIn profile photo,” the AI gave her paler skin and blue eyes (Buell, 2023). Racial and other biases are a recurring issue with AI tools (Sher & Benchlouch, 2023). To mitigate the impact of biased AI output, consider these approaches:

  • Use inclusive content. Actively seek out inclusive teaching materials that incorporate a wide range of perspectives. For example, consider integrating resources from Harvard Business Publishing (HBP)’s Diversity, Equity, and Inclusion: Resources for Educators.
  • Foster open dialogue. Create a classroom environment where students feel empowered to discuss and challenge any biases they observe in AI outputs.

Proactively addressing AI’s limitations can help you responsibly harness its potential to support your teaching. For a deeper understanding of the biases, misinformation, and errors associated with generative AI tools, see When AI Gets It Wrong: Addressing AI Hallucinations and Bias.

3. Guide Student Engagement

Consider these strategies to help maximize AI’s positive impact on your course and mitigate potential challenges.

  • Collaborate with students on AI decisions. Students are our most important stakeholders. Consider involving your students in the decision-making process about how and when to use AI for teaching. Collect student feedback throughout the term so you can hear their perspectives on the AI tools they’ve been using.
  • Try before you teach. Before integrating a new tool into your teaching, take the time to explore its features in-depth. If you’ll encourage students to use this tool, make sure you’ve tested its outputs and functionalities comprehensively enough to guide them effectively.
  • Ensure accessibility. Make sure any tool that you’re encouraging students to use is compatible with screen readers, voice commands, and other assistive technologies. Avoid assignments that will disproportionately benefit students who can pay for access to expensive AI tools.
  • Support academic integrity. Review our recommendations for maintaining academic integrity in the age of AI.
  • Encourage critical thinking. Encourage students to think critically about AI’s limitations. Highlight the danger of hallucinations and offer students resources and methods for fact-checking. Discuss AI’s potential to reproduce harmful biases. Emphasize the importance of carefully reviewing AI-generated content.

By taking these steps, we can help students engage with AI thoughtfully, responsibly, and effectively.

4. Develop AI Literacy

AI literacy is “a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace” (Long & Magerko, 2020, p. 2).

Generative AI is becoming particularly relevant in higher education. Consider the rise of AI-driven tools that can simulate business scenarios or generate financial models based on a set of input parameters. Such tools can be invaluable in a classroom setting, allowing students to explore a multitude of business situations without manually crafting each one. By integrating these tools into their curriculum, faculty can offer students hands-on experiences that were previously hard to achieve.

Many different resources can help you develop AI literacy and become a savvy user. For example, you can have conversations about AI with your colleagues and students. You can join workshops or courses focused on AI. You can read expert articles. You can also explore relevant courses on platforms like LinkedIn Learning. The more you know, the more effectively you can use AI yourself and guide your students.

Ethical Considerations

The emergence of powerful generative AI systems presents exciting possibilities for enhancing teaching and learning. However, integrating these technologies into teaching also raises important ethical questions. Three key areas of concern are data privacy, AI-generated falsehoods, and bias in AI systems.

Data Privacy

Make sure to treat unsecured AI systems like public platforms. As a general rule, and in accordance with MIT’s Written Information Security Policy, you should never enter confidential or sensitive data into publicly accessible generative AI tools. This includes (but is not limited to) individual names, physical or email addresses, identification numbers, and medical, HR, or financial records, as well as proprietary company details and any research or organizational data that is not publicly available. If in doubt, please consult the MIT Sloan Technology Services Office of Information Security.

Note that some of this data is also governed by FERPA (Family Educational Rights and Privacy Act), the federal law in the United States that mandates the protection of students’ educational records (U.S. Department of Education), as well as various international privacy regulations including the European GDPR and Chinese PIPL.

Microsoft Copilot provides the MIT Sloan community with data-protected access to text- and image-generating AI tools. Chat data is not shared with Microsoft or used to train their AI models. Access Microsoft Copilot by logging in with your MIT Kerberos account at https://copilot.microsoft.com/. To learn more, see What is Microsoft Copilot (AI Chat)?

Beyond never sharing sensitive data with publicly available AI systems, we recommend that you remove or change any details that could identify you or someone else in any documents or text that you upload or provide as input. If there’s something you wouldn’t want others to know or see, it’s best to keep it out of the AI system altogether (Nield, 2023). This applies not just to personal details but also to proprietary information (including ideas, algorithms, or code), unpublished research, and sensitive communications.
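As a minimal sketch of what this kind of scrubbing might look like in practice, the Python snippet below replaces email addresses, phone numbers, nine-digit ID numbers, and a caller-supplied list of names with placeholders before text is pasted into a public AI tool. The patterns, placeholder labels, and example data are illustrative assumptions, not a complete de-identification solution; human review is still needed for anything sensitive.

```python
import re

def redact(text: str, names: list[str]) -> str:
    """Replace common identifiers with placeholders before sharing text
    with a public AI tool. Illustrative only -- not a complete solution."""
    # Email addresses (simple pattern, not exhaustive)
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Phone numbers such as 617-555-0123 or (617) 555-0123
    text = re.sub(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}", "[PHONE]", text)
    # Nine-digit ID-style numbers -- the exact format is an assumption
    text = re.sub(r"\b\d{9}\b", "[ID]", text)
    # Specific names supplied by the caller
    for name in names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

# Hypothetical example: scrub a feedback note before asking an AI to summarize it.
feedback = "Jordan Lee (jlee@example.edu, 617-555-0123) struggled with problem 2."
print(redact(feedback, names=["Jordan Lee"]))
# -> "[NAME] ([EMAIL], [PHONE]) struggled with problem 2."
```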

It’s also essential to recognize that once data is entered into most AI systems, it’s challenging—if not impossible—to remove it (Heikkilä, 2023). Always exercise caution and make sure any information you provide aligns with your comfort level and understanding of its potential long-term presence in the AI system, as well as with MIT’s privacy and security requirements.

Falsehoods and Bias

There are well-documented issues around AI systems generating content that includes falsehoods (“hallucinations”) and harmful bias (Germain, 2023; Nicoletti & Bass, 2023). Educators have a responsibility to monitor AI output, address problems promptly, and encourage critical thinking about AI’s limitations.

We encourage you to review our resources on protecting privacy, integrating AI responsibly into your course, and mitigating AI’s issues with hallucinations and bias.

By proactively addressing ethical considerations and AI’s limitations, we can realize the promise of generative AI while upholding principles of fairness, accuracy, and transparency.

Get Support

As you consider how to best use generative AI in your course, questions will arise. Contact us for a personalized consultation. We’re here to be your thought partner during your development and implementation process.

Conclusion

Integrating artificial intelligence into your teaching offers both opportunities and challenges. In this guide, we’ve provided an initial roadmap to begin exploring this new space. We’ve covered the basics of what generative AI is and considered its potential benefits. We’ve also emphasized the importance of ethical considerations like prioritizing student privacy and addressing potential biases.

While AI offers powerful tools to augment teaching, the human touch remains irreplaceable. The goal is not to replace educators but to empower them with additional resources. By combining the strengths of AI with the expertise of skilled instructors, we can create richer, more effective learning experiences for our students.

As you move forward, remember that you’re not alone on this journey. Our team is here to support you, answer questions, and provide guidance. We’re excited to see how you’ll harness the potential of AI in your classrooms and look forward to hearing about your experiences. Let’s explore, learn, and innovate together.

MIT Sloan Faculty: We want to know how you’re incorporating generative AI in your courses—big or small. Your experiences are more than just personal milestones; they’re shaping the future of pedagogy. By sharing your insights, you contribute to a community of innovation and inspire colleagues to venture into new territories. Contact us to be featured. We’re here to help you tell your story!

References

Buell, S. (2023, July 19). An MIT student asked AI to make her headshot more ‘professional.’ It gave her lighter skin and blue eyes. The Boston Globe. https://www.bostonglobe.com/2023/07/19/business/an-mit-student-asked-ai-make-her-headshot-more-professional-it-gave-her-lighter-skin-blue-eyes

Germain, T. (2023, April 13). ‘They’re all so dirty and smelly:’ study unlocks ChatGPT’s inner racist. Gizmodo. https://gizmodo.com/chatgpt-ai-openai-study-frees-chat-gpt-inner-racist-1850333646

Heikkilä, M. (2023, April 19). OpenAI’s hunger for data is coming back to bite it. MIT Technology Review. https://www.technologyreview.com/2023/04/19/1071789/openais-hunger-for-data-is-coming-back-to-bite-it

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, USA, 1-16. https://doi.org/10.1145/3313831.3376727

Mollick, E., & Mollick, L. (2023, July 31). Practical AI for instructors and students part 1: Introduction to AI for teachers and students [Video]. YouTube. https://www.youtube.com/watch?v=t9gmyvf7JYo

Nicoletti, L., & Bass, D. (2023, June 14). Humans are biased. Generative AI is even worse. Bloomberg Technology + Equality. https://www.bloomberg.com/graphics/2023-generative-ai-bias

Nield, D. (2023, July 16). How to use generative AI tools while still protecting your privacy. Wired. https://www.wired.com/story/how-to-use-ai-tools-protect-privacy

O’Brien, M. (2023, August 1). Chatbots sometimes make things up. Is AI’s hallucination problem fixable? AP News. https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4

Sher, G., & Benchlouch, A. (2023, July 21). Unmasking AI bias: A collaborative effort. Reuters. https://www.reuters.com/legal/legalindustry/unmasking-ai-bias-collaborative-effort-2023-07-21