Simulated customer interviews gave me a week's worth of research insights—but they also forced me to confront AI's blind spots, including bias, hallucinations, and the limits of synthetic data.
Simulated interviews may sound artificial, but as a student learning the messy art of customer discovery, they offered something surprisingly real. In my Generative AI for Managers course at MIT Sloan, Professor John Horton asked us to explore user demand using AI personas. As a project for the course, I was designing MindBuddy—a wearable headband that gently nudges you when your focus drifts. For this assignment, I used ChatGPT to simulate interviews and surveys with hypothetical users. What began as a creative shortcut quickly turned into a hands-on lesson in how to ask better questions, interpret patterns, and navigate early-stage ambiguity.
Stepping Through the Assignment
Here’s how my AI‑augmented user research unfolded:
- Define personas and context. I created a few hypothetical users with distinct goals and challenges. For each persona, I asked the AI to elaborate on their motivations, routines, and frustrations.
- Design and refine prompts. I drafted open‑ended interview questions about focus, comfort, privacy, and willingness to pay. When a persona raved about a nonexistent feature, I adjusted the prompt to focus on actual capabilities and asked for step‑by‑step reasoning.
- Conduct simulated interviews. For each persona, I ran multiple rounds of Q-and-A. When responses were shallow, I used chain-of-thought prompting (MIT Sloan Teaching & Learning Technologies, n.d.) to elicit deeper reasoning and reveal hidden assumptions.
- Analyze and synthesize findings. I organized responses into themes—distraction triggers, privacy concerns, price sensitivity—and looked for patterns and contradictions. (A sketch of how this workflow could be scripted follows the list.)
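I ran my interviews directly in the ChatGPT interface, but the same loop could be scripted. Below is a minimal sketch assuming the OpenAI Python SDK; the persona, questions, model name, and temperature are illustrative choices, not details taken from the assignment.

```python
# Sketch: scripting a simulated persona interview.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. The persona, questions,
# and model name are hypothetical examples.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are 'Dana', a 29-year-old graduate student who struggles to "
    "stay focused while studying, is skeptical of wearables, and is "
    "price-sensitive. Answer interview questions in character. "
    "MindBuddy is a headband that gently nudges you when your focus "
    "drifts; do not invent features beyond that description."
)

QUESTIONS = [
    "Walk me through a typical study session. Where does your focus break down?",
    "What would make you hesitant to wear a headband like this?",
    "At what price would this feel like an obvious yes, and why?",
]

# Keep the full conversation history so each answer can build on
# what the persona has already said.
messages = [{"role": "system", "content": PERSONA}]
for question in QUESTIONS:
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",      # illustrative model choice
        messages=messages,
        temperature=0.8,     # some variability across interview runs
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

Carrying the message history forward is what makes this an interview rather than a series of one-off prompts: each answer can be probed with follow-ups that reference what the persona already said.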
As Professor John Horton said, “This approach can be useful even if it’s not realistic or predictive – simply by raising considerations that you had not originally considered, simulating user research can better prepare you for other approaches you might take to collect information.”
In case you’d like to design a similar assignment, here’s a PDF version of the research assignment guidelines (approved by the teaching team).
Download the Assignment

How Simulated Research Can Inform Learning
Generative AI gave me more than a fast way to generate data. It gave me a sandbox for learning how research works. The process combined the structure of a case study, the conversational depth of a focus group, the iteration of survey design, and the insight of a segmentation exercise. But more importantly, it forced me to refine my questions, spot contradictions, and reflect on what wasn't being said. That's what makes AI-augmented research so valuable: it's not just efficient, it's instructive when used with care.
Why It Worked
- Structure with room to explore. The assignment provided a clear process—create a product, define personas, design interview questions—but left the ideas and lines of inquiry up to each student.
- AI to augment, not replace, learning. Generative AI helped me generate ideas, spot patterns, and explore different lines of questioning, but it did not do the thinking for me. I had to decide which responses made sense, which ones to challenge, and how to turn raw output into meaningful insight.
- Risk-free simulation. AI personas allowed me to explore early conversations I might not have had access to otherwise, helping surface initial product risks and design trade-offs before engaging real users. This sharpened my thinking and made my questions more focused. It did not replace human input, but it prepared me to have higher-quality conversations when I did talk to actual people. The simulation created a low-stakes space to practice, so that my real-world research could be more intentional and informed (MIT Sloan Teaching & Learning Technologies, 2025).
Navigating the Limits of Generative AI
Working with generative AI for user research was fast and flexible—but it also revealed just how fragile and fallible these tools can be. Large language models like ChatGPT don’t “understand” in a human sense. They generate text based on statistical patterns, not lived experience or factual verification. The results often sound confident, even when they’re incorrect.
As "When AI Gets It Wrong: Addressing AI Hallucinations and Bias" points out, generative AI models can amplify stereotypes, fabricate citations, and produce hallucinations that seem credible. These aren't reasons to avoid AI in the classroom, but rather to approach its use with a critical lens. As a student, I found that responsible AI use doesn't come just from following best practices; it also comes from running into the limitations firsthand. Here's how those moments shaped my approach:
- Treat AI outputs as starting points, not answers. In my simulations, some personas praised features MindBuddy didn't have. That forced me to slow down and treat outputs as speculative, not factual. It also reminded me that plausible language doesn't equal truth.
- Cross-check with real-world sources. When an AI-generated survey suggested strong demand and willingness to pay, I compared it with market reports and my own intuition. The discrepancy helped me see where the model was being overly optimistic or generic.
- Use prompts that encourage reasoning. Chain-of-thought prompting didn't always improve accuracy, but it made the model's assumptions visible. That gave me the chance to step in, correct faulty logic, and sharpen my own interpretation. (An example of this kind of prompt follows the list.)
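To illustrate, a reasoning-eliciting prompt can be as simple as a wrapper around the interview question. The wording below is a hypothetical example in the spirit of the prompts I used, not a verbatim copy.

```python
# Sketch: wrapping an interview question in a chain-of-thought style
# follow-up so the persona's assumptions become visible. The wording
# is a hypothetical example, not the exact prompt from my interviews.
def with_reasoning(question: str) -> str:
    return (
        f"{question}\n\n"
        "Before you answer, reason step by step: describe your daily "
        "routine, name the specific moments this product would matter, "
        "and state any assumptions you are making about how it works. "
        "Then give your answer."
    )

print(with_reasoning("Would you pay $99 for MindBuddy?"))
```

The point is not that the stated reasoning is reliable, but that it gives you something concrete to challenge: an assumed feature, an unrealistic routine, or a price anchor the persona invented.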
Simulated personas didn't just save time; they reshaped how I approach early-stage research. Compared to past projects without AI, this process let me test ideas faster, iterate on questions in real time, and explore scenarios I wouldn't have considered on my own. Using generative AI for user research exposed gaps in my assumptions and surfaced unexpected patterns, but the real learning came from interpreting those outputs—figuring out what felt true, what felt off, and why. It didn't replace real users, but it made me better prepared to talk to them. That preparation helped me refine the product idea itself.
References
MIT Sloan Teaching & Learning Technologies. (n.d.). Glossary of terms: Generative AI basics. https://mitsloanedtech.mit.edu/ai/basics/glossary/
MIT Sloan Teaching & Learning Technologies. (n.d.). When AI gets it wrong: Addressing AI hallucinations and bias. https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
So, E. (2025, March 26). Deep research: Transforming the creation of learning materials with research-backed AI. MIT Sloan Teaching & Learning Technologies. https://mitsloanedtech.mit.edu/2025/03/26/deep-research-transforming-the-creation-of-learning-materials-with-research-backed-ai/