At a Glance

Artificial Intelligence (AI) stands at the forefront of technological advancement, shaping our daily interactions and revolutionizing industries. We’ve created this glossary to help you build a foundational understanding of generative AI tools.

Whether you’re a newcomer or an AI veteran, learning the basic vocabulary behind these technologies can help you better understand the opportunities and subtleties of AI tools. Developing literacy in AI concepts will also enable our community to stay at the forefront of technological advancement. We can lead nuanced conversations about balancing innovation with ethical considerations and help steer AI toward positive impact.

This glossary is inspired by the New York Times Artificial Intelligence Glossary and clarifies essential terms related to the generative AI landscape here at MIT Sloan.

Anthropomorphism

We use the term anthropomorphism to describe the habit of assigning human-like qualities to AI. While AI systems can imitate human emotions or speech, they don’t possess feelings or consciousness. We might interact with various AI models as if they were colleagues or thought partners, but in reality, they serve as tools for learning and resource development.

Bias

Bias in AI models refers to output errors caused by skewed training data. Such bias can cause models to produce inaccurate, offensive, or misleading predictions. Biased AI models arise when algorithms prioritize irrelevant or misleading data traits over meaningful patterns (Smith, 2019).
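As a toy illustration (not part of the original glossary), the Python sketch below shows how skewed training data can make an irrelevant trait look predictive. The loan-approval scenario, feature names, and frequency-based “model” are all hypothetical.

```python
from collections import Counter

# Hypothetical, skewed training data: almost every approved applicant
# happens to come from zip code "A", so zip code looks predictive even
# though it is irrelevant to creditworthiness.
training_data = [
    {"zip": "A", "income": "high", "approved": True},
    {"zip": "A", "income": "high", "approved": True},
    {"zip": "A", "income": "low",  "approved": True},
    {"zip": "B", "income": "high", "approved": False},
    {"zip": "B", "income": "low",  "approved": False},
]

def approval_rate_by(feature):
    """Fraction of approvals observed for each value of a feature."""
    counts, approvals = Counter(), Counter()
    for row in training_data:
        counts[row[feature]] += 1
        approvals[row[feature]] += row["approved"]
    return {value: approvals[value] / counts[value] for value in counts}

print(approval_rate_by("zip"))     # {'A': 1.0, 'B': 0.0}
print(approval_rate_by("income"))  # roughly {'high': 0.67, 'low': 0.5}
```

A naive model built on these rates would treat zip code as the strongest signal, even though that pattern is an artifact of how the sample was collected rather than a meaningful relationship.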

Emergent Behavior

We use the term emergent behavior to describe the unexpected skills that large language models showcase (Pasick, 2023). These talents span coding, musical composition, poetry crafting, and even the creation of fictional narratives.

Generative AI

Generative AI is an advanced technological approach that enables the creation of content, including text, images, and videos. By analyzing and discerning patterns within extensive training datasets, generative AI can autonomously construct material with characteristics comparable to its training input. This capability stems from the AI’s understanding of data patterns and its ability to replicate or innovate on those patterns.

Whether it’s generating art, writing prose, or crafting other digital content, generative AI leverages its learned knowledge to produce results that often mirror human-like creativity. While generative AI systems may seem human in nature, they do not possess human consciousness or emotions themselves.
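As a deliberately tiny, hypothetical sketch of that idea, the Python below “learns” which words tend to follow which in a small training text and then strings together new text with similar characteristics. Real generative AI systems are vastly more sophisticated, but the learn-patterns-then-generate loop is the same in spirit.

```python
import random
from collections import defaultdict

# Tiny, hypothetical "training dataset".
corpus = "the model learns patterns and the model generates text from patterns"

# Learn a simple pattern: which words tend to follow each word.
followers = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

# Generate new material that mimics the statistics of the training input.
random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    if word not in followers:
        break
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))  # a short phrase stitched together from learned word patterns
```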

Hallucination

We use the term hallucination to describe instances in which large language models generate factually inaccurate or illogical answers because of constraints in their training data and architecture.

Large Language Model (LLM)

Large language models are neural networks that work by forecasting word sequences: given some text, they predict which words are likely to come next. Their capabilities have advanced rapidly in the last year and continue to evolve with increased use. They can now hold dialogues, write prose, and scrutinize enormous quantities of text from the internet.
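To make “forecasting word sequences” concrete, here is a minimal, hypothetical sketch in Python: the model’s job reduces to scoring candidate next words and turning those scores into probabilities. The scores below are invented for illustration; a real large language model computes them using billions of learned parameters.

```python
import math

# Hypothetical raw scores (logits) a language model might assign to
# candidate next words after the prompt "The students opened their ...".
logits = {"books": 4.1, "laptops": 3.7, "minds": 2.9, "umbrellas": 0.3}

# Softmax turns the scores into a probability distribution over next words.
total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {p:.2f}")

# Generation repeats this step: pick a likely next word, append it to
# the text, and forecast again for the word after that.
```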

Natural Language Processing (NLP)

Natural Language Processing (NLP) is a subfield of artificial intelligence and computational linguistics that focuses on enabling machines to understand, interpret, and generate human language in ways people can understand.

Neural Networks

Neural networks are mathematical systems, loosely modeled on the human brain, that learn skills by identifying and analyzing statistical patterns in data. These systems feature multiple layers of artificial neurons, which are computational units inspired by the neurons in our brains.

These artificial neurons process information and transmit signals to other connected neurons. While the first layer processes the input data, the final layer delivers the results (Hardesty, 2017). Intriguingly, even the experts who meticulously design these neural networks often find themselves puzzled by the intricate processes occurring between the layers.
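For a concrete feel for those layers, the following is a minimal sketch of a forward pass through a tiny two-layer network, written in plain Python. The weights and inputs are made-up numbers; in a real network they would be learned from data.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each artificial neuron takes a weighted sum of its
    inputs, adds a bias, and applies a simple activation function."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(math.tanh(total))  # activation squashes the sum
    return outputs

# Made-up weights for a network with 2 inputs, 3 hidden neurons, 1 output.
hidden_w = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[0.7, -0.5, 0.2]]
output_b = [0.05]

x = [1.0, 2.0]                              # the first layer receives the input data
hidden = layer(x, hidden_w, hidden_b)       # signals pass to the next layer
result = layer(hidden, output_w, output_b)  # the final layer delivers the result
print(result)
```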

Parameters

In AI systems, parameters are the numerical values, tuned during training, that shape how a model is built and how it behaves. For context, OpenAI’s GPT-4 is believed to incorporate hundreds of billions of parameters that drive its ability to predict words and create dialogue. Consider these two kinds of parameters, which play a pivotal role in shaping both the construction and behavior of a large language model (a brief sketch in code follows the list):

  • The construction parameter refers to the underlying structure and architecture of the model. This includes how layers of artificial neurons are organized, interconnected, and weighted. It’s akin to the framework or skeleton that gives shape to the model.
  • The behavior parameter refers to how the model operates, reacts, and evolves in response to input data. It defines the model’s responsiveness, adaptability, and its specific output patterns. The behavior can vary based on factors such as the type of input data and external connectivity, like internet access.
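The short sketch below is a hypothetical illustration of where those numerical values live: the architecture (here, a handful of layer sizes) fixes how many parameters the model has, and the values those parameters take on during training determine how it behaves.

```python
# Hypothetical layer sizes for a tiny network: 8 inputs, two hidden
# layers of 16 neurons, and a 4-way output. The architecture fixes the
# parameter count before any training happens.
layer_sizes = [8, 16, 16, 4]

total_params = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out   # one weight per connection between layers
    biases = n_out           # one bias per neuron in the receiving layer
    total_params += weights + biases

print(total_params)  # 484 parameters for this toy model
# GPT-4-scale models are believed to have hundreds of billions of such values.
```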

Reinforcement Learning

Reinforcement Learning is a method in AI training where models learn optimal decision-making strategies through cycles of actions and feedback, with human interaction playing a pivotal role in refining the learning process. Models learn by making decisions, observing the outcomes of those decisions, and adjusting their strategies accordingly.
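The action-feedback cycle can be sketched in a few lines of Python. The two-action environment, reward probabilities, and learning rate below are hypothetical; the point is only that the agent tries actions, observes rewards, and nudges its estimates toward whatever worked.

```python
import random

random.seed(42)

# Hypothetical environment: action "b" pays off more often than "a".
def reward(action):
    return 1.0 if random.random() < (0.8 if action == "b" else 0.3) else 0.0

values = {"a": 0.0, "b": 0.0}   # the agent's current estimate of each action
learning_rate, exploration = 0.1, 0.2

for step in range(500):
    # Mostly exploit the best-looking action, sometimes explore.
    if random.random() < exploration:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    r = reward(action)                                       # feedback from the environment
    values[action] += learning_rate * (r - values[action])   # adjust the strategy

print(values)  # the estimate for "b" should end up noticeably higher
```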

Transformer Model

Transformer models can process entire sentences simultaneously rather than word by word, which helps them grasp context and long-range associations within language. This means these models can detect and interpret relationships between words and phrases in a sentence, even when those words are positioned far apart from each other.
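The mechanism that lets a transformer look at a whole sentence at once is called attention. The sketch below (a bare-bones illustration with made-up vectors, not a full transformer) computes, for every word simultaneously, how strongly it should attend to every other word in the sentence.

```python
import numpy as np

# Hypothetical 4-dimensional vectors standing in for each word.
words = ["the", "cat", "sat", "down"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(words), 4))

# Scaled dot-product self-attention: every word is compared with every
# other word in the sentence at the same time, not one after another.
queries, keys, values = embeddings, embeddings, embeddings
scores = queries @ keys.T / np.sqrt(keys.shape[1])

# Softmax turns each row of scores into attention weights.
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# Each word's new representation is a weighted blend of all the words
# it attends to -- including words far away in the sentence.
attended = weights @ values
print(np.round(weights, 2))
```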

References

Hardesty, L. (2017, April 14). Explained: Neural networks. MIT News. https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414

Pasick, A. (2023, March 27). Artificial intelligence glossary: Neural networks and other terms explained. The New York Times. https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html

Smith, C. S. (2019, November 19). Dealing with bias in artificial intelligence. The New York Times. https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html