A Brief Introduction to Common AI Concepts
AI 101: Article 2 “The Maelstrom of AI Jargon” by Bart Niedner, 22 June 2023

Concepts and Terms
Conversations about artificial intelligence are frequently laden with technical terms that can be challenging to understand. Let’s demystify a few of the common technical concepts. If you are not particularly engaged in computer science, don’t let the details frustrate you. Having a general sense of these concepts is enough for most casual conversations.
Artificial Intelligence
Artificial intelligence is the development of computer systems capable of performing tasks that typically require human intelligence, such as understanding natural language, recognizing objects, learning from experience, and making decisions.
A simple litmus test: if a machine can learn and adapt, it is probably AI; if not, it probably isn't. For example, ChatGPT gets better at responding because its underlying models are trained on massive data sets and refined with feedback from user interactions. ChatGPT learns, so it is AI. On the other hand, a scientific calculator computes complex functions with blinding speed; however, it does not learn or adapt over time. A calculator's ability is static, so it is not AI.
Natural Language Processing (NLP)
NLP enables computers to understand, interpret, and generate human language, allowing for applications like voice assistants, language translation, and sentiment analysis.
For example, applications like Google Translate use NLP algorithms to analyze the structure of sentences and translate them accurately. Chatbots and personal assistants such as Alexa and Siri also rely on NLP (paired with speech recognition) to understand spoken requests and respond verbally.
An algorithm is a procedural set of rules a computer or other machine can follow to solve a problem or perform a specific task. Some common NLP algorithms are Tokenization, Part-of-Speech Tagging, Named Entity Recognition, and Sentiment Analysis.
— ChatGPT‑4
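To make this more concrete, here is a toy Python sketch of two of those steps, tokenization and sentiment analysis. The word lists and sentences are invented for illustration; real NLP libraries are far more sophisticated.

```python
# A toy illustration of two NLP steps: tokenization and a simple
# lexicon-based sentiment score. Real systems (spaCy, NLTK, etc.) are far
# more sophisticated; the word lists below are made up for illustration.

POSITIVE = {"love", "great", "helpful", "fast"}
NEGATIVE = {"hate", "slow", "confusing", "broken"}

def tokenize(text: str) -> list[str]:
    """Split a sentence into lowercase word tokens."""
    return [word.strip(".,!?").lower() for word in text.split()]

def sentiment(text: str) -> str:
    """Score a sentence by counting positive vs. negative words."""
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(tokenize("I love how fast this translation is!"))
print(sentiment("I love how fast this translation is!"))  # -> positive
print(sentiment("The app is slow and confusing."))        # -> negative
```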
Machine Learning (ML)
ML is a core component of AI that focuses on algorithms and statistical models that enable computers to learn from data, identify patterns, and make predictions or decisions without being explicitly programmed.
For example, Gmail uses machine learning algorithms to learn from your behavior and the content of the emails you mark as spam. Over time, it can accurately identify spam emails and automatically filter them into the appropriate folder.
Machine learning is like when you learn to recognize a friend’s handwriting — the more you see it, the better you get at identifying it. Machine learning algorithms do the same with data.
— ChatGPT‑4
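As a rough illustration of that spam example, here is a minimal sketch using the scikit-learn library (assuming it is installed). The handful of example emails is invented; a real filter like Gmail's learns from millions of labeled messages.

```python
# A minimal spam-filter sketch with scikit-learn. The example emails are
# invented for illustration; a real filter trains on millions of messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free prize now, click here",
    "Cheap meds, limited time offer",
    "Lunch tomorrow with the project team?",
    "Here are the meeting notes from Monday",
]
labels = ["spam", "spam", "not spam", "not spam"]  # the "marked as spam" signal

vectorizer = CountVectorizer()            # turn each email into word counts
features = vectorizer.fit_transform(emails)

model = MultinomialNB()                   # a simple, classic text classifier
model.fit(features, labels)               # learn which word patterns go with which label

new_email = ["Claim your free prize today"]
print(model.predict(vectorizer.transform(new_email)))  # likely ['spam']
```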
Neural Networks
Neural networks are a fundamental component of artificial intelligence (AI) inspired by the structure and functioning of the human brain. They consist of interconnected nodes called neurons. If the technology sounds complex (it is), focus on what it does to understand why it is relevant.
Imagine a neural network as a team of brain cells working together to recognize a face in a crowd.
— ChatGPT‑4
One example of an AI neural network is image recognition. Neural networks can be trained to recognize images by analyzing patterns in the pixels of the image. For example, a neural network can be trained on a dataset of pictures of cats, and it learns to recognize patterns in the photos unique to cats (their shape, fur, facial features, etc.). Once the neural network is trained, it can accurately recognize images of cats even if they are in different positions or lighting conditions.
Neural networks are used in various applications, such as self-driving cars, security cameras, and medical diagnosis.
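If you are curious what those "interconnected neurons" look like in practice, here is a tiny toy network written in plain Python with NumPy. The "cat features" are invented stand-ins for real image pixels, so treat this as a sketch of the idea rather than a working image recognizer.

```python
# A tiny neural network in plain NumPy, trained on made-up "image features"
# (real image recognition uses raw pixels and far larger networks).
# Feature columns: [ear pointiness, whisker density, tail length] (invented).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0.9, 0.8, 0.7],   # cat
              [0.8, 0.9, 0.6],   # cat
              [0.2, 0.1, 0.9],   # dog
              [0.1, 0.2, 0.8]])  # dog
y = np.array([[1.0], [1.0], [0.0], [0.0]])   # 1 = cat, 0 = not cat

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 "neurons" and one output neuron.
W1 = rng.normal(size=(3, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

for step in range(5000):                       # simple gradient descent
    hidden = sigmoid(X @ W1 + b1)              # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = output - y                         # backward pass (backpropagation)
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hid;      b1 -= 0.5 * grad_hid.sum(axis=0)

# A value near 1.0 means "probably a cat" for this toy network.
print(sigmoid(sigmoid(np.array([[0.85, 0.9, 0.65]]) @ W1 + b1) @ W2 + b2))
```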
Generative AI (added 05 July 2023)
Generative AI refers to a type of artificial intelligence capable of creating original content, such as images, music, or text, rather than merely analyzing or classifying existing content. It does this with neural networks trained on large datasets, which learn the patterns in that data and use them to generate new content. Generative AI has many practical applications in fields such as art, music, and writing, as well as in the development of chatbots and virtual assistants.
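For the curious, here is a minimal sketch of text generation using the Hugging Face transformers library and the small GPT-2 model (assuming the library is installed); the prompt and settings are arbitrary examples.

```python
# A minimal text-generation sketch, assuming the Hugging Face `transformers`
# library is installed. GPT-2 is a small, older model; the prompt is arbitrary.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence will change everyday life by",
    max_new_tokens=30,       # length of the generated continuation
    num_return_sequences=1,  # how many continuations to produce
)
print(result[0]["generated_text"])
```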
Deep Learning
Deep learning is a more advanced form of machine learning (ML) that uses neural networks with many layers to process data and extract meaningful patterns, leading to more nuanced decision-making capabilities. Technically, deep learning is a subset of ML, but in casual use an important distinction is implied: deep learning is more nuanced and more resource-intensive.
As an example, consider an AI object-recognition task. "Machine learning" might describe an AI trained to recognize whether an image contains a cat, while "deep learning" might describe an AI that can also identify the cat's breed and what it is doing.
Although deep learning requires far more computing resources and training data than typical ML, it has succeeded remarkably in domains such as image and speech recognition.
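To show what "multiple layers" means in code, here is a sketch of a small deep network using Keras (assuming TensorFlow is installed). The input size and the ten hypothetical cat-breed classes are invented, and actually training it would require a large labeled image dataset.

```python
# A sketch of a "deep" network: many stacked layers. Assumes TensorFlow/Keras
# is installed; the input size and ten hypothetical cat-breed classes are invented.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),            # small color images
    layers.Conv2D(16, 3, activation="relu"),   # each conv/pool pair extracts
    layers.MaxPooling2D(),                     # increasingly abstract patterns
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),    # e.g., ten cat breeds
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # prints the stack of layers that makes this network "deep"
```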
Reinforcement Learning
Reinforcement learning is a type of machine learning where an AI system (an agent) learns to take actions in an environment to maximize a reward signal. The agent receives rewards or punishments based on its actions and uses this feedback to adjust its behavior over time.
Imagine training a puppy. You reward it for good behavior and withhold treats when it misbehaves. The puppy, eager for treats, soon learns what actions earn rewards. This is similar to reinforcement learning, a type of machine learning. Instead of a puppy, you have an AI system (an agent) learning to navigate its environment. The rewards? Positive feedback when it performs well and negative feedback for poor performance.
— ChatGPT‑4
Take a chess-playing AI agent, for example. If it makes a smart move, like capturing an opponent’s piece, it receives positive feedback. If it loses its queen in a reckless move, it gets negative feedback. Over time, the agent learns to make moves that increase its rewards and, ultimately, its chances of winning the game.
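Chess is far too big to fit in a few lines, so here is a minimal Q-learning sketch on a made-up five-square corridor instead; the rewards and settings are invented, but the learn-from-feedback loop is the same idea.

```python
# A minimal Q-learning sketch on a made-up 5-square corridor: the agent starts
# on square 0 and earns a reward of +1 only for reaching square 4. A chess
# agent works on the same principle, just with vastly more states and moves.
import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))  # the agent's "memory" of how good each action is
rng = np.random.default_rng(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != 4:                           # until the goal is reached
        if rng.random() < epsilon:              # sometimes explore randomly...
            action = int(rng.integers(n_actions))
        else:                                   # ...otherwise exploit what it has learned
            action = int(np.argmax(Q[state]))
        next_state = max(state - 1, 0) if action == 0 else min(state + 1, 4)
        reward = 1.0 if next_state == 4 else 0.0
        # Core Q-learning update: nudge the value toward reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))   # learned policy: 1 (move right) on squares 0-3; square 4 is the goal
```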
Reinforcement learning is used in many real-world applications besides game playing, such as robotics, recommendation systems, and autonomous driving.
Supervised Learning
Supervised learning is a type of machine learning where the model is trained on labeled data to learn a mapping between input features and output labels. The goal is to learn a general rule that can be applied to new, unseen data.
Picture a classroom scenario. A teacher provides a problem to the students and asks them to solve it. They get feedback on whether their solutions are correct or incorrect. Over time, students learn to solve these problems correctly by learning from their previous mistakes. This teacher-student dynamic is an excellent way to understand supervised learning.
For example, imagine a digital 'classroom' in which the student is a computer program that needs to learn how to identify handwritten numbers and the teacher is a data scientist. The task is to recognize patterns in labeled data. To train this program (or 'student'), the data scientist (or 'teacher') gives it many images of handwritten numbers, each labeled with the correct numeral. The program looks at the images and tries to find patterns in the pixels that indicate which number each might be. If it gets one wrong, it adjusts its 'thinking' to improve its future guesses. After seeing enough examples, the program can look at new images and accurately identify the numbers, even if it hasn't seen them before. It's like a student who has practiced enough math problems to solve new ones on a test.
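Here is a sketch of exactly that teacher-and-student setup using scikit-learn (assuming it is installed) and its small built-in dataset of labeled handwritten digits.

```python
# A supervised-learning sketch: train on labeled handwritten digits, then
# test on digits the model has never seen. Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 images plus the correct numeral for each
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)   # the "student"
model.fit(X_train, y_train)                 # study the labeled examples

# The "test": images the student has never seen before.
print("accuracy on unseen digits:", model.score(X_test, y_test))
print("prediction for one new image:", model.predict(X_test[:1]))
```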
This concept of supervised learning powers many of our everyday technologies. It’s what lets Siri understand our voice commands, and it’s the secret sauce behind email filters that know how to sort out spam from our inboxes. Supervised learning is also widely used in many other applications, such as fraud detection.
Unsupervised Learning
Unsupervised learning is a type of machine learning where the model is trained on unlabeled data to find patterns or structure in the data. The goal is to discover hidden relationships or dependencies between the input features.
For example, consider a marketing campaign that aims to segment customers based on their purchasing behavior. We want to group customers into clusters based on similarities in that behavior without knowing in advance what those similarities might be. An unsupervised learning algorithm can do this: it analyzes the customers' purchasing histories and identifies patterns or similarities on its own. Once the clusters have been identified, we can tailor our marketing to each group, for example by sending different promotional offers to customers in each cluster based on their purchasing preferences.
Unsupervised learning is widely used in many other applications, such as anomaly detection, recommendation systems, dimensionality reduction, and data compression.
Some common unsupervised learning algorithms are K‑means Clustering, Hierarchical Clustering, and Density-based Clustering.
— ChatGPT‑4
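Here is a sketch of the customer-segmentation idea using the K-means algorithm mentioned above, via scikit-learn (assuming it is installed). The tiny purchase table is invented for illustration.

```python
# An unsupervised-learning sketch: K-means groups customers by similarity
# without ever being told which customers "belong" together. The tiny purchase
# table is invented; columns are [orders per month, average order value in dollars].
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [1,  20], [2,  25], [1,  30],    # occasional, small purchases
    [8,  15], [9,  18], [10, 20],    # frequent, small purchases
    [2, 250], [3, 255], [2, 260],    # rare, big-ticket purchases
])

# (In practice you would scale the columns so the dollar amounts don't dominate.)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)

print("cluster assigned to each customer:", labels)
print("cluster centers (typical behavior per group):")
print(kmeans.cluster_centers_.round(1))
```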
AI Hallucination (Unveiling the Imaginative Side of AI)
AI hallucination, despite its cryptic name, is an intriguing aspect of artificial intelligence. Similar to human hallucinations, where we perceive or imagine things that aren’t real, an AI hallucination refers to situations where an AI model generates results that diverge from accuracy or realism.
Consider this scenario: an AI system designed to identify objects in images starts seeing — or ‘hallucinating’ — items that are not actually present in the photo. This typically happens due to inadequate training data or glitches in the AI’s learning process. For example, if the AI has been trained primarily on images of dogs, it might mistakenly identify a cat in an image as a dog.
Language-based AI models, such as chatbots or text generators, are also susceptible to this phenomenon. They might produce incoherent or entirely incorrect responses, especially if their training data was insufficient or flawed. This is akin to a person trying to form sentences in a language they’ve only partially learned — the result is often a mix of accurate phrases interspersed with nonsensical or erroneous ones.
AI hallucinations can appear convincingly real, leading users to accept them as true. This is why it is crucial to scrutinize what AI systems tell you. You can spot hallucinations by checking for inconsistencies and errors, anything from factual or grammatical mistakes in a text to object identifications in an image that don't match reality. It's important to remember that as advanced as AI has become, it's not infallible. A healthy dose of skepticism goes a long way in navigating the world of AI.
Weak AI (ANI)
Weak AI is also called narrow AI because it is designed to perform specific or limited tasks. Examples include recommendation algorithms on streaming platforms like Netflix or Spotify DJ, voice assistants like Siri or Alexa, and even autonomous vehicles. These systems excel in their designated functions but lack versatility for tasks outside their scope.
Narrow AI is limited to learning and adapting within a specific arena. For example, a narrow AI chatbot such as ChatGPT may excel at understanding and generating natural language yet still misunderstand the broader context of a conversation. A narrow AI image generator, such as Midjourney, may produce an image that is indistinguishable from a real-life digital photo, yet its limited grasp of concepts like depth can make prompting a challenge.
Well, partner, saddle up for a ride into the wild, wild west of technology. We’re here to talk about a pretty thing named ANI.
— ChatGPT‑4 (in the voice of John Wayne, d. June 11, 1979)
Imagine a trusty, hardworking cowboy — let’s call him Slim. Now, Slim is one heck of a roper and rider. He’s the best there is at breakin’ broncos and roundin’ up strays. But, ask him to cook a meal, and you might be better off eatin’ your saddle. He’s got a narrow set of skills, and that’s just fine because that’s what he’s there for. That’s what we call specialized.
Now, this is a lot like how Narrow AI, or ANI, works. These are smart computer systems designed to do one specific job, and they do it real good, but they ain’t meant to do everything. They’re sorta like our cowboy Slim – the best at what they do, but not great at things outside their specialty.
Take, for instance, the voice assistant on your smartphone — let’s call her Siri. She can set a reminder, tell you the weather, or even crack a joke if you’re feelin’ glum. But ask her to write a novel or paint a sunset? She’d be as lost as a calf in a hailstorm. That’s because Siri, like our cowboy Slim, is good at what she’s trained for — but that’s about it.
So next time you ask Siri to play your favorite song or your GPS to guide you to the nearest saloon, remember our cowboy Slim. That’s narrow AI, partner — a one-trick pony, but one heck of a trick it is!
Strong AI (AGI)
Strong AI is also commonly called General AI. In contrast to weak AI, strong AI refers to a hypothetical machine capable of understanding and performing any intellectual task a human being can do. This type of AI remains a subject of ongoing research and has yet to be achieved. Strong AI would possess human-like cognitive abilities, allowing it to learn, understand, and adapt across all human tasks.
Super AI (ASI)
Super AI, also called Artificial Super Intelligence, refers to an advanced form of artificial intelligence that surpasses the capabilities of human intelligence in various domains, such as problem-solving, decision-making, and learning. It would process and analyze vast amounts of data at a speed far exceeding human capacity and perform complex tasks with high accuracy and efficiency.
The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.
— Stephen Hawking in 2016, d. March 14, 2018
Super AI is still a theoretical concept, and its development and implications are subject to ongoing research and debate. The possible dangers surrounding ASI are of notable public and scientific concern primarily due to the exponential speed at which such technology might advance.
Let’s Build This Resource of AI Jargon!
Is there a term, phrase, or concept you want to see on this list? What might be helpful for someone relatively new to artificial intelligence? If so, please, drop me an email at Bart.Niedner@gmail.com or post an AI-RISE blog comment. Thank you in advance!
Your Role in The AI Discussion
Please engage in the ongoing AI discussion here and elsewhere. It is essential to remain informed, curious, and open to its potential. Let’s explore the possibilities of AI together! What are your thoughts? What would you like to explore?
About the “AI 101” Article Series
AI-RISE articles in the AI 101 series are introductory material for anyone who wants accurate, conversational knowledge of this important technology shaping our world. This article makes the discussion more accessible to someone new to the AI conversation.
Article by Bart Niedner

All hail our technological overlords!
— Bart Niedner
Now, where did I put my eyeglasses?!
Bart Niedner, a versatile creative, embarks on a journey of discovery as he delves into both novel writing and the intriguing realm of AI-assisted writing. Bart warmly welcomes you on this journey from novice to master as he leverages his creative abilities in these innovative domains. His contributions to AI-RISE and BioDigital Novels reflect AI collaboration and exploratory work – the purpose of these websites.
“Get Your Geek On!” (Related Reads)
- TELUS International. “50 AI Terms Every Beginner Should Know.” TELUS International, www.telusinternational.com/insights/ai-data/article/50-beginner-ai-terms-you-should-know.
- The New York Times. “Artificial Intelligence Glossary: AI Terms Everyone Should Learn.” The New York Times, www.nytimes.com/article/ai-artificial-intelligence-glossary.html.
- Coursera. “Artificial Intelligence (AI) Terms: A to Z Glossary.” Coursera, www.coursera.org/articles/ai-terms.
- Wikipedia. “Glossary of Artificial Intelligence.” Wikipedia, en.wikipedia.org/wiki/Glossary_of_artificial_intelligence.
- TheNextWeb. “A Glossary of Basic Artificial Intelligence Terms and Concepts.” TheNextWeb, thenextweb.com/news/glossary-basic-artificial-intelligence-terms-concepts.
- G2. “A to Z Artificial Intelligence Terms in Technology Space.” G2, www.g2.com/articles/artificial-intelligence-terms.
Featured Image
Image Creation Remarks
This featured image was created with Midjourney.
Midjourney Prompt
It seemed appropriate to use my description of narrow AI for the Midjourney prompt. 😉
“Weak AI (ANI) Weak AI is also called Narrow AI because it is designed to perform specific or limited tasks. Examples include recommendation algorithms on streaming platforms like Netflix or Spotify DJ, voice assistants like Siri or Alexa, and even autonomous vehicles. These systems excel in their designated functions but lack versatility for tasks outside their scope. Narrow AI is limited to learning and adapting in a specific arena. For example, a Narrow AI chatbot such as ChatGPT may excel, learn, and adapt in understanding our natural language but misunderstand the context of that conversation. A Narrow AI image generator, such as Midjourney, may produce an image that is indistinguishable from a real-life digital photo. However, its inability to understand the concept of depth is a challenge when prompting. –v 5 –s 250”
Postprocessing
The Midjourney image was cropped for borderless proportions using Photoshop v24.7.0 Beta.