These three terms get used interchangeably, but they’re actually nested inside each other — each one more specific than the last.
🧠 Artificial Intelligence (AI)
The broadest umbrella. Any technique that makes machines seem smart — from rules-based logic to full learning systems. “Machines that appear to think.”
📈 Machine Learning (ML)
A subset of AI. Instead of hand-coding rules, the system learns patterns from data. “Show it enough examples and it figures out the rest.”
🧠 Deep Learning
A subset of ML using neural networks with many layers — loosely inspired by the brain. Powers image recognition, voice assistants, and modern language models. “ML that learns features automatically from raw data.”
🧠
AI
Any system designed to mimic intelligent behavior. Includes expert systems, rules engines, chess programs, and modern LLMs.
📈
Machine Learning
Systems that improve with experience. Instead of being programmed with explicit rules, they learn statistical patterns from large datasets.
🎉
Deep Learning
Multi-layer neural networks. Learns hierarchical features automatically. What powers ChatGPT, image generation, and speech recognition today.
💬
Generative AI
A type of deep learning that generates new content — text, images, code, audio — rather than just classifying or predicting from existing data.
03 — GLOSSARY
The AI Glossary: Terms That Trip Everyone Up
These are the six terms you’ll hear constantly — and what they actually mean in plain English.
📙
LLM
Large Language Model
What people think: “Some kind of giant dictionary.” What it is: A neural network trained on massive text data to predict what comes next in a sequence. The “intelligence” emerges from this prediction task at enormous scale.
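A toy sketch can make "predict what comes next" concrete. The snippet below (plain Python, with an invented ten-word corpus) predicts the next word from simple bigram counts; an LLM performs this same prediction task with a neural network trained over trillions of words.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the next word", the core task an LLM
# is trained on, here with simple bigram counts instead of a neural net.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice after "the" in the corpus)
```

Scale this idea up by billions of parameters and the predictions start to look like understanding; that emergence is the whole story of LLMs.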
🚀
GPT
Generative Pre-trained Transformer
What people think: “The name of OpenAI’s AI.” What it is: An architecture type — the Transformer model trained generatively. GPT-4o is one product. The architecture itself is used by many companies.
🧮
Token
The unit of text
What people think: “A word.” What it is: A chunk of text — roughly 3–4 characters or ~0.75 words on average. “unbelievable” is 3–4 tokens. Pricing and context limits are measured in tokens, not words.
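As a rough illustration, here is a minimal estimator using the ~4-characters-per-token rule of thumb mentioned above. It is only a heuristic; exact counts depend on each model's tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb.
    Real counts vary by tokenizer; use the provider's tokenizer when it matters."""
    return max(1, round(len(text) / 4))

message = "Summarize the attached quarterly report in three bullet points."
print(estimate_tokens(message))  # 16 by this heuristic
```

This kind of estimate is mostly useful for budgeting: API pricing and context limits are both denominated in tokens.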
💬
Prompt
Your input to the model
What people think: “Just the message you type.” What it is: Everything the model sees before responding: system instructions, conversation history, pasted documents, and your message. The full context is the prompt.
🏴
Hallucination
Confident wrongness
What people think: “The AI is lying.” What it is: The model generates plausible-sounding text that is factually wrong. It’s not lying — it’s pattern-matching without a ground-truth knowledge check. Always verify important facts.
💿
Context Window
The AI’s working memory
What people think: “How smart it is.” What it is: The maximum amount of text the model can “see” at once — inputs + outputs combined. Modern models range from 32K to 1M+ tokens. Larger window = more context it can use.
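To see why window size matters in practice, here is a quick back-of-the-envelope check. The conversion factor (~1.33 tokens per word) and the 1,000-token budget reserved for the reply are illustrative assumptions, not fixed rules.

```python
def fits_in_context(doc_words: int, window_tokens: int, reply_budget: int = 1000) -> bool:
    """Check whether a document fits in a model's context window.
    Assumes 1 word is about 1.33 tokens; inputs and outputs share the window."""
    doc_tokens = int(doc_words * 1.33)
    return doc_tokens + reply_budget <= window_tokens

print(fits_in_context(50_000, 32_000))   # False: a 50k-word report overflows a 32K window
print(fits_in_context(50_000, 200_000))  # True: it fits easily in a 200K window
```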
The Paper That Changed Everything
In June 2017, eight Google researchers published a paper called “Attention Is All You Need”. It introduced the Transformer architecture — a fundamentally new way for machines to process language. Instead of reading text one word at a time (like older models), Transformers could look at entire passages at once and figure out which words matter most to each other. That one idea — self-attention — turned out to be the key that unlocked modern AI.
Every major AI model you hear about today — GPT, Claude, Gemini, Llama, Grok — is built on the Transformer architecture from that single paper. The “T” in GPT literally stands for Transformer.
Meanwhile, at Nvidia…
At the same time, Nvidia was a company most people knew for making graphics cards for video games. Their GPUs were designed to do one thing extremely well: run thousands of simple math calculations simultaneously. Gaming required it — rendering millions of pixels per frame is massively parallel work.
Researchers realized that training Transformers was also massively parallel work. The same chips that rendered Call of Duty were perfect for training AI models. Nvidia’s CEO Jensen Huang saw this early and made a bold bet: pivot the company’s future toward AI computing.
The Perfect Storm
📄
2017: The Transformer Paper
Google publishes “Attention Is All You Need.” The architecture makes scaling language models practical for the first time.
📸
2017: Nvidia Ships V100s
Nvidia releases the V100 GPU — purpose-built with “Tensor Cores” for AI training. Suddenly, training massive models becomes up to 10× faster.
🚀
2018–2020: The Scaling Race
OpenAI trains GPT-2, then GPT-3, each time proving that bigger Transformers + more Nvidia GPUs = dramatically better AI. The arms race begins.
💰
2023–Now: Nvidia Becomes a Titan
Every AI lab in the world needs Nvidia’s chips. The company’s market cap soars past $3 trillion. A gaming GPU company becomes the backbone of the AI revolution.
The takeaway: AI didn’t happen because one company built a smart chatbot. It happened because a breakthrough in how machines read language (Transformers) collided with hardware that could actually run it at scale (Nvidia GPUs). Neither alone would have been enough. The timing of both is what created the AI moment we’re living through right now.
04 — GLOSSARY
More Terms Decoded
Six more concepts that come up in nearly every AI conversation, explained without the jargon.
🎓
Fine-Tuning
Customized training
What people think: “You reprogram the AI.” What it is: Additional training on a smaller, domain-specific dataset after the base model is built. Like giving a generalist doctor a medical specialty residency. Changes the model’s weights.
🔍
RAG
Retrieval-Augmented Generation
What people think: “Some kind of rag cleaning thing?” What it is: Before the model responds, relevant documents are fetched from a database and included in the prompt. Like letting a lawyer look things up before answering, instead of relying only on memory.
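Here is a minimal sketch of the retrieve-then-prompt idea. It uses naive keyword overlap in place of the embedding search a real RAG system would use, and the documents and question are invented:

```python
import re

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Score each document by word overlap with the query; return the top k.
    (Real RAG systems use embeddings and a vector database for this step.)"""
    q_words = set(re.findall(r"\w+", query.lower()))
    def score(doc: str) -> int:
        return len(q_words & set(re.findall(r"\w+", doc.lower())))
    return sorted(documents, key=score, reverse=True)[:k]

docs = [
    "Refund policy: purchases can be refunded within 30 days with a receipt.",
    "Shipping: standard delivery takes 5-7 business days.",
]
question = "How many days do I have to get a refund?"

# The retrieved text is stuffed into the prompt before the question.
context = "\n".join(retrieve(question, docs))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The model never "learns" the documents; it simply reads them as part of the prompt, which is why RAG answers can cite sources.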
🤖
Agent
AI that takes actions
What people think: “A chatbot.” What it is: An AI that can plan multi-step tasks, use tools (search, code execution, file access), and take real-world actions autonomously — not just respond to a single message.
🌐
Multimodal
Multiple input types
What people think: “It has multiple personalities.” What it is: A model that can process more than one type of input — text, images, audio, video, documents. GPT-4o, Claude 3, and Gemini are all multimodal.
🌡
Temperature
Creativity dial
What people think: “How hot the computer runs.” What it is: A setting that controls output randomness. Temperature 0 = deterministic and focused (the model’s single most likely choice every time — not necessarily more accurate). Temperature 1+ = more creative, varied, surprising. Low for code and structured tasks; higher for creative writing.
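Under the hood, temperature rescales the model's next-word scores before they become probabilities. A minimal sketch with made-up scores (note: temperature 0 is the limiting case where the top choice is always picked; dividing by zero here would fail):

```python
import math

def apply_temperature(scores: dict[str, float], temperature: float) -> dict[str, float]:
    """Convert raw next-word scores to probabilities, scaled by temperature.
    Low temperature sharpens the distribution; high temperature flattens it."""
    scaled = {w: s / temperature for w, s in scores.items()}
    total = sum(math.exp(s) for s in scaled.values())
    return {w: math.exp(s) / total for w, s in scaled.items()}

# Made-up scores for the next word after "The sky is..."
scores = {"blue": 2.0, "cloudy": 1.0, "purple": 0.1}
print(apply_temperature(scores, 0.2))  # "blue" dominates: near-deterministic
print(apply_temperature(scores, 2.0))  # probabilities even out: more surprising picks
```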
🛡
Guardrails
Safety controls
What people think: “Just censorship.” What it is: Technical and policy constraints that shape model behavior — both built into the base model (RLHF, Constitutional AI) and added by operators for specific deployments. Range from content filters to custom rules.
05 — DEEPER CUTS
Deeper Cuts — For the Curious
These terms come up as soon as you go slightly below the surface. You don’t need to master them — just know what they mean when you hear them.
📐
Embedding
A way to represent text (or images) as a list of numbers — a vector — that captures semantic meaning. Words with similar meaning end up with similar vectors. The mathematical backbone of semantic search and RAG.
🗃
Vector Database
A database optimized for storing and searching embeddings. Instead of exact keyword matches, it finds “nearest neighbor” matches by meaning. Powers RAG systems. Examples: Pinecone, Weaviate, pgvector.
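A sketch of both ideas together: hand-made three-number "embeddings" (real ones are learned by a model and have hundreds or thousands of dimensions) plus the brute-force nearest-neighbor search that a vector database exists to accelerate:

```python
import math

# Toy 3-dimensional "embeddings". Similar meanings get similar vectors.
embeddings = {
    "dog":   [0.90, 0.80, 0.10],
    "puppy": [0.85, 0.75, 0.15],
    "car":   [0.10, 0.20, 0.90],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(word: str) -> str:
    """Brute-force nearest-neighbor search, the operation a vector database optimizes."""
    others = [w for w in embeddings if w != word]
    return max(others, key=lambda w: cosine_similarity(embeddings[word], embeddings[w]))

print(nearest("dog"))  # "puppy": closest by meaning, not by spelling
```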
🛠
Parameters
The numerical weights inside a neural network. A “7B model” has 7 billion parameters. More parameters generally mean more capability — but also more compute, memory, and cost to run.
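The cost side is simple arithmetic. A rough memory estimate, assuming 16-bit weights (2 bytes per parameter) and counting only the weights themselves, not activations or overhead:

```python
def memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Rough GPU memory needed just to hold the weights.
    2 bytes/param assumes 16-bit precision; quantized models use 1 byte or less."""
    return params_billions * bytes_per_param  # billions of params * bytes = gigabytes

for size in (7, 70):
    print(f"{size}B model: ~{memory_gb(size):.0f} GB of weights")
```

This is why a 7B model runs on a good laptop while a 70B model needs serious hardware.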
⚡
Inference
Running the model to get a response. When you send a message to Claude or ChatGPT and get a reply, that’s inference. Distinct from training, which is building the model. Most AI costs are inference costs.
⚖
Training vs. Inference
Training: Expensive, GPU-intensive process to build the model. Done once (or periodically) by the AI company. Inference: Running the finished model to generate responses. This is what you pay per token for in an API.
🔓
Open-Source vs. Closed-Source
Open-source AI models release their weights publicly — you can download, run, and modify them. Closed-source models are only accessible via API. Note: “open-weight” is more precise — the weights are free but the training process/data may not be.
06 — THE LANDSCAPE
Meet the Models: The Big Players
The AI model landscape in 2026. Each company has a different philosophy, strength, and target audience.
The Takeaway: You don’t need to pick one forever. Different models excel at different tasks. Many professionals keep 2–3 active subscriptions for different use cases.
07 — CHOOSING A MODEL
Model Tiers: Choosing the Right Size
Every major AI company offers a tiered family. Here’s how to think about which tier fits your task.
👑
Tier 1 — Flagship
Claude Opus • GPT-4.5 • Gemini 2.5 Pro • o3 — Most capable, most expensive. Complex reasoning, nuanced analysis, difficult multi-step tasks. Use when quality is critical.
$$$ High Cost ⏰ Slower
⚡
Tier 2 — Balanced
Claude Sonnet • GPT-4o • Gemini Flash — Great quality at a fraction of the cost. Fast enough for interactive use. This is the everyday workhorse tier for most professionals.
$$ Mid Cost ⏰ Fast
🚀
Tier 3 — Fast & Cheap
Claude Haiku • GPT-4o-mini • Gemini Flash Lite — Optimized for high-volume, low-latency tasks. Perfect for classification, summarization, simple Q&A, and apps with thousands of calls per day.
$ Low Cost ⏰ Fastest
Rule of thumb: Start with Tier 2 (Balanced) for almost everything. Step up to Flagship only when the task genuinely requires it. Step down to Fast/Cheap when you’re building automation or doing simple repetitive tasks at scale.
08 — OPEN vs CLOSED
Open vs. Closed: What’s the Difference?
One of the most important strategic decisions in AI adoption. Each has meaningful tradeoffs.
🔓 Open-Source / Open-Weight
Llama 4 • Mistral • DeepSeek • Phi • Gemma
✅ Free to download and run locally
✅ Data never leaves your machine
✅ Fine-tune for your domain
✅ No per-token API costs at scale
✅ Community ecosystem, plugins, tools
❌ Requires hardware (GPU) to run well
❌ You manage updates and security
❌ Generally behind frontier quality
🔒 Closed-Source / Proprietary
GPT-4o • Claude • Gemini • Grok
✅ Cutting-edge frontier performance
✅ No hardware required — just an API call
✅ Managed infrastructure, updates, safety
✅ Easy to integrate via API or web UI
✅ Enterprise support and SLAs available
❌ Data passes through their servers
❌ Ongoing API costs per token
❌ No control over the model weights
Important Nuance: “Open-source” in AI usually means open-weight — the trained model weights are released, but the training data and process are often proprietary. True open-source AI (full data + code + weights) is rare. Meta’s Llama and Google’s Gemma release weights but not training data.
09 — PROMPT ENGINEERING
Prompt Engineering 101
The quality of what comes out depends entirely on what goes in.
You don’t need to be a developer to benefit from better prompting. These seven techniques will immediately improve your results — in any AI tool, with any model.
🤔 Think of it this way: Prompting is like briefing a brilliant but brand-new intern. They’re incredibly capable — but they have zero context about your work, your preferences, or what “good” means to you. Clear direction unlocks everything.
1. Be Specific
Say exactly what you want — format, length, audience, tone, constraints.
2. Give It a Role
Set the AI’s expertise and perspective before asking your question.
3. Show Examples
A few examples of what you want often beat paragraphs of description.
4. Break It Down
Step-by-step prompts outperform one giant “do everything” prompt.
5. Specify the Format
Table, bullets, JSON, markdown, numbered list — tell it the shape you need.
6. Iterate
Refine the response in the same thread. Don’t restart — build on what’s good.
7. Set Constraints
Tell it what not to do — specific words to avoid, length limits, topics to skip. Negative constraints are often as powerful as positive ones.
10 — TIP 1
Tip 1: Be Specific
Vague prompts get vague answers. The more context you provide about what you want, the closer the first response will be to what you need.
❌ BEFORE
You
Write about dogs.
Result: Generic, unfocused, probably not what you needed.
✅ AFTER
You
Write a 300-word paragraph about the top 3 health benefits of owning a dog, aimed at adults who are considering adoption. Use a warm but factual tone. Cite the benefit type (physical, mental, social) for each point.
Result: Precisely what you wanted, first try.
Specificity checklist: What is the output? • Who is the audience? • What tone/style? • How long? • What format? • What’s the purpose?
11 — TIP 2
Tip 2: Give It a Role
Setting a role or persona at the start of the conversation shapes the entire response. It changes vocabulary, depth of expertise, and framing.
❌ BEFORE
You
How do I handle a difficult employee?
Result: Generic career-advice-style tips with no context.
✅ AFTER
You
You are an experienced HR director at a mid-size tech company. I’m a first-time manager. One of my reports consistently misses deadlines but produces quality work when they do deliver. How should I approach a performance conversation that’s honest but preserves the relationship?
Result: Specific, practical advice tailored to your situation and level of experience.
12 — TIP 3
Tip 3: Show Examples
Showing the AI a few examples of the output you want (often called few-shot prompting) teaches format and style faster than describing them in words.
You
Categorize each email as BILLING, TECHNICAL, or GENERAL.
Email: “My invoice is wrong.” → BILLING
Email: “App keeps crashing.” → TECHNICAL
Email: “What are your hours?” → GENERAL
Now categorize: “I was charged twice this month.”
Result: Consistent format, exactly your defined categories.
When to use few-shot: Anytime you need consistent structure, specific labels, or a particular style you’d struggle to describe in words. 2–3 examples is usually enough.
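If you ever assemble few-shot prompts programmatically, the pattern is just string concatenation: instruction, then labeled examples, then the new case. A minimal sketch using the email example above:

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], new_input: str) -> str:
    """Assemble a few-shot prompt: instruction, labeled examples, then the new case."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f'Email: "{text}" → {label}')
    lines.append(f'Now categorize: "{new_input}"')
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Categorize each email as BILLING, TECHNICAL, or GENERAL.",
    [("My invoice is wrong.", "BILLING"),
     ("App keeps crashing.", "TECHNICAL"),
     ("What are your hours?", "GENERAL")],
    "I was charged twice this month.",
)
print(prompt)
```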
13 — TIP 4
Tip 4: Break It Down
AI models, like people, do better when complex tasks are broken into clear steps. One large “do everything” prompt often produces mediocre work across all dimensions.
❌ BEFORE
You
Create a complete marketing strategy for my app.
Result: Generic surface-level advice covering too many topics poorly.
✅ AFTER (step-by-step)
You
Let’s build a marketing strategy step by step.
Step 1 only: Identify 3 distinct target audience segments for a productivity app aimed at freelancers. For each: demographics, pain points, and where they spend time online. Don’t move on to tactics yet.
Result: Deep, useful analysis on just this step. Then you build from there.
Step 1 Audience
→
Step 2 Channels
→
Step 3 Messaging
→
Step 4 Content Plan
14 — TIP 5
Tip 5: Specify the Format
If you need a table, ask for a table. If you need JSON, say JSON. If you need bullet points under headings, describe that. The model will try to guess — but guessing wastes time.
❌ BEFORE
You
Compare project management tools.
Result: Wall of prose with inconsistent structure.
✅ AFTER
You
Compare Asana, Trello, and Monday.com in a markdown table with these exact columns: Feature, Pricing Tier, Best For, Main Limitation. After the table, write a 2-sentence recommendation for a 10-person startup with no dedicated ops team.
Result: Clean table, instantly usable, plus a focused recommendation.
Format options to try: Markdown table • Numbered list • Bullet hierarchy • JSON • CSV • Code block • Q&A pairs • Headers + sections • Single sentence per line
15 — TIP 6
Tip 6: Iterate and Refine
The first response is a draft, not a final product. Treat the AI like a collaborative editing partner — build on what’s working rather than starting over from scratch.
You
Draft a professional email declining a vendor’s proposal for a new software contract.
Dear [Name], Thank you for your proposal… After careful consideration, we have decided not to move forward at this time…
AI
You
Good start. Make the tone warmer and more relationship-preserving. Keep it under 4 sentences. We want to stay connected with this vendor.
We genuinely appreciate the time you put into this proposal… [warmer, shorter version]
AI
You
Perfect. Add one sentence mentioning we’d like to revisit in Q3 when our budget cycle resets.
Final version with Q3 mention added. ✅
AI
Key insight: Three short follow-up prompts beat one perfect prompt every time. The context of the conversation helps the AI understand exactly what you’re refining toward.
16 — TIP 7
Tip 7: Set Constraints
Telling the AI what not to do is just as powerful as telling it what to do. Negative constraints trim the output space precisely where you need it.
Explain quantum computing to a 12-year-old. Use only everyday analogies. Under 150 words. Do not use the words “superposition” or “entanglement” — explain those concepts without naming them. End with one example of a real-world problem it could solve.
Result: Accessible, concise, exactly right for the intended audience.
Constraint types to use: Word/character limits • Banned words or phrases • Off-limits topics • No hedging language • No preamble/filler • Don’t repeat what I said • One paragraph only
17 — CHEAT SHEET
The Prompt Engineering Cheat Sheet
Seven techniques. One page. Print it. Tape it to your monitor.
#
Technique
One-Line Summary
Example Phrase
1
Be Specific
Say exactly what you want
300 words, warm tone, for adults...
2
Give a Role
Set the AI’s expertise
You are an experienced HR director...
3
Show Examples
Teach by demonstration
Email: “Invoice wrong” → BILLING
4
Break It Down
One step at a time
Step 1 only: identify the audience...
5
Set the Format
Table, bullets, JSON, code
In a markdown table with columns...
6
Iterate
Refine, don’t restart
Good. Now make it shorter and warmer.
7
Set Constraints
Tell it what NOT to do
Under 150 words. No jargon. No preamble.
💡 The meta-tip: Stack these together. A prompt with a role + specific ask + format + constraint will consistently outperform a prompt with just one technique.
18 — COMMON MISTAKES
Common Mistakes to Avoid
Most frustrating AI experiences come from these five patterns. Recognize them and you’ll immediately get better results.
🖧
Zero-Context Code Requests
“Fix my code” — with no language, no error message, no file. The model guesses wildly. Always include the error, the relevant snippet, and what you expected to happen.
📄
Unfocused Document Dumps
Pasting a 50-page document and saying “Summarize this.” Tell the AI which section matters, what decision the summary supports, and what format you need the output in.
🕐
Real-Time Fact Requests
Asking AI for today’s stock price, yesterday’s news, or who won last night’s game. LLMs have knowledge cutoffs — they don’t browse live unless connected to a search tool.
📊
Trusting Numbers & Citations
AI-generated statistics, research citations, and URLs should always be verified. Hallucinations are most dangerous when they look precise and authoritative. Double-check before you share.
🔄
Starting Over Instead of Iterating
Getting a 70% good response and starting a brand new conversation instead of saying “keep the structure but change X.” The conversation context is valuable — iterating is faster and produces better results than re-explaining from scratch every time.
19 — GETTING STARTED
Getting Started — Which Tool Should I Use?
The best AI tool depends entirely on your use case, workflow, and budget. Here’s a practical decision guide.
👋
Just Exploring?
Start with ChatGPT free or Claude.ai free. Both have generous free tiers. No credit card needed. Perfect for experimenting with prompting concepts before committing.
ChatGPT (free) • Claude.ai (free)
💼
For Daily Work?
Upgrade to Claude Pro ($20/mo) or ChatGPT Plus ($20/mo). Priority access, faster models, higher rate limits, and access to the latest flagship models.
Claude Pro • ChatGPT Plus
✏
For Long-Form Writing?
Claude is widely regarded as the best for nuanced, long-form writing — essays, documents, reports. Its outputs tend to be more natural and less formulaic.
Claude
🔎
For Research?
Google Gemini integrates with Google Search to provide sourced, up-to-date information. Best for research tasks where you need citations and current information.
Google Gemini
🔒
For Privacy?
Run a local model on your own machine with Ollama (free, open-source). Download Llama 4, Mistral, or Phi and run everything offline. Zero data leaves your device.
Ollama (local) • Llama 4
💻
For Coding?
GitHub Copilot integrates directly into your code editor (VS Code, JetBrains, etc.) and suggests code as you type. Claude and ChatGPT are strong for longer architectural questions and code review in chat form.
GitHub Copilot • Claude
⚠️
Wait — Which “Copilot”?
Microsoft uses the name Copilot for two very different things — and it trips everyone up:
Microsoft 365 Copilot — AI built into your everyday Microsoft apps (Word, Excel, Outlook, Teams, PowerPoint). It helps write emails, summarize meetings, build spreadsheet formulas, and draft documents. This is the one most people in an organization will use. It’s included in certain Microsoft 365 business plans or available as an add-on.
GitHub Copilot — A completely separate product for software developers. It lives inside code editors and suggests code as you type. Built by GitHub (owned by Microsoft) using OpenAI models. Different product, different subscription, different audience.
Same brand name, very different tools. If your IT department rolls out “Copilot,” they almost certainly mean the Microsoft 365 one.
20 — NEXT STEPS
Resources & Next Steps
You now have the vocabulary, the model map, and seven proven techniques. Here’s where to go from here.
📚
Learn Prompting
promptingguide.ai — Comprehensive free resource covering basic to advanced prompting techniques with examples.
🎓
Anthropic Academy
anthropic.com/learn — Free courses on AI safety, Claude usage, and prompt engineering from the team that built Claude.
📜
OpenAI Cookbook
cookbook.openai.com — Practical examples and guides for real-world AI tasks. Great for developers and power users.
🤖
Try Local AI
ollama.com — Run Llama, Mistral, and other open models locally. Privacy-first. One command to get started on Mac, Linux, or Windows.
📈
AI News & Research
The Rundown AI — Daily newsletter summarizing the most important AI developments. Not technical. 5-minute read.
🎉
AI Enthusiasts Groups
The Cincinnati area has a growing AI community. Check out groups like Cincy AI and other local meetups focused on AI, machine learning, and data science. The best way to stay current isn’t reading — it’s hearing what people are actually building and using. Find a group, show up, and keep the conversation going.
💡 The single best piece of advice: The best way to learn AI is to use AI. Start with one task you do every day — drafting an email, summarizing a document, planning a meeting — and see if AI can help. You’ll learn more from one real task than from any tutorial.
🚀
You’ve got the vocabulary. Now go use it.
The gap between AI users who get great results and those who don’t isn’t talent. It’s practice and intentional prompting. Every session is a chance to get better.