// CHAPTER 05 OF 05 — FINAL CHAPTER

Building
AI Systems

Prompting tricks, AI agents, RAG, vector databases, and how images are created from pure noise.

15
Prompt Engineering

The way you ask an AI matters a lot. The same question asked in two different ways can produce wildly different results. Prompt engineering is the skill of crafting inputs that get great outputs.

Bad prompt vs Good prompt — spot the difference
❌ VAGUE PROMPT
"explain APIs"
Gets you: A broad, surface-level answer with no examples. Not very useful.
✅ ENGINEERED PROMPT
"Explain how REST APIs handle authentication, with a real code example in Python for a beginner"
Gets you: Specific, practical, with code, at the right level. Actually usable!

Good prompts do four things: set a role ("You are an expert..."), specify the output (format, length, level), give examples of what you want, and break complex tasks into steps.

💡 Prompt engineering isn't a trick — it's the primary way you communicate with AI. A vague prompt gets generic output. A sharp prompt gets something genuinely useful.
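The four ingredients above can be assembled mechanically. Here's a toy sketch — the role, task, and constraints are just illustrative examples, not a real library:

```python
# A minimal "prompt builder" combining the four ingredients:
# role, specific task, output constraints, and optional examples.
def build_prompt(role, task, constraints, examples=None):
    parts = [f"You are {role}.",            # 1. set a role
             task,                          # 2. be specific about the task
             "Constraints: " + "; ".join(constraints)]  # 3. format/length/level
    if examples:
        parts.append("Examples:\n" + "\n".join(examples))  # 4. show what you want
    return "\n".join(parts)

prompt = build_prompt(
    role="an expert backend engineer teaching a beginner",
    task="Explain how REST APIs handle authentication.",
    constraints=["include a Python code example", "keep it under 300 words"],
)
print(prompt)
```

Even this trivial structure beats a one-liner like "explain APIs": every piece of context you'd otherwise leave implicit is now spelled out.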
16
Chain of Thought (CoT)

Sometimes a model gives a wrong answer not because it doesn't know — but because it rushed. Chain of Thought (CoT) prompting makes the model work through problems step by step instead of jumping to a conclusion.

Same maths problem — two approaches
Question: A shirt costs $24. It's 25% off. You have a $5 coupon on top. How much do you pay?

Instant answer: $13. (Wait... is that right? 🤔)
With CoT: 25% off $24 is $24 × 0.75 = $18; the $5 coupon brings it to $13. The rushed answer happened to be right — but only the written-out steps prove it.
📝
CoT = scratch paper: Instead of forcing an instant answer, you give the model space to reason. For multi-step problems, this small change makes a huge difference in accuracy.
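The same scratch work, written as code — each intermediate value is recorded instead of skipped, which is exactly what a CoT prompt asks the model to do:

```python
# Chain of Thought as arithmetic: write down every step, then check the result.
price = 24.00
after_discount = price * (1 - 0.25)  # 25% off: 24 * 0.75 = 18.00
final = after_discount - 5.00        # $5 coupon on top: 18 - 5 = 13.00
print(final)  # 13.0 -- the instant answer was right, but now it's checkable
```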
17
RAG (Retrieval-Augmented Generation)

Remember hallucinations? RAG is one of the best ways to fix them. Instead of relying on what the model "memorised" during training, you give it real, current documents to reference at query time.

How RAG works — step by step
1. User asks a question — "What's our refund policy?"
2. System searches the knowledge base — finds the relevant policy document
3. Document + question are sent to the AI together as context
4. AI answers using the actual document — no guessing, no hallucination

The key insight: separate the roles. The model does the explaining; the knowledge base provides the facts. Update your docs and the AI uses the new info immediately — no retraining needed.

💡 Most enterprise AI products use RAG. It's why company chatbots can answer questions about your specific products, policies, or internal docs.
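The four steps above fit in a few lines. This is only a toy: the documents and the word-overlap "search" are stand-ins, and a real system would use embedding-based vector search (next concept) and send the final prompt to an actual LLM.

```python
# A toy RAG pipeline: retrieve a document, then bundle it with the question.
import re

KNOWLEDGE_BASE = [
    "Our refund policy: full refund within 30 days of purchase.",
    "Our shipping policy: standard delivery takes 3 to 5 business days.",
]

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question):
    """Step 2: pick the document sharing the most words with the question."""
    return max(KNOWLEDGE_BASE, key=lambda doc: len(words(question) & words(doc)))

def build_rag_prompt(question):
    doc = retrieve(question)
    # Step 3: document + question are sent to the AI together as context.
    return f"Answer using ONLY this document:\n{doc}\n\nQuestion: {question}"

print(build_rag_prompt("What's our refund policy?"))
```

Notice that nothing about the model changes when a policy changes — you edit the document in `KNOWLEDGE_BASE`, and the next answer uses it.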
18
Vector Database

RAG needs to find relevant documents fast. A vector database stores embeddings (those number lists from Chapter 2) and searches by meaning rather than exact keywords.

Semantic search — meaning beats exact words
🗺️
It searches like a brain, not a ctrl+F: "heart condition" will find "cardiac arrest" even if those exact words don't appear together. Vector search understands intent, not just characters.
💡 Popular vector databases: Pinecone, Weaviate, Qdrant, ChromaDB. Even PostgreSQL can do it with the pgvector extension!
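At its core, vector search is just "which stored embedding points in the most similar direction to the query embedding?" — usually measured by cosine similarity. The 3-number embeddings below are made up for illustration; real ones have hundreds or thousands of dimensions.

```python
# Nearest-neighbour search by cosine similarity over toy embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

docs = {
    "cardiac arrest treatment": [0.9, 0.1, 0.0],
    "guitar chord charts":      [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of "heart condition"

best = max(docs, key=lambda d: cosine(query_vec, docs[d]))
print(best)  # "cardiac arrest treatment" -- zero shared words, similar meaning
```

A vector database is essentially this comparison, made fast over millions of embeddings with clever indexing.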
19
AI Agents

Everything so far has been about AI that generates text. An AI agent can actually do things — run code, search the web, call APIs, interact with tools, and chain these steps together to complete a task.

The Agent Loop — four steps
👁️ OBSERVE — Read the current situation
🧠 THINK — Decide what to do next
⚡ ACT — Call a tool or run code
🔄 REPEAT — Loop until done
💡 The hard part of agents isn't making them capable — it's making them reliable. Each step can fail, and errors stack up. That's why modern agent systems invest heavily in error handling and self-correction.
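The loop itself is simple. In this toy sketch both "tools" are stand-ins, and the THINK step is a hard-coded rule — in a real agent, that decision is made by asking the LLM which tool to call next:

```python
# A toy agent loop: OBSERVE -> THINK -> ACT -> REPEAT.
TOOLS = {
    "search": lambda query: f"search results for: {query}",  # stand-in tool
    "finish": lambda answer: answer,                         # ends the loop
}

def run_agent(goal, max_steps=5):
    observations = []                                 # what the agent has seen
    for _ in range(max_steps):
        # THINK: a real agent asks the LLM to choose; a rule stands in here.
        if not observations:
            action, arg = "search", goal              # nothing known yet
        else:
            action, arg = "finish", observations[-1]  # enough info: wrap up
        result = TOOLS[action](arg)                   # ACT: call the chosen tool
        observations.append(result)                   # OBSERVE: record the result
        if action == "finish":
            return result                             # REPEAT ends when finished
    return "step limit reached"

print(run_agent("What's the latest Python release?"))
```

The `max_steps` cap is one small example of the reliability engineering mentioned above: without it, a confused agent would loop forever.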
20
Diffusion Models

So far it's all been text. But what about image generation (Midjourney, DALL·E, Stable Diffusion)? These use a completely different and counterintuitive approach.

The model first learns to destroy images — adding noise until they become static. Then it learns to reverse that process, cleaning up noise step by step.

The diffusion process — from noise to image
🎨
Ink in water: Diffusion comes from physics — particles spreading into randomness, like ink in water. The model learns the reverse: how to bring order back from chaos. The same idea is now used for video, audio, 3D, and even molecule design!
💡 You start with pure random noise. The model gets a text prompt. Then, step by step, it removes noise guided by the prompt — turning static into a detailed image. That's Stable Diffusion, DALL·E, Midjourney, all of them.
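Here's the shape of that sampling loop as a toy: start from pure noise and repeatedly nudge it toward a target pattern, with a little fresh noise at each step. In a real diffusion model, the nudge comes from a trained neural network predicting which noise to remove (guided by the text prompt); the arithmetic below only illustrates the loop's structure.

```python
# Toy "reverse diffusion": many small denoising steps turn static into structure.
import random

random.seed(0)
target = [0.2, 0.9, 0.5, 0.1]             # the "image" we want to reach
x = [random.gauss(0, 1) for _ in target]  # step 0: pure random noise

def distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

start_dist = distance(x, target)
for step in range(50):
    # Each step removes a bit of noise (moves toward the target)
    # and injects a little randomness, as diffusion samplers do.
    x = [xi + 0.2 * (t - xi) + random.gauss(0, 0.01)
         for xi, t in zip(x, target)]

print(distance(x, target) < start_dist)  # True: static has become structure
```

Fifty tiny steps, each one easy — that's why diffusion works where "generate the whole image at once" doesn't.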
🎉

You've covered all 20 concepts!

Neural Networks → Transformers → LLMs → Training → Building Systems. You now understand the full stack of modern AI — better than most adults. Time to prove it!

Take the Final Quiz ⚡