
LLM Fundamentals

📖 11 lessons · 🎯 13 missions · 🔧 4 workshops · 🚀 1 project · ⏱️ ~16 hours

📖Lessons

1
beginner · 📖 12 min · lesson · Rank 01

What are Large Language Models?

Introduction to LLMs, how they work, and when to use them

llm · fundamentals · introduction
2
beginner · 📖 15 min · lesson · Rank 01

Tokens and Tokenization

Learn how LLMs break down text into tokens and why it matters for costs and context limits

tokens · tokenization · fundamentals · costs
🔒
beginner · 📖 14 min · lesson · PRO

Context Windows and Memory

Understanding how LLMs handle conversations, long documents, and token limits

context · memory · limits · conversations
🔒
beginner · 📖 13 min · lesson · Rank 01 · PRO

Temperature and Sampling

Control LLM creativity and randomness with temperature and sampling parameters

temperature · sampling · creativity · parameters
🔒
beginner · 📖 12 min · lesson · Rank 01 · PRO

System Prompts and Roles

Learn how to use system prompts to control LLM behavior and define roles

system-prompts · roles · instructions · behavior
🔒
beginner · 📖 16 min · lesson · Rank 01 · PRO

Prompt Engineering Basics

Learn techniques to write better prompts and get higher quality outputs from LLMs

prompts · engineering · techniques · best-practices
🔒
beginner · 📖 18 min · lesson · Rank 01 · PRO

Applied Prompting Patterns

Put prompt techniques to work on real tasks: classification, extraction, sentiment analysis, and structured output

classification · extraction · sentiment · structured-output · few-shot · prompts
🔒
beginner · 📖 14 min · lesson · Rank 01 · PRO

Common Pitfalls and Limitations

Understand what LLMs cannot do well and how to work around their limitations

limitations · pitfalls · best-practices · troubleshooting
🔒
beginner · 📖 25 min · lesson · Rank 01 · PRO

Workshop: Your First API Call

Build your first LLM-powered application from scratch

workshop · hands-on · api · practice · project
🔒
intermediate · 📖 18 min · lesson · PRO

Streaming & Real-Time Responses

Implement streaming LLM responses with Server-Sent Events for responsive chat interfaces

streaming · sse · real-time · chat · production · ux
🔒
intermediate · 📖 16 min · lesson · PRO

Fine-Tuning vs Prompt Engineering

Know when to fine-tune, when to prompt-engineer, and when to use RAG — the decision framework for production AI

fine-tuning · lora · decision-framework · production · optimization

🎯Missions

🔒
advanced · 🎯 35–50 min · mission · Rank 03 · PRO

M-013 · Build a Context Window Manager

Nebula Corp's chatbot keeps crashing when conversations get too long — it exceeds the model's context window. Build a context window manager that tracks token usage, implements smart truncation strategies (keep system prompt + recent messages + important messages), and warns when approaching the limit. The manager should support multiple truncation strategies: 'sliding-window', 'summarize-old', and 'priority-based'.
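The sliding-window strategy the mission mentions can be sketched briefly. This is a minimal, illustrative shape (function names and the words-per-token estimate are hypothetical; a real manager would use the model's actual tokenizer):

```python
def estimate_tokens(message: dict) -> int:
    """Rough token estimate: ~1.3 tokens per word (illustrative only)."""
    return max(1, int(len(message["content"].split()) * 1.3))

def sliding_window_trim(messages: list[dict], max_tokens: int) -> list[dict]:
    """Keep the system prompt plus as many recent messages as fit the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(estimate_tokens(m) for m in system)
    kept: list[dict] = []
    for msg in reversed(rest):          # walk newest-first
        cost = estimate_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return system + list(reversed(kept))
```

The other two strategies ('summarize-old', 'priority-based') would replace the newest-first loop with a summarization call or a per-message importance score.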

🔒
intermediate · 🎯 30–45 min · mission · Rank 06 · PRO

M-009 · Build a Fine-Tuning Dataset Validator

Nebula Corp is preparing training data for fine-tuning their customer support model. Before spending money on training, they need to validate the dataset quality. Build a validator that checks training examples for format compliance, detects contradictions, measures diversity, and produces a readiness report with a go/no-go recommendation.

🔒
beginner · 🎯 20–35 min · mission · Rank 01 · PRO

M-006 · Build a Sentiment Classifier

Nebula Corp's product team wants to automatically classify customer reviews from their app store listing. They need a function that builds a few-shot prompt to classify reviews as Positive, Negative, or Neutral. The current implementation just returns a hardcoded value. Write a prompt-building function that uses few-shot examples to reliably classify sentiment — and make all the test cases pass.
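A few-shot prompt builder for this kind of task might look like the sketch below (the example reviews and wording are invented for illustration, not the mission's actual test fixtures):

```python
# Illustrative labeled examples shown to the model before the real review.
FEW_SHOT_EXAMPLES = [
    ("Love this app, works perfectly!", "Positive"),
    ("Crashes every time I open it.", "Negative"),
    ("It does what it says, nothing more.", "Neutral"),
]

def build_sentiment_prompt(review: str) -> str:
    """Build a few-shot classification prompt ending at the label slot."""
    lines = [
        "Classify each review as Positive, Negative, or Neutral.",
        "Respond with exactly one word.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)
```

Ending the prompt at "Sentiment:" nudges the model to complete with just the label.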

🔒
intermediate · 🎯 30–45 min · mission · Rank 03 · PRO

M-012 · Build a Streaming Token Renderer

Nebula Corp's chatbot waits for the full LLM response before showing anything — users think it's broken. Build a streaming renderer that processes Server-Sent Events (SSE), displays tokens as they arrive, tracks time-to-first-token, and handles cancellation. The renderer should buffer partial chunks, detect the [DONE] signal, and report streaming metrics.
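The buffering-and-[DONE] part of the task can be sketched as a small generator. This is one possible shape, assuming OpenAI-style `data:` lines (chunk boundaries and payload format are assumptions):

```python
def parse_sse_stream(chunks):
    """Yield token payloads from raw SSE chunks; stop at the [DONE] signal."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk            # chunks may split a line mid-way
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            line = line.strip()
            if not line.startswith("data:"):
                continue           # ignore comments / blank keep-alives
            data = line[len("data:"):].strip()
            if data == "[DONE]":
                return             # end of stream
            yield data
```

A real renderer would wrap this with time-to-first-token tracking and a cancellation check inside the loop.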

🔒
beginner · 🎯 25–40 min · mission · Rank 01 · PRO

M-001 · Build an API Response Router

Nebula Corp is building an AI-powered support system. When a customer message comes in, it needs to: (1) call the LLM API to classify the message intent, (2) parse the structured response, and (3) route to the correct handler. The current implementation has broken parsing, missing error handling, and routes everything to the wrong handler. Fix the router so it correctly classifies, parses, and routes messages.

6
beginner · 🎯 10–20 min · mission · Rank 01

M-003 · Build Your First LLM Prompt

Nebula Corp's new intern needs to send their first request to an LLM API, but the prompt builder function is incomplete. It should take a user question and a system persona, then return a properly structured messages array that any LLM API can consume. The function currently returns an empty array. Wire it up so it produces the correct chat-completion message format with a system message and a user message.
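The chat-completion message format the mission describes is small enough to sketch in full (the function name is hypothetical; the `role`/`content` shape is the common convention across chat APIs):

```python
def build_messages(persona: str, question: str) -> list[dict]:
    """Return a messages array with one system and one user message."""
    return [
        {"role": "system", "content": persona},   # sets the model's persona
        {"role": "user", "content": question},    # the actual request
    ]
```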

🔒
intermediate · 🎯 25–40 min · mission · Rank 01 · PRO

M-011 · Defend Against Prompt Injection

Nebula Corp's customer support chatbot has been exploited three times this week. Attackers are using prompt injection to make the bot reveal its system prompt, ignore its restrictions, and pretend to be a different AI. The security team needs you to build a defense layer: a function that detects common injection patterns in user input and a hardened system prompt that resists override attempts.
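The detection half of that defense layer might start as simple pattern matching, roughly like this sketch (the patterns are illustrative; regexes alone are not a complete defense against injection):

```python
import re

# A few common injection phrasings — far from exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|above) (previous |prior )?instructions",
    r"reveal .*system prompt",
    r"you are now",
    r"pretend (to be|you are)",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input matching any known injection pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

In practice this would be paired with the hardened system prompt the mission asks for, since pattern lists are easy to evade.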

🔒
beginner · 🎯 20–35 min · mission · Rank 01 · PRO

M-007 · Extract Structured Data from Text

Nebula Corp's finance team receives hundreds of invoices as plain text emails. They need a function that builds a prompt to extract structured JSON data (vendor, amount, date, invoice number) from unstructured invoice text. The current implementation produces a vague prompt that returns inconsistent formats. Build a robust prompt constructor that reliably extracts the right fields as valid JSON.
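One way to make such a prompt constructor precise is to spell out the schema and output constraints explicitly, as in this sketch (field names and wording are illustrative):

```python
import json

def build_extraction_prompt(invoice_text: str) -> str:
    """Build a prompt that pins down both the fields and the output format."""
    schema = {
        "vendor": "string",
        "amount": "number",
        "date": "YYYY-MM-DD",
        "invoice_number": "string",
    }
    return (
        "Extract the following fields from the invoice below and "
        "respond with valid JSON only, no extra text.\n"
        f"Schema: {json.dumps(schema)}\n\n"
        f"Invoice:\n{invoice_text}"
    )
```

Naming the exact keys and date format is what keeps the model from returning inconsistent shapes.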

🔒
beginner · 🎯 20–35 min · mission · Rank 01 · PRO

M-002 · Fix the Context Window Overflow

Nebula Corp's chatbot keeps crashing with 'context length exceeded' errors in production. The conversation manager is supposed to trim old messages when the token count approaches the model's limit — but the trimming logic has bugs. Some conversations never get trimmed, others lose the system prompt entirely. Debug the conversation manager and make it handle long conversations gracefully.

🔒
beginner · 🎯 15–30 min · mission · Rank 01 · PRO

M-004 · Fix the Token Cost Calculator

Nebula Corp's billing dashboard has a broken token cost calculator. The function is supposed to estimate API costs based on input tokens, output tokens, and the selected model — but customers are being shown wildly wrong numbers. Some bills show $0 when they should be $5, and others are off by orders of magnitude. Find the bugs in the pricing logic and fix them before finance notices.
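A correct version of such a calculator is just per-1K-token rates applied separately to input and output. A minimal sketch (model names and prices are invented for illustration; real provider pricing differs):

```python
# Hypothetical per-1K-token rates in USD — not real provider prices.
PRICING_PER_1K = {
    "small-model": {"input": 0.0005, "output": 0.0015},
    "large-model": {"input": 0.01, "output": 0.03},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate API cost: input and output tokens are priced separately."""
    if model not in PRICING_PER_1K:
        raise ValueError(f"Unknown model: {model}")
    rates = PRICING_PER_1K[model]
    return (input_tokens / 1000) * rates["input"] \
         + (output_tokens / 1000) * rates["output"]
```

The classic bugs this mission hints at — $0 bills and order-of-magnitude errors — usually come from integer division or mixing up per-token and per-1K rates.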

🔒
beginner · 🎯 15–30 min · mission · Rank 01 · PRO

M-005 · One-Shot Email Categorizer

Nebula Corp's support inbox is overflowing. They need an automated triage system that categorizes incoming emails into Billing, Technical, or General using a one-shot prompt. The current function doesn't use any examples and the model keeps returning inconsistent labels. Build a prompt constructor that uses exactly one well-chosen example to teach the model the expected format, and handles any email topic.

🔒
intermediate · 🎯 20–30 min · mission · Rank 01 · PRO

M-010 · Speed Up Fibonacci

The naive recursive Fibonacci function runs in exponential time and is far too slow for large inputs. Optimize it.

🔒
beginner · 🎯 15–30 min · mission · Rank 01 · PRO

M-008 · Tune the Temperature Settings

Nebula Corp's AI platform lets users run different tasks — data extraction, creative writing, code generation, and chatbot conversations. But the temperature configuration is all wrong: creative tasks use temperature 0, extraction uses temperature 1.5, and the chatbot is set to 2.0. Users are complaining about boring marketing copy and broken JSON outputs. Fix the configuration function so each task type uses an appropriate temperature and top-p setting.
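A fixed configuration table might look like this sketch (the specific values are common rules of thumb, not provider recommendations, and the task names are illustrative):

```python
# Deterministic tasks get low temperature; open-ended tasks get more randomness.
TASK_SETTINGS = {
    "extraction":       {"temperature": 0.0, "top_p": 1.0},
    "code-generation":  {"temperature": 0.2, "top_p": 0.95},
    "chat":             {"temperature": 0.7, "top_p": 0.9},
    "creative-writing": {"temperature": 1.0, "top_p": 0.95},
}

def sampling_config(task: str) -> dict:
    """Return sampling settings for a task, with a moderate default."""
    return TASK_SETTINGS.get(task, {"temperature": 0.7, "top_p": 0.9})
```

Note how this inverts the broken setup in the mission: extraction is pinned to 0 for reproducible JSON, while creative writing gets the high temperature.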