
Prompt Engineering

📖 12 lessons · 🎯 12 missions · 🔧 4 workshops · 🚀 2 projects · ⏱️ ~25 hours

📖Lessons

1 · beginner · 📖 14 min · lesson

Prompt Engineering Fundamentals

Master the core principles and patterns of effective prompt engineering

prompting · fundamentals · best-practices · patterns
2 · beginner · 📖 15 min · lesson

Zero-Shot and Few-Shot Learning

Master the art of teaching LLMs new tasks through examples

few-shot · zero-shot · examples · learning
🔒 intermediate · 📖 16 min · lesson · PRO

Chain-of-Thought Prompting

Improve reasoning by guiding LLMs to think step-by-step

chain-of-thought · reasoning · step-by-step · cot
🔒 intermediate · 📖 15 min · lesson · PRO

Instruction Engineering

Master the art of writing clear, effective instructions for LLMs

instructions · clarity · formatting · constraints
🔒 intermediate · 📖 20 min · lesson · PRO

Tool Calling & Function Calling

Enable LLMs to call external functions and APIs based on natural language

tool-calling · function-calling · apis · structured-output
🔒 intermediate · 📖 14 min · lesson · PRO

Role-Based Prompting

Use roles and personas to shape LLM behavior and expertise

roles · personas · system-prompts · behavior
🔒 advanced · 📖 17 min · lesson · PRO

Advanced Prompting Techniques

Master sophisticated prompting methods for complex reasoning tasks

advanced · tree-of-thoughts · self-critique · chaining
🔒 intermediate · 📖 15 min · lesson · PRO

Domain-Specific Prompting

Master prompting techniques for code, data analysis, creative writing, and technical docs

domains · code · writing · analysis · documentation
🔒 intermediate · 📖 25 min · lesson · PRO

Workshop: Prompt Optimization

Build a complete prompt testing and optimization framework

workshop · optimization · testing · hands-on
🔒 intermediate · 📖 16 min · lesson · PRO

Prompt Evaluation

Systematically measure and improve prompt quality

evaluation · metrics · testing · quality
🔒 advanced · 📖 16 min · lesson · PRO

Production Prompt Management

Version, monitor, and maintain prompts in production systems

production · versioning · monitoring · maintenance
🔒 intermediate · 📖 18 min · lesson · PRO

Structured Output & JSON Mode

Get reliable, schema-compliant JSON from LLMs using structured output modes, tool calling, and validation

structured-output · json · schema · validation · production · api

🎯Missions

1 · beginner · 🎯 15–30 min · mission · Rank 02

M-015: Build a Reusable Prompt Template

Nebula Corp's team keeps copy-pasting prompts and manually swapping out variables. They need a simple prompt template engine that takes a template string with {{variable}} placeholders and fills them in from a data object. The current function just returns the raw template without any substitution. Make it work so the team can reuse prompts across their app.
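
The substitution described above can be sketched in a few lines. This is a minimal illustration, not the mission's starter code; the function name is hypothetical:

```python
import re

def fill_template(template, values):
    """Replace each {{name}} placeholder with the matching key from values.

    Unknown placeholders are left intact so missing data is easy to spot.
    """
    def substitute(match):
        key = match.group(1).strip()
        return str(values.get(key, match.group(0)))
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)
```

For example, `fill_template("Summarize {{doc}} for {{audience}}.", {"doc": "the Q3 report", "audience": "executives"})` returns `"Summarize the Q3 report for executives."`.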

🔒 intermediate · 🎯 30–45 min · mission · Rank 04 · PRO

M-021: Build a Schema-Validated Data Extractor

Nebula Corp's data pipeline receives unstructured text (emails, support tickets, reviews) and needs to extract structured data reliably. Build a schema-validated extractor that defines output schemas, validates LLM responses against them, retries on validation failure with error feedback, and handles nested objects and arrays.
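
The validate-and-retry loop might look like the sketch below (flat fields only; the mission also requires nested objects and arrays). `call_llm` is a stand-in for whatever model client you use:

```python
import json

def validate(data, schema):
    """Return a list of error strings; an empty list means the data fits."""
    errors = []
    for field, expected_type in schema.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"{field} should be {expected_type.__name__}")
    return errors

def extract_with_retry(text, schema, call_llm, max_attempts=3):
    """Ask the model for JSON, validate it, and retry with error feedback."""
    prompt = f"Extract {sorted(schema)} from:\n{text}\nReturn JSON only."
    for _ in range(max_attempts):
        reply = call_llm(prompt)
        try:
            data = json.loads(reply)
        except json.JSONDecodeError:
            prompt += "\nThat was not valid JSON. Try again."
            continue
        errors = validate(data, schema)
        if not errors:
            return data
        # Feed the concrete validation errors back so the retry can fix them.
        prompt += "\nFix these problems: " + "; ".join(errors)
    raise ValueError("could not get schema-compliant output")
```

Feeding the specific errors back, rather than blindly retrying, is what makes the retry loop converge.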

🔒 intermediate · 🎯 25–40 min · mission · Rank 02 · PRO

M-022: Build Prompt Guardrails

Your team's chatbot at Nebula Corp is responding to off-topic queries and leaking internal information. Write a guardrail function that filters user input and blocks anything unrelated to the product domain.
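
A keyword-based filter is the simplest possible guardrail and is shown here only as a sketch; production systems typically layer a classifier on top. The topic and pattern lists are illustrative:

```python
def guardrail(user_input, allowed_topics, blocked_patterns):
    """Allow only on-topic queries; refuse known leak/injection patterns."""
    lowered = user_input.lower()
    if any(pattern in lowered for pattern in blocked_patterns):
        return False, "Blocked: disallowed content."
    if not any(topic in lowered for topic in allowed_topics):
        return False, "Sorry, I can only help with product questions."
    return True, user_input
```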

🔒 intermediate · 🎯 30–45 min · mission · Rank 02 · PRO

M-017: Chain-of-Thought Math Solver

Nebula Corp's educational platform needs a math tutoring system that doesn't just give answers — it shows the reasoning process. Students learn better when they see each step. The current prompt just asks for the answer, and the model often makes arithmetic errors on multi-step problems. Build a Chain-of-Thought prompt that forces the model to show its work step-by-step, verify the answer, and catch its own mistakes before presenting the final result.
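
A Chain-of-Thought prompt of the kind described might be constructed like this (wording is illustrative, not the mission's expected solution):

```python
def cot_prompt(problem):
    """Wrap a math problem in show-your-work, then-verify instructions."""
    return (
        "Solve the problem below. Show every step of your reasoning as a "
        "numbered list. After the last step, write 'Check:' and verify the "
        "result by re-deriving it a second way. Only then write 'Answer:' "
        "followed by the final result.\n\n"
        f"Problem: {problem}"
    )
```

The explicit 'Check:' stage is what lets the model catch its own arithmetic mistakes before committing to an answer.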

🔒 beginner · 🎯 25–40 min · mission · Rank 02 · PRO

M-014: Few-Shot Product Classifier

Nebula Corp's e-commerce platform receives thousands of product listings daily, but they're uncategorized. The current zero-shot classifier is inconsistent — sometimes 'wireless headphones' goes to Electronics, sometimes to Audio, sometimes to Accessories. Build a few-shot prompt constructor that uses 3 diverse examples to teach the model the exact categorization rules. The examples must cover edge cases and demonstrate the distinction between similar categories.
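
The shape of such a few-shot prompt constructor, sketched with made-up example pairs:

```python
def few_shot_prompt(examples, product):
    """Build a classification prompt from (listing, category) example pairs."""
    lines = ["Classify each product into exactly one category."]
    for listing, category in examples:
        lines.append(f"Product: {listing}\nCategory: {category}")
    # End with the new product and an open Category: for the model to fill.
    lines.append(f"Product: {product}\nCategory:")
    return "\n\n".join(lines)
```

The real work in the mission is choosing the three examples so they draw the boundary between adjacent categories like Electronics, Audio, and Accessories.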

🔒 intermediate · 🎯 35–55 min · mission · Rank 02 · PRO

M-018: Function Calling Weather Bot

Nebula Corp is building a weather assistant that needs to call external APIs based on user queries. When a user asks 'What's the weather in Seattle?', the system should extract the location and call get_weather(location). When they ask 'Will it rain tomorrow in Portland?', it should call get_forecast(location, days=1). The current implementation doesn't structure the function calls properly — it returns free text instead of structured function call requests. Build a prompt that instructs the model to respond with valid function call JSON when weather information is requested.
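
One way to frame this, assuming a hand-rolled tool spec rather than a provider's native tool-calling API (the `TOOLS` list and JSON shape are illustrative):

```python
import json

TOOLS = [
    {"name": "get_weather", "parameters": ["location"]},
    {"name": "get_forecast", "parameters": ["location", "days"]},
]

def tool_prompt(query, tools=TOOLS):
    """Instruct the model to answer with a structured function-call JSON."""
    spec = json.dumps(tools, indent=2)
    return (
        "You can call these functions:\n" + spec + "\n\n"
        "Respond ONLY with JSON of the form "
        '{"function": <name>, "arguments": {<param>: <value>}}. '
        'If no function applies, respond with {"function": null}.\n\n'
        "User: " + query
    )

def parse_call(reply):
    """Parse the model's reply into (function_name, arguments)."""
    data = json.loads(reply)
    return data["function"], data.get("arguments", {})
```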

🔒 intermediate · 🎯 30–45 min · mission · Rank 02 · PRO

M-019: Multi-Shot Data Extractor

Nebula Corp's sales team receives hundreds of inquiry emails daily. They need to extract key information: company name, contact person, budget range, and urgency level. The current zero-shot extractor misses fields and formats data inconsistently. Build a 4-shot prompt that demonstrates how to extract structured data from messy emails, handle missing fields gracefully, and classify urgency based on keywords. The examples must cover: complete data, missing fields, urgent request, and ambiguous budget.

🔒 advanced · 🎯 40–55 min · mission · Rank 04 · PRO

M-023: Multi-Stage Prompt Pipeline

Nebula Corp's content generation system needs to produce high-quality blog posts through a multi-stage pipeline. Stage 1: Research and outline generation. Stage 2: Write the first draft. Stage 3: Critique and identify improvements. Stage 4: Produce the final polished version. The current system tries to do everything in one prompt and produces inconsistent quality. Build a prompt chaining system where each stage's output feeds into the next, and each stage has a specific, focused responsibility.
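
The chaining pattern reduces to a fold over stage templates; `call_llm` below is a placeholder for your model client, and the stage wording is illustrative:

```python
def run_pipeline(topic, call_llm):
    """Chain four focused prompts; each stage consumes the previous output."""
    stages = [
        "Produce a research outline for a blog post about: {x}",
        "Write a first draft following this outline:\n{x}",
        "Critique this draft and list concrete improvements:\n{x}",
        "Rewrite the draft applying this critique:\n{x}",
    ]
    output = topic
    for template in stages:
        output = call_llm(template.format(x=output))
    return output
```

Giving each stage one narrow job is what recovers the consistency that a single do-everything prompt loses.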

🔒 advanced · 🎯 45–65 min · mission · Rank 03 · PRO

M-024: Real-World API Integration

Nebula Corp needs a complete prompt + API integration pipeline. The system should: 1) Build a prompt with proper parameters (temperature, max_tokens, system message), 2) Make an actual API call to an LLM provider, 3) Handle errors gracefully (rate limits, invalid responses, timeouts), 4) Parse and validate the response, 5) Implement retry logic with exponential backoff. The current implementation has no error handling and fails silently when the API returns errors. Build a robust integration that handles real-world API challenges.
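
The retry-with-exponential-backoff piece (step 5) can be sketched generically; `make_request` stands in for the actual provider call, and real code would catch the provider's specific error types rather than bare `Exception`:

```python
import random
import time

def call_with_retry(make_request, max_attempts=5, base_delay=1.0):
    """Retry a flaky API call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return make_request()
        except Exception:  # narrow this to rate-limit/timeout errors in practice
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error, never fail silently
            # Delay doubles each attempt; jitter avoids synchronized retries.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Re-raising on the final attempt is the key fix for the "fails silently" problem the mission describes.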

🔒 intermediate · 🎯 25–40 min · mission · Rank 02 · PRO

M-020: Role-Based Email Rewriter

Nebula Corp's communication platform needs to rewrite emails in different tones depending on the recipient. The same message to a CEO should be formal and concise, to a technical team should be detailed and precise, and to a casual colleague can be friendly and relaxed. The current system uses the same prompt for all scenarios and produces inconsistent tone. Build a role-based prompt constructor that takes an email and a target persona (executive, technical, casual) and generates a system prompt that shapes the rewriting style appropriately.
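
A minimal version of such a constructor, with illustrative persona descriptions:

```python
PERSONAS = {
    "executive": "formal and concise; lead with the key point, no jargon",
    "technical": "detailed and precise; keep specifics and caveats",
    "casual": "friendly and relaxed; plain conversational language",
}

def rewrite_prompt(email, persona):
    """Pair a persona-specific system prompt with the rewrite request."""
    style = PERSONAS[persona]  # raises KeyError for unknown personas
    system = f"You rewrite emails. Target tone: {style}."
    user = f"Rewrite this email in the target tone:\n\n{email}"
    return {"system": system, "user": user}
```

Putting the tone in the system prompt, rather than the user message, keeps the style instruction stable across turns.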

🔒 advanced · 🎯 35–50 min · mission · Rank 03 · PRO

M-025: Self-Critique Content Improver

Nebula Corp's content team needs a system that doesn't just generate blog posts — it critiques and improves them iteratively. The current workflow generates content once and ships it, but quality is inconsistent. Build a three-stage prompt system: Stage 1 generates initial content, Stage 2 critiques it (identifying weaknesses, missing elements, and improvements), and Stage 3 produces an improved version addressing the critique. The critique must evaluate clarity, completeness, engagement, and structure.

🔒 beginner · 🎯 20–35 min · mission · Rank 02 · PRO

M-016: Structured JSON Output

Nebula Corp's API team needs prompts that reliably produce valid, structured JSON output from an LLM. The current prompts return free-form text that breaks downstream parsers. Write a prompt-generating function that instructs the model to return data in a specific JSON schema — and make sure every test case passes.
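
One common approach is to embed a literal example of the target schema in the prompt; the sketch below is illustrative, not the mission's expected solution:

```python
import json

def json_prompt(text, schema_example):
    """Instruct the model to emit JSON matching a literal example structure."""
    example = json.dumps(schema_example, indent=2)
    return (
        "Extract the requested data from the text below.\n"
        "Respond with JSON only - no prose, no markdown fences.\n"
        f"Use exactly this structure (types shown as sample values):\n{example}\n\n"
        f"Text: {text}"
    )
```

Showing a concrete example object tends to work better than describing the schema in prose, and pairs naturally with downstream validation like M-021's extractor.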