Curriculum Overview
11 modules covering the full AI engineering stack, from LLM basics to production systems.
All Modules
LLM Fundamentals
Tokens, context windows, temperature, streaming, fine-tuning decisions, and applied prompting patterns.
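Of these, temperature is the easiest to show in isolation: it rescales a model's raw logits before sampling. A toy sketch (the function name and logits are illustrative, not part of the course material):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to sampling probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more diverse output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cool = softmax_with_temperature(logits, temperature=0.2)
warm = softmax_with_temperature(logits, temperature=2.0)
# the top token receives more probability mass at low temperature than at high
```

The same logits yield a near-deterministic distribution at temperature 0.2 and a much flatter one at 2.0, which is why low temperatures suit extraction tasks and higher ones suit creative generation.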
Prompt Engineering
System prompts, few-shot learning, chain-of-thought, tool calling, structured output, and prompt security.
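Few-shot prompting, for example, usually means interleaving example input/output pairs into the message list before the real user turn. A minimal sketch, assuming the common chat-message dict shape (`build_few_shot_prompt` is a hypothetical helper, not a course API):

```python
def build_few_shot_prompt(system, examples, user_input):
    """Assemble a chat-style message list: system prompt first,
    then few-shot example pairs, then the actual user turn."""
    messages = [{"role": "system", "content": system}]
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_few_shot_prompt(
    "Classify sentiment as positive or negative.",
    [("I loved it", "positive"), ("Terrible service", "negative")],
    "Not bad at all",
)
```

The examples act as in-context demonstrations: the model imitates the assistant turns it has just "seen" when answering the final user message.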
Retrieval-Augmented Generation (RAG)
Embeddings, chunking, similarity search, reranking, evaluation, and production RAG architectures.
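The retrieval step at the heart of this module can be sketched in a few lines: score every stored chunk's embedding against the query embedding by cosine similarity and keep the top k. A toy version with hand-made 2-D vectors (real embeddings have hundreds of dimensions, and production systems use a vector index rather than a linear scan):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query."""
    scored = [(doc_id, cosine_similarity(query_vec, vec)) for doc_id, vec in docs]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

docs = [("a", [1.0, 0.0]), ("b", [0.7, 0.7]), ("c", [0.0, 1.0])]
results = top_k([1.0, 0.1], docs, k=2)
# "a" ranks first: its embedding points in nearly the same direction as the query
```

Reranking, covered later in the module, refines exactly this candidate list with a slower but more accurate scoring model.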
AI Agents
Tool calling, ReAct pattern, multi-agent systems, error recovery, and agent orchestration.
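The ReAct pattern boils down to a loop: the model proposes an action, the runtime executes the tool, and the observation is fed back until the model emits a final answer. A stripped-down sketch with a scripted stand-in for the LLM (`run_agent`, `TOOLS`, and the action dict shape are all illustrative, not a real framework API):

```python
def calculator(expression: str) -> str:
    # deliberately restricted evaluator for the demo
    allowed = set("0123456789+-*/. ()")
    assert set(expression) <= allowed, "unsupported characters"
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def run_agent(model_step, max_steps=5):
    """ReAct-style loop: ask the model for an action, execute the tool,
    feed the observation back, repeat until a final answer appears."""
    observation = None
    for _ in range(max_steps):
        action = model_step(observation)
        if action["type"] == "final":
            return action["answer"]
        observation = TOOLS[action["tool"]](action["input"])
    raise RuntimeError("agent exceeded max_steps")

def scripted_model(observation):
    # stand-in for an LLM: first request the tool, then answer
    if observation is None:
        return {"type": "tool", "tool": "calculator", "input": "6 * 7"}
    return {"type": "final", "answer": f"The result is {observation}"}

answer = run_agent(scripted_model)
```

The `max_steps` cap is a first taste of the error-recovery concerns the module covers: without it, a model that never emits a final answer loops forever.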
Model Context Protocol
MCP servers, resources, prompt templates, tool composition, and auth middleware.
AI Memory
Memory architectures, Mem0, conversation memory, entity extraction, and production memory systems.
LLM Gateways
LiteLLM, advanced routing patterns, semantic caching, and gateway observability.
Testing & Evaluation
Eval frameworks, LLM judges, test datasets, regression detection, and safety guardrails.
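Regression detection, in its simplest form, means comparing per-case eval scores against a stored baseline run and flagging drops beyond a tolerance. A hedged sketch (the function name, score scale, and threshold are illustrative choices, not a specific framework's API):

```python
def detect_regressions(baseline, current, threshold=0.05):
    """Flag eval cases whose score dropped more than `threshold`
    relative to a stored baseline run (scores assumed in [0, 1])."""
    regressions = []
    for case_id, base_score in baseline.items():
        new_score = current.get(case_id)
        if new_score is not None and base_score - new_score > threshold:
            regressions.append((case_id, base_score, new_score))
    return regressions

baseline = {"greeting": 0.95, "refusal": 0.90, "math": 0.80}
current = {"greeting": 0.96, "refusal": 0.70, "math": 0.78}
flagged = detect_regressions(baseline, current)
# only "refusal" dropped by more than the 0.05 threshold
```

Wiring a check like this into CI is what turns an eval suite from a one-off report into an ongoing safety net for prompt and model changes.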
AI Observability
Tracing, latency analysis, metrics, OpenTelemetry for LLMs, and production evaluation.
Developer Tools
LangChain, LangGraph, CrewAI, RAGAS, and best practices for AI dev tooling.
Agentic Coding
Kiro, Claude Code, Cursor, and modern AI-assisted development workflows.
Module Structure
Each module contains 7-12 lessons progressing from fundamentals to advanced topics. Most modules end with a workshop lesson for hands-on practice. Lessons include code examples, interactive diagrams, and key takeaways.