Intermediate · Theory · Transformers
LLMs & Transformers
See inside the model — without the math wall.
Tokens, embeddings, attention, sampling. Build a working mental model of how a transformer turns text into text — visually, then in code.
Duration: 9h · Lessons: 10 · Learners: 6.2k
Path map
Each lesson unlocks when you complete the previous one. Your progress is saved on this device.
Lesson 1: Tokens — what models actually see (9m · 35 XP)
Lesson 2: Embeddings — words as coordinates (10m · 35 XP)
Lesson 3: Attention — the trick that made LLMs work (12m · 45 XP)
Lesson 4: Inside a transformer block (10m · 40 XP)
Lesson 5: Positional encoding — why order matters (8m · 30 XP)
Lesson 6: Sampling — how the next token gets picked (10m · 40 XP)
Lesson 7: Reading a model card (10m · 40 XP)
Lesson 8: Context windows, KV cache & long context (11m · 45 XP)
Lesson 9: Reasoning models & test-time compute (10m · 40 XP)
Lesson 10: Capstone — pick the right model for the job (12m · 60 XP)