Fine-tuning & Adaptation
When prompting isn't enough.
LoRA, QLoRA, instruction tuning, RLHF basics. Decide when fine-tuning is worth it (most of the time, it isn't) and how to do it without burning a GPU budget.
Duration: 10h
Lessons: 10
Learners: 2.1k
Path map
Each lesson unlocks when you complete the previous one. Your progress is saved on this device.
Lesson 1: Should you fine-tune? (10m, 40 XP)
Lesson 2: LoRA, QLoRA, and PEFT (11m, 45 XP)
Lesson 3: Building a training dataset (12m, 45 XP)
Lesson 4: A QLoRA training run, end-to-end (13m, 55 XP)
Lesson 5: Hyperparameters that actually matter (9m, 35 XP)
Lesson 6: Evaluating a fine-tuned model (11m, 45 XP)
Lesson 7: RLHF, DPO, and "alignment" — briefly (10m, 40 XP)
Lesson 8: Catastrophic forgetting (9m, 35 XP)
Lesson 9: Hosting your fine-tune (9m, 35 XP)
Lesson 10: Capstone: fine-tune for a specific task (14m, 70 XP)