✅ Youtube: Intro to Large Language Models (1hr) Blog notes
State of GPT - Microsoft Build 2023 (42m) Learn about the training pipeline of GPT assistants like ChatGPT, from tokenization to pretraining, supervised finetuning, and Reinforcement Learning from Human Feedback (RLHF). Dive deeper into practical techniques and mental models for the effective use of these models, including prompting strategies, finetuning, the rapidly growing ecosystem of tools, and their future extensions.
✅ Youtube: Stanford CS25: V2 I Introduction to Transformers w/ Andrej Karpathy Full CS25 playlist
Youtube: Let's reproduce GPT-2 (124M) (4hrs) nanoGPT video code nanoGPT code We reproduce GPT-2 (124M) from scratch. This video covers the whole process: first we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 papers and their hyperparameters, then we hit run, and come back the next morning to see our results, and enjoy some amusing model generations. Keep in mind that in some places this video builds on knowledge from earlier videos in the Zero to Hero Playlist (see my channel). You could also see this video as building my nanoGPT repo, which by the end is about 90% similar.
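For orientation on the scale being reproduced, a minimal config sketch of the 124M model's shape (field names mirror nanoGPT's GPTConfig; this is a summary, not the repo's exact defaults, which for example pad the vocab to a rounder number for speed):

```python
from dataclasses import dataclass

# Approximate GPT-2 (124M) shape, written in the style of nanoGPT's GPTConfig.
@dataclass
class GPTConfig:
    block_size: int = 1024    # maximum context length
    vocab_size: int = 50257   # GPT-2 byte-pair-encoding vocabulary
    n_layer: int = 12         # transformer blocks
    n_head: int = 12          # attention heads per block
    n_embd: int = 768         # embedding / residual-stream width
```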
Deep Dive into LLMs like ChatGPT (3.5hrs) This is a general-audience deep dive into the Large Language Model (LLM) AI technology that powers ChatGPT and related products. It covers the full training stack of how the models are developed, along with mental models of how to think about their "psychology" and how to get the best use out of them in practical applications. I have one "Intro to LLMs" video already from about a year ago, but that is just a re-recording of a random talk, so I wanted to loop around and do a much more comprehensive version.
Playlist: Neural Networks: Zero to Hero (10 videos) karpathy.ai/zero-to-hero.html
- The spelled-out intro to neural networks and backpropagation: building micrograd micrograd code micrograd notebooks This is the most step-by-step spelled-out explanation of backpropagation and training of neural networks. It only assumes basic knowledge of Python and a vague recollection of calculus from high school. A minimal micrograd-style code sketch appears after this list.
- The spelled-out intro to language modeling: building makemore makemore code makemore notebooks We implement a bigram character-level language model, which we will further complexify in follow-up videos into a modern Transformer language model, like GPT. In this video, the focus is on (1) introducing torch.Tensor and its subtleties and use in efficiently evaluating neural networks and (2) the overall framework of language modeling, including model training, sampling, and the evaluation of a loss (e.g. the negative log likelihood for classification). A small bigram/NLL sketch appears after this list.
- Building makemore Part 2: MLP We implement a multilayer perceptron (MLP) character-level language model. In this video we also introduce many basics of machine learning (e.g. model training, learning rate tuning, hyperparameters, evaluation, train/dev/test splits, under/overfitting, etc.).
- Building makemore Part 3: Activations & Gradients, BatchNorm We dive into some of the internals of MLPs with multiple layers and scrutinize the statistics of the forward pass activations, backward pass gradients, and some of the pitfalls when they are improperly scaled. We also look at the typical diagnostic tools and visualizations you'd want to use to understand the health of your deep network. We learn why training deep neural nets can be fragile and introduce the first modern innovation that made doing so much easier: Batch Normalization. Residual connections and the Adam optimizer remain notable todos for a later video. A minimal batch-norm forward-pass sketch appears after this list.
- Building makemore Part 4: Becoming a Backprop Ninja We take the 2-layer MLP (with BatchNorm) from the previous video and backpropagate through it manually without using PyTorch autograd's loss.backward(): through the cross entropy loss, 2nd linear layer, tanh, batchnorm, 1st linear layer, and the embedding table. Along the way, we get a strong intuitive understanding of how gradients flow backwards through the compute graph at the level of efficient Tensors, not just individual scalars as in micrograd. This helps build competence and intuition around how neural nets are optimized and sets you up to more confidently innovate on and debug modern neural networks. A small manual-gradient check against autograd appears after this list.
- Building makemore Part 5: Building a WaveNet We take the 2-layer MLP from the previous video and make it deeper with a tree-like structure, arriving at a convolutional neural network architecture similar to the WaveNet (2016) from DeepMind. In the WaveNet paper, the same hierarchical architecture is implemented more efficiently using causal dilated convolutions (not yet covered). Along the way we get a better sense of torch.nn, what it is and how it works under the hood, and what a typical deep learning development process looks like (a lot of reading of documentation, keeping track of multidimensional tensor shapes, moving between jupyter notebooks and repository code, ...).
- Let's build GPT: from scratch, in code, spelled out We build a Generatively Pretrained Transformer (GPT), following the paper "Attention is All You Need" and OpenAI's GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT (meta :D!). I recommend people watch the earlier makemore videos to get comfortable with the autoregressive language modeling framework and the basics of tensors and PyTorch nn, which we take for granted in this video. A single-head self-attention sketch appears after this list.
- Let's build the GPT Tokenizer The Tokenizer is a necessary and pervasive component of Large Language Models (LLMs): it translates between strings and tokens (text chunks). Tokenizers are a completely separate stage of the LLM pipeline: they have their own training sets and training algorithms (Byte Pair Encoding), and after training they implement two fundamental functions: encode() from strings to tokens, and decode() back from tokens to strings. In this lecture we build from scratch the Tokenizer used in the GPT series from OpenAI. In the process, we will see that a lot of weird behaviors and problems of LLMs actually trace back to tokenization. We'll go through a number of these issues, discuss why tokenization is at fault, and why, ideally, someone finds a way to delete this stage entirely. A minimal BPE training-loop sketch appears right below.
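For the micrograd video above, a minimal sketch of the core idea: a scalar Value object that records local derivatives on the way forward and replays them in reverse (simplified from the real micrograd API; only + and * here):

```python
# Minimal scalar autograd in the spirit of micrograd (simplified; only + and *).
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topological sort of the graph, then apply the chain rule in reverse order.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(-3.0)
loss = a * b + a
loss.backward()
print(a.grad, b.grad)  # dloss/da = b + 1 = -2.0, dloss/db = a = 2.0
```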
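For the first makemore video, a small sketch of the count-based bigram model and its average negative log likelihood (illustrative toy corpus; the video trains on a text file of names):

```python
import torch

# Toy corpus; the video uses a text file of names, one per line.
words = ["emma", "olivia", "ava"]
chars = sorted(set("".join(words)))
stoi = {s: i + 1 for i, s in enumerate(chars)}
stoi["."] = 0                              # '.' marks the start/end of a word
V = len(stoi)

# Count bigram transitions, then normalize each row into a probability distribution.
N = torch.zeros((V, V))
for w in words:
    cs = ["."] + list(w) + ["."]
    for c1, c2 in zip(cs, cs[1:]):
        N[stoi[c1], stoi[c2]] += 1
P = (N + 1) / (N + 1).sum(dim=1, keepdim=True)   # +1 smoothing avoids log(0)

# Average negative log likelihood of the data under the model (the training loss).
log_likelihood, n = 0.0, 0
for w in words:
    cs = ["."] + list(w) + ["."]
    for c1, c2 in zip(cs, cs[1:]):
        log_likelihood += torch.log(P[stoi[c1], stoi[c2]])
        n += 1
print(f"nll = {(-log_likelihood / n).item():.4f}")
```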
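For Part 3 (Activations & Gradients, BatchNorm), a minimal sketch of the batch-norm forward pass in training mode: standardize each feature over the batch, then apply a learnable gain and bias (torch.nn.BatchNorm1d is the production version, which also tracks running statistics for inference):

```python
import torch

torch.manual_seed(42)
x = torch.randn(32, 200) @ torch.randn(200, 200) * 0.3  # a batch of badly-scaled pre-activations

# Batch normalization, training mode: per-feature standardization + learnable scale/shift.
eps = 1e-5
bngain = torch.ones(1, 200)    # learnable gain (gamma)
bnbias = torch.zeros(1, 200)   # learnable bias (beta)
mean = x.mean(dim=0, keepdim=True)
var = x.var(dim=0, keepdim=True)
xhat = (x - mean) / torch.sqrt(var + eps)
out = bngain * xhat + bnbias

print(out.mean().item(), out.std().item())  # roughly 0 and 1, regardless of x's original scale
```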
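For Part 4 (Becoming a Backprop Ninja), a tiny sketch of the exercise format: derive a gradient by hand at the tensor level, then check it against autograd (only the softmax + cross-entropy piece here, not the whole network):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(32, 27, requires_grad=True)   # (batch, vocab) scores
targets = torch.randint(0, 27, (32,))

loss = F.cross_entropy(logits, targets)   # mean reduction by default
loss.backward()                           # autograd reference gradient -> logits.grad

# Manual gradient of mean cross-entropy w.r.t. the logits:
# softmax(logits) minus one-hot(targets), divided by the batch size.
dlogits = F.softmax(logits, dim=1).detach()
dlogits[torch.arange(32), targets] -= 1.0
dlogits /= 32

print(torch.allclose(dlogits, logits.grad, atol=1e-6))  # True: manual matches autograd
```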
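For "Let's build GPT", a minimal sketch of the single causal self-attention head the lecture builds up to (one head only, no dropout and no output projection; shapes roughly follow the video's toy settings):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(1337)
B, T, C, head_size = 4, 8, 32, 16          # batch, time (context), channels, head width
x = torch.randn(B, T, C)

key   = torch.nn.Linear(C, head_size, bias=False)
query = torch.nn.Linear(C, head_size, bias=False)
value = torch.nn.Linear(C, head_size, bias=False)

k, q, v = key(x), query(x), value(x)                 # (B, T, head_size)
wei = q @ k.transpose(-2, -1) * head_size**-0.5      # (B, T, T) scaled affinities
tril = torch.tril(torch.ones(T, T))
wei = wei.masked_fill(tril == 0, float('-inf'))      # causal mask: tokens can't look ahead
wei = F.softmax(wei, dim=-1)                         # attention weights sum to 1 per row
out = wei @ v                                        # (B, T, head_size) weighted sum of values
print(out.shape)                                     # torch.Size([4, 8, 16])
```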
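For the tokenizer video, a minimal byte-level BPE training loop (in the spirit of minbpe; the real tokenizer adds regex pre-splitting, special tokens, and decode()):

```python
# Minimal byte-level BPE training (simplified; no regex splitting or special tokens).
def get_stats(ids):
    counts = {}
    for pair in zip(ids, ids[1:]):          # count consecutive token pairs
        counts[pair] = counts.get(pair, 0) + 1
    return counts

def merge(ids, pair, idx):
    # Replace every occurrence of `pair` in `ids` with the new token `idx`.
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(idx)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

text = "aaabdaaabac"
ids = list(text.encode("utf-8"))            # start from raw bytes: tokens 0..255
merges = {}                                 # (token, token) -> new token id
for i in range(3):                          # perform 3 merges for the demo
    stats = get_stats(ids)
    pair = max(stats, key=stats.get)        # most frequent pair gets merged
    idx = 256 + i
    ids = merge(ids, pair, idx)
    merges[pair] = idx
print(ids, merges)                          # compressed ids plus the learned merge table
```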
Neural networks and Transformers
- But what is a neural network? | Chapter 1, Deep learning
- Gradient descent, how neural networks learn | Chapter 2, Deep learning
- What is backpropagation really doing? | Chapter 3, Deep learning
- Backpropagation calculus | Chapter 4, Deep learning
- But what is a GPT? Visual intro to transformers | Chapter 5, Deep Learning
- Attention in transformers, visually explained | Chapter 6, Deep Learning
Playlist: LLMs (4 videos)
- Developing an LLM: Building, Training, Finetuning
- Understanding PyTorch Buffers
- Finetuning Open-Source LLMs
- Insights from Finetuning LLMs with Low-Rank Adaptation
https://sebastianraschka.com/blog/2023/self-attention-from-scratch.html
https://sebastianraschka.com/blog/2024/using-finetuning-transformers.html
https://sebastianraschka.com/blog/2023/llm-reading-list.html
github.com/rasbt/LLM-workshop-2024
Youtube: Building LLMs from the Ground Up: A 3-hour Coding Workshop
https://web.stanford.edu/class/cs25/
Playlist: Stanford CS25 - Transformers United (33 videos)
Playlist: Stanford CS25 - Transformers United V3 (7 videos)
Playlist: CS25 Transformers United 23
Mechanics of Seq2seq Models With Attention - Jalammar
The Illustrated Transformer - Jalammar
Youtube: How GPT3 Works - Jalammar
The Annotated Transformer - Harvard
https://github.com/harvardnlp/annotated-transformer/
Transformer models: an introduction and catalog — 2023 Edition
https://amatria.in/blog/transformer-models-an-introduction-and-catalog-2d1e9039f376/
Transformer Explainer App
https://poloclub.github.io/transformer-explainer/
llama3 implemented from scratch - 2024
https://github.com/naklecha/llama3-from-scratch
Transformers from Scratch - Aug 2019
https://peterbloem.nl/blog/transformers
Transformers Laid Out - Jan 2025
https://goyalpramod.github.io/blogs/Transformers_laid_out/
Transformers & Attention 1: Self Attention - Rasa
Illustrated Guide to Transformers Neural Network: A step by step explanation
Backprop: The Most Important Algorithm in Machine Learning
Attention Mechanisms and Transformers - Ch 11