Videos and Lectures [LEGACY]
Jirka Borovec edited this page May 4, 2021
- PyTorch Lightning Channel
- PyTorch Lightning - William Falcon - In this talk, William Falcon goes through the implementation details of the 10 most useful techniques, including DataLoaders, 16-bit precision, accumulated gradients, and 4 different ways of distributing model training across hundreds of GPUs. He also shows how these come built into PyTorch Lightning, a Keras-like framework for ML researchers.
- Converting from PyTorch to PyTorch Lightning - In this video, William Falcon refactors a PyTorch VAE into PyTorch Lightning. As is obvious in the video, this was an honest attempt at refactoring a new repository without prior knowledge of it. Despite this, the full conversion took under 45 minutes.
- Run PyTorch on TPU and GPU without changing code
- Efficient PyTorch debugging with PyTorch Lightning
- Lightning Data Modules - In this video, Nate Raw walks you through how to make sharing and reusing data splits and transforms across projects easier with LightningDataModules.
- From PyTorch to PyTorch Lightning - This video covers the magic of PyTorch Lightning! We convert the pure PyTorch classification model we created in the previous episode to PyTorch Lightning, which makes all the latest AI best practices trivial. We go over training on single and multiple GPUs, logging and saving models, and much more!
- Training a classification model on MNIST with PyTorch - This video covers how to create a PyTorch classification model from scratch! It introduces all the fundamental components like architecture definition, optimizer, loss function, data loader, and Alfredo's infamous 5-step training! It also shows you how to train on a GPU, how to add residual connections, and how to use dropout to fight overfitting.