Practical Deep Learning with PyTorch Masterclass
Rs. 999/- Only!
Next Session: 24th – 25th February 2024 | 10AM – 5PM IST
Level: Advanced | Hands-On Technologies: PyTorch & Python 3
✓ Master the fundamentals of neural networks, convolutional networks, recurrent networks and Transformers, along with their applications
✓ Develop a strong command of Facebook's popular PyTorch framework to build complex deep learning models confidently
✓ Gain practical skills through real-world projects, enhancing your ability to tackle complex AI challenges
✓ Prepare for mid- and senior-level data science and machine learning job interviews
✓ Receive a certificate of completion
✓ BONUSES: Kaggle-winning solutions decoded, 50 conceptual exercises, a cheatsheet, and extra material on other cutting-edge deep learning algorithms!
What you will learn
The Masterclass is a practical course covering the cutting-edge field of Deep Learning. It walks the learner in detail through the many neural network architectures that have been developed, the fundamentals of the PyTorch framework for building deep learning models, and HuggingFace for leveraging Transformer models.
The pedagogy is hands-on, with exercises that follow each significant concept, since designing deep learning models is both a skill and an art. Three mini-projects during the Masterclass develop practical skills.
Prerequisites: Participants should have a good working knowledge of Python programming and Machine Learning concepts. Familiarity with PyTorch is helpful but not required, as it will be covered during the Masterclass.
1. Foundations of Deep Learning
✓ Introduction to Deep Learning including what deep learning is, its applications, and what led to the paradigm shift in artificial intelligence
✓ Building a Neural Network including the anatomy of a neural network, creating a simple feedforward neural network, and activation functions such as sigmoid, ReLU, tanh and the threshold step function, along with their role
✓ Training a Neural Network including introduction to optimization, gradient descent method, backpropagation, the update rule, variants of gradient descent, key terminology such as batch size and epoch, loss functions such as MSE, MAE, binary and categorical cross-entropy loss, and scaling methods
✓ Hands-on Exercise where participants work on a simple neural network project from scratch using only Python to reinforce concepts (see the sketch below)
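To preview the kind of from-scratch exercise this module builds toward, here is a minimal sketch: a tiny feedforward network with one hidden layer, sigmoid activations, a squared-error loss and hand-written backpropagation, trained on XOR. NumPy is used for the array math; the network size, learning rate and task are illustrative choices, not the actual exercise.

```python
import numpy as np

# Tiny feedforward network (2 -> 4 -> 1) trained on XOR with sigmoid
# activations, squared-error loss and plain full-batch gradient descent.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # predictions

    # Backward pass: the chain rule written out by hand
    d_out = (out - y) * out * (1 - out)   # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at hidden pre-activation

    # The update rule: parameter -= learning_rate * gradient
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # predictions should approach [[0], [1], [1], [0]]
```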
2. Applied Deep Learning with PyTorch
✓ Introduction to PyTorch including why choose Facebook's PyTorch among the available frameworks, installing PyTorch, what tensors are, the Dataset class and DataLoader object, the autograd package, optimizers in PyTorch, and initializing, training and saving models with PyTorch
✓ Improving Deep Neural Networks with Regularization including introduction to regularization, L1 and L2 regularization, why it works for neural nets, dropout and inverted dropout, early stopping, batch normalization and its difference from layer normalization
✓ Overview of Optimization methods including stochastic gradient descent (SGD), SGD with momentum, adaptive gradient (Adagrad), root mean squared propagation (RMSProp), adaptive moment estimation (Adam), and how to choose between them
✓ Hands-on Exercise where participants work on a multi-layer neural network project from scratch using PyTorch to reinforce concepts (see the training-loop sketch below)
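As a quick orientation, the module's main pieces — tensors, the Dataset class and DataLoader, autograd, dropout regularization, L2 regularization via weight decay, the Adam optimizer and model saving — fit into one short training loop. The sketch below uses synthetic data and made-up dimensions; it is illustrative, not the exact exercise.

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

# Synthetic binary-classification data stands in for a real dataset.
X = torch.randn(512, 10)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# Multi-layer network with ReLU activations and dropout regularization.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
# weight_decay adds L2 regularization to the Adam update.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

for epoch in range(5):
    for xb, yb in loader:
        optimizer.zero_grad()          # clear gradients from the last step
        loss = loss_fn(model(xb), yb)  # forward pass + loss
        loss.backward()                # autograd fills in all gradients
        optimizer.step()               # Adam update
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")

torch.save(model.state_dict(), "model.pt")  # save the trained weights
```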
3. Computer Vision with Convolutional Neural Nets
✓ What is Computer Vision including introduction to the field, different applications of computer vision, tasks in computer vision such as classification, localization and segmentation, images as numbers
✓ Introduction to CNNs including what these special neural nets are, why they are needed when feedforward neural nets already exist, the architecture of a CNN, and feature learning and classification using CNNs
✓ Understanding Convolutional Layers including key terminology such as kernel, filter, feature map, layer; what is a convolution operation; 1x1, 2D and 3D convolutions; spatial arrangement with padding, stride and filter size; applying non-linear activations; pooling layers and their purpose
✓ Transfer Learning including fine-tuning considerations, popular pre-trained CNN models such as VGGNet, ResNet, InceptionNet, AlexNet along with their architectures
✓ Mini-Project where participants work on detecting defects in steel sheets using Computer Vision with PyTorch to reinforce concepts (see the CNN sketch below)
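For reference, here is what the convolutional-layer vocabulary looks like in PyTorch: 3x3 kernels with padding, ReLU non-linearities, stride-2 max pooling, and a flattened classification head. The 3-channel 64x64 input and two-class output are illustrative placeholders, not the steel-defect dataset used in the mini-project.

```python
import torch
from torch import nn

# A small CNN: two conv blocks for feature learning, then a classifier head.
class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3x3 kernel, padding keeps size
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN()
print(model(torch.randn(8, 3, 64, 64)).shape)  # torch.Size([8, 2])
```

For the transfer-learning topic, this handwritten backbone would typically be replaced by a pre-trained model such as torchvision.models.resnet18, fine-tuned after swapping its final fully connected layer for the new task.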
4. Natural Language Processing (NLP) with Recurrent Neural Nets
✓ What is Natural Language Processing (NLP) including introduction to the field, applications of NLP, its history, NLP development flow, text preprocessing such as tokenization, stemming and lemmatization
✓ Representing Language as Numbers including introduction to text vectorization, count-based methods like classic bag-of-words (BoW) and TF-IDF, their pros and cons, neural word embeddings such as Word2Vec (CBoW & skip-gram), GloVe and ELMo among others, and when to choose which one
✓ Introduction to RNNs including the language modelling objective, limitations of feedforward neural nets, notion of recurrence, architecture of an RNN and its training, limitations such as exploding & vanishing gradients
✓ RNN Architectures such as long short-term memory networks (LSTMs) and their variations, bidirectional RNNs, deep RNNs, and the encoder-decoder architecture
✓ Mini-Project where participants work on a text generation problem using language modelling with PyTorch to reinforce concepts (see the LSTM sketch below)
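To make the language-modelling objective concrete, here is a minimal character-level sketch: embed each character, run an LSTM over the sequence, and predict the next character at every position. The vocabulary size and dimensions are placeholders rather than the mini-project's actual settings.

```python
import torch
from torch import nn

# Character-level language model: embedding -> LSTM -> next-character logits.
class CharLSTM(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state   # logits over the next character

model = CharLSTM()
text = torch.randint(0, 128, (4, 50))   # a batch of character-ID sequences
logits, _ = model(text[:, :-1])         # predict chars 1..49 from chars 0..48
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 128), text[:, 1:].reshape(-1)
)
loss.backward()  # train with any optimizer; generation then samples one
                 # character at a time, feeding each sample (and the LSTM
                 # state) back into the model
```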
5. The Transformer Revolution
✓ Paradigm Shift by Transformers covering earlier domain-specific architectures, the initial research, and significant models such as Vision Transformers, BERT, GPT, Wav2vec 2.0, TimeSformer and others
✓ Building a Transformer Model including key components of a transformer, deep-dive into Attention, Self-Attention and Multi-Head Attention, positional encoding, residual connections, layer norm and encoder-decoder attention
✓ Overview of BERT including an introduction to masked language modelling, the GLUE benchmark for model evaluation, variants of BERT such as RoBERTa, DistilBERT and ALBERT among others, and why BERT is not suited to generative AI
✓ GPT-X Model Family including timeline of release, base vs. instruction-tuned Large Language Models (LLMs), GPT models (1, 2, 3, 3.5, 4), in-context learning such as zero-shot, one-shot and few-shot learning, brief introduction to ChatGPT, understanding parameters, and prompt engineering
✓ Mini-Project where participants work on sentiment analysis using transformer models with HuggingFace to reinforce concepts (see the pipeline sketch below)
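The mini-project's end result can be previewed in a few lines with HuggingFace's high-level pipeline API. Note that pipeline("sentiment-analysis") downloads a default fine-tuned checkpoint on first use (a DistilBERT model at the time of writing); the model and data used in the Masterclass may differ.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# The pipeline wraps tokenization, the model forward pass and
# post-processing behind a single call.
classifier = pipeline("sentiment-analysis")

print(classifier("This masterclass was absolutely worth the price!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.999...}]
```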
BONUS MATERIALS
✓ Decode Kaggle-winning solutions and uncover the secrets behind Kaggle champions' success, gaining insights into winning strategies and techniques
✓ Gain access to a collection of 50 conceptual exercises designed to help you prepare for deep learning interviews. These exercises cover a wide range of topics and algorithms in Deep Learning.
✓ Receive a handy Practical Deep Learning cheatsheet for quick reference. This cheatsheet provides key concepts and a comprehensive list of deep learning architectures including ANN, CNN, RNN and Transformers
✓ Learn additional algorithms for time series forecasting, such as Facebook's NeuralProphet, Amazon's DeepAR and the Temporal Fusion Transformer, through pre-recorded videos
Instructor
Anant Agarwal
Anant works as a Data Science Manager at a Fortune Global 500 company. An alumnus of The Doon School and IIT Kharagpur, he holds an MS from the University of Minnesota Twin Cities, where he received the Forrest Fellowship, and an MBA from the Indian School of Business, Hyderabad, with Dean's and Merit List awards.
Anant has been featured as a guest speaker at Analytics Vidhya's DataHack Summit and DataHour sessions and at Zhejiang University, China, and has served as a judge in the Responsible AI category of the Altair Global Enlighten Award.
He is also a two-time 99.8 percentiler in CAT, a national-level squash player and a fingerstyle guitarist.
For any assistance or queries, please reach out to us at [email protected]
© 2024 All Rights Reserved.