Introduction to Deep Learning (Part 1)
Speaker: Andrew Collier
Track: Data Science
Room: Boundary Room
Time: Oct 09 (Wed), 09:00
This is the first half of the 2-session tutorial. It will be followed by Part 2 in the afternoon.
Deep Learning is a vast and convoluted topic. It’s hard to know where to start. This workshop will help you take your first steps with Deep Learning.
The workshop will introduce you to the fundamental concepts behind Deep Learning and show you how to get started building models using Python and Keras. You'll learn some of the underlying maths (no PhD in Mathematics required!) and work through several examples.
You’ll walk away with an appreciation for what’s possible with Deep Learning and sufficient hands-on experience to start building your own models.
All material will be available as Jupyter Notebooks.
- Introduction to Neural Networks
  - Weights and biases
  - Activation functions
  - Loss functions
  - Chain rule and back-propagation
  - Project — Simple binary classifier
  - Where neural networks fail: images
- Deep Learning
  - An overview of TensorFlow
  - First steps with Keras
- Convolutional Neural Networks
  - Convolution layers
  - Filters and padding
  - Activation functions: sigmoid, ReLU and softmax
  - Project — Image classification
- Recurrent Neural Networks
  - Back-propagation through time
  - Long Short-Term Memory (LSTM)
  - Project — Text prediction
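As a taste of the activation functions listed in the outline (sigmoid, ReLU and softmax), here is a minimal pure-Python sketch. The function names are illustrative and are not taken from the workshop materials; the workshop itself uses Keras, which provides these built in.

```python
import math

def sigmoid(x):
    # Squashes any real number into (0, 1); commonly used for binary outputs.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Rectified Linear Unit: passes positives through, zeroes out negatives.
    return max(0.0, x)

def softmax(xs):
    # Turns a list of scores into a probability distribution summing to 1.
    # Subtracting the max before exponentiating keeps exp() numerically stable.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0))                    # 0.5
print(relu(-3.0))                      # 0.0
print(sum(softmax([1.0, 2.0, 3.0])))   # 1.0 (up to floating-point error)
```

In a Keras model these would typically be specified by name (e.g. `activation="relu"`) rather than written by hand, but seeing the arithmetic makes the later material easier to follow.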
This workshop is aimed at people with little or no prior experience with Deep Learning. If you're already a Deep Learning ninja, then this is not for you!
Prerequisites: familiarity with programming in Python. A basic understanding of Machine Learning concepts will be helpful, but certainly not essential.
You'll need the following to get the most out of the workshop:
If you have not used Jupyter Notebooks before, then quickly read through the following resources: