Machine Learning Crash Course
Suitable for beginners with coding experience. A hands-on introduction to how machine learning and neural networks work.
Phase 1 - Rules and Data (the basics)
Session 1 - What’s Machine Learning
Topics
- AI vs ML vs DL - understand the difference between these acronyms
- Types of AI - what tasks can AI solve and when to use each
- Setup Dragon Jump - setting up your Dragon Jump environment
- Running a random agent - once the environment is set up, we run an AI that only performs random inputs
Optional Homework
- Building an If-Else agent - read the documentation on what the signals do and try to make an AI that can finish the first level only using if-else statements
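The if-else agent from the homework can be sketched as below. This is a minimal sketch: the signal name `distance_to_spike` and the action strings are hypothetical placeholders, not Dragon Jump's actual interface, so check the documentation for the real signal names.

```python
def if_else_agent(signals):
    """Hand-written agent: jump when an obstacle is close, else do nothing.

    'distance_to_spike' and the returned action strings are hypothetical
    placeholders for whatever signals Dragon Jump actually exposes.
    """
    if signals["distance_to_spike"] < 50:
        return "jump"
    return "noop"


# Example: an obstacle 30 pixels away triggers a jump
print(if_else_agent({"distance_to_spike": 30}))
```

The point of this exercise is that every threshold (here, 50) is chosen by you; in later sessions the models learn these thresholds from data instead.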
Session 2 - Imitation Learning
Topics
- Supervised Learning - what is it and how we do it
- Decision Trees - how do they work
- Overfitting - what is it and how to catch it
- Recording gameplay from Dragon Jump - how to record and read gameplay
- Training decision trees - how to train decision trees using scikit-learn
Optional Homework
- Explore decision trees - record more data and train more models to see if you can improve the AI
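Training a decision tree on recorded gameplay follows the standard scikit-learn fit/predict pattern. A minimal sketch, assuming a made-up feature layout (the real recorded features will differ); note how `max_depth` caps the tree's complexity, which is one way to catch overfitting:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for recorded gameplay: each row is one frame's features
# [distance_to_obstacle, obstacle_height]; label 0 = noop, 1 = jump.
X = [[120, 10], [40, 10], [200, 20], [35, 20], [150, 5], [30, 5]]
y = [0, 1, 0, 1, 0, 1]

# max_depth limits how complex the tree can get (helps against overfitting)
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)

# A close obstacle should trigger a jump
print(clf.predict([[25, 10]]))
```

Recording more gameplay, as the homework suggests, gives the tree more frames to split on and usually improves generalisation more than tuning depth does.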
Session 3 - Wisdom of Crowds
Topics
- Random Forests - how do they compare to decision trees
- Feature engineering - how to make AIs borrow your wisdom (e.g. distance to the nearest spike)
- Comparing results - metrics that enable you to choose the best performer
- Training random forests - how to train a random forest using scikit-learn
- Matplotlib basics - how to visualize results using matplotlib
Optional Homework
- Compare approaches - see which AI behaves better and after how many examples
Phase 2 - Neural Networks
Session 4 - Intro to Deep Learning
Topics
- What’s a perceptron - linear regression and the basis of Deep Learning
- What are activation functions - nonlinearity to describe complex functions
- What are losses and optimisers - the things that make DL algorithms learn
- Running a single-layer network - familiarising yourselves with Pytorch Optional Homework
- Experimenting with parameters - try out different number of neurons
Session 5 - Adding temporal information
Topics
- Adding hidden layers - multilayer perceptron and why activations matter
- The problem of single-frame states - will it learn better with temporal data
- Running a multi-layer network - enabling the AI to develop complex functions
- Stacking states - giving your AI temporal information Optional Homework
- Experiment with parameters - try out different architectures
Finish - Let’s compete
This is the final session in the series, where we’ll compete for the best time on a unseen level of the game. We do lessons learned and decide where to go to next.