Deep learning has achieved remarkable success in supervised and reinforcement learning problems such as image classification, speech recognition, and game playing. These models are, however, largely specialized for the single task they are trained on. This course covers the setting where there are multiple tasks to be solved. You will explore goal-conditioned reinforcement learning techniques that can speed up learning across multiple tasks. You will discover how meta-learning methods can be used to learn new tasks quickly. You will learn how to leverage the shared structure of a sequence of tasks to enable knowledge transfer. Through this course, you will develop and advance highly sought-after skills in the field of AI.
Please note: the course capacity is limited. To be considered for enrollment, join the wait list, fill out this course application, and be sure to complete your NDO application.
Prospective students who complete the course application will be notified of their application status by September 6th. Only applicants with completed NDO applications will be admitted should a seat become available.
You Will Learn
- How to understand and implement state-of-the-art multi-task learning algorithms
- How to implement and execute meta-learning algorithms
- How to leverage the structure arising from multiple tasks to learn more efficiently or effectively
- How to conduct research in these areas effectively
- Chelsea Finn, Assistant Professor, Stanford University
- Multi-Task Supervised Learning
- Bayesian Models and Deep Probabilistic Meta-Learning Approaches
- Model-Based Reinforcement Learning for Multi-Task Learning
- Learning Optimizers, Learning Rules, and Architectures
Note on Course Availability
This course is typically offered Autumn quarter.
The course schedule is displayed for planning purposes – courses can be modified or cancelled. Course availability will be considered finalized on the first day of open enrollment. For quarterly enrollment dates, please refer to our graduate certificate homepage.