A Complete Reinforcement Learning System (Capstone)

Start Date: 07/05/2020

Course Type: Common Course

Course Link: https://www.coursera.org/learn/complete-reinforcement-learning-system


About Course

In this final course, you will put together your knowledge from Courses 1, 2 and 3 to implement a complete RL solution to a problem. This capstone will let you see how each component (problem formulation, algorithm selection, parameter selection and representation design) fits together into a complete solution, and how to make appropriate choices when deploying RL in the real world. The project will require you to implement both the environment to simulate your problem and a control agent with neural network function approximation. In addition, you will conduct a scientific study of your learning system to develop your ability to assess the robustness of RL agents.

To use RL in the real world, it is critical to (a) appropriately formalize the problem as an MDP, (b) select appropriate algorithms, (c) identify what choices in your implementation will have large impacts on performance, and (d) validate the expected behaviour of your algorithms. This capstone is valuable for anyone who is planning on using RL to solve real problems.

To be successful in this course, you will need to have completed Courses 1, 2, and 3 of this Specialization or the equivalent. By the end of this course, you will be able to complete an RL solution to a problem: starting from problem formulation, through appropriate algorithm selection and implementation, to an empirical study of the effectiveness of the solution.
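To make the fit between these components concrete, here is a minimal sketch of the agent-environment interaction loop in Python. The interface names (env_start, env_step, agent_start, agent_step, agent_end) are illustrative assumptions in the spirit of the Specialization's experiment framework, not the course's actual starter code.

```python
# Minimal sketch of an agent-environment experiment loop.
# Interface names are illustrative assumptions, not course starter code.

class Environment:
    def env_start(self):
        """Return the first state of an episode."""
        raise NotImplementedError

    def env_step(self, action):
        """Apply an action; return (reward, next_state, terminal)."""
        raise NotImplementedError

class Agent:
    def agent_start(self, state):
        """Observe the first state; return the first action."""
        raise NotImplementedError

    def agent_step(self, reward, state):
        """Learn from the last transition; return the next action."""
        raise NotImplementedError

    def agent_end(self, reward):
        """Learn from the final transition of the episode."""
        raise NotImplementedError

def run_episode(env, agent, max_steps=10_000):
    """Run one episode and return the total (undiscounted) reward."""
    total_reward = 0.0
    state = env.env_start()
    action = agent.agent_start(state)
    for _ in range(max_steps):
        reward, state, terminal = env.env_step(action)
        total_reward += reward
        if terminal:
            agent.agent_end(reward)
            break
        action = agent.agent_step(reward, state)
    return total_reward
```

In the capstone you would fill in these interfaces with your own simulated environment and learning agent, then run many episodes (and many runs) to collect learning curves for the empirical study.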

Course Syllabus

Milestone 1: Formalize Word Problem as MDP (see the MDP sketch after this list)
Milestone 2: Choosing the Right Algorithm
Milestone 3: Identify Key Performance Parameters
Milestone 4: Implement Your Agent
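For Milestone 1, formalizing a word problem as an MDP amounts to pinning down the states, actions, dynamics, rewards and discount before any agent code is written. A minimal sketch of such a specification, assuming a Python dataclass as the container (the names here are illustrative, not the course's template):

```python
# Illustrative container for an MDP specification (not the course's template).
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class MDP:
    states: Sequence          # the state space S
    actions: Sequence         # the action space A
    # transition(s, a) -> (next_state, reward); stochastic in general
    transition: Callable[[object, object], Tuple[object, float]]
    gamma: float              # discount factor in [0, 1]
```

Writing the specification down this explicitly makes it easy to sanity-check the reward scale and discount factor before committing to an agent implementation.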


Course Introduction

In this capstone project course, you will design and implement a fully working reinforcement learning system, applying the techniques and analyses from the earlier courses to bring it to life. We assume that you already have a background in computer science, data structures and basic machine learning, and that you are familiar with common programming paradigms such as object-oriented and procedural programming. For the course project, you will implement a simulated environment for your problem and a control agent that learns with neural network function approximation, and you will design, run and analyze an empirical study of your learning system. The final milestone brings these pieces together into a complete, working solution.
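As a taste of the neural network function approximation mentioned above, here is a minimal one-hidden-layer action-value network with a semi-gradient update, assuming plain NumPy. The architecture, names and hyperparameters are illustrative assumptions; the course's own agent, network and optimizer details differ.

```python
# Minimal sketch: one-hidden-layer action-value network with a
# semi-gradient update. Sizes and step size are illustrative.
import numpy as np

class QNetwork:
    def __init__(self, n_features, n_actions, n_hidden=32, step_size=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.step_size = step_size

    def forward(self, x):
        """Return hidden activations and action values for state features x."""
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        q = h @ self.W2 + self.b2                   # one value per action
        return h, q

    def update(self, x, action, target):
        """Semi-gradient step pushing q(x, action) toward the TD target."""
        h, q = self.forward(x)
        delta = target - q[action]                  # TD error
        grad_h = delta * self.W2[:, action] * (h > 0.0)  # backprop through ReLU
        self.W2[:, action] += self.step_size * delta * h
        self.b2[action] += self.step_size * delta
        self.W1 += self.step_size * np.outer(x, grad_h)
        self.b1 += self.step_size * grad_h
```

An agent built on this would pick actions (for example, epsilon-greedily) from the values returned by forward, and pass update a bootstrapped target such as the reward plus gamma times a value of the next state.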

Course Tag

Machine Learning, Algorithms, Recurrent Neural Networks, Optimization, Gradient Boosters

Related Wiki Topic

Article: Example
Reinforcement learning: Successes of reinforcement learning are listed here.
Reinforcement learning: The basic reinforcement learning model consists of:
Reinforcement learning: Two components make reinforcement learning powerful:
Reinforcement learning: There is also a growing interest in real life applications of reinforcement learning.
Reinforcement learning: A reinforcement learning agent interacts with its environment in discrete time steps.
Reinforcement learning: Multiagent or Distributed Reinforcement Learning is also a topic of interest in current research.
Reinforcement learning: In machine learning, the environment is typically formulated as a Markov decision process (MDP), as many reinforcement learning algorithms for this context utilize dynamic programming techniques. The main difference between the classical techniques and reinforcement learning algorithms is that the latter do not need knowledge about the MDP and they target large MDPs where exact methods become infeasible.
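For reference, the MDP formalism behind the excerpt above can be written as a tuple (S, A, p, r, gamma), and dynamic-programming and RL methods alike target fixed points of Bellman equations. In standard textbook notation (not a formula quoted from this page):

```latex
% Bellman optimality equation for the action-value function
q_*(s, a) = \sum_{s', r} p(s', r \mid s, a)\,\bigl[\, r + \gamma \max_{a'} q_*(s', a') \,\bigr]
```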
Reinforcement learning: There are multiple applications of reinforcement learning to generate models and train them to play video games, such as Atari games. In these models, reinforcement learning finds the actions with the best reward at each play. This approach is widely used in combination with deep neural networks to teach computers to play Atari video games.
Reinforcement learning: Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). The exploration vs. exploitation trade-off in reinforcement learning has been most thoroughly studied through the multi-armed bandit problem and in finite MDPs.
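The exploration-exploitation balance is easiest to see in the multi-armed bandit setting mentioned above. Here is a minimal epsilon-greedy bandit with incrementally computed sample-average estimates, as a sketch (all names and parameters are illustrative):

```python
# Minimal epsilon-greedy agent for a k-armed bandit, using
# incrementally computed sample-average value estimates.
import numpy as np

def run_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    q = np.zeros(k)        # value estimates, one per arm
    n = np.zeros(k)        # pull counts, one per arm
    rewards = []
    for _ in range(steps):
        if rng.random() < epsilon:
            a = int(rng.integers(k))     # explore: random arm
        else:
            a = int(np.argmax(q))        # exploit: greedy arm
        r = rng.normal(true_means[a], 1.0)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]        # incremental sample average
        rewards.append(r)
    return q, rewards
```

Raising epsilon explores more arms at the cost of short-term reward; epsilon = 0 is purely greedy and can lock onto a sub-optimal arm.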
Reinforcement learning: Reinforcement learning algorithms such as TD learning are also being investigated as a model for dopamine-based learning in the brain. In this model, the dopaminergic projections from the substantia nigra to the basal ganglia function as the prediction error. Reinforcement learning has also been used as a part of the model for human skill learning, especially in relation to the interaction between implicit and explicit learning in skill acquisition (the first publication on this application was in 1995-1996, and there have been many follow-up studies).
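The prediction error referred to here is the temporal-difference (TD) error. In the standard TD(0) formulation (a textbook definition, not a formula taken from this page):

```latex
% TD error: the reward-prediction-error signal
\delta_t = R_{t+1} + \gamma\, V(S_{t+1}) - V(S_t)
% TD(0) update of the state-value estimate, with step size alpha
V(S_t) \leftarrow V(S_t) + \alpha\, \delta_t
```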
Reinforcement learning: Thanks to these two key components, reinforcement learning can be used in large environments in any of the following situations:
Learning classifier system: Up until the 2000s, nearly all learning classifier system methods were developed with reinforcement learning problems in mind. As a result, the term ‘learning classifier system’ was commonly defined as the combination of ‘trial-and-error’ reinforcement learning with the global search of a genetic algorithm. Interest in supervised learning applications, and even unsupervised learning, has since broadened the use and definition of this term.
Reinforcement learning: The first two of these problems could be considered planning problems (since some form of the model is available), while the last one could be considered a genuine learning problem. However, under a reinforcement learning methodology both planning problems would be converted to machine learning problems.
Reinforcement learning: The goal of a reinforcement learning agent is to collect as much reward as possible. The agent can choose any action as a function of the history, and it can even randomize its action selection.
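"As much reward as possible" is usually made precise as the expected discounted return, which the agent maximizes. In the common notation (again a textbook definition, not a formula from this page):

```latex
% Discounted return from time t, with discount factor 0 <= gamma <= 1
G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}
```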
Reinforcement learning: Reinforcement learning is an area of machine learning inspired by behaviorist psychology, concerned with how software agents ought to take "actions" in an "environment" so as to maximize some notion of cumulative "reward". The problem, due to its generality, is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics, and genetic algorithms. In the operations research and control literature, the field where reinforcement learning methods are studied is called "approximate dynamic programming". The problem has been studied in the theory of optimal control, though most studies are concerned with the existence of optimal solutions and their characterization, and not with the learning or approximation aspects. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality.
Reinforcement learning: So far, the discussion has been restricted to how policy iteration can be used as a basis for designing reinforcement learning algorithms. Equally importantly, value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants.
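For concreteness, the Q-learning update derived from value iteration can be sketched in a few lines of tabular Python (the dict-based table and function name are illustrative, not a specific library's API):

```python
# One tabular Q-learning update.
# Q is a dict mapping (state, action) pairs to estimated values.
def q_learning_update(Q, s, a, r, s_next, actions,
                      alpha=0.1, gamma=0.99, terminal=False):
    # Bootstrap from the best next action, or from 0 at episode end.
    best_next = 0.0 if terminal else max(
        Q.get((s_next, a2), 0.0) for a2 in actions)
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
```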
Reinforcement learning: Thus, reinforcement learning is particularly well-suited to problems which include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including robot control, elevator scheduling, telecommunications, backgammon, checkers and Go (AlphaGo).
Reinforcement learning: In reinforcement learning methods, the expectations are approximated by averaging over samples, and one uses function approximation techniques to cope with the need to represent value functions over large state-action spaces.
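Concretely, with linear function approximation the value estimate is a dot product of weights and state features, and a semi-gradient TD(0) step adjusts the weights from a single sampled transition. In standard notation (not quoted from this page):

```latex
% Linear value estimate over features x(s)
\hat{v}(s, \mathbf{w}) = \mathbf{w}^{\top} \mathbf{x}(s)
% Semi-gradient TD(0) update from a sampled transition (S_t, R_{t+1}, S_{t+1})
\mathbf{w} \leftarrow \mathbf{w} + \alpha \bigl[ R_{t+1}
  + \gamma\, \hat{v}(S_{t+1}, \mathbf{w})
  - \hat{v}(S_t, \mathbf{w}) \bigr]\, \mathbf{x}(S_t)
```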
Reinforcement: A great many researchers subsequently expanded our understanding of reinforcement and challenged some of Skinner's conclusions. For example, Azrin and Holz defined punishment as a “consequence of behavior that reduces the future probability of that behavior,” and some studies have shown that positive reinforcement and punishment are equally effective in modifying behavior. Research on the effects of positive reinforcement, negative reinforcement and punishment continues today, as those concepts are fundamental to learning theory and apply to many practical applications of that theory.
Learning classifier system: In 1995, Wilson published his landmark paper, "Classifier fitness based on accuracy", in which he introduced the classifier system XCS. XCS took the simplified architecture of ZCS and added an accuracy-based fitness, a niche GA (acting in the action set [A]), an explicit generalization mechanism called "subsumption", and an adaptation of the Q-learning credit assignment. XCS was popularized by its ability to reach optimal performance while evolving accurate and maximally general classifiers, as well as its impressive problem flexibility (able to perform both reinforcement learning and supervised learning). XCS later became the best known and most studied LCS algorithm and defined a new family of "accuracy-based LCS". ZCS alternatively became synonymous with "strength-based LCS". XCS is also important because it successfully bridged the gap between LCS and the field of reinforcement learning. Following the success of XCS, LCS were later described as reinforcement learning systems endowed with a generalization capability. Reinforcement learning typically seeks to learn a value function that maps out a complete representation of the state/action space. Similarly, the design of XCS drives it to form an all-inclusive and accurate representation of the problem space (i.e. a "complete map") rather than focusing on high payoff niches in the environment (as was the case with strength-based LCS). Conceptually, complete maps capture not only what you should do (what is correct) but also what you shouldn't do (what is incorrect). By contrast, most strength-based LCSs, or exclusively supervised learning LCSs, seek a rule set of efficient generalizations in the form of a "best action map" (or a "partial map"). Comparisons between strength- and accuracy-based fitness, and between complete and best action maps, have since been examined in greater detail.