Supervised Learning: Classification

Start Date: 01/24/2021

Course Type: Common Course

Course Link: https://www.coursera.org/learn/supervised-learning-classification

About Course

This course introduces you to one of the main families of supervised machine learning models: classification. You will learn how to train predictive models to classify categorical outcomes and how to use error metrics to compare models. The hands-on section of this course focuses on best practices for classification, including train/test splits and handling data sets with unbalanced classes.
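
As a rough sketch of two of those practices (assuming Python with scikit-learn and synthetic data; this is an illustration, not material from the course itself), a stratified train/test split combined with class weighting for unbalanced classes might look like:

```python
# A minimal sketch of two practices named above: a stratified
# train/test split and class weighting for an unbalanced data set.
# X and y are synthetic stand-ins for a real feature matrix and labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Synthetic unbalanced problem: roughly 10% positives.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# stratify=y keeps the class ratio the same in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# class_weight="balanced" reweights the loss to compensate for imbalance.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```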

Course Syllabus

Logistic Regression
Support Vector Machines
Ensemble Models
Modeling Unbalanced Classes

Related Wiki Topic

Article Example
Semi-supervised learning Generative approaches to statistical learning first seek to estimate $p(x|y)$, the distribution of data points belonging to each class. The probability $p(y|x)$ that a given point $x$ has label $y$ is then proportional to $p(x|y)\,p(y)$ by Bayes' rule. Semi-supervised learning with generative models can be viewed either as an extension of supervised learning (classification plus information about $p(x)$) or as an extension of unsupervised learning (clustering plus some labels).
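
To make that generative recipe concrete, here is a minimal hand-rolled sketch (assuming Python with NumPy/SciPy, 1-D Gaussian class-conditionals, and synthetic data): estimate $p(x|y)$ and the prior $p(y)$ from labeled samples, then score classes by $p(x|y)\,p(y)$.

```python
# Sketch of generative classification via Bayes' rule: fit p(x|y) per
# class (here a 1-D Gaussian) and the prior p(y) from labeled samples,
# then label x by the class maximizing p(x|y) * p(y).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x0 = rng.normal(-1.0, 1.0, 200)   # samples from class 0
x1 = rng.normal(2.0, 1.0, 50)     # samples from class 1 (rarer)

params = {0: (x0.mean(), x0.std()), 1: (x1.mean(), x1.std())}
priors = {0: len(x0) / 250, 1: len(x1) / 250}

def predict(x):
    # argmax over y of p(x|y) * p(y); the shared normalizer p(x) cancels
    scores = {y: norm.pdf(x, *params[y]) * priors[y] for y in params}
    return max(scores, key=scores.get)

print(predict(0.0), predict(2.5))   # expected output: 0 1
```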
Bias–variance tradeoff This tradeoff applies to all forms of supervised learning: classification, regression (function fitting), and structured output learning. It has also been invoked to explain the effectiveness of heuristics in human learning.
Semi-supervised learning As in the supervised learning framework, we are given a set of $l$ independently identically distributed examples $x_1, \dots, x_l \in X$ with corresponding labels $y_1, \dots, y_l \in Y$. Additionally, we are given $u$ unlabeled examples $x_{l+1}, \dots, x_{l+u} \in X$. Semi-supervised learning attempts to make use of this combined information to surpass the classification performance that could be obtained either by discarding the unlabeled data and doing supervised learning or by discarding the labels and doing unsupervised learning.
Supervised learning A wide range of supervised learning algorithms are available, each with its strengths and weaknesses. There is no single learning algorithm that works best on all supervised learning problems (see the No free lunch theorem).
Supervised learning There are four major issues to consider in supervised learning:
Supervised learning In empirical risk minimization, the supervised learning algorithm seeks the function $g$ that minimizes the empirical risk $R_{\mathrm{emp}}(g) = \frac{1}{N}\sum_{i=1}^{N} L(y_i, g(x_i))$. Hence, a supervised learning algorithm can be constructed by applying an optimization algorithm to find $g$.
Supervised learning The supervised learning optimization problem is to find the function $g$ that minimizes the regularized empirical risk $J(g) = R_{\mathrm{emp}}(g) + \lambda C(g)$, where $C(g)$ is a complexity penalty and $\lambda$ controls its weight.
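
As a concrete instance of the optimization view above, the following sketch (assuming a logistic loss, plain gradient descent, and synthetic data; an illustration, not the article's prescribed method) searches for the weights minimizing the mean loss over a training set:

```python
# Empirical risk minimization sketch: gradient descent on the mean
# logistic loss over a synthetic training set.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # labels from a linear rule

w = np.zeros(2)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted P(y=1 | x)
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on the mean loss

p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-12, 1 - 1e-12)
risk = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(w, risk)   # the empirical risk shrinks as g (here, w) fits the data
```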
Supervised learning There are several ways in which the standard supervised learning problem can be generalized:
Supervised learning Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of "training examples". In supervised learning, each example is a "pair" consisting of an input object (typically a vector) and a desired output value (also called the "supervisory signal"). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way (see inductive bias).
Supervised learning In order to solve a given problem of supervised learning, one has to perform the following steps:
Semi-supervised learning "Self-training" is a wrapper method for semi-supervised learning. First a supervised learning algorithm is trained based on the labeled data only. This classifier is then applied to the unlabeled data to generate more labeled examples as input for the supervised learning algorithm. Generally only the labels the classifier is most confident of are added at each step.
Semi-supervised learning Some methods for semi-supervised learning are not intrinsically geared to learning from both unlabeled and labeled data, but instead make use of unlabeled data within a supervised learning framework. For instance, the labeled and unlabeled examples $x_1, \dots, x_{l+u}$ may inform a choice of representation, distance metric, or kernel for the data in an unsupervised first step. Then supervised learning proceeds from only the labeled examples.
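
A sketch of that two-step pattern (assuming scikit-learn, with PCA standing in for the unsupervised representation step; any representation learner could take its place):

```python
# Two-step sketch: learn a representation from ALL examples (unlabeled
# included), then run ordinary supervised learning on the labeled few.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_informative=5, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:60] = True                      # only 60 labeled examples

pca = PCA(n_components=5).fit(X)         # unsupervised step sees all of X
Z = pca.transform(X)

clf = LogisticRegression(max_iter=1000).fit(Z[labeled], y[labeled])
print(clf.score(Z[~labeled], y[~labeled]))   # accuracy on the rest
```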
Semi-supervised learning Semi-supervised learning is a class of supervised learning tasks and techniques that also make use of unlabeled data for training – typically a small amount of labeled data with a large amount of unlabeled data. Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy. The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render a fully labeled training set infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning.
Semi-supervised learning Human responses to formal semi-supervised learning problems have yielded varying conclusions about the degree of influence of the unlabeled data. More natural learning problems may also be viewed as instances of semi-supervised learning. Much of human concept learning involves a small amount of direct instruction (e.g. parental labeling of objects during childhood) combined with large amounts of unlabeled experience (e.g. observation of objects without naming or counting them, or at least without feedback).
Learning vector quantization In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems.
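
A bare-bones LVQ1-style update sketch (assuming NumPy and synthetic data, with prototypes initialized at the class means and a fixed learning rate; real LVQ implementations typically decay the rate and use several prototypes per class):

```python
# LVQ1 sketch: prototypes carry class labels; the nearest prototype is
# pulled toward a correctly classified sample and pushed away otherwise.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
proto_y = np.array([0, 1])
lr = 0.05

for epoch in range(20):
    for i in rng.permutation(len(X)):
        j = np.argmin(((protos - X[i]) ** 2).sum(axis=1))  # nearest prototype
        sign = 1.0 if proto_y[j] == y[i] else -1.0         # attract or repel
        protos[j] += sign * lr * (X[i] - protos[j])

pred = proto_y[np.argmin(((protos[None] - X[:, None]) ** 2).sum(-1), axis=1)]
print((pred == y).mean())   # fraction of samples the prototypes classify
```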
Semi-supervised learning The Laplacian can also be used to extend the supervised learning algorithms regularized least squares and support vector machines (SVM) to semi-supervised versions, Laplacian regularized least squares and Laplacian SVM.
Semi-supervised learning Semi-supervised learning may refer to either transductive learning or inductive learning. The goal of transductive learning is to infer the correct labels for the given unlabeled data $x_{l+1}, \dots, x_{l+u}$ only. The goal of inductive learning is to infer the correct mapping from $X$ to $Y$.
Semi-supervised learning The heuristic approach of "self-training" (also known as "self-learning" or "self-labeling") is historically the oldest approach to semi-supervised learning, with examples of applications starting in the 1960s (see for instance Scudder (1965)).
Semi-supervised learning The transductive learning framework was formally introduced by Vladimir Vapnik in the 1970s. Interest in inductive learning using generative models also began in the 1970s. A "probably approximately correct" learning bound for semi-supervised learning of a Gaussian mixture was demonstrated by Ratsaby and Venkatesh in 1995.
Binary classification Statistical classification is a problem studied in machine learning. It is a type of supervised learning, a method of machine learning where the categories are predefined, and is used to categorize new observations into these categories. When there are only two categories, the problem is known as binary classification.