Optimizing Machine Learning Model Performance

Start Date: 10/20/2019

Course Type: Common Course

Course Link: https://www.coursera.org/learn/optimize-machine-learning-model-performance


Course Syllabus

What Does Good Data Look Like?
Preparing Your Data for ML Success
Feature Engineering for MORE Fun and Profit
Bad Data

Related Wiki Topics and Article Examples

Learning curve: Plots relating performance to experience are widely used in machine learning. Performance is the error rate or accuracy of the learning system, while experience may be the number of training examples used for learning or the number of iterations used in optimizing the system model parameters. The machine learning curve is useful for many purposes, including comparing different algorithms, choosing model parameters during design, adjusting optimization to improve convergence, and determining the amount of data used for training.
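A minimal sketch of computing such a learning curve in Python, assuming scikit-learn and a synthetic classification dataset (neither is prescribed by the course); it reports training and validation accuracy as the number of training examples grows:

```python
# Learning-curve sketch: accuracy versus number of training examples.
# The dataset and model below are illustrative assumptions, not course material.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Accuracy measured at increasing training-set sizes, averaged over 5 CV folds.
train_sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy")

for n, tr, va in zip(train_sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"{n:5d} examples: train accuracy {tr:.3f}, validation accuracy {va:.3f}")
```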
Machine learning: Leo Breiman distinguished two statistical modelling paradigms: the data model and the algorithmic model, wherein 'algorithmic model' means, more or less, machine learning algorithms such as random forests.
Machine learning: The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error.
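The bias–variance decomposition mentioned above is usually written, for squared error with a model \hat{f} fit on a random training set D and observation noise of variance \sigma^2, as:

```latex
% Expected squared error at a point x decomposes into bias, variance, and noise.
\mathbb{E}_{D,\varepsilon}\!\left[\big(y - \hat{f}(x; D)\big)^2\right]
  = \underbrace{\big(\mathbb{E}_D[\hat{f}(x; D)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[\big(\hat{f}(x; D) - \mathbb{E}_D[\hat{f}(x; D)]\big)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

where y = f(x) + ε with E[ε] = 0 and Var(ε) = σ².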
Machine learning: Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves 'rules' to store, manipulate, or apply knowledge. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learners that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
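A toy sketch of that contrast: predictions come from a small set of condition/label rules applied piecewise, rather than from one global model. The rules and record below are invented purely for illustration:

```python
# Hypothetical rule set: each rule pairs a condition with a label.
# The first matching rule supplies the prediction (piecewise knowledge).
RULES = [
    (lambda r: r["petal_length"] < 2.0, "setosa"),
    (lambda r: r["petal_width"] < 1.7, "versicolor"),
    (lambda r: True, "virginica"),  # fallback rule
]

def predict(record):
    for condition, label in RULES:
        if condition(record):
            return label

print(predict({"petal_length": 4.5, "petal_width": 1.3}))  # -> versicolor
```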
Machine learning: Machine learning is the subfield of computer science that, according to Arthur Samuel in 1959, gives "computers the ability to learn without being explicitly programmed." Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms go beyond strictly static program instructions by making data-driven predictions or decisions, building a model from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible; example applications include email filtering, detection of network intruders or malicious insiders working towards a data breach, optical character recognition (OCR), learning to rank, and computer vision.
Optimizing compiler: Early compilers of the 1960s were often primarily concerned with simply compiling code correctly or efficiently, and compile times were a major concern. One of the earliest notable optimizing compilers was that for BLISS (1970), which was described in "The Design of an Optimizing Compiler" (1975). By the 1980s, optimizing compilers were sufficiently effective that programming in assembly language declined, and by the late 1990s, even for performance-sensitive code, optimizing compilers exceeded the performance of human experts. This co-evolved with the development of RISC chips and advanced processor features such as instruction scheduling and speculative execution, which were designed to be targeted by optimizing compilers rather than by human-written assembly code.
Active learning (machine learning): Recent developments are dedicated to hybrid active learning and active learning in a single-pass (on-line) context, combining concepts from the field of machine learning (e.g., conflict and ignorance) with adaptive, incremental learning policies in the field of online machine learning.
Machine learning: A genetic algorithm (GA) is a search heuristic that mimics the process of natural selection and uses methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms found some uses in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
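A toy genetic-algorithm sketch along those lines (the bit-string objective, population size, and rates are all illustrative assumptions): selection keeps the fitter candidates, and crossover plus mutation generate new genotypes:

```python
# Toy GA: evolve bit strings toward all ones using selection, crossover, mutation.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genome):
    return sum(genome)  # count of 1s; the maximum possible is GENOME_LEN

def crossover(a, b):
    point = random.randint(1, GENOME_LEN - 1)  # single-point crossover
    return a[:point] + b[point:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Truncation selection: the fitter half of the population becomes the parents.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness:", fitness(max(population, key=fitness)))
```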
Machine learning: Some statisticians have adopted methods from machine learning, leading to a combined field that they call "statistical learning".
Machine learning: Machine learning tasks are typically classified into three broad categories, depending on the nature of the learning "signal" or "feedback" available to a learning system: supervised learning, unsupervised learning, and reinforcement learning.
Machine learning: Another categorization of machine learning tasks arises when one considers the desired "output" of a machine-learned system.
Machine learning: Machine learning poses a host of ethical questions. Systems that are trained on datasets collected with biases may exhibit these biases upon use, thus digitizing cultural prejudices. Responsible collection of data is thus a critical part of machine learning.
Machine learning: Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples). The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.
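A minimal sketch of that distinction, assuming scikit-learn and synthetic data: increasing model capacity keeps driving the training loss down, while the loss on held-out samples is what generalization is about:

```python
# Training loss versus held-out loss for polynomial regressors of growing degree.
# Synthetic data and model choices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 3, 12):
    features = PolynomialFeatures(degree)
    model = LinearRegression().fit(features.fit_transform(X_train), y_train)
    train_loss = mean_squared_error(y_train, model.predict(features.transform(X_train)))
    test_loss = mean_squared_error(y_test, model.predict(features.transform(X_test)))
    print(f"degree {degree:2d}: training MSE {train_loss:.3f}, held-out MSE {test_loss:.3f}")
```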
Adversarial machine learning: Adversarial machine learning is a research field that lies at the intersection of machine learning and computer security. It aims to enable the safe adoption of machine learning techniques in adversarial settings like spam filtering, malware detection, and biometric recognition.
Machine learning: Many software suites contain a variety of machine learning algorithms.
Ensemble learning: In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
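A minimal ensemble sketch, assuming scikit-learn and a synthetic dataset: a soft-voting classifier combines three base learners, and 5-fold cross-validation compares each constituent against the ensemble:

```python
# Soft-voting ensemble over three base learners; all choices here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

base_learners = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("nb", GaussianNB()),
]
ensemble = VotingClassifier(estimators=base_learners, voting="soft")

# Mean cross-validated accuracy for each constituent and for the ensemble.
for name, model in base_learners + [("ensemble", ensemble)]:
    print(f"{name:8s} accuracy: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```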
Outline of machine learning: Machine learning is a subfield of computer science (more particularly soft computing) that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "Field of study that gives computers the ability to learn without being explicitly programmed". Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from an example "training set" of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.
Machine learning: Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component (typically a genetic algorithm) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning). They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.
Journal of Machine Learning Research: The journal was established as an open-access alternative to the journal "Machine Learning". In 2001, forty editorial board members of "Machine Learning" resigned, saying that in the era of the Internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. The open-access model employed by the "Journal of Machine Learning Research" allows authors to publish articles for free and retain copyright, while archives are freely available online.