Optimizing Machine Learning Performance

Start Date: 05/31/2020

Course Type: Common Course

Course Link: https://www.coursera.org/learn/optimize-machine-learning-model-performance


About Course

This course synthesizes everything you have learned in the applied machine learning specialization. You will now walk through a complete machine learning project to prepare a machine learning maintenance roadmap. You will understand and analyze how to deal with changing data. You will also be able to identify and interpret potential unintended effects in your project. You will understand and define procedures to operationalize and maintain your applied machine learning model. By the end of this course you will have all the tools and understanding you need to confidently roll out a machine learning project and prepare to optimize it in your business context.

To be successful, you should have at least a beginner-level background in Python programming (e.g., be able to read and trace existing code, and be comfortable with conditionals, loops, variables, lists, dictionaries and arrays). You should have a basic understanding of linear algebra (vector notation) and statistics (probability distributions and mean/median/mode). This is the final course of the Applied Machine Learning Specialization brought to you by Coursera and the Alberta Machine Intelligence Institute (Amii).

Course Syllabus

What does Good Data Look Like?
Preparing your Data for ML Success
Feature Engineering for MORE Fun and Profit
Bad Data


Course Introduction

Optimizing machine learning performance is a high priority for data scientists. High-performance models require less storage, run faster, and are easier to use, and they scale well as the data and the number of learners grow. This course covers the core techniques for optimizing the performance of machine learning algorithms. We cover topics such as implementation details, optimization strategies, linearization, profiler usage, and more. Each topic is reviewed step by step and covered in depth across the course's four weeks.

Course Tag

Related Wiki Topic

Article Example
Machine learning The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error.
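As a hedged illustration of the bias–variance decomposition mentioned above, the short Python sketch below estimates the two terms for a depth-limited decision tree by retraining it on many freshly drawn training sets; the sine-wave data, noise level, and scikit-learn estimator are assumptions chosen purely for illustration, not part of the course or the article.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
true_fn = np.sin                          # ground-truth function (toy assumption)

x_test = np.linspace(0, 2 * np.pi, 50)    # fixed grid on which the terms are estimated
y_true = true_fn(x_test)

n_rounds, n_train, noise = 200, 40, 0.3
preds = np.empty((n_rounds, x_test.size))

for i in range(n_rounds):
    # Draw a fresh noisy training set, refit, and record predictions.
    x_train = rng.uniform(0, 2 * np.pi, n_train)
    y_train = true_fn(x_train) + rng.normal(0, noise, n_train)
    model = DecisionTreeRegressor(max_depth=3).fit(x_train.reshape(-1, 1), y_train)
    preds[i] = model.predict(x_test.reshape(-1, 1))

avg_pred = preds.mean(axis=0)
bias_sq = np.mean((avg_pred - y_true) ** 2)   # squared-bias term
variance = np.mean(preds.var(axis=0))         # variance term
print(f"bias^2 ~ {bias_sq:.3f}, variance ~ {variance:.3f}")

Averaged over the test grid, expected squared error decomposes (up to the irreducible noise) into these two printed quantities, which is the sense in which the decomposition quantifies generalization error.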
Learning curve Plots relating performance to experience are widely used in machine learning. Performance is the error rate or accuracy of the learning system, while experience may be the number of training examples used for learning or the number of iterations used in optimizing the system model parameters. The machine learning curve is useful for many purposes including comparing different algorithms, choosing model parameters during design, adjusting optimization to improve convergence, and determining the amount of data used for training.
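To make that concrete, the sketch below uses sklearn.model_selection.learning_curve to measure cross-validated accuracy at increasing training-set sizes; the digits dataset and the scaled logistic-regression pipeline are illustrative assumptions, not anything prescribed by the course.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
estimator = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy as the number of training examples grows.
sizes, train_scores, val_scores = learning_curve(
    estimator, X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5, scoring="accuracy",
)

for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"{n:4d} examples: train accuracy {tr:.3f}, validation accuracy {va:.3f}")

Plotting validation accuracy against sizes gives the learning curve described above; a persistent gap between the two columns is the usual sign that more data or stronger regularization would help.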
Optimizing compiler Early compilers of the 1960s were often primarily concerned with simply compiling code correctly or efficiently – compile times were a major concern. One of the earliest notable optimizing compilers was that for BLISS (1970), which was described in "The Design of an Optimizing Compiler" (1975). By the 1980s optimizing compilers were sufficiently effective that programming in assembly language declined, and by the late 1990s for even performance sensitive code, optimizing compilers exceeded the performance of human experts. This co-evolved with the development of RISC chips and advanced processor features such as instruction scheduling and speculative execution which were designed to be targeted by optimizing compilers, rather than by human-written assembly code.
Active learning (machine learning) Recent developments are dedicated to hybrid active learning and active learning in a single-pass (on-line) context, combining concepts from the field of Machine Learning (e.g., conflict and ignorance) with adaptive, incremental learning policies in the field of Online machine learning.
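For readers who have not seen active learning in code, here is a minimal pool-based uncertainty-sampling loop; it is a generic sketch (synthetic data, logistic regression, a query budget of 20) rather than the hybrid or single-pass methods the paragraph refers to.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

labeled = list(rng.choice(len(X), size=10, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]
model = LogisticRegression(max_iter=1000)

for _ in range(20):                                  # query budget
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Uncertainty sampling: query the pool point whose top-class
    # probability is lowest, i.e. where the model is least confident.
    query = pool[int(np.argmin(probs.max(axis=1)))]
    labeled.append(query)
    pool.remove(query)

model.fit(X[labeled], y[labeled])
print("labels used:", len(labeled),
      "accuracy on remaining pool:", round(model.score(X[pool], y[pool]), 3))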
Machine learning Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learners that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
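As a small, hedged illustration of the association-rule branch named above, the snippet below mines single-antecedent rules from a handful of made-up market-basket transactions by computing support and confidence directly; the data and thresholds are toy assumptions.

from itertools import permutations

transactions = [                      # toy basket data (assumption)
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
    {"bread", "milk"},
]
min_support, min_confidence = 0.4, 0.7
items = set().union(*transactions)
n = len(transactions)

def support(itemset):
    # Fraction of transactions containing every item in the set.
    return sum(itemset <= t for t in transactions) / n

# Candidate rules of the form {a} -> {b}.
for a, b in permutations(items, 2):
    sup = support({a, b})
    conf = sup / support({a})
    if sup >= min_support and conf >= min_confidence:
        print(f"{a} -> {b}  (support {sup:.2f}, confidence {conf:.2f})")

Each printed rule is exactly the kind of relational, piecewise piece of knowledge the paragraph contrasts with a single universally applied model.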
Machine learning A genetic algorithm (GA) is a search heuristic that mimics the process of natural selection, and uses methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms found some uses in the 1980s and 1990s. Vice versa, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
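The paragraph describes selection, crossover, and mutation abstractly; the minimal sketch below shows all three operators in a plain Python loop on the classic one-max toy problem (maximizing the number of 1-bits), which is an illustrative assumption rather than anything from the article.

import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 30, 40, 60, 0.02

def fitness(genome):
    return sum(genome)                          # one-max: count the 1-bits

def crossover(a, b):
    point = random.randint(1, GENOME_LEN - 1)   # single-point crossover
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survive and breed the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))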
Machine learning Some statisticians have adopted methods from machine learning, leading to a combined field that they call "statistical learning".
Machine learning Machine learning tasks are typically classified into three broad categories, depending on the nature of the learning "signal" or "feedback" available to a learning system. These are supervised learning, unsupervised learning, and reinforcement learning.
Machine learning Another categorization of machine learning tasks arises when one considers the desired "output" of a machine-learned system: classification, regression, clustering, density estimation, and dimensionality reduction.
Machine learning Machine learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use, thus digitizing cultural prejudices. Responsible collection of data is thus a critical part of machine learning.
Machine learning Machine learning is the subfield of computer science that, according to Arthur Samuel in 1959, gives "computers the ability to learn without being explicitly programmed." Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data – such algorithms build a model from sample inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or unfeasible; example applications include email filtering, detection of network intruders or malicious insiders working towards a data breach, optical character recognition (OCR), learning to rank and computer vision.
Adversarial machine learning Adversarial machine learning is a research field that lies at the intersection of machine learning and computer security. It aims to enable the safe adoption of machine learning techniques in adversarial settings like spam filtering, malware detection and biometric recognition.
Machine learning Software suites containing a variety of machine learning algorithms include the following:
Ensemble learning In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
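A minimal sketch of that idea, assuming scikit-learn and its bundled breast-cancer dataset purely for illustration, combines three different base learners in a soft-voting ensemble and compares cross-validated accuracy.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

base = [
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("nb", GaussianNB()),
]
ensemble = VotingClassifier(estimators=base, voting="soft")

for name, clf in base + [("ensemble", ensemble)]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:8s} mean CV accuracy: {score:.3f}")

Soft voting averages the predicted class probabilities, which is why every base learner in this sketch exposes predict_proba.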
Machine learning Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component (e.g. typically a genetic algorithm) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning). They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.
Machine learning Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on "known" properties learned from the training data, data mining focuses on the discovery of (previously) "unknown" properties in the data (this is the analysis step of Knowledge Discovery in Databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to "reproduce known" knowledge, while in Knowledge Discovery and Data Mining (KDD) the key task is the discovery of previously "unknown" knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
Machine learning Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is sometimes conflated with data mining, where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning. Machine learning can also be unsupervised and be used to learn and establish baseline behavioral profiles for various entities and then used to find meaningful anomalies.
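As a hedged illustration of that last point about baseline profiles and anomalies, the sketch below fits an IsolationForest to unlabeled "normal" observations and flags departures from the learned profile; the synthetic two-feature data and the contamination setting are assumptions made only for this example.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Unsupervised baseline: two behavioural features for "normal" activity.
normal = rng.normal(loc=[10.0, 200.0], scale=[1.0, 20.0], size=(1000, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new observations against the learned profile; -1 marks an anomaly.
new = np.array([[10.2, 195.0],     # close to the baseline
                [30.0, 900.0]])    # far outside the baseline
print(detector.predict(new))       # expected output along the lines of [ 1 -1 ]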
Quantum machine learning Quantum machine learning is an emerging interdisciplinary research area at the intersection of quantum physics and machine learning. One can distinguish four different ways of merging the two parent disciplines. Quantum machine learning algorithms can use the advantages of quantum computation in order to improve classical methods of machine learning, for example by developing efficient implementations of expensive classical algorithms on a quantum computer. On the other hand, one can apply classical methods of machine learning to analyse quantum systems. Most generally, one can consider situations wherein both the learning device and the system under study are fully quantum.
Bluedrop Performance Learning Bluedrop Performance Learning is an e-learning company based in St. John's, Newfoundland and Labrador, Canada.