Neural Networks for Machine Learning

Start Date: 11/05/2018

Course Type: Common Course

About Course

Learn about artificial neural networks and how they're being used for machine learning, as applied to speech and object recognition, image segmentation, modeling language and human motion, etc. We'll emphasize both the basic algorithms and the practical tricks needed to get them to work well. This course contains the same content presented on Coursera beginning in 2013. It is not a continuation or update of the original course; it has been adapted for the new platform. Please be advised that the course is suited to intermediate-level learners who are comfortable with calculus and have experience programming in Python.


Course Tag

Neural Networks, Machine Learning, Artificial Neural Networks, Geoffrey Hinton

Related Wiki Topic

Article Example
Siemens Wind Power SWP develops artificial neural networks for machine learning to predict and diagnose potential problems in 9,000 wind turbines with 400 sensors each, sending data several times a second.
Neural network software Neural network software is used to simulate, research, develop, and apply artificial neural networks, software concepts adapted from biological neural networks, and, in some cases, a wider array of adaptive systems such as artificial intelligence and machine learning.
Artificial neural network Support vector machines and other, much simpler methods such as linear classifiers gradually overtook neural networks in machine learning popularity. As earlier challenges in training deep neural networks were successfully addressed with methods such as Unsupervised Pre-training and computing power increased through the use of GPUs and distributed computing, neural networks were again deployed on a large scale, particularly in image and visual recognition problems. This became known as "deep learning", although deep learning is not strictly synonymous with deep neural networks.
Henry J. Kelley His 1960 work on gradient methods for optimal control is regarded as a precursor of the backpropagation algorithm, now widely used for machine learning and artificial neural networks.
Arthur E. Bryson His work on dynamic optimization is regarded as a precursor of the backpropagation algorithm, now widely used for machine learning and artificial neural networks.
Neural modeling fields Neural modeling field (NMF) is a mathematical framework for machine learning which combines ideas from neural networks, fuzzy logic, and model-based recognition. It has also been referred to as modeling fields, modeling fields theory (MFT), and maximum likelihood artificial neural networks (MLANS).
Machine learning An artificial neural network (ANN) learning algorithm, usually called "neural network" (NN), is a learning algorithm that is inspired by the structure and functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure in an unknown joint probability distribution between observed variables.
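The excerpt above describes a neural network as an interconnected group of artificial neurons that compute nonlinear functions of their inputs. A minimal forward-pass sketch in pure Python (the 2-2-1 layout, tanh units, and hand-picked weights here are illustrative, not from any particular source):

```python
import math

def forward(x, weights, biases):
    """Forward pass through a fully connected network with tanh hidden units.

    x: list of input values; weights: one weight matrix per layer
    (a list of rows, one row per output unit); biases: one bias
    vector per layer.
    """
    a = x
    for W, b in zip(weights, biases):
        # Each unit computes a nonlinear function of a weighted sum of its inputs.
        a = [math.tanh(sum(w * v for w, v in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

# A tiny 2-2-1 network with hand-picked illustrative weights.
W1 = [[1.0, -1.0], [-1.0, 1.0]]
b1 = [0.0, 0.0]
W2 = [[1.0, 1.0]]
b2 = [0.0]
y = forward([0.5, -0.5], [W1, W2], [b1, b2])
```

Stacking several such layers is what makes the model a nonlinear statistical tool: each layer re-represents its input before the next weighted combination.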
Types of artificial neural networks The Boltzmann machine can be thought of as a noisy Hopfield network. Invented by Geoff Hinton and Terry Sejnowski in 1985, the Boltzmann machine is important because it is one of the first neural networks to demonstrate learning of latent variables (hidden units). Boltzmann machine learning was at first slow to simulate, but the contrastive divergence algorithm of Geoff Hinton (circa 2000) allows models such as Boltzmann machines and Products of Experts to be trained much faster.
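The contrastive-divergence algorithm mentioned above can be sketched for a tiny restricted Boltzmann machine with binary units. This is a rough CD-1 illustration, not Hinton's original implementation: biases are omitted for brevity, and the layer sizes, data vector, and learning rate are all arbitrary choices.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(probs, rng):
    """Draw binary states from a vector of Bernoulli probabilities."""
    return [1.0 if rng.random() < p else 0.0 for p in probs]

def cd1_update(v0, W, lr, rng):
    """One contrastive-divergence (CD-1) step on weight matrix W.

    W[i][j] connects visible unit i to hidden unit j; biases omitted.
    """
    n_v, n_h = len(W), len(W[0])
    # Positive phase: hidden probabilities given the data vector v0.
    h0 = [sigmoid(sum(v0[i] * W[i][j] for i in range(n_v))) for j in range(n_h)]
    hs = sample(h0, rng)
    # Negative phase: one Gibbs step back to a "reconstruction".
    v1 = [sigmoid(sum(hs[j] * W[i][j] for j in range(n_h))) for i in range(n_v)]
    h1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(n_v))) for j in range(n_h)]
    # CD-1 gradient: <v h>_data minus <v h>_reconstruction.
    for i in range(n_v):
        for j in range(n_h):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
    return W

rng = random.Random(0)
W = [[rng.gauss(0, 0.1) for _ in range(2)] for _ in range(4)]
for _ in range(100):
    W = cd1_update([1.0, 1.0, 0.0, 0.0], W, 0.1, rng)
```

The speed-up over full Boltzmann machine learning comes from running only one Gibbs step rather than sampling to equilibrium.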
Artificial neural network Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles which allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.
IEEE Transactions on Neural Networks and Learning Systems IEEE Transactions on Neural Networks and Learning Systems is a monthly peer-reviewed scientific journal published by the IEEE Computational Intelligence Society. It covers the theory, design, and applications of neural networks and related learning systems. The editor-in-chief is Haibo He (University of Rhode Island). According to the "Journal Citation Reports", the journal had a 2013 impact factor of 4.370.
Cellular neural network In computer science and machine learning, cellular neural networks (CNN) (or cellular nonlinear networks (CNN)) are a parallel computing paradigm similar to neural networks, with the difference that communication is allowed between neighbouring units only. Typical applications include image processing, analyzing 3D surfaces, solving partial differential equations, reducing non-visual problems to geometric maps, modelling biological vision and other sensory-motor organs.
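The defining property noted above, communication between neighbouring units only, can be shown with a one-dimensional toy update (the specific weights here are illustrative, not taken from any particular cellular network formulation):

```python
def cellular_step(grid, w_self=0.5, w_neighbor=0.125):
    """One synchronous update of a 1-D cellular network.

    Each cell's new state depends only on itself and its immediate
    neighbours; cells outside the grid are treated as zero.
    """
    n = len(grid)
    new = []
    for i in range(n):
        left = grid[i - 1] if i > 0 else 0.0
        right = grid[i + 1] if i < n - 1 else 0.0
        new.append(w_self * grid[i] + w_neighbor * (left + right))
    return new

# A single active cell spreads only to its direct neighbours per step.
state = [0.0, 0.0, 1.0, 0.0, 0.0]
state = cellular_step(state)
```

Because information propagates one neighbourhood per step, repeated application resembles a local diffusion, which is why image processing and PDE solving are natural applications.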
Types of artificial neural networks Neural networks can be hardware- (neurons are represented by physical components) or software-based (computer models), and can use a variety of topologies and learning algorithms.
Generative adversarial networks Generative adversarial networks are a branch of unsupervised machine learning, implemented by a system of two neural networks competing against each other in a zero-sum game framework. They were first introduced by Ian Goodfellow et al. in 2014.
Instantaneously trained neural networks Instantaneously trained neural networks have been proposed as models of short term learning and used in web search, and financial time series prediction applications. They have also been used in instant classification of documents and for deep learning and data mining.
Information Fuzzy Networks Info Fuzzy Networks (IFN) is a greedy machine learning algorithm for supervised learning.
Dropout (neural networks) Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is a very efficient way of performing model averaging with neural networks. The term "dropout" refers to dropping out units (both hidden and visible) in a neural network.
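Dropping out units as described above can be sketched for a single layer's activation vector. This uses the common "inverted dropout" convention; the drop probability and the 1/(1-p) rescaling are standard choices rather than anything prescribed by the excerpt.

```python
import random

def dropout(activations, p_drop, rng, train=True):
    """Inverted dropout on one activation vector.

    During training each unit is zeroed with probability p_drop, and
    survivors are scaled by 1/(1 - p_drop) so expected activations
    match test time, when the layer is just the identity.
    """
    if not train or p_drop == 0.0:
        return list(activations)
    keep = 1.0 - p_drop
    return [a / keep if rng.random() >= p_drop else 0.0 for a in activations]

rng = random.Random(42)
out = dropout([1.0] * 10, 0.5, rng)
```

Because a different random subnetwork is active on every training case, the full network at test time behaves like an average over exponentially many thinned networks, which is the model-averaging effect the excerpt mentions.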
International Conference on Machine Learning The conference attracts leading innovations in the field of machine learning. ICML is a top-tier conference and one of the two most influential conferences in machine learning (along with the Conference on Neural Information Processing Systems).
Artificial neural network Neural network research stagnated after the publication of machine learning research by Marvin Minsky and Seymour Papert (1969), who discovered two key issues with the computational machines that processed neural networks. The first was that basic perceptrons were incapable of processing the exclusive-or circuit. The second significant issue was that computers didn't have enough processing power to effectively handle the long run time required by large neural networks. Neural network research slowed until computers achieved greater processing power.
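The exclusive-or limitation mentioned above can be demonstrated directly: a single perceptron trained with the classic perceptron learning rule never classifies all four XOR cases correctly, while the same routine converges on a linearly separable problem such as AND. This is a toy sketch, not the original Minsky-Papert analysis.

```python
def train_perceptron(data, epochs=100):
    """Perceptron learning rule on (inputs, label) pairs, labels in {0, 1}."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in data:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = t - y
            # Weight update: move the decision boundary toward the target.
            w[0] += err * x[0]
            w[1] += err * x[1]
            b += err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# XOR is not linearly separable: some error always remains.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train_perceptron(xor_data)
errors = sum(predict(w, b, x) != t for x, t in xor_data)

# AND is linearly separable: the perceptron converges to zero error.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
wa, ba = train_perceptron(and_data)
```

Adding a hidden layer removes the limitation, which is why multi-layer networks trained by backpropagation later revived the field.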
Neural machine translation Neural machine translation (NMT) is an approach to machine translation in which a large neural network is trained with deep learning techniques. It is a radical departure from phrase-based statistical translation approaches, in which a translation system consists of subcomponents that are separately engineered. In November 2016, Google and Microsoft announced that their translation services use NMT. Google uses Google Neural Machine Translation (GNMT) in preference to its previous statistical methods. Microsoft uses a similar deep-neural-network-powered machine translation technology for all its speech translations (including Microsoft Translator live and Skype Translator). An open-source neural machine translation system, OpenNMT, has also been released by the Harvard NLP group.
Deep learning "Deep learning" has been characterized as a buzzword, or a rebranding of neural networks.