Neural Networks and Deep Learning

Start Date: 03/17/2019

Course Type: Common Course

Course Link: https://www.coursera.org/learn/neural-networks-deep-learning

About Course

If you want to break into cutting-edge AI, this course will help you do so. Deep learning engineers are highly sought after, and mastering deep learning will give you numerous new career opportunities. Deep learning is also a new "superpower" that will let you build AI systems that just weren't possible a few years ago. In this course, you will learn the foundations of deep learning. When you finish this class, you will:

- Understand the major technology trends driving deep learning
- Be able to build, train, and apply fully connected deep neural networks
- Know how to implement efficient (vectorized) neural networks
- Understand the key parameters in a neural network's architecture

This course also teaches you how deep learning actually works, rather than presenting only a cursory or surface-level description. So after completing it, you will be able to apply deep learning to your own applications. If you are looking for a job in AI, after this course you will also be able to answer basic interview questions. This is the first course of the Deep Learning Specialization.
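
The course's emphasis on efficient, "vectorized" implementations can be illustrated with a short sketch. The snippet below runs a forward pass of a small fully connected network over a whole batch of examples at once with NumPy; the layer sizes, parameter names (W1, b1, W2, b2), and helper functions are illustrative assumptions, not the course's own starter code.

```python
import numpy as np

def sigmoid(z):
    # Elementwise logistic activation.
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, parameters):
    """Vectorized forward pass through a small fully connected network.

    X          : inputs of shape (n_features, n_examples)
    parameters : dict with weight matrices W1, W2 and bias vectors b1, b2
                 (an illustrative two-layer setup, not the course's notation)
    """
    Z1 = parameters["W1"] @ X + parameters["b1"]   # linear step for all examples at once
    A1 = np.tanh(Z1)                               # hidden-layer activation
    Z2 = parameters["W2"] @ A1 + parameters["b2"]
    A2 = sigmoid(Z2)                               # output probabilities
    return A2

# Example: 3 input features, 4 hidden units, 1 output unit, 5 examples.
rng = np.random.default_rng(0)
params = {
    "W1": rng.standard_normal((4, 3)) * 0.01, "b1": np.zeros((4, 1)),
    "W2": rng.standard_normal((1, 4)) * 0.01, "b2": np.zeros((1, 1)),
}
print(forward(rng.standard_normal((3, 5)), params).shape)  # (1, 5)
```

Because every example is a column of the matrix X, no explicit Python loop over examples is needed; this is the kind of vectorization the course description refers to.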

Deep Learning Specialization on Coursera

Course Introduction

If you want to break into cutting-edge AI, this course will help you do so.

Course Tag

Neural Networks, Deep Learning, Neural Network, AI, Andrew Ng, Artificial Neural Network, Backpropagation, Python Programming

Related Wiki Topic

Yoshua Bengio: Yoshua Bengio (born 1964 in France) is a Canadian computer scientist, most noted for his work on artificial neural networks and deep learning.
Deep learning: Various deep learning architectures such as deep neural networks, convolutional deep neural networks, deep belief networks and recurrent neural networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics where they have been shown to produce state-of-the-art results on various tasks.
Deep learning: "Deep learning" has been characterized as a buzzword, or a rebranding of neural networks.
Wojciech Zaremba: Wojciech Zaremba (born November 30, 1988) is a Polish mathematician and computer scientist, noted for his work on artificial neural networks and deep learning. In 2015, Zaremba co-founded OpenAI, with a mission to build safe artificial intelligence (AI), and ensure that its benefits are as evenly distributed as possible.
Deep learning: In the long history of speech recognition, both shallow and deep learning (e.g., recurrent nets) of artificial neural networks have been explored for many years.
Jacek M. Zurada: He has published 390 journal and conference papers. He also has authored or co-authored three books, including the pioneering neural networks text "Introduction to Artificial Neural Systems" (1992), and co-edited a number of volumes in Springer Lecture Notes in Computer Science. His research contributions cover data mining with emphasis on data and feature understanding, rule extraction from semantic and visual information, machine and neural learning, decomposition methods for salient feature extraction, lambda learning rule for neural networks and deep learning. His work was cited over 10,000 times (Google Scholar, 2016).
Artificial neural network: Support vector machines and other, much simpler methods such as linear classifiers gradually overtook neural networks in machine learning popularity. As earlier challenges in training deep neural networks were successfully addressed with methods such as unsupervised pre-training and computing power increased through the use of GPUs and distributed computing, neural networks were again deployed on a large scale, particularly in image and visual recognition problems. This became known as "deep learning", although deep learning is not strictly synonymous with deep neural networks.
Deep learning: Large memory storage and retrieval neural networks (LAMSTAR) are fast deep learning neural networks of many layers which can use many filters simultaneously. These filters may be nonlinear, stochastic, logic, non-stationary, or even non-analytical. They are biologically motivated and continuously learning.
Deep learning: In 1993, Jürgen Schmidhuber's neural history compressor implemented as an unsupervised stack of recurrent neural networks (RNNs) solved a "Very Deep Learning" task that requires more than 1,000 subsequent layers in an RNN unfolded in time.
Deep learning: Many deep learning algorithms are applied to unsupervised learning tasks. This is an important benefit because unlabeled data are usually more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.
Deep learning: Some of the most successful deep learning methods involve artificial neural networks. Artificial neural networks are inspired by the 1959 biological model proposed by Nobel laureates David H. Hubel & Torsten Wiesel, who found two types of cells in the primary visual cortex: simple cells and complex cells. Many artificial neural networks can be viewed as cascading models of cell types inspired by these biological observations.
Deep learning: Compound hierarchical-deep models compose deep networks with non-parametric Bayesian models. Features can be learned using deep architectures such as DBNs, DBMs, deep auto encoders, convolutional variants, ssRBMs, deep coding networks, DBNs with sparse feature learning, recursive neural networks, conditional DBNs, and de-noising auto encoders. This provides a better representation, allowing faster learning and more accurate classification with high-dimensional data. However, these architectures are poor at learning novel classes with few examples, because all network units are involved in representing the input (a "distributed representation") and must be adjusted together (high degree of freedom). Limiting the degree of freedom reduces the number of parameters to learn, facilitating learning of new classes from few examples. "Hierarchical Bayesian (HB)" models allow learning from few examples, for example for computer vision, statistics, and cognitive science.
Deep learning: A recent achievement in deep learning is the use of convolutional deep belief networks (CDBN). CDBNs have a structure very similar to that of convolutional neural networks and are trained similarly to deep belief networks. They therefore exploit the 2D structure of images, like CNNs do, and make use of pre-training like deep belief networks. They provide a generic structure which can be used in many image and signal processing tasks. Recently, many benchmark results on standard image datasets like CIFAR have been obtained using CDBNs.
Deep learning: Deep neural networks are generally interpreted in terms of either the universal approximation theorem or probabilistic inference.
Instantaneously trained neural networks: Instantaneously trained neural networks have been proposed as models of short-term learning and used in web search and financial time series prediction applications. They have also been used in instant classification of documents and for deep learning and data mining.
Deep learning: These definitions have in common (1) multiple layers of nonlinear processing units and (2) the supervised or unsupervised learning of feature representations in each layer, with the layers forming a hierarchy from low-level to high-level features. The composition of a layer of nonlinear processing units used in a deep learning algorithm depends on the problem to be solved. Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of complicated propositional formulas. They may also include latent variables organized layer-wise in deep generative models such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.
Deep learning: The first general, working learning algorithm for supervised deep feedforward multilayer perceptrons was published by Ivakhnenko and Lapa in 1965. A 1971 paper described a deep network with 8 layers trained by the Group method of data handling algorithm, which is still popular in the current millennium. These ideas were implemented in a computer identification system, "Alpha", which demonstrated the learning process. Other working deep learning architectures, specifically those built from artificial neural networks (ANN), date back to the Neocognitron introduced by Kunihiko Fukushima in 1980. The ANNs themselves date back even further; the challenge was how to train networks with multiple layers.
Deep learning: Deep learning algorithms transform their inputs through more layers than shallow learning algorithms. At each layer, the signal is transformed by a processing unit, like an artificial neuron, whose parameters are 'learned' through training. A chain of transformations from input to output is a "credit assignment path" (CAP). CAPs describe potentially causal connections between input and output and may vary in length: for a feedforward neural network, the depth of the CAPs (and thus of the network) is the number of hidden layers plus one (as the output layer is also parameterized), but for recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP is potentially unlimited in length. There is no universally agreed-upon threshold of depth dividing shallow learning from deep learning, but most researchers in the field agree that deep learning has multiple nonlinear layers (CAP > 2), and Juergen Schmidhuber considers CAP > 10 to be very deep learning. (A short worked sketch of this depth rule follows these excerpts.)
IEEE Transactions on Neural Networks and Learning Systems: IEEE Transactions on Neural Networks and Learning Systems is a monthly peer-reviewed scientific journal published by the IEEE Computational Intelligence Society. It covers the theory, design, and applications of neural networks and related learning systems. The editor-in-chief is Haibo He (University of Rhode Island). According to the "Journal Citation Reports", the journal had a 2013 impact factor of 4.370.
Deep learning: Fukushima's Neocognitron introduced convolutional neural networks partially trained by unsupervised learning with human-directed features in the neural plane. Yann LeCun et al. (1989) applied supervised backpropagation to such architectures. Weng et al. (1992) published the Cresceptron, a convolutional neural network for 3-D object recognition from images of cluttered scenes and for segmentation of such objects from those images.
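
The credit assignment path (CAP) rule quoted above for feedforward networks (depth equals the number of hidden layers plus one) can be made concrete with a minimal sketch; the function name and example values below are illustrative assumptions, not taken from the excerpt.

```python
def feedforward_cap_depth(num_hidden_layers: int) -> int:
    """CAP depth of a feedforward network: hidden layers plus one,
    since the output layer is also parameterized (per the excerpt above)."""
    return num_hidden_layers + 1

# With one hidden layer the CAP depth is 2, which does not exceed the
# CAP > 2 threshold most researchers use for "deep" learning; two or
# more hidden layers (CAP >= 3) would.
print(feedforward_cap_depth(1))  # 2
print(feedforward_cap_depth(2))  # 3
```

For recurrent networks, as the excerpt notes, a signal may pass through the same layer repeatedly, so no such fixed depth applies and the CAP is potentially unlimited.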