Deep Learning Specialization on Coursera

Course Introduction

This course provides an introduction to deep learning, a field that aims to harness the enormous amounts of data that surround us using artificial neural networks, enabling applications such as self-driving cars, speech interfaces, genomic sequence analysis, and algorithmic trading.

Course Tags

Practical Deep Learning, Deep Learning, AI

Related Wiki Topics

Deep learning: Deep learning (also known as deep structured learning, hierarchical learning, or deep machine learning) is a class of machine learning algorithms that use multiple layers of nonlinear processing units to learn representations of data.
Deep learning: Many deep learning algorithms are applied to unsupervised learning tasks. This is an important benefit because unlabeled data are usually more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.
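As a minimal sketch of the unsupervised idea, the toy NumPy autoencoder below learns to reconstruct unlabeled inputs; the layer size, learning rate, and random data are illustrative assumptions, not anything specified by the course or the article.

```python
import numpy as np

# Minimal single-hidden-layer autoencoder: learn to reconstruct
# inputs without any labels. All sizes here are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))          # 256 unlabeled samples, 32 features

n_hidden = 8
W_enc = rng.normal(scale=0.1, size=(32, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 32))
lr = 0.01

for epoch in range(200):
    H = np.tanh(X @ W_enc)              # encode to a lower-dimensional code
    X_hat = H @ W_dec                   # decode back to input space
    err = X_hat - X                     # reconstruction error
    # Backpropagate the squared-error loss through both layers.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ ((err @ W_dec.T) * (1 - H**2)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction MSE:", float((err**2).mean()))
```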
Deep learning: These definitions have in common (1) multiple layers of nonlinear processing units and (2) the supervised or unsupervised learning of feature representations in each layer, with the layers forming a hierarchy from low-level to high-level features. The composition of a layer of nonlinear processing units used in a deep learning algorithm depends on the problem to be solved. Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of complicated propositional formulas. They may also include latent variables organized layer-wise in deep generative models, such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.
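To make "multiple layers of nonlinear processing units" concrete, here is a small forward-pass sketch in NumPy; the layer sizes and the ReLU nonlinearity are assumptions chosen for illustration.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 64))            # one input vector (e.g., a flattened image)

# A hierarchy of nonlinear layers: each transforms the previous
# layer's output, moving from low-level to higher-level features.
layer_sizes = [64, 32, 16, 8]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

h = x
for W in weights:
    h = relu(h @ W)                     # one nonlinear processing layer

print("output representation shape:", h.shape)   # (1, 8)
```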
Deep learning: Recently, a deep-learning approach based on an autoencoder artificial neural network has been used in bioinformatics to predict Gene Ontology annotations and gene-function relationships.
Deep learning: Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music recommendation. Recently, a more general approach for learning user preferences from multiple domains using multiview deep learning has been introduced. The model uses a hybrid collaborative and content-based approach and enhances recommendations across multiple tasks.
Deep learning: If there is a lot of learnable predictability in the incoming data sequence, then the highest-level RNN can use supervised learning to easily classify even deep sequences with very long time intervals between important events. In 1993, such a system already solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
Deep learning: A deep Q-network (DQN) is a type of deep learning model developed at Google DeepMind that combines a deep convolutional neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs can learn directly from high-dimensional sensory inputs. Preliminary results were presented in 2014, with a paper published in February 2015 in Nature. The application discussed in that paper is limited to Atari 2600 games, although it has implications for other applications. However, well before this work, a number of reinforcement learning models had applied deep learning approaches.
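As a hedged sketch of the Q-learning idea that DQN builds on, here is tabular (not deep) Q-learning on a toy environment of my own invention; DQN replaces the table with a deep convolutional network, but the update rule is the same.

```python
import numpy as np

# Tabular Q-learning on a toy 5-state chain.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1
rng = np.random.default_rng(2)

def step(s, a):
    """Toy dynamics (illustrative assumption): action 1 moves right,
    action 0 moves left; reward 1.0 for reaching the last state."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

for episode in range(500):
    s = 0
    for t in range(20):
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # Q-learning target: r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.round(Q, 2))
```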
Deep learning: The probabilistic interpretation derives from the field of machine learning. It features inference, as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function. See Deep belief network. The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks.
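A minimal sketch of dropout as a regularizer, in the common "inverted dropout" form; the keep probability and NumPy setting are illustrative choices, not from the article.

```python
import numpy as np

def dropout(h, p_keep=0.8, train=True, rng=np.random.default_rng(3)):
    """Inverted dropout: randomly zero activations during training and
    rescale the survivors so the expected activation is unchanged."""
    if not train:
        return h                        # no dropout at test time
    mask = (rng.random(h.shape) < p_keep).astype(h.dtype)
    return h * mask / p_keep

h = np.ones((2, 6))
print(dropout(h))                # some units zeroed, others scaled by 1/0.8
print(dropout(h, train=False))   # unchanged at test time
```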
Deep learning: Deep learning algorithms transform their inputs through more layers than shallow learning algorithms do. At each layer, the signal is transformed by a processing unit, like an artificial neuron, whose parameters are "learned" through training. A chain of transformations from input to output is a "credit assignment path" (CAP). CAPs describe potentially causal connections between input and output and may vary in length. For a feedforward neural network, the depth of the CAPs (and thus of the network) is the number of hidden layers plus one, as the output layer is also parameterized; for recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP is potentially unlimited in length. There is no universally agreed-upon threshold of depth dividing shallow learning from deep learning, but most researchers in the field agree that deep learning has multiple nonlinear layers (CAP > 2), and Juergen Schmidhuber considers CAP > 10 to be very deep learning.
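As a worked example of the depth count for the feedforward case (the helper function below is hypothetical, written only to illustrate the rule):

```python
def feedforward_cap_depth(n_hidden_layers: int) -> int:
    """CAP depth of a feedforward net: hidden layers plus one,
    because the output layer is also parameterized."""
    return n_hidden_layers + 1

# A net with 3 hidden layers has CAP depth 4, which exceeds 2,
# so it counts as "deep" by the common convention cited above.
print(feedforward_cap_depth(3))  # 4
```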
Deep learning: In 2010, industrial researchers extended deep learning from TIMIT to large-vocabulary speech recognition by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees. Comprehensive reviews of this development and of the state of the art as of October 2014 are provided in a recent Springer book from Microsoft Research. An earlier article reviewed the background of automatic speech recognition and the impact of various machine learning paradigms, including deep learning.
Deep learning: Deep learning exploits this idea of hierarchical explanatory factors, where higher-level, more abstract concepts are learned from the lower-level ones. These architectures are often constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features are useful for learning.
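A hedged sketch of the greedy layer-by-layer idea, assuming stacked autoencoder-style pretraining; the train_autoencoder_layer helper is hypothetical, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def train_autoencoder_layer(X, n_hidden, steps=100, lr=0.01):
    """Hypothetical helper: fit one autoencoder layer on X and
    return (encoder weights, encoded features)."""
    W_enc = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(steps):
        H = np.tanh(X @ W_enc)
        err = H @ W_dec - X
        grad_dec = (H.T @ err) / len(X)
        grad_enc = (X.T @ ((err @ W_dec.T) * (1 - H**2))) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc, np.tanh(X @ W_enc)

# Greedy layer-by-layer construction: train each layer on the
# features produced by the layer below it.
X = rng.normal(size=(128, 32))
features, weights = X, []
for n_hidden in (16, 8, 4):
    W, features = train_autoencoder_layer(features, n_hidden)
    weights.append(W)

print([W.shape for W in weights])  # [(32, 16), (16, 8), (8, 4)]
```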
Deep learning: Such supervised deep learning methods were also the first artificial pattern recognizers to achieve human-competitive performance on certain tasks.
Deep learning: Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel or, more abstractly, as a set of edges, regions of particular shape, and so on. Some representations are better than others at simplifying the learning task (e.g., face recognition or facial expression recognition). One of the promises of deep learning is to replace handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.
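To illustrate two of the representations mentioned, raw pixel intensities versus edges, here is a small NumPy sketch; the toy image and the hand-crafted gradient filter are assumptions standing in for features a deep network would learn.

```python
import numpy as np

rng = np.random.default_rng(5)
image = rng.random((8, 8))              # toy 8x8 grayscale image

# Representation 1: raw intensities, one value per pixel.
pixel_vector = image.reshape(-1)        # shape (64,)

# Representation 2: a more abstract edge map, computed with a
# simple horizontal-gradient filter.
edges = np.abs(image[:, 1:] - image[:, :-1])

print(pixel_vector.shape, edges.shape)  # (64,) (8, 7)
```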
An Introduction to Latin Syntax: This text was also reprinted in James Davidson's "easy and practical introduction to the knowledge of the Latin tongue" in 1798.
Deep learning: Deep learning algorithms are based on distributed representations. The underlying assumption behind distributed representations is that observed data are generated by the interactions of factors organized in layers. Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can be used to provide different amounts of abstraction.
Introduction to Cooperative Learning: Cooperative learning may be contrasted with competitive and individualistic learning. The key difference between these teaching approaches is the way students' learning goals are structured. The goal structure specifies the ways in which students will interact with each other and the teacher during the instructional session. Within cooperative situations, individuals seek outcomes that are beneficial to themselves and beneficial to all other group members. In competitive learning, students work against each other to achieve an academic goal such as a grade of "A" that only one or a few students can attain. Finally, in individualistic learning, students work by themselves to accomplish learning goals unrelated to those of the other students.
Deep learning: Various deep learning architectures, such as deep neural networks, deep convolutional neural networks, deep belief networks, and recurrent neural networks, have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks.
Deep learning: For supervised learning tasks, deep learning methods obviate feature engineering by translating the data into compact intermediate representations akin to principal components and deriving layered structures that remove redundancy in the representation.
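As a rough analogy for "compact intermediate representations akin to principal components", here is a NumPy PCA sketch; the data and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 32))          # 200 samples, 32 raw features

# PCA via SVD: project onto the top-k directions of variance,
# yielding a compact, redundancy-reduced representation.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 8
Z = Xc @ Vt[:k].T                       # compact 8-dimensional codes

print(Z.shape)                          # (200, 8)
```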
Introduction to Cooperative Learning: In the mid-1960s, cooperative learning was relatively unknown and largely ignored by educators. Elementary, secondary, and university teaching was dominated by competitive and individualistic learning. Cultural resistance to cooperative learning was based on social Darwinism, with its premise that students must be taught to survive in a "dog-eat-dog" world, and the myth of "rugged individualism" underlying the use of individualistic learning. While competition dominated educational thought, it was being challenged by individualistic learning largely based on B. F. Skinner's work on programmed learning and behavioral modification. Educational practices and thought, however, have changed. Cooperative learning is now an accepted and highly recommended instructional procedure at all levels of education.
Deep learning: Researchers have offered technical insights into how to integrate deep learning into the existing, highly efficient run-time speech decoding systems deployed by all major players in the speech recognition industry. The history of this significant development in deep learning has been described and analyzed in recent books and articles.