Deep Neural Networks with PyTorch

Start Date: 09/15/2019

Course Type: Common Course

Course Link: https://www.coursera.org/learn/deep-neural-networks-with-pytorch


Course Syllabus

Tensor and Datasets
Linear Regression
Linear Regression PyTorch Way
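
The "PyTorch Way" topic refers to expressing a model with PyTorch's built-in modules rather than raw tensor operations. As a rough illustration of that workflow (a minimal sketch, not the course's own code; the synthetic data and hyperparameters are assumptions), linear regression with nn.Linear, nn.MSELoss and SGD looks like this:

    import torch
    import torch.nn as nn

    # Synthetic data: y = 2x + 1 plus a little noise (assumed for illustration)
    X = torch.linspace(-1, 1, 100).unsqueeze(1)
    y = 2 * X + 1 + 0.1 * torch.randn_like(X)

    # The "PyTorch way": a module, a loss, and an optimizer
    model = nn.Linear(in_features=1, out_features=1)
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(100):
        optimizer.zero_grad()           # clear gradients from the previous step
        loss = criterion(model(X), y)   # forward pass and loss
        loss.backward()                 # backpropagation
        optimizer.step()                # gradient-descent update

    print(model.weight.item(), model.bias.item())  # should approach 2.0 and 1.0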



Related Wiki Topics

Artificial neural network: Support vector machines and other, much simpler methods such as linear classifiers gradually overtook neural networks in machine learning popularity. As earlier challenges in training deep neural networks were successfully addressed with methods such as unsupervised pre-training, and computing power increased through the use of GPUs and distributed computing, neural networks were again deployed on a large scale, particularly in image and visual recognition problems. This became known as "deep learning", although deep learning is not strictly synonymous with deep neural networks.
Deep learning: Various deep learning architectures such as deep neural networks, convolutional deep neural networks, deep belief networks and recurrent neural networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks.
Dropout (neural networks): Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is a very efficient way of performing model averaging with neural networks. The term "dropout" refers to dropping out units (both hidden and visible) in a neural network.
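
As a rough sketch of how dropout is used in practice (assuming PyTorch's built-in nn.Dropout; the layer sizes and drop probability here are arbitrary), note that dropout is active only in training mode:

    import torch
    import torch.nn as nn

    # A small network with dropout between layers
    net = nn.Sequential(
        nn.Linear(20, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),  # each hidden unit is zeroed with probability 0.5
        nn.Linear(64, 2),
    )

    x = torch.randn(8, 20)

    net.train()               # training mode: units are randomly dropped
    out_train = net(x)

    net.eval()                # eval mode: dropout is the identity (PyTorch
    with torch.no_grad():     # rescales during training, so no eval-time scaling)
        out_eval = net(x)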
Rectifier (neural networks): In 2011, the use of the rectifier as a non-linearity was shown for the first time to enable training deep supervised neural networks without requiring unsupervised pre-training.
Types of artificial neural networks: A committee of machines (CoM) is a collection of different neural networks that together "vote" on a given example. This generally gives a much better result compared to other neural network models. Because neural networks suffer from local minima, starting with the same architecture and training but using different initial random weights often gives vastly different networks. A CoM tends to stabilize the result.
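
A minimal sketch of the voting idea (a hypothetical setup; the architecture, committee size, and data are assumptions): several copies of one architecture start from different random weights, and their outputs are averaged:

    import torch
    import torch.nn as nn

    def make_net():
        # Same architecture each time; only the random initial weights differ
        return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

    committee = [make_net() for _ in range(5)]  # in practice each member is trained separately

    x = torch.randn(4, 10)
    with torch.no_grad():
        votes = torch.stack([net(x) for net in committee])  # shape (5, 4, 3)
        prediction = votes.mean(dim=0).argmax(dim=1)        # average, then pick a class
    print(prediction)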
Deep learning "Deep learning" has been characterized as a buzzword, or a rebranding of neural networks.
Deep learning: Deep neural networks are generally interpreted in terms of the universal approximation theorem or probabilistic inference.
Rectifier (neural networks): This activation function was first introduced to a dynamical network by Hahnloser et al. in a 2000 paper in Nature, with strong biological motivations and mathematical justifications. It has been used in convolutional networks more effectively than the widely used logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. The rectifier is the most popular activation function for deep neural networks.
Types of artificial neural networks: Spiking neural networks with axonal conduction delays exhibit polychronization, and hence could have a very large memory capacity.
Rectifier (neural networks): In the context of artificial neural networks, the rectifier is an activation function defined as the positive part of its argument: f(x) = max(0, x).
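
In PyTorch, the framework this course covers, the rectifier is available directly as torch.relu and as the nn.ReLU module; a minimal sketch:

    import torch
    import torch.nn as nn

    x = torch.tensor([-2.0, -0.5, 0.0, 1.5])

    print(torch.relu(x))    # tensor([0.0000, 0.0000, 0.0000, 1.5000])

    relu = nn.ReLU()        # the same function as a composable module
    print(relu(x))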
Optical neural network: Some artificial neural networks that have been implemented as optical neural networks include the Hopfield neural network and the Kohonen self-organizing map with liquid crystals.
Types of artificial neural networks: There are many types of artificial neural networks (ANNs).
Deep learning: With the advent of the back-propagation algorithm based on automatic differentiation, many researchers tried to train supervised deep artificial neural networks from scratch, initially with little success. Sepp Hochreiter's diploma thesis of 1991 formally identified the reason for this failure as the vanishing gradient problem, which affects many-layered feedforward networks and recurrent neural networks. Recurrent networks are trained by unfolding them into very deep feedforward networks, where a new layer is created for each time step of an input sequence processed by the network. As errors propagate from layer to layer, they shrink exponentially with the number of layers, impeding the tuning of neuron weights which is based on those errors.
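
The layer-by-layer shrinking described above can be observed directly. A minimal sketch (an assumed setup, not from the article): stack many sigmoid layers, backpropagate, and compare gradient magnitudes near the output with those near the input:

    import torch
    import torch.nn as nn

    # 20 sigmoid layers: saturating activations shrink backpropagated
    # errors layer by layer (the effect described above)
    layers = []
    for _ in range(20):
        layers += [nn.Linear(32, 32), nn.Sigmoid()]
    net = nn.Sequential(*layers)

    x = torch.randn(16, 32)
    net(x).sum().backward()

    near_output = net[-2].weight.grad.norm().item()  # last Linear layer
    near_input = net[0].weight.grad.norm().item()    # first Linear layer
    print(near_output, near_input)  # near_input is typically orders of magnitude smaller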
Neural network software: In 2012, Wintempla included a namespace called NN with a set of C++ classes to implement feed-forward networks, probabilistic neural networks and Kohonen networks. Neural Lab is based on Wintempla classes. The Neural Lab and Wintempla tutorials explain some of these classes for neural networks. The main disadvantage of Wintempla is that it compiles only with Microsoft Visual Studio.
Rectifier (neural networks): Rectified linear units find applications in computer vision and speech recognition using deep neural nets.
Neural Networks (journal): Neural Networks is a monthly peer-reviewed scientific journal and an official journal of the International Neural Network Society, the European Neural Network Society, and the Japanese Neural Network Society. It was established in 1988 and is published by Elsevier. The journal covers all aspects of research on artificial neural networks. The founding editor-in-chief was Stephen Grossberg (Boston University); the current editors-in-chief are DeLiang Wang (Ohio State University) and Kenji Doya (Okinawa Institute of Science and Technology). The journal is abstracted and indexed in Scopus and the Science Citation Index. According to the "Journal Citation Reports", the journal has a 2012 impact factor of 1.927.
Recursive neural network: Recurrent neural networks are recursive artificial neural networks with a certain structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
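
The time-step recursion described above is compact in code. A minimal sketch with PyTorch's nn.RNNCell (the sizes here are arbitrary assumptions): each step combines the current input with the previous hidden representation:

    import torch
    import torch.nn as nn

    cell = nn.RNNCell(input_size=8, hidden_size=16)

    seq = torch.randn(5, 3, 8)   # 5 time steps, batch of 3, 8 features each
    h = torch.zeros(3, 16)       # initial hidden representation

    for x_t in seq:              # the linear progression of time
        h = cell(x_t, h)         # h_t = tanh(W_ih x_t + W_hh h_{t-1} + b)
    print(h.shape)               # torch.Size([3, 16])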
Instantaneously trained neural networks: Instantaneously trained neural networks have been proposed as models of short-term learning and used in web search and financial time-series prediction applications. They have also been used in instant classification of documents and for deep learning and data mining.
Neural Lab: Neural Lab is a free neural network simulator that designs and trains artificial neural networks for use in engineering, business, computer science and technology. It integrates with Microsoft Visual Studio using C++ (Win32 - Wintempla) to incorporate artificial neural networks into custom applications, research simulations or end-user interfaces.