Deep Neural Networks with PyTorch

Start Date: 02/16/2020

Course Type: Common Course

Course Link:


About Course

This course teaches you how to develop deep learning models using PyTorch. It begins with PyTorch's tensors and its automatic differentiation package. Each subsequent section covers a different class of model, starting with fundamentals such as linear regression and logistic/softmax regression, followed by feedforward deep neural networks, the role of different activation functions, and normalization and dropout layers. Convolutional neural networks and transfer learning come next, and the course closes with several other deep learning methods.

Learning Outcomes: After completing this course, learners will be able to: • explain and apply their knowledge of deep neural networks and related machine learning methods • use Python libraries such as PyTorch for deep learning applications • build deep neural networks using PyTorch
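The tensor-and-autograd workflow the course builds on can be sketched without PyTorch itself. The following plain-Python linear regression writes out by hand the mean-squared-error gradients that PyTorch's automatic differentiation package would compute for you; the data and hyperparameters are illustrative, not from the course:

```python
# Gradient-descent linear regression in plain Python.
# PyTorch's autograd derives these gradients automatically; here they
# are written out explicitly to show what the library is doing.

def fit_linear(xs, ys, lr=0.01, steps=1000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Points generated from y = 2x + 1 (illustrative values).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = fit_linear(xs, ys, lr=0.05, steps=5000)
```

In the PyTorch version covered by the course, `grad_w` and `grad_b` would come from calling `backward()` on the loss rather than being coded by hand.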

Course Syllabus

Tensor and Datasets
Linear Regression
Linear Regression PyTorch Way


Course Introduction

This course introduces the basic architecture of a deep neural network and shows how to implement such a network in Python. It continues the course "Deep Neural Networks with Python" and is the third in a series on deep learning with Python. The course uses the same architecture as the previous course but a different software implementation, and it contains two entirely new chapters covering a PyTorch overview and deep learning applications.


Related Wiki Topic

Article Example
Artificial neural network Support vector machines and other, much simpler methods such as linear classifiers gradually overtook neural networks in machine learning popularity. As earlier challenges in training deep neural networks were successfully addressed with methods such as Unsupervised Pre-training and computing power increased through the use of GPUs and distributed computing, neural networks were again deployed on a large scale, particularly in image and visual recognition problems. This became known as "deep learning", although deep learning is not strictly synonymous with deep neural networks.
Deep learning Various deep learning architectures such as deep neural networks, convolutional deep neural networks, deep belief networks and recurrent neural networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics where they have been shown to produce state-of-the-art results on various tasks.
Dropout (neural networks) Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is a very efficient way of performing model averaging with neural networks. The term "dropout" refers to dropping out units (both hidden and visible) in a neural network.
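The dropout mechanism described above is small enough to sketch directly. This illustrative plain-Python version implements the common "inverted dropout" variant, in which surviving units are scaled up during training so that no rescaling is needed at evaluation time; the function and parameter names are hypothetical:

```python
import random

def dropout(values, p, rng, train=True):
    """Inverted dropout: zero each unit with probability p during
    training and scale survivors by 1/(1-p), so the expected activation
    matches evaluation time, when dropout is a no-op."""
    if not train or p == 0.0:
        return list(values)
    keep = 1.0 - p
    return [v / keep if rng.random() < keep else 0.0 for v in values]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
out = dropout([1.0, 1.0, 1.0, 1.0], p=0.5, rng=rng)
```

Because each kept unit is divided by `1 - p`, every entry of `out` is either 0.0 (dropped) or 2.0 (kept and rescaled), and at evaluation time the input passes through unchanged.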
Rectifier (neural networks) For the first time in 2011, the use of the rectifier as a non-linearity has been shown to enable training deep supervised neural networks without requiring unsupervised pre-training.
Types of artificial neural networks A committee of machines (CoM) is a collection of different neural networks that together "vote" on a given example. This generally gives a much better result compared to other neural network models. Because neural networks suffer from local minima, starting with the same architecture and training but using different initial random weights often gives vastly different networks. A CoM tends to stabilize the result.
Deep learning "Deep learning" has been characterized as a buzzword, or a rebranding of neural networks.
Deep learning Deep neural networks are generally interpreted in terms of: Universal approximation theorem or Probabilistic inference.
Rectifier (neural networks) This activation function was first introduced to a dynamical network by Hahnloser et al. in a 2000 paper in Nature with strong biological motivations and mathematical justifications. It has been used in convolutional networks more effectively than the widely used logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. The rectifier is the most popular activation function for deep neural networks.
Types of artificial neural networks Spiking neural networks with axonal conduction delays exhibit polychronization, and hence could have a very large memory capacity.
Rectifier (neural networks) In the context of artificial neural networks, the rectifier is an activation function defined as f(x) = max(0, x).
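The definition f(x) = max(0, x) is a single line of code; a minimal sketch:

```python
def relu(x):
    # Rectified linear unit: passes positive inputs through unchanged
    # and maps all negative inputs to zero.
    return max(0.0, x)
```

The piecewise-linear shape is what gives the rectifier its cheap, non-saturating gradient for positive inputs.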
Optical neural network Some artificial neural networks that have been implemented as optical neural networks include the Hopfield neural network and the Kohonen self-organizing map with liquid crystals.
Types of artificial neural networks There are many types of artificial neural networks (ANN).
Deep learning With the advent of the back-propagation algorithm based on automatic differentiation, many researchers tried to train supervised deep artificial neural networks from scratch, initially with little success. Sepp Hochreiter's diploma thesis of 1991 formally identified the reason for this failure as the vanishing gradient problem, which affects many-layered feedforward networks and recurrent neural networks. Recurrent networks are trained by unfolding them into very deep feedforward networks, where a new layer is created for each time step of an input sequence processed by the network. As errors propagate from layer to layer, they shrink exponentially with the number of layers, impeding the tuning of neuron weights which is based on those errors.
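The exponential shrinkage described above can be demonstrated numerically. This illustrative sketch backpropagates a gradient of 1.0 through a chain of sigmoid units with weights fixed at 1.0 (a deliberate simplification); since the sigmoid's derivative is at most 0.25, each layer multiplies the gradient by a factor well below 1:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gradient_after(layers, x=0.0):
    """Gradient reaching the input of a chain of sigmoid layers,
    starting from a gradient of 1.0 at the output."""
    grad = 1.0
    for _ in range(layers):
        s = sigmoid(x)
        grad *= s * (1.0 - s)  # derivative of the sigmoid at x
        x = s                  # forward activation feeds the next layer
    return grad

g5, g20 = gradient_after(5), gradient_after(20)
```

After 5 layers the gradient is already below 10⁻², and after 20 layers it is vanishingly small, which is exactly why tuning the early layers' weights becomes so difficult.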
Neural network software In 2012, Wintempla included a namespace called NN with a set of C++ classes to implement: feed forward networks, probabilistic neural networks and Kohonen networks. Neural Lab is based on Wintempla classes. The Neural Lab and Wintempla tutorials explain some of these classes for neural networks. The main disadvantage of Wintempla is that it compiles only with Microsoft Visual Studio.
Rectifier (neural networks) Rectified linear units find applications in computer vision and speech recognition using deep neural nets.
Neural Networks (journal) Neural Networks is a monthly peer-reviewed scientific journal and an official journal of the International Neural Network Society, European Neural Network Society, and Japanese Neural Network Society. It was established in 1988 and is published by Elsevier. The journal covers all aspects of research on artificial neural networks. The founding editor-in-chief was Stephen Grossberg (Boston University), the current editors-in-chief are DeLiang Wang (Ohio State University) and Kenji Doya (Okinawa Institute of Science and Technology). The journal is abstracted and indexed in Scopus and the Science Citation Index. According to the "Journal Citation Reports", the journal has a 2012 impact factor of 1.927.
Recursive neural network Recurrent neural networks are recursive artificial neural networks with a certain structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
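The recurrent step described above, combining the previous hidden state with the current input at each point in the linear chain of time, can be sketched in a few lines. The weights and tanh non-linearity here are illustrative choices, not a specific published model:

```python
import math

def rnn_step(h_prev, x, w_h=0.5, w_x=0.5, b=0.0):
    """One recurrent step: the new hidden state combines the previous
    hidden state and the current input through a tanh non-linearity."""
    return math.tanh(w_h * h_prev + w_x * x + b)

def run_rnn(xs, h0=0.0):
    h = h0
    states = []
    for x in xs:            # one step per time step: a linear chain
        h = rnn_step(h, x)
        states.append(h)
    return states

states = run_rnn([1.0, 0.0, -1.0])
```

Unrolling this loop over time is precisely the "unfolding into a very deep feedforward network" that makes recurrent networks susceptible to the vanishing gradient problem.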
Instantaneously trained neural networks Instantaneously trained neural networks have been proposed as models of short term learning and used in web search, and financial time series prediction applications. They have also been used in instant classification of documents and for deep learning and data mining.
Neural Lab Neural Lab is a free neural network simulator that designs and trains artificial neural networks for use in engineering, business, computer science and technology. It integrates with Microsoft Visual Studio using C (Win32 - Wintempla) to incorporate artificial neural networks into custom applications, research simulations or end user interfaces.