Convolutional Neural Networks

Start Date: 03/17/2019

Course Type: Common Course

Course Link: https://www.coursera.org/learn/convolutional-neural-networks


About Course

This course will teach you how to build convolutional neural networks and apply them to image data. Thanks to deep learning, computer vision is working far better than it did just two years ago, and this is enabling numerous exciting applications ranging from safe autonomous driving, to accurate face recognition, to automatic reading of radiology images. You will:

- Understand how to build a convolutional neural network, including recent variations such as residual networks.
- Know how to apply convolutional networks to visual detection and recognition tasks.
- Know how to use neural style transfer to generate art.
- Be able to apply these algorithms to a variety of image, video, and other 2D or 3D data.

This is the fourth course of the Deep Learning Specialization.
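
As a rough preview of the kind of model built in the course, here is a minimal sketch of a small convolutional image classifier using the TensorFlow/Keras API (TensorFlow is the framework listed in the course tags); the 64x64 RGB input, layer sizes, and 10-class output are illustrative assumptions rather than the course's own architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small convolutional network: stacked convolution + pooling layers followed
# by a dense classifier head. All sizes here are illustrative.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                               # 64x64 RGB images
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),                                   # downsample 2x
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),                        # 10 hypothetical classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```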

Deep Learning Specialization on Coursera

Course Introduction

This course will teach you how to build convolutional neural networks and apply them to image data.

Course Tag

Deep Learning, AI, Machine Learning, Andrew Ng, Convolutional Neural Networks, CNN, Image Processing, Facial Recognition System, TensorFlow, Convolutional Neural Network, Artificial Neural Network

Related Wiki Topic

Convolutional neural network: Convolutional neural networks model animal visual perception, and can be applied to visual recognition tasks.
Convolutional neural network: The design of convolutional neural networks follows visual mechanisms in living organisms.
Recursive neural network: Extensions to graphs include Graph Neural Network (GNN), Neural Network for Graphs (NN4G), and more recently convolutional neural networks for graphs.
Convolutional neural network: The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated.
Acoustic model: Recently, the use of convolutional neural networks has led to big improvements in acoustic modeling.
Bit plane: Bitplane formats may be used for passing images to spiking neural networks, or to low-precision approximations of neural networks/convolutional neural networks.
Convolutional neural network: Some time delay neural networks also use a very similar architecture to convolutional neural networks, especially those for image recognition or classification tasks, since the tiling of neuron outputs can be done in timed stages, in a manner useful for analysis of images.
Pixel art scaling algorithms: Waifu2x is Image Super-Resolution for Anime-styled art using Deep Convolutional Neural Networks. It also supports photos.
Deep learning: Fukushima's Neocognitron introduced convolutional neural networks partially trained by unsupervised learning with human-directed features in the neural plane. Yann LeCun et al. (1989) applied supervised backpropagation to such architectures. Weng et al. (1992) published the Cresceptron, a convolutional neural network for 3-D object recognition from images of cluttered scenes and for segmentation of such objects from those images.
Convolutional neural network: Together, these properties allow convolutional neural networks to achieve better generalization on vision problems. Weight sharing also dramatically reduces the number of free parameters being learnt, thus lowering the memory requirements for running the network. Decreasing the memory footprint allows the training of larger, more powerful networks (see the parameter-count sketch after this list).
Convolutional neural network: For many applications, only a small amount of training data is available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged, an additional training step is performed using the in-domain data to fine-tune the network weights. This allows convolutional networks to be successfully applied to problems with small training sets (see the fine-tuning sketch after this list).
Convolutional neural network: Compared to other image classification algorithms, convolutional neural networks use relatively little pre-processing. This means that the network is responsible for learning the filters that in traditional algorithms were hand-engineered. The lack of dependence on prior knowledge and human effort in designing features is a major advantage for CNNs.
Convolutional neural network: Convolutional neural networks (CNNs) consist of multiple layers of receptive fields. These are small neuron collections which process portions of the input image. The outputs of these collections are then tiled so that their input regions overlap, to obtain a higher-resolution representation of the original image; this is repeated for every such layer. Tiling allows CNNs to tolerate translation of the input image (see the receptive-field sketch after this list).
Convolutional neural network: Convolutional neural networks are biologically inspired variants of multilayer perceptrons, designed to emulate the behaviour of a visual cortex. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images. As opposed to MLPs, CNNs have the following distinguishing features:
Convolutional neural network: Convolutional neural networks have also seen use in the field of natural language processing. CNN models have subsequently been shown to be effective for various NLP problems and have achieved excellent results in semantic parsing, search query retrieval, sentence modeling, classification, prediction, and other traditional NLP tasks.
Neocognitron: The neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in the 1980s. It has been used for handwritten character recognition and other pattern recognition tasks, and served as the inspiration for convolutional neural networks.
Convolutional neural network: Convolutional neural networks have been used in drug discovery. Predicting the interaction between molecules and biological proteins can be used to identify potential treatments that are more likely to be effective and safe. In 2015, Atomwise introduced AtomNet, the first deep learning neural network for structure-based rational drug design. The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures, AtomNet discovers chemical features, such as aromaticity, sp3 carbons, and hydrogen bonding. Subsequently, AtomNet was used to predict novel candidate biomolecules for several disease targets, most notably treatments for the Ebola virus and multiple sclerosis.
Convolutional neural network: Convolutional neural networks are often used in image recognition systems. They have achieved an error rate of 0.23 percent on the MNIST database, which as of February 2012 was the lowest achieved on that database. Another paper on using CNNs for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results at the time were achieved on the MNIST and NORB databases.
Darkforest: The family of Darkforest computer Go programs is based on convolutional neural networks. The most recent advances in Darkfmcts3 combined convolutional neural networks with more traditional Monte Carlo tree search. Darkfmcts3 is the most advanced version of Darkforest, which combines Facebook's most advanced convolutional neural network architecture from Darkfores2 with a Monte Carlo tree search.
Vision processing unit: Vision processing units are distinct from video processing units (which are specialised for video encoding and decoding) in their suitability for running machine vision algorithms such as convolutional neural networks, SIFT, and others.
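
To make the weight-sharing point in the excerpt list above concrete, here is a small back-of-the-envelope sketch comparing the number of free parameters in a fully connected layer and a convolutional layer over the same image; the 32x32x3 input, 64 output channels, and 3x3 kernel are illustrative assumptions.

```python
# Parameter-count comparison illustrating weight sharing in a convolutional layer.
# The sizes below are illustrative, not taken from any course material.
h, w, c_in, c_out = 32, 32, 3, 64   # input height, width, channels; output channels
k = 3                               # 3x3 convolution kernel

# Fully connected: every input pixel is wired to every unit in an equally sized output map.
dense_params = (h * w * c_in) * (h * w * c_out)

# Convolutional: one small kernel is shared across all spatial positions (plus one bias per channel).
conv_params = (k * k * c_in) * c_out + c_out

print(f"fully connected layer: {dense_params:,} parameters")  # ~201 million
print(f"convolutional layer:   {conv_params:,} parameters")   # 1,792
```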
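
The fine-tuning recipe described in the list above (pre-train on a large related data set, then continue training on the small in-domain set) might look roughly like this in TensorFlow/Keras; the MobileNetV2 backbone, the 5-class head, and the placeholder train_ds dataset are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Stage 1: reuse a backbone pre-trained on a large related data set (here ImageNet)
# and train only a new classification head on the small in-domain data set.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False                      # keep the pre-trained filters frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),  # hypothetical 5 in-domain classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5)             # train_ds: small in-domain dataset (placeholder)

# Stage 2: once the head has converged, unfreeze the backbone and fine-tune
# the whole network with a much lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5)
```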
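
As a rough illustration of how stacked layers of receptive fields cover progressively larger portions of the input image, the following sketch computes the receptive-field size of a single unit after each layer of a hypothetical convolution/pooling stack; the layer settings are illustrative assumptions.

```python
# Receptive-field growth through a stack of convolution and pooling layers.
# Each tuple is (layer name, kernel size, stride); the stack itself is illustrative.
stack = [
    ("conv 3x3", 3, 1),
    ("pool 2x2", 2, 2),
    ("conv 3x3", 3, 1),
    ("pool 2x2", 2, 2),
]

rf = 1      # receptive field of one unit, in input pixels
jump = 1    # spacing between neighbouring units, in input pixels
for name, k, s in stack:
    rf += (k - 1) * jump
    jump *= s
    print(f"after {name}: each unit sees a {rf}x{rf} patch of the input image")
```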