Machine Learning: Create a Neural Network that Predicts whether an Image is a Car or Airplane.

Start Date: 07/05/2020

Course Type: Common Course

Course Link: https://www.coursera.org/learn/predict-picture-type-using-neural-network
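
The course's stated task is a binary image classifier (car vs. airplane). As a rough, minimal sketch of what such a model can look like, assuming 64x64 RGB inputs with labels 0 = car and 1 = airplane, the Keras snippet below builds and trains a small convolutional network; the layer sizes and the `train_images`/`train_labels` arrays are illustrative placeholders, not the course's actual dataset or architecture.

```python
# Hypothetical sketch of a small convolutional car-vs-airplane classifier.
# Input shape, layer sizes, and the train_images/train_labels arrays are
# placeholders, not material from the course itself.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # assumed 64x64 RGB images
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # output = P(image is an airplane)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Random arrays standing in for real, preprocessed car/airplane photos.
train_images = np.random.rand(32, 64, 64, 3).astype("float32")
train_labels = np.random.randint(0, 2, size=(32,))
model.fit(train_images, train_labels, epochs=1, batch_size=8)
```

In a real run the random arrays would be replaced by preprocessed photographs and the model trained for many more epochs.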

Course Tag

Related Wiki Topic

Article Example
Artificial neural network: Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles which allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.
Google Neural Machine Translation: Google Neural Machine Translation (GNMT) is a neural machine translation (NMT) system developed by Google and introduced in November 2016 that uses an artificial neural network to increase fluency and accuracy in Google Translate.
Neural network software: Neural network software is used to simulate, research, develop, and apply artificial neural networks, software concepts adapted from biological neural networks, and, in some cases, a wider array of adaptive systems such as artificial intelligence and machine learning.
Machine learning: An artificial neural network (ANN) learning algorithm, usually called "neural network" (NN), is a learning algorithm that is inspired by the structure and functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure in an unknown joint probability distribution between observed variables.
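
To make the "interconnected group of artificial neurons" concrete, here is a minimal NumPy sketch of a single forward pass through one hidden layer; the sizes and random weights are arbitrary and only illustrate how the non-linear activation makes the input-to-output mapping non-linear.

```python
# Minimal sketch of a feedforward pass: input -> hidden (non-linear) -> output.
# Weight matrices are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))          # 4 input features
W1 = rng.normal(size=(4, 8))       # input -> 8 hidden neurons
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))       # hidden -> 1 output
b2 = np.zeros(1)

hidden = np.tanh(x @ W1 + b1)      # the non-linear activation is what makes
output = hidden @ W2 + b2          # the whole network a non-linear model
print(output)
```
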
Random neural network: Random neural networks (RNNs) are also related to artificial neural networks, which (like the random neural network) have gradient-based learning algorithms. The learning algorithm for an n-node random neural network that includes feedback loops (making it also a recurrent neural network) is of computational complexity O(n^3) (the number of computations is proportional to the cube of n, the number of neurons). The random neural network can also be trained with other learning algorithms, such as reinforcement learning. The RNN has been shown to be a universal approximator for bounded and continuous functions.
Neural machine translation: Neural machine translation (NMT) is an approach to machine translation in which a large neural network is trained by deep learning techniques. It is a radical departure from phrase-based statistical translation approaches, in which a translation system consists of subcomponents that are separately engineered. In November 2016, Google and Microsoft announced that their translation services now use NMT. Google uses Google Neural Machine Translation (GNMT) in preference to its previous statistical methods. Microsoft uses a similar deep-neural-network-powered machine translation technology for all its speech translations (including Microsoft Translator live and Skype Translator). An open-source neural machine translation system, OpenNMT, has additionally been released by the Harvard NLP group.
Cellular neural network: In computer science and machine learning, cellular neural networks (CNN), also called cellular nonlinear networks (CNN), are a parallel computing paradigm similar to neural networks, with the difference that communication is allowed between neighbouring units only. Typical applications include image processing, analyzing 3D surfaces, solving partial differential equations, reducing non-visual problems to geometric maps, and modelling biological vision and other sensory-motor organs.
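
A toy sketch of the neighbour-only communication described above, assuming a 2D grid of cells whose next state depends only on a 3x3 neighbourhood; the coupling kernel and clipping rule are invented for illustration and are not a standard cellular-network template.

```python
# Toy cellular-network step: each cell's next state is a clipped weighted
# sum of its 3x3 neighbourhood, so information travels one cell per step.
import numpy as np
from scipy.ndimage import convolve

state = np.random.rand(32, 32)                 # grid of cell states
kernel = np.array([[0.05, 0.1, 0.05],
                   [0.1,  0.4, 0.1 ],
                   [0.05, 0.1, 0.05]])         # illustrative coupling weights

def step(s):
    # neighbour-only update: convolution with a 3x3 kernel, then clipping
    return np.clip(convolve(s, kernel, mode="nearest"), 0.0, 1.0)

for _ in range(10):
    state = step(state)
```
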
Image segmentation: Neural network segmentation relies on processing small areas of an image using an artificial neural network or a set of neural networks. After such processing, the decision-making mechanism marks the areas of the image according to the category recognized by the neural network. A type of network designed especially for this is the Kohonen map.
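
Since the Kohonen map is singled out above, the sketch below shows its core update rule from scratch, applied to pixel colours as a crude colour-based segmentation; the 4x4 grid, learning rate, and iteration count are arbitrary choices, and `pixels` is a random stand-in for real image data.

```python
# Minimal Kohonen (self-organizing map) sketch: map pixel colours onto a
# small grid of prototype colours, then label each pixel by its closest node.
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.random((500, 3))                  # stand-in for RGB pixels in [0, 1]
grid = rng.random((4, 4, 3))                   # 4x4 map of prototype colours

for t in range(1000):
    p = pixels[rng.integers(len(pixels))]
    d = np.linalg.norm(grid - p, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)       # best-matching unit
    for i in range(4):
        for j in range(4):
            dist2 = (i - bi) ** 2 + (j - bj) ** 2
            h = np.exp(-dist2 / 2.0)                        # neighbourhood function
            grid[i, j] += 0.1 * h * (p - grid[i, j])        # pull node towards pixel

# Segment label per pixel = index of the nearest prototype colour.
labels = np.argmin(
    np.linalg.norm(pixels[:, None, :] - grid.reshape(-1, 3)[None, :, :], axis=2),
    axis=1)
```
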
Artificial neural network: Neural network research stagnated after the publication of machine learning research by Marvin Minsky and Seymour Papert (1969), who discovered two key issues with the computational machines that processed neural networks. The first was that basic perceptrons were incapable of processing the exclusive-or circuit. The second significant issue was that computers didn't have enough processing power to effectively handle the long run time required by large neural networks. Neural network research slowed until computers achieved greater processing power.
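
The exclusive-or limitation is easy to reproduce: a single linear unit cannot separate XOR, while a network with one hidden layer can. The scikit-learn sketch below illustrates this; the hidden-layer size and solver are arbitrary choices.

```python
# XOR: a linear model (Perceptron) cannot fit it, a one-hidden-layer MLP can.
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                      # exclusive-or labels

linear = Perceptron(max_iter=1000).fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=0).fit(X, y)

print("perceptron accuracy:", linear.score(X, y))   # a linear model cannot exceed 0.75
print("mlp accuracy:", mlp.score(X, y))             # typically 1.0
```
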
Artificial neural network: The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (biological neural network models) and theory (statistical learning theory and information theory).
Logic learning machine: Logic Learning Machine (LLM) is a machine learning method based on the generation of intelligible rules. LLM is an efficient implementation of the Switching Neural Network (SNN) paradigm, developed by Marco Muselli, Senior Researcher at the Italian National Research Council CNR-IEIIT in Genoa.
Artificial neural network: Support vector machines and other, much simpler methods such as linear classifiers gradually overtook neural networks in machine learning popularity. As earlier challenges in training deep neural networks were successfully addressed with methods such as unsupervised pre-training, and computing power increased through the use of GPUs and distributed computing, neural networks were again deployed on a large scale, particularly in image and visual recognition problems. This became known as "deep learning", although deep learning is not strictly synonymous with deep neural networks.
Artificial neural network: When the number of training examples grows without bound, some form of online machine learning must be used, where the cost is partially minimized as each new example is seen. While online machine learning is often used when the dataset is fixed, it is most useful in the case where the distribution changes slowly over time. In neural network methods, some form of online machine learning is frequently used for finite datasets.
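
A small sketch of the idea that the cost is "partially minimized as each new example is seen", using scikit-learn's `partial_fit` to update a linear model one mini-batch at a time; the simulated data stream is a placeholder.

```python
# Online learning sketch: update the model incrementally as examples arrive,
# instead of minimizing the cost over the whole dataset at once.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])

for step in range(100):                          # simulated data stream
    X_batch = rng.normal(size=(10, 5))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)   # one incremental update
```
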
Machine learning: Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate, or apply knowledge. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learners that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
Physical neural network: A physical neural network is a type of artificial neural network in which an electrically adjustable resistance material is used to emulate the function of a neural synapse. "Physical" neural network is used to emphasize the reliance on physical hardware used to emulate neurons as opposed to software-based approaches which simulate neural networks. More generally the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse.
Optical neural network: An optical neural network is a physical implementation of an artificial neural network with optical components.
Amazon Machine Image: An Amazon Machine Image (AMI) is a special type of virtual appliance that is used to create a virtual machine within the Amazon Elastic Compute Cloud ("EC2"). It serves as the basic unit of deployment for services delivered using EC2.
Neural machine translation: A bidirectional recurrent neural network (RNN), known as an "encoder", is used by the neural network to encode a source sentence for a second RNN, known as a "decoder", that is used to predict words in the target language.
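
A heavily simplified PyTorch sketch of this encoder-decoder pattern: a bidirectional GRU encodes the source token ids, and a second GRU greedily predicts target token ids one step at a time. The vocabulary sizes, hidden dimensions, and the assumption that token id 0 is the start-of-sentence symbol are all illustrative; real NMT systems add attention, beam search, and training code.

```python
# Sketch of an encoder-decoder translator: a bidirectional GRU encodes the
# source sentence, and a second GRU predicts target words one at a time.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 32, 64   # illustrative sizes

src_embed = nn.Embedding(SRC_VOCAB, EMB)
encoder = nn.GRU(EMB, HID, bidirectional=True, batch_first=True)
tgt_embed = nn.Embedding(TGT_VOCAB, EMB)
decoder = nn.GRU(EMB, 2 * HID, batch_first=True)      # state sized to match encoder
out = nn.Linear(2 * HID, TGT_VOCAB)

src = torch.randint(0, SRC_VOCAB, (1, 7))             # one source sentence (token ids)
_, h = encoder(src_embed(src))                        # forward and backward final states
state = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)  # combined encoder summary

token = torch.zeros(1, 1, dtype=torch.long)           # assumed <sos> token id 0
for _ in range(10):                                   # greedy decoding loop
    y, state = decoder(tgt_embed(token), state)
    token = out(y).argmax(dim=-1)                     # next predicted word id
```
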
Boosting (machine learning): Object categorization is a typical task of computer vision which involves determining whether or not an image contains some specific category of object. The idea is closely related to recognition, identification, and detection. Appearance-based object categorization typically involves feature extraction, learning a classifier, and applying the classifier to new examples. There are many ways to represent a category of objects, e.g. shape analysis, bag-of-words models, or local descriptors such as SIFT. Examples of supervised classifiers are the Naive Bayes classifier, SVMs, mixtures of Gaussians, and neural networks. However, research has shown that object categories and their locations in images can be discovered in an unsupervised manner as well.
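
A compact sketch of that feature-extraction-plus-classifier pipeline, using HOG descriptors from scikit-image and a linear SVM from scikit-learn; the random arrays stand in for real car/airplane photographs and the labels are invented.

```python
# Appearance-based categorization sketch: extract HOG features, train an SVM,
# then classify a new image. Random arrays stand in for real images.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))                # grayscale stand-ins
labels = rng.integers(0, 2, size=20)             # 0 = car, 1 = airplane (assumed)

features = np.array([hog(img, pixels_per_cell=(16, 16)) for img in images])
clf = LinearSVC().fit(features, labels)          # learn a classifier on the features

new_image = rng.random((64, 64))
print(clf.predict([hog(new_image, pixels_per_cell=(16, 16))]))  # apply to a new example
```
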
Machine learning: Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.
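
That two-category fit/predict cycle maps directly onto scikit-learn's `SVC`; the toy points below are made up purely to show the interface.

```python
# SVM sketch: fit on labelled examples from two categories, then predict
# the category of a new example. Data points are invented for illustration.
from sklearn.svm import SVC

X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 0.8]]
y = [0, 0, 1, 1]                                  # two categories

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.1, 0.2], [0.8, 0.9]]))      # -> [0 1]
```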