Deep Learning for Business

Start Date: 11/17/2019

Course Type: Common Course

Course Link: https://www.coursera.org/learn/deep-learning-business


Course Syllabus

For the course "Deep Learning for Business," the first module is "Deep Learning Products & Services." It starts with the lecture "Future Industry Evolution & Artificial Intelligence," which explains past, current, and future industry evolutions and how DL (Deep Learning) and ML (Machine Learning) technology will soon be used in almost every aspect of industry. The following lectures look into the hottest DL and ML products and services that are exciting the business world. First, the "Jeopardy!"-winning, versatile IBM Watson is introduced, along with its DeepQA and AdaptWatson systems that use DL technology. Then the Amazon Echo and Echo Dot products are introduced, along with Alexa, the cloud-based DL personal assistant that uses ASR (Automatic Speech Recognition) and NLU (Natural Language Understanding) technology. The next lecture focuses on LettuceBot, a DL system that plants lettuce seeds with automatic control of its fertilizer and herbicide nozzles. Then Athelas, a computer-vision-based DL blood cell analysis diagnostic system, is introduced, followed by AIVA (Artificial Intelligence Virtual Artist), a DL system that composes classical and symphonic music. As the last topic of module 1, the upcoming Apple watchOS 4 and the HomePod speaker, which were presented at Apple's 2017 WWDC (Worldwide Developers Conference), are introduced.


Course Introduction

Your smartphone, smartwatch, and automobile (if it is a newer model) have AI (Artificial Intelligence) inside serving you every day. In the near future, more advanced "self-learning" capable DL (Deep Learning) and ML (Machine Learning) technology will be used in almost every aspect of your business and industry. So now is the right time to learn what DL and ML are and how to use them to your company's advantage.

Course Tag

Deep Learning Business, Deep Learning Products, Deep Learning Services, Deep Learning Software, Artificial Intelligence (AI), Artificial Neural Network, Machine Learning

Related Wiki Topic

Article: Deep learning
Deep learning (also known as deep structured learning, hierarchical learning or deep machine learning) is a class of machine learning algorithms that use multiple layers of nonlinear processing units to learn representations of data.
Recommendation systems have used deep learning to extract meaningful deep features for a latent factor model for content-based music recommendation. Recently, a more general approach for learning user preferences from multiple domains using multiview deep learning has been introduced. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.
A deep Q-network (DQN) is a type of deep learning model developed at Google DeepMind which combines a deep convolutional neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs can learn directly from high-dimensional sensory inputs. Preliminary results were presented in 2014, with a paper published in February 2015 in Nature. The application discussed in this paper is limited to Atari 2600 gaming, although it has implications for other applications. However, well before this work, a number of reinforcement learning models had applied deep learning approaches.
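As a rough illustration of the Q-learning update at the heart of a DQN, here is a minimal sketch in PyTorch; the small fully connected network, hyperparameters, and toy batch are illustrative assumptions (DeepMind's agent used a convolutional network over Atari frames plus experience replay):

```python
import torch
import torch.nn as nn

# Q-network: maps a state observation to one Q-value per action.
# A small MLP stands in for DeepMind's convolutional network here.
class QNet(nn.Module):
    def __init__(self, state_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNet()
target_net = QNet()
target_net.load_state_dict(q_net.state_dict())  # periodically synced copy
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def dqn_update(state, action, reward, next_state, done):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    q_sa = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = reward + gamma * target_net(next_state).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch of transitions (random data standing in for environment samples).
s = torch.randn(8, 4)
a = torch.randint(0, 2, (8,))
r = torch.randn(8)
s2 = torch.randn(8, 4)
d = torch.zeros(8)
dqn_update(s, a, r, s2, d)
```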
Deep learning exploits this idea of hierarchical explanatory factors, where higher-level, more abstract concepts are learned from the lower-level ones. These architectures are often constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features are useful for learning.
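A compact sketch of the greedy layer-by-layer idea, using stacked autoencoders in PyTorch (one common way to realize it; the dimensions, activation, and training loop are illustrative assumptions). Each layer is trained to reconstruct the output of the layer below, then frozen, so higher layers learn more abstract codes built on lower-level ones:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
data = torch.randn(256, 32)          # stand-in for real input vectors

layer_dims = [32, 16, 8]             # each layer learns a more abstract code
encoders = []
inputs = data

# Greedy layer-by-layer: train one autoencoder at a time, freeze it, and
# feed its codes to the next layer, so higher layers build on lower ones.
for d_in, d_out in zip(layer_dims[:-1], layer_dims[1:]):
    enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
    for _ in range(200):
        recon = dec(torch.relu(enc(inputs)))
        loss = nn.functional.mse_loss(recon, inputs)
        opt.zero_grad()
        loss.backward()
        opt.step()
    encoders.append(enc)
    inputs = torch.relu(enc(inputs)).detach()   # frozen codes for the next layer

print(inputs.shape)   # torch.Size([256, 8]): the most abstract representation
```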
A main criticism of deep learning concerns the lack of theory surrounding many of the methods. Learning in the most common deep architectures is implemented using gradient descent; while gradient descent has been understood for a while now, the theory surrounding other algorithms, such as contrastive divergence, is less clear (e.g., does it converge? If so, how fast? What is it approximating?). Deep learning methods are often looked at as a black box, with most confirmations done empirically rather than theoretically.
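To make the contrastive divergence reference concrete, here is a minimal CD-1 update for a binary restricted Boltzmann machine in NumPy (biases omitted and sizes chosen for brevity; the whole snippet is an illustrative sketch). The update approximates the log-likelihood gradient with a single Gibbs step, which is exactly the approximation whose convergence properties the criticism above questions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, lr=0.1):
    """One CD-1 step for a binary RBM (biases omitted for brevity).

    Approximates the log-likelihood gradient with a single Gibbs step
    instead of the intractable expectation under the model distribution."""
    # Positive phase: hidden probabilities driven by the data.
    h0_prob = sigmoid(v0 @ W)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one reconstruction step (the CD approximation).
    v1_prob = sigmoid(h0 @ W.T)
    h1_prob = sigmoid(v1_prob @ W)
    # <v h> under the data minus <v h> under the reconstruction.
    return W + lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))

W = 0.01 * rng.standard_normal((6, 4))    # 6 visible units, 4 hidden units
v = rng.integers(0, 2, 6).astype(float)   # one binary training vector
for _ in range(100):
    W = cd1_update(W, v)
```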
Compound hierarchical-deep models compose deep networks with non-parametric Bayesian models. Features can be learned using deep architectures such as DBNs, DBMs, deep autoencoders, convolutional variants, ssRBMs, deep coding networks, DBNs with sparse feature learning, recursive neural networks, conditional DBNs, and denoising autoencoders. This provides a better representation, allowing faster learning and more accurate classification with high-dimensional data. However, these architectures are poor at learning novel classes with few examples, because all network units are involved in representing the input (a "distributed representation") and must be adjusted together (a high degree of freedom). Limiting the degree of freedom reduces the number of parameters to learn, facilitating learning of new classes from few examples. Hierarchical Bayesian (HB) models allow learning from few examples, for example in computer vision, statistics, and cognitive science.
Deep learning algorithms transform their inputs through more layers than shallow learning algorithms. At each layer, the signal is transformed by a processing unit, like an artificial neuron, whose parameters are 'learned' through training. A chain of transformations from input to output is a "credit assignment path" (CAP). CAPs describe potentially causal connections between input and output and may vary in length: for a feedforward neural network, the depth of the CAPs (and thus of the network) is the number of hidden layers plus one (as the output layer is also parameterized), but for recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP is potentially unlimited in length. There is no universally agreed-upon threshold of depth dividing shallow learning from deep learning, but most researchers in the field agree that deep learning has multiple nonlinear layers (CAP > 2), and Juergen Schmidhuber considers CAP > 10 to be very deep learning.
For supervised learning tasks, deep learning methods obviate feature engineering by translating the data into compact intermediate representations akin to principal components, and derive layered structures which remove redundancy in the representation.
In the long history of speech recognition, both shallow and deep learning (e.g., recurrent nets) of artificial neural networks have been explored for many years.
Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc. Some representations are better than others at simplifying the learning task (e.g., face recognition or facial expression recognition). One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.
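The point about alternative representations fits in a few lines of NumPy; the tiny random image and the simple horizontal-difference edge map below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((28, 28))        # a grayscale image as a 2-D intensity grid

# Representation 1: a flat vector of per-pixel intensity values.
pixels = img.reshape(-1)          # shape (784,)

# Representation 2 (more abstract): edge strengths from neighboring-pixel
# differences, closer to the "set of edges" description above.
edges = np.abs(np.diff(img, axis=1))   # shape (28, 27)
```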
Numerous researchers now use variants of a deep learning RNN called the long short-term memory (LSTM) network.
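Since the completion above names the LSTM, here is a minimal LSTM sequence classifier in PyTorch; the feature size, hidden size, and linear classification head are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=32, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        out, (h_n, c_n) = self.lstm(x)     # h_n: final hidden state per layer
        return self.head(h_n[-1])          # classify from the last hidden state

model = LSTMClassifier()
x = torch.randn(4, 100, 8)                 # 4 sequences, 100 time steps each
logits = model(x)                          # (4, 3) class scores
```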
Deep learning "Deep learning" has been characterized as a buzzword, or a rebranding of neural networks.
Many deep learning algorithms are applied to unsupervised learning tasks. This is an important benefit because unlabeled data are usually more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.
These definitions have in common (1) multiple layers of nonlinear processing units and (2) the supervised or unsupervised learning of feature representations in each layer, with the layers forming a hierarchy from low-level to high-level features. The composition of a layer of nonlinear processing units used in a deep learning algorithm depends on the problem to be solved. Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of complicated propositional formulas. They may also include latent variables organized layer-wise in deep generative models, such as the nodes in Deep Belief Networks and Deep Boltzmann Machines.
The first general, working learning algorithm for supervised deep feedforward multilayer perceptrons was published by Ivakhnenko and Lapa in 1965. A 1971 paper described a deep network with eight layers trained by the group method of data handling, an algorithm that is still popular in the current millennium. These ideas were implemented in a computer identification system, "Alpha", which demonstrated the learning process. Other working deep learning architectures, specifically those built from artificial neural networks (ANNs), date back to the Neocognitron introduced by Kunihiko Fukushima in 1980. The ANNs themselves date back even further; the challenge was how to train networks with multiple layers.
Since its resurgence, deep learning has become part of many state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as on a range of large-vocabulary speech recognition tasks, are constantly being improved with new applications of deep learning. Recently, it was shown that deep learning architectures in the form of convolutional neural networks are nearly the best performing; however, these are more widely used in computer vision than in ASR, and modern large-scale speech recognition is typically based on CTC for LSTM.
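The "CTC for LSTM" setup can be sketched with PyTorch's built-in nn.CTCLoss; the frame count, vocabulary size, and random tensors below are illustrative assumptions standing in for real acoustic-model outputs:

```python
import torch
import torch.nn as nn

T, N, C = 50, 2, 28   # 50 time frames, batch of 2, 27 symbols + blank (index 0)

# Log-probabilities per frame, shaped (time, batch, classes) as CTCLoss
# expects; in a real system these come from an LSTM acoustic model.
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)

targets = torch.randint(1, C, (N, 10), dtype=torch.long)   # label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients would train the network that produced log_probs
```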
Such supervised deep learning methods also were the first artificial pattern recognizers to achieve human-competitive performance on certain tasks.
This work offered technical insights into how to integrate deep learning into the existing, highly efficient run-time speech decoding systems deployed by all major players in the speech recognition industry. The history of this significant development in deep learning has been described and analyzed in recent books and articles.
Deep learning algorithms are based on distributed representations. The underlying assumption behind distributed representations is that observed data are generated by the interactions of factors organized in layers. Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can be used to provide different amounts of abstraction.
If there is a lot of learnable predictability in the incoming data sequence, then the highest-level RNN can use supervised learning to easily classify even deep sequences with very long time intervals between important events. In 1993, such a system already solved a "Very Deep Learning" task that requires more than 1000 subsequent layers in an RNN unfolded in time.