Deep Learning Specialization on Coursera

Related Wiki Topic: Article Example

Bayesian optimization: Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not require derivatives.
Bayesian optimization: Acquisition functions, and mixtures of these, all trade off exploration and exploitation so as to minimize the number of function queries. As such, Bayesian optimization is well suited for functions that are very expensive to evaluate.
Bayesian optimization: Common acquisition functions include expected improvement, Bayesian expected losses, upper confidence bounds (UCB), and Thompson sampling.
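
As a concrete illustration of one of these criteria, below is a minimal sketch of expected improvement for a minimization problem, assuming the surrogate model supplies a Gaussian posterior mean and standard deviation at each candidate point; the function name and the `xi` exploration parameter are illustrative, not from the article.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f, xi=0.01):
    """Expected improvement (minimization), given the surrogate's posterior
    mean `mu` and standard deviation `sigma` at candidate points, and the
    best objective value observed so far, `best_f`."""
    sigma = np.maximum(sigma, 1e-12)        # guard against zero variance
    improvement = best_f - mu - xi          # expected gain over the incumbent
    z = improvement / sigma
    # Closed form of E[max(best_f - f(x) - xi, 0)] under a Gaussian posterior.
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)
```

Maximizing this quantity over candidates favors points with either a low predicted mean (exploitation) or a high uncertainty (exploration).
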
Hyperparameter optimization: Bayesian optimization is a methodology for the global optimization of noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a statistical model of the function from hyperparameter values to the objective evaluated on a validation set. Intuitively, the methodology assumes that there is some smooth but noisy function that acts as a mapping from hyperparameters to the objective. One aims to gather observations in such a manner as to evaluate the machine learning model as few times as possible while revealing as much information as possible about this function and, in particular, the location of the optimum. Bayesian optimization relies on assuming a very general prior over functions which, when combined with observed hyperparameter values and the corresponding outputs, yields a distribution over functions. The methodology proceeds by iteratively picking hyperparameters to observe (experiments to run) in a manner that trades off exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected to have a good outcome). In practice, Bayesian optimization has been shown to obtain better results in fewer experiments than grid search and random search, due to its ability to reason about the quality of experiments before they are run.
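
To make the loop concrete, here is a self-contained sketch of Bayesian optimization of a single hyperparameter with a Gaussian-process surrogate and an expected-improvement criterion. The kernel, its length scale, the toy `objective`, and all other specifics are illustrative assumptions, not the article's method.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Gaussian-process posterior mean and std. dev. at the query points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)  # prior variance is 1
    return mu, np.sqrt(np.clip(var, 1e-12, None))

rng = np.random.default_rng(0)

def objective(lr):
    # Stand-in for "train with learning rate lr, return validation loss".
    return np.sin(12 * lr) * lr + 0.05 * rng.normal()

x = rng.uniform(0, 1, size=3)                 # a few random initial trials
y = np.array([objective(v) for v in x])
grid = np.linspace(0, 1, 200)                 # candidate hyperparameter values

for _ in range(15):
    mu, sigma = gp_posterior(x, y, grid)
    z = (y.min() - mu) / sigma
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]              # most promising next experiment
    x, y = np.append(x, x_next), np.append(y, objective(x_next))

print("best hyperparameter:", x[np.argmin(y)], "best loss:", y.min())
```

Each iteration spends one expensive evaluation where the model expects the largest improvement, which is how Bayesian optimization gets away with far fewer trials than grid or random search.
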
Bayesian optimization: a strategy for the global optimization of black-box functions that does not require derivatives.
Bayesian optimization: Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it.
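
A common choice of prior is a Gaussian process; the short sketch below draws a few sample functions from such a prior (squared-exponential kernel, illustrative length scale) to show what "treating the objective as a random function" looks like before any observations arrive.

```python
import numpy as np

x = np.linspace(0, 1, 100)
# Squared-exponential covariance: nearby inputs get highly correlated values.
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.2) ** 2)
rng = np.random.default_rng(1)
# Each row is one plausible objective function under the prior.
samples = rng.multivariate_normal(np.zeros(len(x)), K + 1e-9 * np.eye(len(x)), size=3)
```

Conditioning this prior on observed function values then yields the posterior distribution over functions that guides where to evaluate next.
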
Bayesian optimization: The term is generally attributed to Jonas Mockus, who coined it in a series of publications on global optimization in the 1970s and 1980s.
Optimization Toolbox: Portfolio optimization, cash flow matching, and other computational finance problems are solved with Optimization Toolbox.
Stochastic optimization: Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates; some use random iterates to solve stochastic problems, combining both meanings of stochastic optimization.
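
As an example of a method with random iterates, here is a minimal simulated-annealing sketch; the proposal scale, cooling schedule, and toy objective are all illustrative choices.

```python
import numpy as np

def simulated_annealing(f, x0, steps=5000, t0=1.0, seed=0):
    # Random iterates: propose a random move, always accept improvements,
    # and accept worse points with a probability that shrinks as the
    # temperature cools, which helps escape local minima.
    rng = np.random.default_rng(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        t = t0 / (1.0 + k)                    # simple cooling schedule
        cand = x + rng.normal(scale=0.1)      # random proposal
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Multimodal toy objective; the random iterates shrug off the local dips.
print(simulated_annealing(lambda v: (v - 2.0) ** 2 + 0.3 * np.sin(25 * v), x0=0.0))
```
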
Bayesian inference: A decision-theoretic justification of the use of Bayesian inference was given by Abraham Wald, who proved that every unique Bayesian procedure is admissible. Conversely, every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.
Bayesian network "X" is a Bayesian network with respect to "G" if, for any two nodes "u", "v":
Bayesian: Bayesian also refers to the application of this probability theory to the functioning of the brain.
Bayesian: Bayesian methods have also been applied to the interpretation of quantum mechanics.
Hydrological optimization: Examples of problems solved with hydrological optimization:
Bayesian probability: A decision-theoretic justification of the use of Bayesian inference (and hence of Bayesian probabilities) was given by Abraham Wald, who proved that every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures. Conversely, every Bayesian procedure is admissible.
Bayesian programming: Bayesian programming may also be seen as an algebraic formalism for specifying graphical models such as Bayesian networks, dynamic Bayesian networks, Kalman filters, and hidden Markov models. Indeed, Bayesian programming is more general than Bayesian networks and has a power of expression equivalent to probabilistic factor graphs.
Online optimization: Online optimization is a field of optimization theory, popular in computer science and operations research, that deals with optimization problems in which knowledge of the future is incomplete or absent (online).
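
Online gradient descent is a standard example of this setting: each loss is revealed only after the decision is made. The sketch below is illustrative; the quadratic losses and step-size schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.0
for t in range(1, 101):
    target = 3.0 + 0.1 * rng.normal()   # loss (w - target)^2 revealed at step t only
    grad = 2.0 * (w - target)           # gradient of the just-revealed loss
    w -= grad / (2.0 * t)               # decaying step size 1/(2t)
print("final estimate:", w)             # converges toward 3.0
```
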
Vector optimization: Vector optimization is a subarea of mathematical optimization in which problems with a vector-valued objective function are optimized with respect to a given partial ordering and subject to certain constraints. A multi-objective optimization problem is a special case of a vector optimization problem: the objective space is the finite-dimensional Euclidean space partially ordered by the component-wise "less than or equal to" ordering.
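
In symbols, a generic vector optimization problem and the ordering used in the multi-objective special case can be written as follows (a sketch of the standard notation, not taken from the article):

```latex
% Minimize a vector-valued objective over a feasible set S with respect to
% the partial order induced by a cone C.
\[
  \min_{x \in S} \; f(x), \qquad f : X \to \mathbb{R}^n,
\]
% Multi-objective special case: C is the nonnegative orthant, so the order
% is the component-wise "less than or equal to" relation.
\[
  y \preceq z \;\iff\; z - y \in C, \qquad C = \mathbb{R}^n_{\ge 0}.
\]
```
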
Query optimization: Classical query optimization associates each query plan with one scalar cost value. Parametric query optimization assumes that query plan cost depends on parameters whose values are unknown at optimization time. Such parameters can for instance represent the selectivity of query predicates that are not fully specified at optimization time but will be provided at execution time. Parametric query optimization therefore associates each query plan with a cost function that maps from a multi-dimensional parameter space to a one-dimensional cost space.
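
A toy sketch of the idea (all names and cost models hypothetical): each plan carries a cost function of a runtime parameter, here predicate selectivity, rather than one scalar cost, and the cheapest plan is picked once the parameter is known.

```python
# Each plan's cost is a function of a parameter unknown until execution
# time (here, predicate selectivity), not a single scalar.
plans = {
    "index_scan": lambda sel: 10 + 5000 * sel,   # cheap when few rows match
    "full_scan":  lambda sel: 1200,              # flat cost; wins at high selectivity
}

def pick_plan(selectivity):
    # At execution time the parameter is known, so each cost function
    # collapses to a scalar and the cheapest plan can be chosen.
    return min(plans, key=lambda name: plans[name](selectivity))

print(pick_plan(0.01))   # -> index_scan
print(pick_plan(0.90))   # -> full_scan
```
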
Bilevel optimization: A bilevel optimization problem can be generalized to a multi-objective bilevel optimization problem with multiple objectives at one or both levels. A general multi-objective bilevel optimization problem can be formulated as follows:
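
The formulation itself is cut off in the excerpt. A standard way to write the general multi-objective bilevel problem, with upper-level variables $x_u$ and lower-level variables $x_l$ (notation assumed, not taken from the article), is:

```latex
% Upper level: a vector of p objectives over both variable groups.
\[
  \min_{x_u \in X_U,\; x_l \in X_L} \; F(x_u, x_l)
    = \bigl(F_1(x_u, x_l), \ldots, F_p(x_u, x_l)\bigr)
\]
% Lower level: x_l must be optimal for the follower's vector of q
% objectives, given the leader's choice x_u.
\[
  \text{s.t.} \quad x_l \in \operatorname*{argmin}_{z \in X_L}
    \Bigl\{ f(x_u, z) = \bigl(f_1(x_u, z), \ldots, f_q(x_u, z)\bigr)
    \;:\; g_j(x_u, z) \le 0,\; j = 1, \ldots, J \Bigr\},
\]
\[
  \phantom{\text{s.t.} \quad} G_k(x_u, x_l) \le 0, \quad k = 1, \ldots, K.
\]
```
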