Regression Modeling in Practice

Start Date: 11/03/2019

Course Type: Common Course

Course Link: https://www.coursera.org/learn/regression-modeling-practice

Course Syllabus

This session starts where the Data Analysis Tools course left off. This first set of videos provides you with some conceptual background about the major types of data you may work with, which will increase your competence in choosing the statistical analysis that’s most appropriate given the structure of your data, and in understanding the limitations of your data set. We also introduce you to the concept of confounding variables, which are variables that may be the reason for the association between your explanatory and response variables. Finally, you will gain experience in describing your data by writing about your sample, the study data collection procedures, and your measures and data management steps.

Course Introduction

This course focuses on one of the most important tools in your data analysis arsenal: regression analysis.

Course Tag

Logistic Regression, Data Analysis, Python Programming, Regression Analysis

Related Wiki Topic

Article Example
Multivariate adaptive regression splines No regression modeling technique is best for all situations.
Regression-kriging Regression-kriging is used in various applied fields, including meteorology, climatology, soil mapping, geological mapping, and species distribution modeling. The only requirement for using regression-kriging rather than, e.g., ordinary kriging is that one or more covariate layers exist that are significantly correlated with the feature of interest.
Ordinal regression In statistics, ordinal regression (also called "ordinal classification") is a type of regression analysis used for predicting an ordinal variable, i.e. a variable whose value exists on an arbitrary scale where only the relative ordering between different values is significant. It can be considered an intermediate problem between regression and classification. Examples of ordinal regression are ordered logit and ordered probit. Ordinal regression turns up often in the social sciences, for example in the modeling of human levels of preference (on a scale from, say, 1–5 for "very poor" through "excellent"), as well as in information retrieval. In machine learning, ordinal regression may also be called ranking learning.
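
For instance, a minimal ordered-logit fit in Python; this assumes statsmodels (0.12 or newer, where OrderedModel lives in statsmodels.miscmodels.ordinal_model) and uses simulated preference data rather than anything from a real survey:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Latent "preference" with logistic noise, cut into 4 ordered categories.
latent = 1.2 * x + rng.logistic(size=n)
y = pd.Series(pd.Categorical(np.digitize(latent, bins=[-1.0, 0.5, 2.0]),
                             ordered=True))

mod = OrderedModel(y, pd.DataFrame({"x": x}), distr="logit")  # ordered logit
res = mod.fit(method="bfgs", disp=False)
print(res.params)  # slope on x plus the estimated category thresholds
```
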
Linear regression In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable "y" and one or more explanatory variables (or independent variables) denoted "X". The case of one explanatory variable is called "simple linear regression". For more than one explanatory variable, the process is called "multiple linear regression". (This term is distinct from "multivariate linear regression", where multiple correlated dependent variables are predicted, rather than a single scalar variable.)
Regression analysis In linear regression, the model specification is that the dependent variable, $y_i$, is a linear combination of the "parameters" (but need not be linear in the "independent variables"). For example, in simple linear regression for modeling $n$ data points there is one independent variable, $x_i$, and two parameters, $\beta_0$ and $\beta_1$: $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i,\quad i = 1, \dots, n.$
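
To make the notation concrete, here is a short numpy sketch with made-up parameters ($\beta_0 = 2.0$, $\beta_1 = 0.5$) that recovers both by ordinary least squares; the multiple-regression case is handled the same way through a design matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)  # y_i = b0 + b1*x_i + eps_i

# Closed-form OLS for the simple case:
# b1 = cov(x, y) / var(x), b0 = mean(y) - b1 * mean(x)
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()

# Multiple linear regression: same idea with a design matrix.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print((b0, b1), coef)  # both pairs should be close to (2.0, 0.5)
```
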
Regression analysis The performance of regression analysis methods in practice depends on the form of the data generating process, and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes testable if a sufficient quantity of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small effects or questions of causality based on observational data, regression methods can give misleading results.
Kitchen sink regression The kitchen sink regression is an example of the practice of data dredging.
Logistic regression The basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability "p" using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials. The linear predictor function $f(i)$ for a particular data point "i" is written as: $f(i) = \beta_0 + \beta_1 x_{1,i} + \cdots + \beta_m x_{m,i}.$
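
A small numpy sketch of that mechanism, with purely illustrative coefficients: the linear predictor $f(i)$ is passed through the logistic function to turn it into the probability "p":

```python
import numpy as np

def predict_proba(X, beta0, beta):
    """p_i = 1 / (1 + exp(-f(i))), with f(i) = beta0 + sum_j beta_j * x_{j,i}."""
    f = beta0 + X @ beta               # linear predictor, one value per row i
    return 1.0 / (1.0 + np.exp(-f))    # logistic (sigmoid) link

X = np.array([[0.5, 1.2],
              [2.0, -0.3]])
print(predict_proba(X, beta0=-1.0, beta=np.array([0.8, 0.4])))
```
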
Nonparametric regression Understanding and using these controls on overfitting is essential to effective modeling with nonparametric regression. Nonparametric regression models can become overfit either by including too many predictors or by using small smoothing parameters (also known as bandwidth or tolerance). This can make a big difference with special problems, such as small data sets or clumped distributions along predictor variables.
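
The bandwidth effect is easy to demonstrate with a Nadaraya–Watson kernel smoother, one of the simplest nonparametric regression estimators; in this sketch (simulated sine-wave data), shrinking the bandwidth makes the fit chase the noise:

```python
import numpy as np

def kernel_regression(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)                 # kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 2 * np.pi, 80))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
grid = np.linspace(0, 2 * np.pi, 200)

smooth = kernel_regression(x, y, grid, bandwidth=0.5)   # reasonable fit
wiggly = kernel_regression(x, y, grid, bandwidth=0.05)  # overfits the noise
```
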
Segmented regression Segmented linear regression is segmented regression whereby the relations in the intervals are obtained by linear regression.
Meta-regression Meta-regression has been employed as a technique to derive improved parameter estimates that are of direct use to policy makers. Meta-regression provides a framework for replication and offers a sensitivity analysis for model specification. There are a number of strategies for identifying and coding empirical observational data. Meta-regression models can be extended for modeling within-study dependence, excess heterogeneity and publication selection. The simple regression model does not allow for within-study variation. The fixed effects regression model does not allow for between-study variation. The random or mixed effects model allows for both within-study and between-study variation and is therefore the most appropriate model to choose in many applications. Whether there is between-study variation (excess heterogeneity) can be tested under the assumption that effect sizes are homogeneous or have a tendency to a central mean. If the test shows that the effect sizes have excess heterogeneity, the random or mixed effects meta-regression model may be most appropriate.
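
As a sketch of the fixed-effects case only: fixed-effect meta-regression reduces to weighted least squares with inverse-variance weights. The study effects, variances, and moderator below are invented for illustration; a random or mixed effects fit would add an estimated between-study variance component to each weight:

```python
import numpy as np

def fixed_effect_metareg(effects, variances, moderator):
    """Fixed-effect meta-regression: WLS of study effect sizes on a moderator,
    each study weighted by the inverse of its sampling variance."""
    w = 1.0 / variances
    X = np.column_stack([np.ones_like(moderator), moderator])
    XtWX = X.T @ (X * w[:, None])
    XtWy = X.T @ (w * effects)
    return np.linalg.solve(XtWX, XtWy)   # intercept and moderator coefficient

effects = np.array([0.30, 0.45, 0.10, 0.60])     # hypothetical study estimates
variances = np.array([0.02, 0.05, 0.01, 0.08])   # hypothetical sampling variances
moderator = np.array([1.0, 2.0, 0.5, 3.0])       # e.g. average dose per study
print(fixed_effect_metareg(effects, variances, moderator))
```
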
Regression analysis Regression methods continue to be an area of active research. In recent decades, new methods have been developed for robust regression, regression involving correlated responses such as time series and growth curves, regression in which the predictor (independent variable) or response variables are curves, images, graphs, or other complex data objects, regression methods accommodating various types of missing data, nonparametric regression, Bayesian methods for regression, regression in which the predictor variables are measured with error, regression with more predictor variables than observations, and causal inference with regression.
Cluster-weighted modeling In the same way as for regression analysis, it will be important to consider preliminary data transformations as part of the overall modeling strategy if the core components of the model are to be simple regression models for the cluster-wise condition densities, and normal distributions for the cluster-weighting densities "p"("x").
Regression dilution Hughes (1993) shows that the regression dilution ratio methods apply approximately in survival models. Rosner (1992) shows that the ratio methods apply approximately to logistic regression models. Carroll et al. (1995) give more detail on regression dilution in nonlinear models, presenting the regression dilution ratio methods as the simplest case of "regression calibration" methods, in which additional covariates may also be incorporated.
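
The attenuation, and the ratio-based correction, can be simulated directly. Here the measurement-error variance is treated as known for illustration; in practice it would itself be estimated, e.g. from replicate measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x_true = rng.normal(size=n)
x_obs = x_true + rng.normal(scale=0.8, size=n)   # predictor measured with error
y = 2.0 * x_true + rng.normal(scale=0.5, size=n) # true slope is 2.0

naive = np.cov(x_obs, y, ddof=1)[0, 1] / np.var(x_obs, ddof=1)
# Regression dilution ratio lambda = var(x_true) / var(x_obs);
# dividing the naive slope by lambda undoes the attenuation in expectation.
lam = 1.0 / (1.0 + 0.8 ** 2)
print(naive, naive / lam)  # roughly 1.22 and 2.0
```
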
Regression analysis In statistical modeling, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or 'predictors'). More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables – that is, the average value of the dependent variable when the independent variables are fixed. Less commonly, the focus is on a quantile, or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a probability distribution. A related but distinct approach is necessary condition analysis (NCA), which estimates the maximum (rather than average) value of the dependent variable for a given value of the independent variable (ceiling line rather than central line) in order to identify what value of the independent variable is necessary but not sufficient for a given value of the dependent variable.
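
For the less common quantile-focused case, one option is the QuantReg class in statsmodels; this sketch contrasts the conditional median (q = 0.5) with the OLS conditional mean on simulated, heavy-tailed data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.7 * x + rng.standard_t(df=3, size=n)  # heavy-tailed noise
X = sm.add_constant(x)

mean_fit = sm.OLS(y, X).fit()              # conditional expectation
median_fit = sm.QuantReg(y, X).fit(q=0.5)  # conditional median
print(mean_fit.params, median_fit.params)
```
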
Deming regression The Deming regression is only slightly more difficult to compute than simple linear regression. Most statistical software packages used in clinical chemistry offer Deming regression.
Regression analysis Many techniques for carrying out regression analysis have been developed. Familiar methods such as linear regression and ordinary least squares regression are parametric, in that the regression function is defined in terms of a finite number of unknown parameters that are estimated from the data. Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional.
Segmented regression Segmented regression, also known as piecewise regression or "broken-stick regression", is a method in regression analysis in which the independent variable is partitioned into intervals and a separate line segment is fit to each interval. Segmented regression analysis can also be performed on multivariate data by partitioning the various independent variables. Segmented regression is useful when the independent variables, clustered into different groups, exhibit different relationships between the variables in these regions. The boundaries between the segments are "breakpoints".
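
A minimal sketch: with a single breakpoint assumed known, a continuous two-segment line can be fit by ordinary least squares using a hinge basis. In practice the breakpoint is usually unknown and must be estimated, e.g. by a grid search over candidate values:

```python
import numpy as np

def fit_two_segments(x, y, breakpoint):
    """OLS fit of a continuous broken-stick model.
    Basis [1, x, max(x - breakpoint, 0)]: the third coefficient is the
    change in slope at the breakpoint."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - breakpoint, 0.0)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 120)
y = np.where(x < 5, 1.0 + 0.5 * x, 3.5 + 2.0 * (x - 5.0))
y = y + rng.normal(scale=0.4, size=x.size)
print(fit_two_segments(x, y, breakpoint=5.0))  # ~ (1.0, 0.5, 1.5)
```
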
Deming regression Deming regression is equivalent to the maximum likelihood estimation of an errors-in-variables model in which the errors for the two variables are assumed to be independent and normally distributed, and the ratio of their variances, denoted "δ", is known. In practice, this ratio might be estimated from related data sources; however, the regression procedure takes no account of possible errors in estimating this ratio.
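
The closed-form estimator behind this is short; a minimal numpy sketch, using the convention that "δ" is the ratio of the error variance of y to that of x (some texts define the reciprocal):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression with a known error-variance ratio
    delta = var(errors in y) / var(errors in x).
    delta = 1 gives orthogonal regression."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = ((syy - delta * sxx)
             + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
             ) / (2.0 * sxy)
    intercept = y.mean() - slope * x.mean()
    return intercept, slope

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
print(deming(x, y))  # close to (0, 1) for this near-identity data
```
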
Behavioral modeling in hydrology In hydrology, behavioral modeling is an approach that focuses on modeling the behavior of hydrological systems.