Multiple Regression Analysis in Public Health

Start Date: 07/05/2020

Course Type: Common Course

Course Link: https://www.coursera.org/learn/multiple-regression-analysis-public-health

About Course

Biostatistics is the application of statistical reasoning to the life sciences, and it's the key to unlocking the data gathered by researchers and the evidence presented in the scientific public health literature. In this course, you'll extend simple regression to the prediction of a single outcome of interest on the basis of multiple variables. Along the way, you'll be introduced to a variety of methods, and you'll practice interpreting data and performing calculations on real data from published studies. Topics include multiple logistic regression, the Spline approach, confidence intervals, p-values, multiple Cox regression, adjustment, and effect modification.

Course Syllabus

An Overview of Multiple Regression for Estimation, Adjustment, and Basic Prediction, and Multiple Linear Regression
Multiple Logistic Regression
Multiple Cox Regression
Course Project


Course Introduction

Multiple regression is the analysis of the association between an outcome of interest and several predictor variables considered together. In public health research it serves three main purposes: estimation of the strength of association between a predictor and an outcome; adjustment for confounding, the bias that arises when predictors are not independent of one another; and prediction of an outcome from multiple variables at once. Multiple regression can also be used to detect and describe effect modification, where the association between a predictor and the outcome differs across levels of another variable. The course covers multiple linear regression for continuous outcomes, multiple logistic regression for binary outcomes, and multiple Cox regression for time-to-event outcomes, and concludes with a course project applying these methods to real data from published studies.
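Adjustment for confounding, described above, can be sketched numerically. The following is a minimal illustration (not taken from the course) using simulated, entirely hypothetical data in which age confounds an exposure-outcome association; comparing the crude and adjusted coefficients shows how adding the confounder to the model recovers the true effect.

```python
# A minimal sketch of adjustment for confounding with multiple linear
# regression, using only NumPy least squares. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: age confounds the exposure-outcome relationship.
age = rng.normal(50, 10, n)                  # confounder
exposure = 0.05 * age + rng.normal(0, 1, n)  # exposure depends on age
outcome = 2.0 * exposure + 0.3 * age + rng.normal(0, 1, n)

# Crude model: outcome ~ exposure (confounded estimate of the effect)
X_crude = np.column_stack([np.ones(n), exposure])
b_crude, *_ = np.linalg.lstsq(X_crude, outcome, rcond=None)

# Adjusted model: outcome ~ exposure + age
X_adj = np.column_stack([np.ones(n), exposure, age])
b_adj, *_ = np.linalg.lstsq(X_adj, outcome, rcond=None)

print(round(b_crude[1], 2))  # biased upward, away from the true 2.0
print(round(b_adj[1], 2))    # close to the true effect of 2.0
```

The crude coefficient absorbs part of age's effect because exposure and age are correlated; including age as a second predictor removes that bias.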

Course Tag

effect modification, Proportional Hazards Model, Regression Analysis, Spline approach

Related Wiki Topic

Article Example
Regression analysis All major statistical software packages perform least squares regression analysis and inference. Simple linear regression and multiple regression using least squares can be done in some spreadsheet applications and on some calculators. While many statistical software packages can perform various types of nonparametric and robust regression, these methods are less standardized; different software packages implement different methods, and a method with a given name may be implemented differently in different packages. Specialized regression software has been developed for use in fields such as survey analysis and neuroimaging.
Regression analysis Classical assumptions for regression analysis include:
Regression analysis In the last case, the regression analysis provides the tools for:
Regression analysis In multiple linear regression, there are several independent variables or functions of independent variables.
Regression analysis In the more general multiple regression model, there are "p" independent variables:
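The general model with "p" independent variables can be illustrated with a short ordinary-least-squares fit. This sketch uses made-up coefficients and simulated data; the point is only that the fitted coefficients recover the values used to generate the outcome.

```python
# Illustrative sketch: fitting the multiple regression model
# y = b0 + b1*x1 + ... + bp*xp by ordinary least squares with NumPy.
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.normal(size=(n, p))              # p independent variables
true_beta = np.array([1.0, -2.0, 0.5])   # hypothetical coefficients
y = 4.0 + X @ true_beta + rng.normal(0, 0.5, n)

# Design matrix with a leading column of ones for the intercept.
design = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(design, y, rcond=None)

print(np.round(beta_hat, 1))  # approximately [4.0, 1.0, -2.0, 0.5]
```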
Regression analysis Many techniques for carrying out regression analysis have been developed. Familiar methods such as linear regression and ordinary least squares regression are parametric, in that the regression function is defined in terms of a finite number of unknown parameters that are estimated from the data. Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional.
Regression analysis The performance of regression analysis methods in practice depends on the form of the data generating process, and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes testable if a sufficient quantity of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small effects or questions of causality based on observational data, regression methods can give misleading results.
Regression analysis Regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Regression analysis is also used to understand which among the independent variables are related to the dependent variable, and to explore the forms of these relationships. In restricted circumstances, regression analysis can be used to infer causal relationships between the independent and dependent variables. However this can lead to illusions or false relationships, so caution is advisable; for example, correlation does not imply causation.
Regression analysis In statistical modeling, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or 'predictors'). More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables – that is, the average value of the dependent variable when the independent variables are fixed. Less commonly, the focus is on a quantile, or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function which can be described by a probability distribution. A related but distinct approach is necessary condition analysis (NCA), which estimates the maximum (rather than average) value of the dependent variable for a given value of the independent variable (ceiling line rather than central line) in order to identify what value of the independent variable is necessary but not sufficient for a given value of the dependent variable.
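The claim above, that regression most commonly estimates the conditional expectation of the dependent variable given the independent variables, can be checked directly on simulated data: when the true relationship is linear, the fitted line and the empirical average of Y at each fixed level of X nearly coincide. (All values below are hypothetical.)

```python
# Sketch: the fitted regression line approximates E[Y | X = x],
# the average value of Y at each fixed value of X.
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(0, 5, size=5000).astype(float)  # X takes values 0..4
y = 3.0 + 2.0 * x + rng.normal(0, 1, 5000)

# Empirical conditional means E[Y | X = k] at each observed level k
cond_means = np.array([y[x == k].mean() for k in range(5)])

# Least-squares line fitted to the same data
b, a = np.polyfit(x, y, 1)  # slope, intercept
line = a + b * np.arange(5)

print(np.round(cond_means, 1))  # both approximately [3, 5, 7, 9, 11]
print(np.round(line, 1))
```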
Meta-regression In energy conservation, meta-regression analysis has been used to evaluate behavioral information strategies in the residential electricity sector. In water policy analysis, meta-regression has been used to evaluate cost savings estimates due to privatization of local government services for water distribution and solid waste collection. Meta-regression is an increasingly popular tool to evaluate the available evidence in cost-benefit analysis studies of a policy or program spread across multiple studies.
Polynomial regression Conveniently, these models are all linear from the point of view of estimation, since the regression function is linear in terms of the unknown parameters "a"₀, "a"₁, ... Therefore, for least squares analysis, the computational and inferential problems of polynomial regression can be completely addressed using the techniques of multiple regression. This is done by treating "x", "x"², ... as being distinct independent variables in a multiple regression model.
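The reduction of polynomial regression to multiple regression described above can be sketched in a few lines: a quadratic fit becomes ordinary least squares once x and x² are entered as two separate predictor columns. The coefficients below are made up for illustration.

```python
# Sketch: a quadratic model y = a0 + a1*x + a2*x^2 fitted as ordinary
# multiple regression, with x and x^2 treated as distinct predictors.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, 400)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(0, 0.3, 400)

# Multiple-regression design matrix: columns 1, x, x^2
design = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

print(np.round(coef, 1))  # approximately [1.0, 0.5, -2.0]
```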
Regression analysis If the experimenter had performed measurements at three different values of the independent variable vector X, then regression analysis would provide a unique set of estimates for the three unknown parameters in β.
Outline of regression analysis The following outline is provided as an overview of and topical guide to regression analysis:
Regression analysis Assume now that the vector of unknown parameters β is of length "k". In order to perform a regression analysis the user must provide information about the dependent variable "Y":
Regression analysis Regression methods continue to be an area of active research. In recent decades, new methods have been developed for robust regression, regression involving correlated responses such as time series and growth curves, regression in which the predictor (independent variable) or response variables are curves, images, graphs, or other complex data objects, regression methods accommodating various types of missing data, nonparametric regression, Bayesian methods for regression, regression in which the predictor variables are measured with error, regression with more predictor variables than observations, and causal inference with regression.
Multivariate analysis Factor analysis is part of the general linear model (GLM) family of procedures and makes many of the same assumptions as multiple regression, but it uses multiple outcomes.
Unit-weighted regression In statistics, unit-weighted regression is a simplified and robust version (Wainer & Thissen, 1976) of multiple regression analysis where only the intercept term is estimated. That is, it fits a model
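One common form of the unit-weighted model mentioned above sums the standardized predictors with all weights fixed at 1, leaving only the intercept to be estimated. The sketch below (simulated data, not from Wainer & Thissen) shows this in NumPy.

```python
# Minimal sketch of unit-weighted regression: predictor weights are
# fixed at 1 (after standardizing), and only the intercept is estimated.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 3))
y = 5.0 + X.sum(axis=1) + rng.normal(0, 1, n)

# Standardize predictors, then apply unit weights (all ones).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
composite = Z.sum(axis=1)        # sum of z-scores, weights = 1

# The only estimated parameter is the intercept.
intercept = (y - composite).mean()
y_hat = intercept + composite

print(round(intercept, 1))  # approximately 5.0
```

Fixing the weights sacrifices some in-sample fit but, as the cited robustness argument suggests, avoids estimating weights that may not generalize to new samples.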
Klecka's tau In addition to its use in discriminant analysis it has been used in multiple regression analysis, probit regression, logistic regression and image analysis.
Canonical analysis In statistics, canonical analysis (from Ancient Greek κανών: bar, measuring rod, ruler) belongs to the family of regression methods for data analysis. Regression analysis quantifies a relationship between a predictor variable and a criterion variable by the coefficient of correlation "r", coefficient of determination "r"², and the standard regression coefficient "β". Multiple regression analysis expresses a relationship between a set of predictor variables and a single criterion variable by the multiple correlation "R", multiple coefficient of determination R², and a set of standard partial regression weights "β"₁, "β"₂, etc. Canonical variate analysis captures a relationship between a set of predictor variables and a set of criterion variables by the canonical correlations "ρ"₁, "ρ"₂, ..., and by the sets of canonical weights C and D.
Linear regression The very simplest case of a single scalar predictor variable "x" and a single scalar response variable "y" is known as "simple linear regression". The extension to multiple and/or vector-valued predictor variables (denoted with a capital "X") is known as "multiple linear regression", also known as "multivariable linear regression". Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable "y" is still a scalar. Another term "multivariate linear regression" refers to cases where "y" is a vector, i.e., the same as "general linear regression".