by Marco Taboga, PhD
This lecture shows how to perform maximum likelihood estimation of the parameters of a Normal Linear Regression Model, that is, of a linear regression model whose error terms are normally distributed conditional on the regressors.
In order to fully understand the material presented in this lecture, it might be useful to revise the lectures on Maximum likelihood estimation and on the Normal Linear Regression Model.
The objective is to estimate the parameters of the linear regression model
$$y_i = x_i\beta + \varepsilon_i$$
where $y_i$ is the dependent variable, $x_i$ is a $1\times K$ vector of regressors, $\beta$ is the $K\times 1$ vector of regression coefficients to be estimated and $\varepsilon_i$ is an unobservable error term.
We assume that our sample is made up of $N$ IID observations $(y_i, x_i)$, for $i = 1, \ldots, N$.
The regression equations can be written in matrix form as
$$y = X\beta + \varepsilon$$
where the $N\times 1$ vector of observations of the dependent variable is denoted by $y$, the $N\times K$ matrix of regressors is denoted by $X$, and the $N\times 1$ vector of error terms is denoted by $\varepsilon$.
We also assume that the vector of errors $\varepsilon$ has a multivariate normal distribution conditional on $X$, with mean equal to $0$ and covariance matrix equal to
$$\operatorname{Var}[\varepsilon \mid X] = \sigma^2 I$$
where $I$ is the $N\times N$ identity matrix and $\sigma^2$ is the common variance of the error terms. Note that $\sigma^2$ is also a parameter to be estimated.
Furthermore, it is assumed that the matrix of regressors $X$ has full rank.
The assumption that the covariance matrix of $\varepsilon$ is diagonal implies that the entries of $\varepsilon$ are mutually independent (i.e., $\varepsilon_i$ is independent of $\varepsilon_j$ for $i \neq j$). Moreover, they all have a normal distribution with mean $0$ and variance $\sigma^2$.
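These assumptions can be made concrete with a small simulation. The following NumPy sketch draws a sample from the model; the sample size, seed, and "true" parameter values are illustrative assumptions, not part of the lecture.

```python
import numpy as np

# Simulate the normal linear regression model y = X beta + epsilon.
# N, K, beta_true, and sigma2_true below are illustrative choices.
rng = np.random.default_rng(0)
N, K = 1000, 3
# Matrix of regressors: a constant plus two standard normal regressors.
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
sigma2_true = 4.0
# IID errors, normal with mean 0 and variance sigma^2, independent of X.
eps = rng.normal(scale=np.sqrt(sigma2_true), size=N)
y = X @ beta_true + eps
```

Because the errors are drawn independently with a common variance, their sample covariance matrix is approximately diagonal, matching the assumption above.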
By the properties of linear transformations of normal random variables, the dependent variable $y_i$ is also conditionally normal, with mean $x_i\beta$ and variance $\sigma^2$. Therefore, the conditional probability density function of the dependent variable is
$$f(y_i \mid x_i; \beta, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y_i - x_i\beta)^2}{2\sigma^2}\right)$$
The likelihood function is
$$L(\beta, \sigma^2; y, X) = \prod_{i=1}^{N} f(y_i \mid x_i; \beta, \sigma^2) = (2\pi\sigma^2)^{-N/2} \exp\left(-\frac{1}{2\sigma^2}(y - X\beta)^\top (y - X\beta)\right)$$
The log-likelihood function is
$$\ell(\beta, \sigma^2; y, X) = -\frac{N}{2}\ln(2\pi) - \frac{N}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}(y - X\beta)^\top (y - X\beta)$$
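A brief sketch of how the maximizers are obtained: setting the partial derivatives of the log-likelihood with respect to $\beta$ and $\sigma^2$ equal to zero gives the first-order conditions

```latex
\frac{\partial \ell}{\partial \beta}
  = \frac{1}{\sigma^2}\, X^\top (y - X\beta) = 0
  \;\Longrightarrow\; \widehat{\beta} = (X^\top X)^{-1} X^\top y
\\[1ex]
\frac{\partial \ell}{\partial \sigma^2}
  = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\,(y - X\beta)^\top (y - X\beta) = 0
  \;\Longrightarrow\; \widehat{\sigma}^2 = \frac{1}{N}\,(y - X\widehat{\beta})^\top (y - X\widehat{\beta})
```

The first condition does not depend on $\sigma^2$, so $\widehat{\beta}$ can be found first and then plugged into the second condition.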
The maximum likelihood estimators of the regression coefficients and of the variance of the error terms are
$$\widehat{\beta} = (X^\top X)^{-1} X^\top y, \qquad \widehat{\sigma}^2 = \frac{1}{N}\,(y - X\widehat{\beta})^\top (y - X\widehat{\beta})$$
Thus, the maximum likelihood estimators are:
- for the regression coefficients, the usual OLS estimator;
- for the variance of the error terms, the unadjusted sample variance of the residuals.
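The closed-form estimators are straightforward to compute. Below is a minimal NumPy sketch on simulated data (sample size, seed, and true parameters are illustrative assumptions); since $(\widehat{\beta}, \widehat{\sigma}^2)$ is the global maximizer, the log-likelihood evaluated there can never be smaller than at the true parameter values.

```python
import numpy as np

# Simulated sample from the model (illustrative data).
rng = np.random.default_rng(1)
N = 500
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta_true, sigma2_true = np.array([1.0, 2.0]), 4.0
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2_true), size=N)

# ML estimator of beta: the usual OLS estimator (X'X)^{-1} X'y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
# ML estimator of sigma^2: unadjusted sample variance of the residuals
# (divide by N, not by N - K).
sigma2_hat = resid @ resid / N

def log_likelihood(beta, sigma2, y, X):
    """Log-likelihood of the normal linear regression model."""
    r = y - X @ beta
    return -0.5 * len(y) * np.log(2 * np.pi * sigma2) - (r @ r) / (2 * sigma2)
```

With a reasonably large sample, `beta_hat` and `sigma2_hat` land close to the true values used in the simulation.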
The vector of parameters
$$\theta = \begin{bmatrix} \beta \\ \sigma^2 \end{bmatrix}$$
is asymptotically normal, with asymptotic mean equal to the true parameter vector $\theta_0$ and asymptotic covariance matrix equal to
$$V = \begin{bmatrix} \sigma^2 \left(\operatorname{E}\!\left[x_i^\top x_i\right]\right)^{-1} & 0 \\ 0 & 2\sigma^4 \end{bmatrix}$$
This means that the probability distribution of the vector of parameter estimates $\widehat{\theta} = (\widehat{\beta}, \widehat{\sigma}^2)$ can be approximated by a multivariate normal distribution with mean $\theta_0$ and covariance matrix $V/N$.
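In practice the asymptotic covariance matrix is estimated with plug-in quantities: $\operatorname{E}[x_i^\top x_i]$ is replaced by $X^\top X / N$ and $\sigma^2$ by $\widehat{\sigma}^2$. A NumPy sketch of the resulting standard errors (simulated data; seed and true parameters are illustrative assumptions):

```python
import numpy as np

# Simulated sample (illustrative choices).
rng = np.random.default_rng(2)
N = 1000
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=2.0, size=N)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / N

# Plug-in estimates of the blocks of V / N:
#   Cov(beta_hat)   ~= sigma2_hat * (X'X)^{-1}
#   Var(sigma2_hat) ~= 2 * sigma2_hat^2 / N
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)
se_beta = np.sqrt(np.diag(cov_beta))      # standard errors of the coefficients
se_sigma2 = np.sqrt(2 * sigma2_hat**2 / N)  # standard error of the variance estimate
```

These standard errors are what one would use to build approximate confidence intervals for $\beta$ and $\sigma^2$.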
Please cite as:
Taboga, Marco (2017). "Linear regression - Maximum Likelihood Estimation", Lectures on probability theory and mathematical statistics, Third edition. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/linear-regression-maximum-likelihood.