
Linear regression - Maximum Likelihood Estimation

by Marco Taboga, PhD

This lecture shows how to perform maximum likelihood estimation of the parameters of a Normal Linear Regression Model, that is, of a linear regression model whose error terms are normally distributed conditional on the regressors.

In order to fully understand the material presented in this lecture, it might be useful to revise the lectures on Maximum likelihood estimation and on the Normal Linear Regression Model.

Table of Contents

  Assumptions
  The likelihood function
  The log-likelihood function
  The maximum likelihood estimators
  Asymptotic variance

Assumptions

The objective is to estimate the parameters of the linear regression model
$$y_{i}=x_{i}\beta_{0}+\varepsilon_{i}$$

where $y_{i}$ is the dependent variable, $x_{i}$ is a $1\times K$ vector of regressors, $\beta_{0}$ is the $K\times 1$ vector of regression coefficients to be estimated and $\varepsilon_{i}$ is an unobservable error term.

We assume that our sample is made up of $N$ IID observations $(y_{i},x_{i})$, for $i=1,\ldots,N$.

The regression equations can be written in matrix form as
$$y=X\beta_{0}+\varepsilon$$

where the $N\times 1$ vector of observations of the dependent variable is denoted by $y$, the $N\times K$ matrix of regressors is denoted by $X$, and the $N\times 1$ vector of error terms is denoted by $\varepsilon$.

We also assume that the vector of errors $\varepsilon$ has a multivariate normal distribution conditional on $X$, with mean equal to $0$ and covariance matrix equal to
$$\operatorname{Var}\left[\varepsilon\mid X\right]=\sigma_{0}^{2}I$$
where $I$ is the $N\times N$ identity matrix and $\sigma_{0}^{2}$ is a positive constant.

Note that $\sigma_{0}^{2}$ is also a parameter to be estimated.

Furthermore, it is assumed that the matrix of regressors $X$ has full rank.

The assumption that the covariance matrix of $\varepsilon$ is diagonal implies that the entries of $\varepsilon$ are mutually independent (i.e., $\varepsilon_{i}$ is independent of $\varepsilon_{j}$ for $i\neq j$). Moreover, they all have a normal distribution with mean $0$ and variance $\sigma_{0}^{2}$.

By the properties of linear transformations of normal random variables, the dependent variable $y_{i}$ is also conditionally normal, with mean $x_{i}\beta_{0}$ and variance $\sigma_{0}^{2}$. Therefore, the conditional probability density function of the dependent variable is
$$f\left(y_{i}\mid x_{i}\right)=\left(2\pi\sigma_{0}^{2}\right)^{-1/2}\exp\left(-\frac{\left(y_{i}-x_{i}\beta_{0}\right)^{2}}{2\sigma_{0}^{2}}\right)$$
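To make these assumptions concrete, here is a minimal Python sketch (not part of the original lecture; all parameter values are illustrative) that simulates a sample satisfying them with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 500, 3                          # sample size and number of regressors
beta_0 = np.array([1.0, -2.0, 0.5])    # true coefficient vector (illustrative)
sigma2_0 = 4.0                         # true error variance (illustrative)

X = rng.normal(size=(N, K))            # N x K matrix of regressors
eps = rng.normal(0.0, np.sqrt(sigma2_0), size=N)  # IID N(0, sigma2_0) errors
y = X @ beta_0 + eps                   # regression equations in matrix form
```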

The likelihood function

The likelihood function is
$$L\left(\beta,\sigma^{2};y,X\right)=\left(2\pi\sigma^{2}\right)^{-N/2}\exp\left(-\frac{1}{2\sigma^{2}}\left(y-X\beta\right)^{\top}\left(y-X\beta\right)\right)$$

Proof. Since the $N$ observations are IID conditional on the regressors, the likelihood of the sample is the product of the $N$ conditional densities $f\left(y_{i}\mid x_{i}\right)$; collecting the exponential terms yields the expression above.
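As a quick numerical check (a minimal sketch on simulated data, not part of the original lecture), the closed-form likelihood coincides with the product of the $N$ conditional densities. Note that the raw product underflows double precision as $N$ grows, which is why the log-likelihood of the next section is used in practice:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, K = 5, 2                            # tiny sample: the raw likelihood underflows for large N
beta_0, sigma2_0 = np.array([1.0, -2.0]), 4.0
X = rng.normal(size=(N, K))
y = X @ beta_0 + rng.normal(0.0, np.sqrt(sigma2_0), size=N)

def likelihood(beta, sigma2, y, X):
    """L(beta, sigma2; y, X) evaluated from the closed-form expression above."""
    n = len(y)
    resid = y - X @ beta
    return (2 * np.pi * sigma2) ** (-n / 2) * np.exp(-(resid @ resid) / (2 * sigma2))

# The same number, obtained as the product of the N conditional densities f(y_i | x_i)
product = np.prod(norm.pdf(y, loc=X @ beta_0, scale=np.sqrt(sigma2_0)))
assert np.isclose(likelihood(beta_0, sigma2_0, y, X), product)
```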

The log-likelihood function

The log-likelihood function is
$$\ell\left(\beta,\sigma^{2};y,X\right)=-\frac{N}{2}\ln\left(2\pi\right)-\frac{N}{2}\ln\left(\sigma^{2}\right)-\frac{1}{2\sigma^{2}}\left(y-X\beta\right)^{\top}\left(y-X\beta\right)$$

Proof. It is obtained by taking the natural logarithm of the likelihood function.
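The formula translates directly into code. Below is a minimal sketch (simulated data as before; all names are illustrative) that also verifies the expression against the sum of the $N$ conditional log-densities computed by SciPy:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, K = 500, 3
beta_0, sigma2_0 = np.array([1.0, -2.0, 0.5]), 4.0
X = rng.normal(size=(N, K))
y = X @ beta_0 + rng.normal(0.0, np.sqrt(sigma2_0), size=N)

def log_likelihood(beta, sigma2, y, X):
    """l(beta, sigma2; y, X) from the closed-form expression above."""
    n = len(y)
    resid = y - X @ beta
    return (-n / 2) * np.log(2 * np.pi) - (n / 2) * np.log(sigma2) \
           - (resid @ resid) / (2 * sigma2)

# Sanity check: must equal the sum of the N conditional log-densities
check = norm.logpdf(y, loc=X @ beta_0, scale=np.sqrt(sigma2_0)).sum()
assert np.isclose(log_likelihood(beta_0, sigma2_0, y, X), check)
```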

The maximum likelihood estimators

The maximum likelihood estimators of the regression coefficients and of the variance of the error terms are
$$\widehat{\beta}=\left(X^{\top}X\right)^{-1}X^{\top}y$$
and
$$\widehat{\sigma}^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-x_{i}\widehat{\beta}\right)^{2}=\frac{1}{N}\left(y-X\widehat{\beta}\right)^{\top}\left(y-X\widehat{\beta}\right)$$

Proof. The two estimators solve the first-order conditions of the maximization problem, obtained by setting the partial derivatives of the log-likelihood with respect to $\beta$ and $\sigma^{2}$ equal to zero.

Thus, the maximum likelihood estimators are:

  1. for the regression coefficients, the usual OLS estimator;

  2. for the variance of the error terms, the unadjusted sample variance of the residuals (the sum of squared residuals divided by $N$, rather than by $N-K$ as in the unbiased OLS variance estimator).
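Because the first-order conditions solve in closed form, no numerical optimizer is needed. A minimal sketch (simulated data as before):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 500, 3
beta_0, sigma2_0 = np.array([1.0, -2.0, 0.5]), 4.0
X = rng.normal(size=(N, K))
y = X @ beta_0 + rng.normal(0.0, np.sqrt(sigma2_0), size=N)

# Maximum likelihood estimators derived above
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimator (X'X)^{-1} X'y
resid = y - X @ beta_hat
sigma2_hat = (resid @ resid) / N               # divisor N, not N - K

print(beta_hat)     # close to beta_0
print(sigma2_hat)   # close to sigma2_0
```

Here `np.linalg.solve` is used instead of forming the inverse explicitly, which is the numerically preferable way to compute the OLS solution; `sigma2_hat` is biased downward in finite samples but consistent.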

Asymptotic variance

The vector of parameters
$$\begin{bmatrix}\widehat{\beta}\\ \widehat{\sigma}^{2}\end{bmatrix}$$
is asymptotically normal with asymptotic mean equal to
$$\begin{bmatrix}\beta_{0}\\ \sigma_{0}^{2}\end{bmatrix}$$
and asymptotic covariance matrix equal to

$$V=\begin{bmatrix}\sigma_{0}^{2}\left(\mathrm{E}\left[x_{i}^{\top}x_{i}\right]\right)^{-1} & 0\\ 0 & 2\sigma_{0}^{4}\end{bmatrix}$$

Proof. This follows from the general theory of maximum likelihood estimation: under the assumptions above, the estimator is consistent and asymptotically normal, and the blocks of $V$ are obtained by inverting the asymptotic information matrix.

This means that the probability distribution of the vector of parameter estimates $\left[\widehat{\beta}^{\top},\widehat{\sigma}^{2}\right]^{\top}$ can be approximated by a multivariate normal distribution with mean $\left[\beta_{0}^{\top},\sigma_{0}^{2}\right]^{\top}$ and covariance matrix
$$\begin{bmatrix}\widehat{\sigma}^{2}\left(X^{\top}X\right)^{-1} & 0\\ 0 & 2\widehat{\sigma}^{4}/N\end{bmatrix}$$
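In practice, the unknown quantities in the covariance matrix are replaced by their estimates. A minimal sketch of the resulting approximate standard errors (simulated data as before):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 500, 3
beta_0, sigma2_0 = np.array([1.0, -2.0, 0.5]), 4.0
X = rng.normal(size=(N, K))
y = X @ beta_0 + rng.normal(0.0, np.sqrt(sigma2_0), size=N)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = (resid @ resid) / N

# Estimated covariance matrix of the parameter estimates (block diagonal)
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)  # upper-left block
var_sigma2 = 2 * sigma2_hat**2 / N              # lower-right block

se_beta = np.sqrt(np.diag(cov_beta))            # standard errors of the coefficients
print(se_beta, np.sqrt(var_sigma2))
```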
