This post shows how to manually construct the OLS estimator in R (see this post for the exact mathematical derivation of the OLS estimator). The code goes through every single step of the calculation and estimates the coefficients, standard errors and p-values. In case you are interested in coding an OLS function rather than in the step-by-step calculation of the estimation itself, I recommend having a look at this post. Continue reading Calculate OLS estimator manually in R
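As a rough sketch of what the step-by-step calculation covers (coefficients, standard errors and p-values), the following R code illustrates the idea; the simulated data and variable names are mine, not taken from the original post:

```r
# Simulate a small data set (purely illustrative)
set.seed(123)
n <- 100
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)

# Design matrix with an intercept column
X <- cbind(1, x)

# Coefficients: b = (X'X)^(-1) X'y
b <- solve(t(X) %*% X) %*% t(X) %*% y

# Residuals and estimate of the error variance
res    <- y - X %*% b
sigma2 <- sum(res^2) / (n - ncol(X))

# Standard errors from the diagonal of the variance-covariance matrix
vcov_b <- sigma2 * solve(t(X) %*% X)
se     <- sqrt(diag(vcov_b))

# t-statistics and two-sided p-values
t_stat <- b / se
p_val  <- 2 * pt(-abs(t_stat), df = n - ncol(X))

cbind(b, se, t_stat, p_val)
```

The output can be compared against summary(lm(y ~ x)) as a sanity check.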
CLRM – Assumption 5: Normally Distributed Error Terms in the Population
Assumption 5 is often listed as a Gauss-Markov assumption and refers to normally distributed error terms in the population. Strictly speaking, however, assumption 5 is not a Gauss-Markov assumption, in the sense that the OLS estimator will still be the best linear unbiased estimator (BLUE) even if the error terms are not normally distributed in the population. Continue reading CLRM – Assumption 5: Normally Distributed Error Terms in the Population
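For reference, the normality assumption is commonly written as follows (this is the standard textbook statement; the notation in the full post may differ):

$$\varepsilon \mid X \sim N(0, \sigma^2 I_n)$$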
Violation of CLRM – Assumption 4.1: Consequences when the expected value of the error term is non-zero
Violating assumption 4.1 of the OLS assumptions, i.e. $\text{E}[\varepsilon] = 0$, can affect our estimation in various ways. The exact way in which a violation affects our estimates depends on how we violate $\text{E}[\varepsilon] = 0$. This post looks at different cases and elaborates on the consequences of the violation. We start with a less severe case and then continue with a far more sensitive violation of assumption 4.1. Continue reading Violation of CLRM – Assumption 4.1: Consequences when the expected value of the error term is non-zero
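One commonly discussed case, and presumably the less severe one mentioned above, is an error term with a constant non-zero mean: the mean is absorbed into the intercept while the slope estimates remain unaffected. A minimal simulation sketch with made-up numbers (not code from the original post):

```r
set.seed(42)
n <- 1000
x <- rnorm(n)
u <- rnorm(n, mean = 3)   # error term with non-zero mean, E[u] = 3
y <- 1 + 2 * x + u        # true intercept 1, true slope 2

coef(lm(y ~ x))
# The intercept estimate is close to 1 + 3 = 4 (shifted by the error mean),
# while the slope estimate stays close to the true value of 2.
```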
CLRM – Assumption 4: Independent and Identically Distributed Error Terms
Assumption 4 of the four assumptions required by the Gauss-Markov theorem states that the error terms of the population are independent and identically distributed (iid) with an expected value of zero and a constant variance $\sigma^2$. Formally, Continue reading CLRM – Assumption 4: Independent and Identically Distributed Error Terms
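A standard formal statement of the iid assumption, in my own shorthand (the full post may use different notation), is:

$$\varepsilon_i \sim \text{iid}(0, \sigma^2), \qquad \text{E}[\varepsilon_i] = 0, \quad \text{Var}(\varepsilon_i) = \sigma^2, \quad \text{Cov}(\varepsilon_i, \varepsilon_j) = 0 \;\; \text{for all } i \neq j.$$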
Proof Gauss Markov Theorem
From previous posts on the Gauss-Markov Theorem and OLS we know that unbiasedness requires the following condition to be fulfilled Continue reading Proof Gauss Markov Theorem
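In the usual matrix-notation derivation (my reconstruction of the condition the post refers to, so treat the notation as an assumption), unbiasedness follows from:

$$\hat{\beta} = (X'X)^{-1}X'y = \beta + (X'X)^{-1}X'\varepsilon \quad\Rightarrow\quad \text{E}[\hat{\beta}] = \beta \iff \text{E}\!\left[(X'X)^{-1}X'\varepsilon\right] = 0.$$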
CLRM – Assumption 3: Explanatory Variables must be exogenous
Assumption 3, exogeneity of the explanatory variables, requires that the explanatory variables in the model do not explain variation in the error terms. Formally, we express assumption 3 as Continue reading CLRM – Assumption 3: Explanatory Variables must be exogenous
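The formal statement is typically written as strict exogeneity of the regressors (standard textbook notation; the post itself may write it slightly differently):

$$\text{E}[\varepsilon \mid X] = 0.$$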
CLRM – Assumption 2: Full Rank of Matrix X
Assumption 2 requires the matrix of explanatory variables $X$ to have full rank. This means that if matrix $X$ is an $n \times k$ matrix, the rank of matrix $X$ is $k$. Namely, $\text{rank}(X) = k$.
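A quick way to check this in R is via the QR decomposition; the following is a minimal sketch with made-up data, not code from the original post:

```r
set.seed(1)
n  <- 50
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- 2 * x1                     # perfectly collinear with x1

X_full <- cbind(1, x1, x2)       # full rank design matrix
X_def  <- cbind(1, x1, x2, x3)   # rank deficient design matrix

qr(X_full)$rank   # 3 -> equals the number of columns, full rank
qr(X_def)$rank    # 3 -> smaller than the 4 columns, assumption 2 violated
```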
CLRM – Assumption 1: Linear Parameter and correct model specification
Assumption 1 requires that the dependent variable $y$ is a linear combination of the explanatory variables $X$ and the error terms $\varepsilon$. Assumption 1 requires the specified model to be linear in parameters, but it does not require the model to be linear in variables. Equations 1 and 2 depict a model which is both linear in parameters and in variables. Note that Equations 1 and 2 show the same model in different notation.
Continue reading CLRM – Assumption 1: Linear Parameter and correct model specification
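Equations 1 and 2 are not reproduced here, but a typical pair of this kind, i.e. the same linear model written once observation by observation and once in matrix notation, might look as follows (my reconstruction, not necessarily identical to the post's equations):

$$y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik} + \varepsilon_i \qquad\text{and}\qquad y = X\beta + \varepsilon.$$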
Unbiased Estimator of Sample Variance – Vol. 2
Lately I received some criticism saying that my proof (link to proof) of the unbiasedness of the estimator for the sample variance stands out for its unnecessary length. Well, as I am an economist and love proofs which read like a book, I never really saw the benefit of boiling a proof down to a couple of lines. Actually, I hate it if I have to brood over a proof for an hour before I clearly understand what’s going on. However, in order to satisfy the need for mathematical beauty, I looked around and found the following proof, which is way shorter than my original version.
Continue reading Unbiased Estimator of Sample Variance – Vol. 2
Assumptions of Classical Linear Regression Models (CLRM)
The following post gives a short introduction to the underlying assumptions of the classical linear regression model (OLS assumptions), which we derived in the following post. Given the Gauss-Markov Theorem we know that the least squares estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ are unbiased and have minimum variance among all unbiased linear estimators. The Gauss-Markov Theorem tells us that in a regression model where the expected value of the error terms is zero, $\text{E}(\varepsilon_i) = 0$, the variance of the error terms is constant and finite, $\text{Var}(\varepsilon_i) = \sigma^2 < \infty$, and $\varepsilon_i$ and $\varepsilon_j$ are uncorrelated for all $i \neq j$, the least squares estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ are unbiased and have minimum variance among all unbiased linear estimators. (A detailed proof of the Gauss-Markov Theorem can be found here.)
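To make "unbiased" concrete, here is a small simulation sketch (illustrative code of my own, not from the post): when the error terms satisfy the conditions above, the OLS slope estimates average out to the true parameter across repeated samples.

```r
set.seed(7)
n_rep  <- 2000
b1_hat <- numeric(n_rep)

for (r in seq_len(n_rep)) {
  x <- runif(100)
  e <- rnorm(100)            # E[e] = 0, constant variance, uncorrelated
  y <- 1 + 2 * x + e         # true intercept 1, true slope 2
  b1_hat[r] <- coef(lm(y ~ x))[2]
}

mean(b1_hat)   # close to the true slope of 2, illustrating unbiasedness
```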