Category Archives: Econometrics

Calculate OLS estimator manually in R

This post shows how to manually construct the OLS estimator in R (see this post for the exact mathematical derivation of the OLS estimator). The code goes through every single step of the calculation and estimates the coefficients, standard errors and p-values. In case you are more interested in coding an OLS function than in the step-by-step calculation itself, I recommend you have a look at this post. Continue reading Calculate OLS estimator manually in R
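As a quick preview, a minimal sketch of such a manual calculation could look like the following. The simulated data and variable names are purely illustrative and not necessarily those used in the full post.

```r
# Minimal sketch: manual OLS with simulated data (illustrative only)
set.seed(123)
n <- 100
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)            # true intercept 1, true slope 2

X <- cbind(1, x)                      # design matrix with intercept column
k <- ncol(X)

beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y      # (X'X)^(-1) X'y

res    <- y - X %*% beta_hat                      # residuals
sigma2 <- sum(res^2) / (n - k)                    # unbiased estimate of the error variance
se     <- sqrt(diag(sigma2 * solve(t(X) %*% X)))  # standard errors of the coefficients

t_stat  <- beta_hat / se
p_value <- 2 * pt(abs(t_stat), df = n - k, lower.tail = FALSE)

cbind(beta_hat, se, t_stat, p_value)
# The results can be checked against summary(lm(y ~ x))
```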


CLRM – Assumption 5: Normally Distributed Error Terms in the Population

Assumption 5 is often listed as a Gauss-Markov assumption and refers to normally distributed error terms \epsilon in the population. Strictly speaking, however, assumption 5 is not a Gauss-Markov assumption, in the sense that the OLS estimator will still be the best linear unbiased estimator (BLUE) even if the error terms \epsilon are not normally distributed in the population. Continue reading CLRM – Assumption 5: Normally Distributed Error Terms in the Population
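For reference, assumption 5 is commonly stated as follows; the notation here is illustrative and may differ slightly from the one used in the full post:

\epsilon_i \mid X \sim \mathcal{N}(0, \sigma^{2}), \qquad i = 1, \dots, n

Normality is not needed for OLS to be BLUE, but it is what justifies exact t- and F-tests in small samples.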

Violation of CLRM – Assumption 4.1: Consequences when the expected value of the error term is non-zero

Violating assumption 4.1 of the OLS assumptions, i.e. E(\epsilon_i|X) = 0, can affect our estimation in various ways. How exactly a violation affects our estimates depends on the way in which E(\epsilon_i|X) = 0 is violated. This post looks at different cases and elaborates on the consequences of the violation. We start with a less severe case and then move on to a far more serious violation of assumption 4.1. Continue reading Violation of CLRM – Assumption 4.1: Consequences when the expected value of the error term is non-zero
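One common way to illustrate the milder kind of violation is a constant, non-zero conditional error mean, which is simply absorbed into the intercept; the sketch below is illustrative and not necessarily one of the cases treated in the full post:

E(\epsilon_i \mid X) = c \neq 0 \quad \Rightarrow \quad y_i = \beta_0 + \beta_1 x_i + \epsilon_i = (\beta_0 + c) + \beta_1 x_i + (\epsilon_i - c), \qquad E(\epsilon_i - c \mid X) = 0

In this case only the intercept estimate is biased, while the slope estimate is unaffected.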

CLRM – Assumption 1: Linear Parameters and correct model specification

Assumption 1 requires that the dependent variable \textbf{y} is a linear combination of the explanatory variables \textbf{X} and the error terms \epsilon. In other words, the specified model has to be linear in parameters, but it does not have to be linear in variables. Equations 1 and 2 depict a model that is linear in both parameters and variables; note that Equations 1 and 2 show the same model in different notation. An illustrative example of the distinction follows below.

Continue reading CLRM – Assumption 1: Linear Parameters and correct model specification
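To make the distinction concrete, the following example equations are purely illustrative (they are not Equations 1 and 2 from the post):

y_i = \beta_0 + \beta_1 x_i + \epsilon_i \qquad \text{(linear in parameters and variables)}

y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i \qquad \text{(linear in parameters, non-linear in variables; still satisfies assumption 1)}

y_i = \beta_0 + x_i^{\beta_1} + \epsilon_i \qquad \text{(non-linear in parameters; violates assumption 1)}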

Unbiased Estimator of Sample Variance – Vol. 2

Lately I received some criticism saying that my proof (link to proof) of the unbiasedness of the estimator for the sample variance stands out for its unnecessary length. Well, as I am an economist and love proofs that read like a book, I never really saw the benefit of boiling a proof down to a couple of lines. Actually, I hate having to brood over a proof for an hour before I clearly understand what's going on. However, in order to satisfy the need for mathematical beauty, I looked around and found the following proof, which is way shorter than my original version. The claim in question is restated below for reference.

Continue reading Unbiased Estimator of Sample Variance – Vol. 2
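Assuming the usual setting of an i.i.d. sample with variance \sigma^2, the estimator and the claim being proved are:

s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2, \qquad E(s^2) = \sigma^2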

Assumptions of Classical Linear Regression Models (CLRM)

The following post gives a short introduction to the underlying assumptions of the classical linear regression model (OLS assumptions), which we derived in the following post. The Gauss-Markov Theorem tells us that in a regression model in which the expected value of the error terms is zero, E(\epsilon_{i}) = 0, the variance of the error terms is constant and finite, \sigma^{2}(\epsilon_{i}) = \sigma^{2} < \infty, and \epsilon_{i} and \epsilon_{j} are uncorrelated for all i \neq j, the least squares estimators b_{0} and b_{1} are unbiased and have minimum variance among all unbiased linear estimators. (A detailed proof of the Gauss-Markov Theorem can be found here.) These conditions are summarized compactly below.

Continue reading Assumptions of Classical Linear Regression Models (CLRM)
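As a compact reference, the Gauss-Markov conditions named in the excerpt above can be written as follows (notation illustrative):

E(\epsilon_i) = 0, \qquad \operatorname{Var}(\epsilon_i) = \sigma^2 < \infty, \qquad \operatorname{Cov}(\epsilon_i, \epsilon_j) = 0 \quad \text{for all } i \neq j

Under these conditions, together with the remaining CLRM assumptions discussed in the post, the least squares estimators are BLUE.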