CLRM – Assumption 1: Linearity in Parameters and Correct Model Specification

Assumption 1 requires that the dependent variable $\textbf{y}$ is a linear combination of the explanatory variables $\textbf{X}$ and the error terms $\epsilon$. The specified model must be linear in its parameters, but it does not have to be linear in its variables. Equations 1 and 2 depict a model that is linear in both parameters and variables. Note that Equations 1 and 2 show the same model in different notation.
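Since Equations 1 and 2 are not reproduced in this excerpt, here is a generic sketch (my own illustration, not the post's equations) of why the distinction matters: a model containing a squared regressor is nonlinear in the variables, yet still linear in the parameters, so ordinary least squares applies unchanged.

```python
import numpy as np

# A model that is nonlinear in the variables (x enters squared)
# but linear in the parameters, so OLS still applies.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=500)
eps = rng.normal(0, 0.1, size=500)
y = 1.0 + 2.0 * x + 0.5 * x**2 + eps  # true parameters: 1.0, 2.0, 0.5

# Design matrix: intercept, x, x^2 -- each column may be a nonlinear
# transformation of x, but every coefficient enters linearly.
X = np.column_stack([np.ones_like(x), x, x**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # close to [1.0, 2.0, 0.5]
```

A model like $y = \beta_{0} + x^{\beta_{1}}$, by contrast, is nonlinear in $\beta_{1}$ and would violate Assumption 1.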

Understanding investment activity in an economy is not trivial. The erratic nature of firm-level investment activity is somewhat of a mystery to me, and it took me quite some time to get a vague idea of what the generating process behind such erratic behavior could be. I think understanding capital adjustment costs was the key to understanding why it can be rational for firms to invest in a spasmodic way. In this post I would like to briefly summarize part of what I have learnt so far and list the different types of capital adjustment costs found in the literature.

Unbiased Estimator of Sample Variance – Vol. 2

Lately I received some criticism saying that my proof (link to proof) of the unbiasedness of the estimator for the sample variance stands out for its unnecessary length. Well, as I am an economist and love proofs that read like a book, I never really saw the benefit of boiling a proof down to a couple of lines. Actually, I hate having to pore over a proof for an hour before I clearly understand what’s going on. However, in order to satisfy the need for mathematical beauty, I looked around and found the following proof, which is far shorter than my original version.
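A simulation is of course no substitute for the proof, but as a quick sanity check on the claim itself, the following sketch shows that dividing by $n-1$ yields an estimator whose average over many samples matches the true variance, while dividing by $n$ systematically undershoots it.

```python
import numpy as np

# Monte Carlo check: the estimator with the 1/(n-1) correction is
# unbiased for sigma^2, while dividing by n underestimates it.
rng = np.random.default_rng(42)
n, sigma2, reps = 5, 4.0, 200_000
samples = rng.normal(0, np.sqrt(sigma2), size=(reps, n))

s2_unbiased = samples.var(axis=1, ddof=1)  # divide by n - 1
s2_biased = samples.var(axis=1, ddof=0)    # divide by n

print(s2_unbiased.mean())  # close to 4.0
print(s2_biased.mean())    # close to (n-1)/n * 4.0 = 3.2
```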

Assumptions of Classical Linear Regression Models (CLRM)

The following post will give a short introduction to the underlying assumptions of the classical linear regression model (the OLS assumptions), which we derived in the following post. The Gauss-Markov Theorem tells us that in a regression model where the expected value of the error terms is zero, $E(\epsilon_{i}) = 0$, the variance of the error terms is constant and finite, $\sigma^{2}(\epsilon_{i}) = \sigma^{2} < \infty$, and $\epsilon_{i}$ and $\epsilon_{j}$ are uncorrelated for all $i \neq j$, the least squares estimators $b_{0}$ and $b_{1}$ are unbiased and have minimum variance among all unbiased linear estimators. (A detailed proof of the Gauss-Markov Theorem can be found here.)
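The unbiasedness part of the statement can be illustrated with a small Monte Carlo sketch (my own illustration, with made-up parameter values): drawing fresh error terms that satisfy the stated conditions many times, the average of the least squares estimates lands on the true $\beta_{0}$ and $\beta_{1}$.

```python
import numpy as np

# Monte Carlo sketch: with errors that have zero mean, constant finite
# variance, and no correlation, the OLS estimates b0, b1 are unbiased.
rng = np.random.default_rng(1)
beta0, beta1, n, reps = 2.0, 3.0, 50, 20_000
x = rng.uniform(0, 10, size=n)       # regressor held fixed across replications
X = np.column_stack([np.ones(n), x])

estimates = np.empty((reps, 2))
for r in range(reps):
    eps = rng.normal(0, 1, size=n)   # E(eps) = 0, constant variance, uncorrelated
    y = beta0 + beta1 * x + eps
    estimates[r], *_ = np.linalg.lstsq(X, y, rcond=None)

print(estimates.mean(axis=0))  # close to [2.0, 3.0]
```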

Continue reading Assumptions of Classical Linear Regression Models (CLRM)

The Gauss Markov Theorem

When studying the classical linear regression model, one necessarily comes across the Gauss-Markov Theorem. The Gauss-Markov Theorem is a central theorem for linear regression models. It states several conditions that, when met, ensure that the least squares estimator has the lowest variance among all linear unbiased estimators. More formally, Continue reading The Gauss Markov Theorem

What is an indirect proof?

In economics, especially in theoretical economics, it is often necessary to formally prove your statements, that is, to show in a logically rigorous way that they are correct. One possible way of doing so is by providing an indirect proof. The following couple of lines try to explain the concept of an indirect proof in a simple way.
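The classic textbook illustration of the idea (not taken from the post itself) is the irrationality of $\sqrt{2}$: one assumes the opposite of the claim and derives a contradiction.

```latex
\begin{proof}[Indirect proof that $\sqrt{2}$ is irrational]
Suppose, for contradiction, that $\sqrt{2} = p/q$ for integers $p, q$
with no common factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is even,
hence $p$ is even, say $p = 2k$. Substituting yields $4k^2 = 2q^2$,
i.e.\ $q^2 = 2k^2$, so $q$ is even as well. Then $p$ and $q$ are both
divisible by $2$, contradicting the assumption that they share no
common factor. Hence $\sqrt{2}$ is irrational.
\end{proof}
```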

Derivation of the Least Squares Estimator for Beta in Matrix Notation

The following post is going to derive the least squares estimator for $\beta$, which we will denote as $b$. In general, we start by mathematically formalizing the relationships we think are present in the real world and writing them down in a formula.
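The result of that derivation is the well-known closed form $b = (X'X)^{-1}X'y$, which can be computed directly (the data below is simulated for illustration):

```python
import numpy as np

# The least squares estimator in matrix notation: b = (X'X)^{-1} X'y.
rng = np.random.default_rng(7)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(0, 0.1, size=n)

# Solving the normal equations (X'X) b = X'y is numerically safer
# than forming the explicit inverse.
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)  # close to [1.0, -2.0, 0.5]
```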

Relationship between Coefficient of Determination & Squared Pearson Correlation Coefficient

The usual way of interpreting the coefficient of determination $R^{2}$ is to see it as the percentage of the variation of the dependent variable $y$ ($Var(y)$) that can be explained by our model. The exact interpretation and derivation of the coefficient of determination $R^{2}$ can be found here.

Another way of interpreting the coefficient of determination $R^{2}$ is to look at it as the squared Pearson correlation coefficient between the observed values $y_{i}$ and the fitted values Continue reading Relationship between Coefficient of Determination & Squared Pearson Correlation Coefficient
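The equivalence is easy to verify numerically. The following sketch (on simulated data, for a model that includes an intercept) computes $R^{2}$ from the sums of squares and compares it with the squared correlation between $y$ and $\hat{y}$:

```python
import numpy as np

# Numerical check: R^2 equals the squared Pearson correlation between
# observed y and fitted y-hat (for a model with an intercept).
rng = np.random.default_rng(3)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ b

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

corr = np.corrcoef(y, y_hat)[0, 1]
print(r_squared, corr**2)  # the two values coincide
```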

The Coefficient of Determination or $R^{2}$

The coefficient of determination $R^{2}$ shows how much of the variation of the dependent variable $y$ ($Var(y)$) can be explained by our model. Another way of interpreting it, which will not be discussed in this post, is to look at it as the squared Pearson correlation coefficient between the observed values $y_{i}$ and the fitted values $\hat{y}_{i}$. Why exactly this is the case is shown in another post.
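The phrase "how much of the variation can be explained" rests on the fact that, with an intercept in the model, the variance of $y$ decomposes into the variance of the fitted values plus the variance of the residuals. A quick sketch on simulated data:

```python
import numpy as np

# With an intercept, fitted values and residuals are uncorrelated, so
# Var(y) = Var(y_hat) + Var(e), and R^2 is the share Var(y_hat)/Var(y).
rng = np.random.default_rng(5)
n = 100
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ b
e = y - y_hat

print(np.var(y), np.var(y_hat) + np.var(e))  # the two sides are equal
print(np.var(y_hat) / np.var(y))             # this ratio is R^2
```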

Balance Statistic

The following article tries to explain the Balance Statistic, sometimes referred to as the Saldo or Saldo Statistic. It is used as a quantification method for qualitative survey questions. The benefit of applying the Balance Statistic arises when the survey is repeated over time, as it tracks changes in respondents' answers in a comprehensible way. The Balance Statistic is common in Business Tendency Surveys.
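As commonly defined in business tendency surveys, the balance is the share of positive ("increase") answers minus the share of negative ("decrease") answers, with "no change" answers dropping out. A minimal sketch on made-up survey responses:

```python
# Balance statistic: percentage of "up" answers minus percentage of
# "down" answers; "same" answers cancel out. Data below is made up.
answers = ["up", "up", "same", "down", "up",
           "same", "down", "same", "up", "same"]

# 4 "up" and 2 "down" out of 10 answers: balance = 40% - 20% = 20 points.
balance = 100 * (answers.count("up") - answers.count("down")) / len(answers)
print(balance)  # 20.0 (percentage points)
```

A positive balance indicates that more respondents report an improvement than a deterioration; tracked over survey waves, the series summarizes the direction of sentiment.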