The following code constructs confidence intervals in R using the normal distribution and using the t-distribution.
The code reproduces Figure 1 presented in this post.
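As a taste of what the full post covers, here is a minimal sketch, not the post's actual code, that computes both interval types for a single sample; the sample size, distribution, and confidence level are arbitrary choices for illustration.

```r
# Minimal sketch: 95% confidence intervals for a sample mean,
# once with the normal distribution and once with the t-distribution.
set.seed(123)
x <- rnorm(20, mean = 5, sd = 2)  # arbitrary example sample

n    <- length(x)
xbar <- mean(x)
se   <- sd(x) / sqrt(n)

# Normal-based interval: critical values from qnorm()
ci_normal <- xbar + qnorm(c(0.025, 0.975)) * se

# t-based interval: critical values from qt() with n - 1 degrees of freedom
ci_t <- xbar + qt(c(0.025, 0.975), df = n - 1) * se

ci_normal
ci_t  # slightly wider, because the t-distribution has heavier tails
```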
Continue reading Confidence Intervals R Code Part 1
Julia offers various ways to carry out linear regressions. In a previous post, I explained how to run a linear regression in Julia using the function linreg(). Unfortunately, linreg() is deprecated and no longer exists in Julia v1.0.
In this post, I will present how to use Julia's native functions to run OLS on the following model.
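The full post shows the Julia code; purely as a cross-reference for this blog's R readers, the same "no canned regression routine" idea can be sketched in R by solving the normal equations directly (the data and the bivariate model are invented for illustration, since the excerpt does not show the actual model):

```r
# Sketch: OLS via linear algebra instead of a canned regression routine.
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 2 + 3 * x + rnorm(n)                  # true intercept 2, true slope 3

X        <- cbind(1, x)                    # design matrix with intercept column
beta_hat <- solve(t(X) %*% X, t(X) %*% y)  # solves (X'X) b = X'y
beta_hat                                   # close to c(2, 3)
```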
This blog post explains the difference between confidence intervals based on the t-distribution and confidence intervals based on the Normal distribution. The post does not focus on the theoretical/mathematical differences between the two distributions, but rather compares the two types of confidence intervals using simulation studies. Furthermore, in case you are interested in replicating the presented results or simply playing around with them yourself, I provide the R code to conduct the simulation exercises and to replicate the figures.
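The post's own code appears at the link below; as a rough sketch of the simulation idea, one can draw many small samples, build both interval types, and compare their empirical coverage (all settings here are arbitrary):

```r
# Sketch: empirical coverage of normal-based vs. t-based 95% confidence
# intervals for the mean, using many small samples.
set.seed(42)
n    <- 10      # small samples, where the two intervals differ most
reps <- 10000
mu   <- 0       # true mean

covers <- replicate(reps, {
  x  <- rnorm(n, mean = mu)
  m  <- mean(x)
  se <- sd(x) / sqrt(n)
  ci_norm <- m + qnorm(c(0.025, 0.975)) * se
  ci_t    <- m + qt(c(0.025, 0.975), df = n - 1) * se
  c(normal = ci_norm[1] <= mu && mu <= ci_norm[2],
    t      = ci_t[1]    <= mu && mu <= ci_t[2])
})

rowMeans(covers)  # t-based coverage is near 0.95; normal-based falls short
```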
Continue reading What is the difference between using the t-distribution and the Normal distribution when constructing confidence intervals?
Multicollinearity is a common problem in econometrics. As explained in a previous post, multicollinearity arises when we have too few observations to precisely estimate the effects of two or more highly correlated variables on the dependent variable. This post illustrates the problem of multicollinearity graphically using Venn diagrams. The Venn diagrams below all represent the following regression model.
Continue reading Graphically Illustrate Multicollinearity: Venn Diagram
This post is part of the series on the omitted variable bias and provides a simulation exercise that illustrates how omitting a relevant variable from your regression model biases the coefficients. The R code will be provided at the end.
Continue reading Omitted Variable Bias: An Example
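The post provides its own R code at the end; the sketch below is an independent, minimal version of the same idea, with all parameter values chosen arbitrarily.

```r
# Sketch: omitted variable bias. True model y = b1*x1 + b2*x2 + u,
# where x1 and x2 are correlated; omitting x2 biases the estimate of b1.
set.seed(7)
reps <- 2000
n    <- 200
b1   <- 1
b2   <- 2

est <- replicate(reps, {
  x1 <- rnorm(n)
  x2 <- 0.5 * x1 + rnorm(n)   # x2 correlated with x1 (slope 0.5)
  y  <- b1 * x1 + b2 * x2 + rnorm(n)
  c(full  = unname(coef(lm(y ~ x1 + x2))["x1"]),
    short = unname(coef(lm(y ~ x1))["x1"]))
})

rowMeans(est)  # "full" is near 1; "short" is near 1 + 2 * 0.5 = 2 (biased)
```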
Multicollinearity, or collinearity, refers to a situation in which two or more variables of a regression model are highly correlated. Because of the high correlation, it is difficult to disentangle the pure effect of a single explanatory variable on the dependent variable. From a mathematical point of view, multicollinearity only becomes an issue when we face perfect multicollinearity, that is, when one variable is an exact linear combination of the other variables (for instance, when the regression model contains two identical variables).
Continue reading The Problem of Multicollinearity
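To see the perfect-multicollinearity case concretely, here is a made-up two-line example, not taken from the post: lm() detects an exactly collinear regressor and reports NA for its coefficient.

```r
# Sketch: perfect multicollinearity. x2 is an exact linear function of x1,
# so its separate effect on y cannot be identified.
set.seed(3)
x1 <- rnorm(50)
x2 <- 2 * x1                  # perfectly collinear with x1
y  <- 1 + x1 + rnorm(50)

coef(lm(y ~ x1 + x2))         # coefficient on x2 is NA: the column is aliased
```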
In STATA, one can estimate a linear regression using the command regress. In this post, I will present how to use the STATA command regress to run OLS on the following model.
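The full post covers the STATA syntax; for comparison only, the R analogue of regress is lm() (the variables below are placeholders, since the excerpt does not show the actual model):

```r
# Sketch: R analogue of STATA's "regress y x1 x2" (placeholder data).
set.seed(2)
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
df$y <- 1 + 0.5 * df$x1 - 0.3 * df$x2 + rnorm(100)

fit <- lm(y ~ x1 + x2, data = df)   # STATA equivalent: regress y x1 x2
summary(fit)
```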
In a previous post, we discussed how to obtain clustered standard errors in R. While that post described how to calculate cluster-robust standard errors in R, this post shows how to include them in stargazer and create nice tables with clustered standard errors.
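A minimal sketch of the idea, assuming the sandwich and stargazer packages (the exact functions used in the original posts may differ): compute a cluster-robust covariance matrix, take the square roots of its diagonal, and hand the resulting standard errors to stargazer via its se argument.

```r
# Sketch: cluster-robust standard errors in a stargazer table.
library(sandwich)   # vcovCL() for cluster-robust covariance matrices
library(stargazer)

set.seed(11)
df <- data.frame(cluster = rep(1:30, each = 10), x = rnorm(300))
df$y <- 1 + 2 * df$x + rnorm(30)[df$cluster] + rnorm(300)  # cluster-level noise

fit <- lm(y ~ x, data = df)

cl_se <- sqrt(diag(vcovCL(fit, cluster = ~ cluster)))  # clustered SEs

# stargazer replaces the default standard errors with those supplied to se
stargazer(fit, se = list(cl_se), type = "text")
```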
Continue reading Cluster Robust Standard Errors in Stargazer
In a previous post, we discussed how to obtain robust standard errors in R. While that post described how to calculate robust standard errors in R, this post shows how to include them in stargazer and create nice tables with robust standard errors.
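The recipe is the same as for clustered standard errors; a minimal sketch, again assuming the sandwich and stargazer packages and using the HC1 correction (one common choice, not necessarily the post's):

```r
# Sketch: heteroskedasticity-robust standard errors in a stargazer table.
library(sandwich)   # vcovHC() for heteroskedasticity-robust covariance matrices
library(stargazer)

set.seed(12)
x <- rnorm(200)
y <- 1 + 2 * x + rnorm(200, sd = 1 + abs(x))  # heteroskedastic errors

fit <- lm(y ~ x)

robust_se <- sqrt(diag(vcovHC(fit, type = "HC1")))  # HC1 = Stata-style correction

stargazer(fit, se = list(robust_se), type = "text")
```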
The omitted variable bias is a common and serious problem in regression analysis. Generally, the problem arises if one does not consider all relevant variables in a regression. In that case, one violates the third assumption of the classical linear regression model. The following series of blog posts explains the omitted variable bias and discusses its consequences.
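For readers who want the algebra behind the series, the textbook form of the bias is worth stating (the notation here is mine, not necessarily the series'): if the true model is $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$ but $x_2$ is omitted, the slope estimator of the short regression satisfies

$$E[\tilde{\beta}_1] = \beta_1 + \beta_2 \delta_1,$$

where $\delta_1$ is the slope from regressing the omitted $x_2$ on $x_1$. The bias disappears only if $\beta_2 = 0$ or if $x_1$ and $x_2$ are uncorrelated.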