Y = C + I + G + NX
As one can see from the equation above, aggregate demand (Y) is equal to consumption (C) plus investment (I) plus government spending (G) plus net exports (NX), i.e. how much we sell abroad to other countries on net.
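For illustration, the identity is just a sum. The figures below are made up for the example and do not come from any real national accounts:

```python
# Hypothetical national accounts figures (in billions) -- illustrative only
C = 700    # consumption
I = 200    # investment
G = 150    # government spending
NX = -50   # net exports (negative: imports exceed exports)

Y = C + I + G + NX  # aggregate demand
print(Y)  # 1000
```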
According to Keynesian theory, aggregate demand determines the amount of available expenditure in an economy. Now, why should one care about available expenditure? Well, in Keynesian economics, available expenditure determines the means available in an economy to sustain labor hires in a given period. That is, in the Keynesian model, available expenditure is what keeps people at work. Put bluntly, the amount of expenditure defines the amount of money available to pay workers' wages. This concept is particularly important during a recession. Assume, for instance, that a shock hits the economy and aggregate demand decreases. This implies that demand for firms' products drops, so firms sell fewer products and earn less money. Hence, at the end of the month, firms have less money available to pay their employees, meaning that firms will be forced to lay off some workers and unemployment increases. In a Keynesian setting, a drop in aggregate demand thus implies a decrease in the means available in an economy, leading to fewer jobs and higher unemployment.
In order for the Keynesian aggregate demand-employment relationship to work, Keynesians rely on an important assumption: price stickiness. That is, Keynesian economics typically assumes that nominal wages are sticky. Now, what do sticky wages mean? To answer this question, think of the wage as the price of labor, and assume for a second that the wage behaves just like any other price. In a typical market, if demand for a certain product falls, then the price of this product falls as well. If this mechanism also held for the labor market, a drop in aggregate demand would mean that the price of labor (the wage) decreases, not that people lose their jobs. However, wages are unlike many other prices: they do not always adjust so quickly. Hence, we say that wages are sticky or rigid. Why is that? Why do wages not adjust quickly? There are several reasons. First, there might be a long-term contract between the employer and the employee. Second, there may be a law, such as a minimum wage law, that prevents wages from falling below a certain threshold. Finally, worker morale might also contribute to wage stickiness: workers perceive wage cuts as unfair, so employers are often reluctant to cut nominal wages for fear of damaging morale and productivity.
It is important to understand that wage stickiness has severe consequences for employment. Once the flow of aggregate demand expenditure slows down and firms cannot cut wages, firms must either fire workers or exit the market, i.e. declare bankruptcy. Moreover, the reduction in workers triggers a second-round feedback loop: once fewer people are working, the flow of aggregate demand expenditure decreases even more. The reason is that lower employment means lower production, and thus less income to be consumed, resulting in less investment and less aggregate demand. Therefore, at the end of the month, firms have even less money to pay wages and will be forced to lay off even more workers.
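The feedback loop described above is the textbook Keynesian multiplier. A minimal sketch, assuming a hypothetical marginal propensity to consume of 0.8 (an assumed parameter, not a figure from the text), shows how an initial drop in spending compounds round after round:

```python
mpc = 0.8            # assumed marginal propensity to consume
initial_drop = 100   # initial fall in aggregate demand expenditure

total_drop = 0.0
round_drop = initial_drop
for _ in range(1000):      # iterate the spending rounds
    total_drop += round_drop
    round_drop *= mpc      # each round, part of the lost income would
                           # otherwise have been spent again

print(round(total_drop, 1))  # converges to 100 / (1 - 0.8) = 500.0
```

The geometric series converges to `initial_drop / (1 - mpc)`: a 100 unit demand shock ends up reducing total expenditure by 500 units under these assumptions.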
Keep in mind that in a typical Keynesian scenario, if consumption and investment are falling, government spending usually ends up falling as well. The reason is that lower aggregate demand means that firms produce and sell less, so less tax revenue is generated. Consequently, unless the government borrows money, a reduction in aggregate demand also reduces the government's ability to spend. Thus, as government spending is part of aggregate demand, there will be an additional negative shock to aggregate demand.
Did we already experience sharp drops in aggregate demand in the past? John Maynard Keynes wrote his book in the 1930s, in the midst of the Great Depression. To date, the Great Depression represents the most prominent reduction in aggregate demand experienced in modern history. Starting in 1929, many banks failed and many depositors lost their money. Note that this was still a time before government deposit guarantees. The money supply fell by about a third and the stock market crashed. This caused consumer spending to decrease, a drop in investment, and a reduction in aggregate demand. The sharp reduction in aggregate demand thus led to the Great Depression with its high levels of unemployment. More recently, we experienced yet another prominent drop in aggregate demand. Although the Great Recession of 2008 was much less severe than the Great Depression, it also had a considerable Keynesian element.
1) Keynesian Economics
2) Aggregate Demand
3) Keynesian Economics in an AS-AD model
4) How to get out of a recession?
5) Drawbacks of Keynesian Economics
In this article, we will discuss the concept of aggregate demand, the central idea of Keynesian economics. Understanding aggregate demand is key to understanding Keynesian economics. In particular, it helps to grasp why Keynesians tend to favor activist monetary and fiscal policy during recessionary times.
Keynesian Economics in an AS-AD model
This piece explains the dynamics of Keynesian economics within an AS-AD (aggregate supply/aggregate demand) model. In particular, the article presents what happens to output and inflation if aggregate demand decreases.
How to get out of a recession?
This post briefly outlines the measures that, according to Keynesian economics, are necessary to counteract a recession.
Drawbacks of Keynesian Economics
The last post of the series on Keynesian economics discusses various drawbacks and problems of the theory.
Overall, Keynesian economics is very important for understanding and explaining business cycle fluctuations. It is central to the modern understanding of macroeconomics. That said, there are also various limitations to the Keynesian way of understanding the world.
This post shows how one can prove this statement. Let's start from the statement that we want to prove:

\[ \frac{\partial (b'X'Xb)}{\partial b} = 2X'Xb \]

Note that \(X'X\) is symmetric. Hence, in order to simplify the math, we are going to label \(X'X\) as \(A\), i.e. \(A = X'X\).

Let's compute the partial derivative of \(b'Ab\) with respect to \(b_k\), the \(k\)-th element of \(b\):

\[ \frac{\partial (b'Ab)}{\partial b_k} = \sum_{j} a_{kj} b_j + \sum_{i} a_{ik} b_i = 2 \sum_{j} a_{kj} b_j, \]

where the last equality uses the symmetry of \(A\), i.e. \(a_{ik} = a_{ki}\). Instead of stating every single equation, one can state the same using the more compact matrix notation:

\[ \frac{\partial (b'Ab)}{\partial b} = (A + A')b = 2Ab. \]

Plugging in \(X'X\) for \(A\) yields

\[ \frac{\partial (b'X'Xb)}{\partial b} = 2X'Xb. \]

Now let's return to the derivation of the least squares estimator.
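The quadratic-form derivative used in this derivation, \(\partial (b'Ab)/\partial b = 2Ab\) for symmetric \(A\), can be sanity-checked numerically with central finite differences. This is an illustrative sketch with randomly generated data, not part of the original derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
A = X.T @ X                      # A = X'X is symmetric by construction
b = rng.normal(size=3)

analytic = 2 * A @ b             # the gradient derived above

# central finite differences of f(b) = b'Ab
eps = 1e-6
numeric = np.zeros(3)
for k in range(3):
    e = np.zeros(3)
    e[k] = eps
    numeric[k] = ((b + e) @ A @ (b + e) - (b - e) @ A @ (b - e)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-4))  # True
```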
The JLD package allows you to save and load data objects in Julia. The following example demonstrates how to save data objects in Julia and how to load them once they are saved.
# in case JLD is not installed
# Pkg.add("JLD")
using JLD

# generate example data objects
n = 100
draws = rand(n)

# save data by naming objects
save("data.JLD", "numberObs", n, "draws", draws)

# load the JLD file; load() returns a dictionary of the saved objects
data = load("data.JLD")

# the object "n" was saved under the name "numberObs"
data["numberObs"]
data["draws"]
In Julia, you can set the seed of the random number generator using the srand() function. The code example below sets the seed to 1234. Generating a random variable with rand(1) after setting the seed to 1234 will always generate the same number, i.e. it will always return 0.5908446386657102.

srand(1234)
rand(1)
Thereby, each circle depicts the variance of one variable of the regression model. That is, the circle \(y\) depicts the variance of the dependent variable \(y\), the circle \(x_1\) depicts the variance of variable \(x_1\), and the circle \(x_2\) shows the variance of variable \(x_2\). The overlapping areas show variation that the variables have in common. For instance, the overlapping area of variable \(y\) and variable \(x_1\) represents the variation of variable \(y\) that can be explained by variable \(x_1\).

In the first figure, the circles \(x_1\) and \(x_2\) both intersect with the circle \(y\). However, there is no overlap between the circle \(x_1\) and the circle \(x_2\). In this case, variable \(x_1\) and variable \(x_2\) are both correlated with variable \(y\), but the two explanatory variables themselves are uncorrelated. Thus, one can precisely identify the effect of each explanatory variable (\(x_1\) and \(x_2\)) on the dependent variable (\(y\)).

Figure 2 shows a case in which there exists some correlation between the two explanatory variables. Note that in Figure 2 there is some overlap between the circle \(x_1\) and the circle \(x_2\), meaning that the two variables have some variation in common. You see that it becomes less clear what the effect of one explanatory variable on the dependent variable actually is, i.e. there is some area overlapping all three circles. Although there exists some correlation between variables \(x_1\) and \(x_2\), there is still enough variation left to determine the effect of \(x_1\) and \(x_2\) rather precisely.

Moderate multicollinearity is not much of a concern. However, if the correlation between two or more explanatory variables is very strong, it gets continuously harder to precisely estimate the pure effect of one explanatory variable on the dependent variable. Figure 3 depicts a case in which the variables \(x_1\) and \(x_2\) are strongly correlated. There is increasingly less variation left that can be associated with only one of the explanatory variables \(x_1\) and \(x_2\). In this case, we need more data to precisely estimate the effect of one explanatory variable on the dependent variable. Generally, multicollinearity makes our estimates less accurate.

Finally, as already stated in this post, multicollinearity does not cause problems from a mathematical point of view as long as we do not have perfect multicollinearity. In the representation of a Venn diagram, perfect multicollinearity between variables \(x_1\) and \(x_2\) would mean that the circle of variable \(x_1\) and the circle of variable \(x_2\) are identical, i.e. there is a perfect overlap between the two circles. Hence, one variable is a linear combination of the other. There is no variation left to be estimated and the estimator breaks down, as we violate the second assumption (the full rank assumption) of the Gauss-Markov assumptions.
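The geometric intuition can be reproduced in a short Monte Carlo simulation. This is a sketch with arbitrarily chosen parameters (true coefficients 2 and 3, n = 200, 500 replications): the stronger the correlation between the two regressors, the larger the sampling variation of the coefficient estimates.

```python
import numpy as np

def ols_se(corr, n=200, sims=500, seed=42):
    """Std. dev. of the OLS estimate of beta1 across simulated samples
    in which x1 and x2 have the given correlation."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, corr], [corr, 1.0]])
    betas = []
    for _ in range(sims):
        x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        y = 1.0 + 2.0 * x[:, 0] + 3.0 * x[:, 1] + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x])          # add constant
        beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        betas.append(beta_hat[1])
    return np.std(betas)

# the sampling spread of the estimate grows with the correlation
print(ols_se(0.0) < ols_se(0.9) < ols_se(0.99))  # True
```

The estimates stay unbiased throughout; only their precision deteriorates, which is exactly the shrinking "exclusive" area in the Venn diagrams.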
Assume you are interested in second-hand cars and you want to find out what determines the prices of used cars. In order to answer this question, you collect a lot of data on cars, including all factors that you think might influence the price of a car. Finally, you end up with a data sample of 1,000 cars. For each car in your sample, you observe the price of the car, the brand of the car, the number of seats the car has, whether the car has already had an accident or not, the size of the car's engine, the number of kilometers it has already been driven, and the age of the car.
In order to find out what drives a car's price, you decide to estimate the following model using OLS:

\[ price_i = \beta_0 + \beta_1 \, brand_i + \beta_2 \, seats_i + \beta_3 \, accident_i + \beta_4 \, engine_i + \beta_5 \, milage_i + \beta_6 \, age_i + \epsilon_i \]
Now assume that for some reason you forget to include the variable age in your model. Estimating the model without the variable age will introduce an omitted variable bias and lead to biased coefficient estimates. In particular, as milage and age are positively correlated and age has a negative impact on price, the estimated coefficient of milage will exhibit a downward bias (read this post to learn more about the direction of the omitted variable bias). The table below presents the estimation results for the model presented above, once with and once without age. The OLS estimation of the model including all relevant variables recovers all coefficients quite accurately. However, neglecting the variable age leads to a biased estimate of the coefficient of the variable milage. Moreover, as predicted, neglecting the variable age biases the estimate of the coefficient of the variable milage downward, i.e. the estimated coefficient decreases from -0.014 to -0.025.
Dependent variable: price

| | (1) | (2) |
|---|---|---|
| brand | 2,075.163^{***} | 2,090.848^{***} |
| | (89.610) | (89.591) |
| seats | 384.272^{***} | 376.811^{***} |
| | (124.666) | (124.930) |
| accident | -2,628.229^{***} | -2,588.826^{***} |
| | (690.435) | (691.917) |
| engine | 4.880^{***} | 4.865^{***} |
| | (0.156) | (0.156) |
| milage | -0.014^{**} | -0.025^{***} |
| | (0.006) | (0.003) |
| age | -160.562^{**} | |
| | (66.507) | |
| Constant | -342.778 | -633.799 |
| | (821.856) | (814.939) |
| Observations | 1,000 | 1,000 |
| R^{2} | 0.625 | 0.623 |
| Adjusted R^{2} | 0.623 | 0.621 |
| Residual Std. Error | 3,950.177 (df = 993) | 3,959.760 (df = 994) |
| F Statistic | 275.692^{***} (df = 6; 993) | 328.071^{***} (df = 5; 994) |

Note: ^{*}p<0.1; ^{**}p<0.05; ^{***}p<0.01
The following code allows you to replicate the example presented above. The code first simulates a data sample including car prices and additional observables, and then estimates the regression model, once with and once without the variable age.
# start with an empty workspace
rm(list = ls())
options(scipen = 999)
set.seed(12345)

# simulate data
obs <- 1000  # number of observations
brand <- sample(c(1, 2, 3, 4, 5), obs, replace = T)
seats <- sample(c(4, 4, 5, 5, 5, 7), obs, replace = T)
accident <- sample(c(rep(0, 20), 1), obs, replace = T)
engine <- sample(seq(1000, 3600, 200), obs, replace = T)
age <- sample(seq(1, 16, 1), obs, replace = T,
              prob = c(0.04, 0.06, 0.08, 0.11,
                       0.12, 0.105, 0.095, 0.085,
                       0.07, 0.06, 0.05, 0.04,
                       0.03, 0.025, 0.02, 0.01))
milage <- age * 10000 * rnorm(obs, 1, 0.3)
error <- rnorm(obs, 0, 1) * 4000
price <- round(brand * 2000 + seats * 300 - accident * 2000 + engine * 5 -
               age * 200 - milage * 0.01 + error)

# estimate the model with and without age
reg1 <- lm(price ~ brand + seats + accident + engine + milage + age)
reg2 <- lm(price ~ brand + seats + accident + engine + milage)

require(stargazer)
stargazer(reg1, reg2, type = "text")
However, when talking about multicollinearity, we rarely refer to the case of perfect multicollinearity. Much more often, we refer to the case where two or more variables are highly correlated. Such multicollinearity does not violate any Gauss-Markov assumptions and the OLS estimator is still BLUE. That is, under multicollinearity the OLS estimator is still unbiased and has the lowest variance among all linear unbiased estimators. Moreover, not only are the coefficients estimated in an efficient way, the estimated standard errors, and hence the t-values, are unbiased as well. Thus, all confidence intervals and test statistics remain valid.
Although the OLS estimator provides efficient estimates of the coefficients and standard errors under multicollinearity, these estimates may still not be very useful. Meaning that even though it is possible to estimate all coefficients and standard errors, the obtained estimates might be very imprecise. The problem of multicollinearity can best be thought of as a small-sample-size problem. That is, if we have only a few observations, we cannot precisely estimate certain relationships. A similar problem occurs when we have multicollinearity in our data. If we have variables that are strongly correlated with one another, they often do not contain enough information to allow precise estimates. That is, it is difficult to disentangle the true effect of two highly collinear variables on the dependent variable.
The Gauss-Markov assumptions require the regressor matrix \(X\) of the OLS estimator to have full rank. In the case of perfect multicollinearity, we violate assumption 2 of the Gauss-Markov assumptions, as at least one variable can be represented as a linear combination of one or more other variables. In this case, the matrix \(X\) does not have full rank. Hence, under perfect multicollinearity the matrix \(X'X\) is singular and cannot be inverted, and the OLS estimator is not defined.
The most common case of perfect multicollinearity occurs when we specify binary variables, which are also referred to as dummy variables; hence the expression dummy variable trap. The dummy variable trap can best be explained with an example. Assume the following: we are interested in demography and would like to know whether men live longer than women. In order to answer this question, we gather all necessary data, i.e. age of death, sex, occupation, marital status, and several other variables, and specify a binary variable that takes a value of one in case an observation refers to a man and zero otherwise. Additionally, we specify a second binary variable that takes a value of one in case of a woman and zero otherwise. We then try to explain our variable of interest, i.e. age of death, using the two binary variables and additional controls. In this case, our regression model would look something like this:

\[ age\_of\_death_i = \beta_0 + \beta_1 \, man_i + \beta_2 \, woman_i + \text{controls}_i + \epsilon_i \]
Written in matrix form, the matrix \(X\) of our OLS estimator would then be defined as

\[ X = \begin{pmatrix} 1 & 1 & 0 & \cdots \\ 1 & 0 & 1 & \cdots \\ 1 & 1 & 0 & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix} \]

where the first column is the regression constant, the second column the man dummy, and the third column the woman dummy.
Note that the first three columns of the matrix above are linearly dependent. That is, the first column of the matrix, the regression constant, can be expressed as the sum of column 2 (the man dummy) and column 3 (the woman dummy). In this case, the matrix \(X\) does not have full rank and we cannot compute estimates for the coefficients. This problem is referred to as the dummy variable trap. The way to solve this problem is to simply drop one of the two dummy variables. In reality, perfect multicollinearity is rarely an issue and is easily detected, as the estimator cannot be computed. Statistical software packages automatically detect perfect multicollinearity and issue a warning or simply drop one variable.
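The dummy variable trap can be verified in a few lines. The sketch below uses a made-up sample of six people: with both a man and a woman dummy next to the constant, the regressor matrix is rank deficient; dropping one dummy restores full column rank.

```python
import numpy as np

man = np.array([1, 0, 1, 1, 0, 0])   # hypothetical sample of six people
woman = 1 - man                       # the two dummies always sum to one
const = np.ones(6)

X_trap = np.column_stack([const, man, woman])  # constant + both dummies
X_ok = np.column_stack([const, man])           # one dummy dropped

# 3 columns but rank 2: X'X is singular and OLS is not defined
print(np.linalg.matrix_rank(X_trap))  # 2
# 2 columns and rank 2: full column rank, OLS works
print(np.linalg.matrix_rank(X_ok))    # 2
```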
Total factor productivity is hard to measure, differs between countries and fluctuates over time. Total factor productivity mainly contains immaterial values, including technology, knowledge and ability. Hence, total factor productivity is strongly related to capital. For instance, due to technological change, we are able to develop more efficient machinery and equipment that firms adopt in their production process. However, the System of National Accounts (SNA) attributes machinery and equipment to a country's capital stock. Hence, if we observe an increase in GDP, it is difficult to identify whether the increase in GDP is due to an increase in the capital stock or due to capital that is more efficient. Consequently, it can be problematic to consider TFP and capital separately.
There exist various methods to measure total factor productivity. One prominent way of measuring TFP is growth accounting. According to this method, total factor productivity accounts for approximately two thirds of GDP growth in OECD countries. However, growth accounting does not permit establishing a causal link between TFP growth and GDP growth. For instance, technological change could cause both an increase in an economy's capital stock and an increase in TFP. Alternative methods of measuring TFP that account for these causal links suggest that technological change drives the entire GDP growth in the long run.
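The growth accounting logic can be illustrated in a few lines. Under a Cobb-Douglas production function, TFP growth is backed out as the Solow residual \(g_A = g_Y - \alpha g_K - (1-\alpha) g_L\). All numbers below are made up for illustration, with an assumed capital share of one third:

```python
# Made-up annual growth rates, illustrative only
g_Y = 0.03     # GDP growth
g_K = 0.015    # capital stock growth
g_L = 0.0075   # labor input growth
alpha = 1 / 3  # assumed capital share of income

# Solow residual: the part of GDP growth not explained by input growth
g_A = g_Y - alpha * g_K - (1 - alpha) * g_L
print(round(g_A, 4))  # 0.02
```

With these made-up inputs, the residual attributes two thirds of GDP growth (0.02 of 0.03) to TFP, matching the order of magnitude quoted above for OECD countries.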