The code below reproduces Figure 1 of this post.

#start with an empty workspace
rm(list=ls())
library(RCurl)
# import the function from repository
url_robust <- "https://raw.githubusercontent.com/IsidoreBeautrelet/economictheoryblog/master/confidence_intervals.R"
eval(parse(text = getURL(url_robust, ssl.verifypeer = FALSE)),
envir=.GlobalEnv)
##################################
## Compare confidence interval of
## normal distribution with
## t-distribution
##################################
intervals_normal <- NULL
for(nx in 2:100){
interval <- conf_fix(0,10,nx)
intervals_normal <- rbind(intervals_normal,interval)
}
intervals_t <- NULL
for(nx in 2:100){
interval <- conf_fix(0,10,nx,distribution = "test")
intervals_t <- rbind(intervals_t,interval)
}
png('comparison_confidence_intervals.png',width = 540, height = 540)
plot(intervals_normal[,1],type="l",ylim=c(-100,100),
ylab="",xlab="Number of Observations",
main="Confidence Intervals: Mean: 0, Std: 1")
lines(intervals_normal[,2])
lines(intervals_t[,1],col=2)
lines(intervals_t[,2],col=2)
legend("topright",c("Normal Distribution","t-Distribution"),
lwd=c(1,1),col=c(1,2),bty = "n")
dev.off()

In this post I will present how to use Julia's native backslash operator (\) to run OLS on the following model.

An alternative way to run a linear regression is to use the *lm()* function of the GLM package. In case you are interested in that approach, you can check out this post, which describes how to conduct a multiple regression in Julia using the *lm()* function provided by the GLM package.

In this example, our dependent variable will be my weekly average weight, while the explanatory variable is the sum of calories I burned during the previous week. For a more detailed description of the data, see here.

#using DataArrays
# load Taro - Pkg to read Excel data
using Taro
Taro.init()
# get data
path = "https://economictheoryblog.files.wordpress.com/2016/08/data.xlsx"
data = Taro.readxl(download(path), "data", "A1:C357")
using DataFrames
data = DataFrame(data)
# drop rows with missing values
deleterows!(data, findall(ismissing, data[:,1]))
deleterows!(data, findall(ismissing, data[:,2]))
y = convert(Array{Float64,1}, data[:,1])
x = convert(Array{Float64,1}, data[:,2])
# solve the least-squares problem with the backslash operator
reverse([x ones(length(x))] \ y)

The function *reverse()* returns the point estimates for the intercept and the slope coefficient. Unfortunately, the backslash solve does not return standard errors for the point estimates. In order to obtain standard errors for the estimated coefficients, one can use the *lm()* function from the GLM package. The GLM package of Julia provides a more flexible environment. You can find a working example of how to conduct multiple regression in Julia using the GLM package here.
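
The backslash solve above is equivalent to solving the ordinary least-squares normal equations. As a language-agnostic illustration (a Python sketch, not part of the original post), the intercept and slope of a simple regression can be computed by hand:

```python
def ols(x, y):
    """Simple-regression OLS: returns (intercept, slope),
    mirroring reverse([x ones(length(x))] \\ y) in Julia."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # slope = cov(x, y) / var(x); intercept from the means
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return intercept, slope

# perfect linear data: y = 1 + 2x
print(ols([1, 2, 3, 4], [3, 5, 7, 9]))  # -> (1.0, 2.0)
```

This is only the point estimate; as noted above, standard errors require the extra machinery that a package like GLM provides.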

In a first step, we are going to compare confidence intervals based on the t-distribution to confidence intervals based on the normal distribution. We will assume that we know the underlying data generating process and examine what happens to the intervals as the number of observations increases. That is, we will examine how the two confidence intervals depend on the sample size. In a second step, we investigate how often a confidence interval fails to include the true mean. Once again, we will repeat this exercise for different sample sizes. Finally, we are going to examine how often an interval fails to include the mean of another sample drawn from the same distribution. This is particularly important for mean testing. In this case, too, we will vary the number of observations.

In this first part, we are going to compare confidence intervals using the t-distribution to confidence intervals using the normal distribution. In particular, we will see how the confidence intervals differ between the two distributions depending on the sample size. We will use the standard formula to construct confidence intervals (see below) and vary two parameters. First, we change n, the number of observations, letting n run from 2 to 100. That is, we compute 99 confidence intervals, one for each n from 2 to 100. Second, we do this twice: once using *Z* values, i.e. the normal distribution, and once using *t* values, i.e. the t-distribution.
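
The standard formula referred to here is mean ± q · sd/√n, where q is the relevant quantile. A minimal Python sketch (an illustration, not the post's R code; the t-based interval simply swaps in a Student-t quantile, e.g. from `scipy.stats.t.ppf`):

```python
from math import sqrt
from statistics import NormalDist

def conf_interval(mean, sd, n, level=0.95, quantile=None):
    """mean +/- q * sd / sqrt(n). By default q is the normal (Z)
    quantile; pass a Student-t quantile for the t-based interval."""
    if quantile is None:
        quantile = NormalDist().inv_cdf(1 - (1 - level) / 2)
    half = quantile * sd / sqrt(n)
    return (mean - half, mean + half)

# Z-based 95% interval for mean 0, sd 10, n = 4 observations
print(conf_interval(0, 10, 4))  # roughly (-9.8, 9.8)
```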

The figure below plots the resulting confidence intervals. The red line depicts confidence intervals from the t-distribution and the black line depicts the corresponding intervals from the normal distribution. We assume that we know the underlying data generating process (DGP) and that it is the same in all cases. Specifically, we assume a standard normal distribution, i.e. the mean is zero and the standard deviation is one.

One can easily see that the t-distribution gives much larger intervals when the number of observations is small. However, pretty soon, the two confidence intervals converge.

In a second step, we are going to conduct a simulation study. This study will help us find out how often confidence intervals of random variables fail to include the true mean.

In our case, we chose the data generating process to be normally distributed with mean zero and variance one. Hence, the true mean of our data generating process will be zero. Thus, we are going to examine how often confidence intervals do not include zero.

How are we going to do that?

- Draw a vector of length 2
- Construct a 95% confidence interval using the normal distribution
- Construct a 95% confidence interval using the t-distribution
- Check if the intervals include zero
- Repeat steps 1-4 10,000 times
- Compute how often, on average, a confidence interval does not include zero
- Repeat steps 1-6 for increasing vector lengths. That is, we repeat the simulation for vector lengths 3 to 100.
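
The steps above can be sketched in code. Here is a Python illustration (not the post's R implementation) of the Z-based case; the t-based variant only swaps in a Student-t quantile:

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def share_missing_zero(n, reps=10_000, level=0.95, seed=1):
    """Share of Z-based 95% confidence intervals around the sample
    mean of n draws from N(0, 1) that fail to cover the true mean 0."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    misses = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        half = z * stdev(x) / sqrt(n)
        if abs(mean(x)) > half:
            misses += 1
    return misses / reps

# with only 2 observations the Z interval misses far more than 5%
print(share_missing_zero(2))
```

Running this for n from 2 to 100 traces out the black line of the figure below; replacing z by the t quantile with n − 1 degrees of freedom traces out the red line.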

In case you are having trouble following the exact steps of the simulation study, you can check the R code.

The following figure visualizes the results of this simulation study. It shows the share of confidence intervals that do not include zero, i.e. the true mean of the DGP, depending on the vector length. The black line refers to confidence intervals based on the normal distribution and the red line to confidence intervals based on the t-distribution. The figure illustrates that the share of confidence intervals missing the true mean is larger for the normal distribution than for the t-distribution. The difference is especially pronounced when only a few observations are available.

The following figure plots the difference between the share of confidence intervals not including the true mean when using the normal distribution and the corresponding share when using the t-distribution. That is, it plots the difference between the black line and the red line of the figure above, showing how much more often confidence intervals based on the normal distribution fail to include the true mean.

The last part of this post will focus on mean testing. It is quite common to use mean tests to examine whether the means of two variables are statistically different from each other. We will once again conduct a simulation study, and we will see that, especially when working with few observations, the choice of the underlying distribution makes a considerable difference.

How is the simulation exercise constructed?

- Draw a vector of length 2
- Construct a 95% confidence interval using the normal distribution
- Construct a 95% confidence interval using the t-distribution
- Draw a second vector of length 2 and compute its mean
- Check if the intervals include this mean
- Repeat steps 1-5 10,000 times
- Compute how often, on average, a confidence interval does not include the mean
- Repeat steps 1-7 for increasing vector lengths. That is, we repeat the simulation for vector lengths 3 to 100.
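
Analogously to the first exercise, these steps can be sketched as follows (again a Python illustration of the Z-based case only, under the same N(0, 1) assumptions; not the post's R code):

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def share_missing_second_mean(n, reps=10_000, level=0.95, seed=1):
    """Share of Z-based 95% intervals around one sample's mean that
    fail to cover the mean of a second, independent N(0, 1) sample."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    misses = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        m2 = mean(rng.gauss(0, 1) for _ in range(n))
        half = z * stdev(x) / sqrt(n)
        if abs(mean(x) - m2) > half:
            misses += 1
    return misses / reps

# even at n = 100 the interval misses the second mean noticeably often
print(share_missing_second_mean(100))
```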

In a similar fashion as above, the following figure shows the share of confidence intervals that do not include the mean of a second random variable drawn from the same data generating process. Once again, with very few observations, confidence intervals based on the t-distribution do a much better job than confidence intervals based on the normal distribution. Surprisingly, the difference between the two types of confidence intervals does not collapse to zero quickly, as was the case in the previous exercises, but remains visible even at a vector length of 100.

This post focused on the difference between confidence intervals based on the normal distribution and confidence intervals based on the t-distribution. Conducting simulation exercises, I showed that with very few observations one is definitely better off using the t-distribution. Compared to confidence intervals based on the t-distribution, confidence intervals based on the normal distribution fail much more often to include the true mean of a distribution. Thus, the t-distribution appears to be the more conservative option for testing.

- sudo apt-get update
- sudo apt-get upgrade
- sudo apt-get dist-upgrade
- sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list
- sudo apt-get update
- sudo apt-get upgrade
- sudo apt-get autoremove
- reboot

In case you are interested in upgrading Debian 8 to Debian 9, check out this post.

**1.** First open a terminal and type su, then the root password that you created when installing Debian 10.

**2.** Install the Leafpad text editor, which allows you to edit text files. Type: “apt-get install leafpad”

**3.** Stay in the root terminal and type “leafpad /etc/gdm3/daemon.conf”. This command opens the file “daemon.conf” in Leafpad. Under the [security] section, add “AllowRoot=true”, so that the security section of the file looks like this:

[security]

AllowRoot=true

Once it looks like this, save the file and exit the window.

**4.** Stay in the root terminal and type “leafpad /etc/pam.d/gdm-password”. This command opens the file “gdm-password” in Leafpad. Within this file you have to comment out the line containing “auth required pam_succeed_if.so user != root quiet_success” so that it looks like this:

#auth required pam_succeed_if.so user != root quiet_success

Save the file and exit.

**5.** Now you should be able to log in as root in your Debian 10 GUI.

#start with an empty workspace
rm(list=ls())
#set seed
set.seed(2)
# load necessary packages for demo
library(RCurl)
# import the function from repository
url_robust <- "https://raw.githubusercontent.com/IsidoreBeautrelet/economictheoryblog/master/confidence_intervals.R"
eval(parse(text = getURL(url_robust, ssl.verifypeer = FALSE)),
envir=.GlobalEnv)
#generate a vector of random numbers
vector <- rnorm(100)
#calculate 95% confidence intervals
conf(vector)
[1] -0.2580911 0.1966948
#calculate 90% confidence intervals
conf(vector,conf_level = 0.90)
[1] -0.2215323 0.1601360

Furthermore, if you do not have many observations, you may want to use Student’s t-distribution instead of the normal distribution. The Student’s t-distribution has wider tails when the number of observations is low and gives you more conservative estimates of your confidence interval. In case you want to use Student’s t-distribution, you can set the parameter ‘distribution’, i.e. distribution = "test".

In case you do not know whether to use the normal distribution or the Student’s t-distribution, you might want to check out this post, in which I try to illustrate the difference between using the normal and the t-distribution.

Before we start with the economic dynamics, let us briefly revise the AS-AD model. The AS-AD model is a macroeconomic model that connects the price level (inflation) and output growth through the relationship between aggregate supply and aggregate demand. The figure below presents a stylized AS-AD model. In it, the downward sloping orange curve represents aggregate demand. The upward sloping blue line represents aggregate supply in the short run. The green dotted line represents long-run aggregate supply, i.e. an economy’s growth potential. Remember that aggregate demand is the key concept of Keynesian economics. Hence, in Keynesian reasoning, the impulse that generates economic dynamics comes from changes in aggregate demand.

How do Keynesian dynamics work in an AS-AD model? Let us continue with the example that I laid out in this previous post. In the figure above, the economy is in an equilibrium in which current output growth is equal to its growth potential. Assume now that the economy enters a recession and aggregate demand decreases. In this case the orange line shifts back and to the left. The new equilibrium also moves down and to the left. We observe a reduction in output and a decrease in inflation.

Under the assumption of wage stickiness, a common assumption of Keynesian economics, firms need to lay off people. Thus, a reduction in aggregate demand destroys jobs and increases unemployment. In this setting, there may be some important second-order effects. The long-run aggregate supply curve may end up shifting to the left as well. The reason is that some laid-off workers may become demoralized, lose their workplace contacts, and be less integrated in society. In the longer run, those people might lose their skills and end up being less productive. In this case the economy loses knowledge and skills. We observe a decrease in the economy’s growth potential.

In a concise way, this article explained how Keynesian dynamics can be presented in an AS-AD framework. In particular, the post explained how a drop in aggregate demand propagates within an AS-AD model and how second-order effects can lead to a permanent decrease in output. The next post in the series on Keynesian economics focuses on how to get out of a recession and elaborates on the Keynesian remedies for doing so.

1) Keynesian Economics

2) Aggregate Demand

3) Keynesian Economics in an AS-AD model

4) How to get out of a recession?

5) Drawbacks of Keynesian Economics

In this post, I want to revisit my first year with WordAds and share some experiences and observations from the last year. In particular, I want to report some general points, such as how the income reporting and payout work, and discuss some issues related to traffic.

Once I was accepted to the program, a new “*WordAds*” button appeared on the sidebar of my dashboard. The button leads to a page that reports your monthly earnings together with the number of ads that WordAds served your viewers. Until September 2018, WordAds reported these figures on a monthly basis, with the figures for a particular month usually published between the 20th and 25th of the following month. In October 2018, WordAds introduced a new feature: the company incorporated the ads and income reporting system into the Jetpack package. More precisely, WordAds added a new tab on your stats page. That is, next to the traffic and insights tabs, one now also finds a tab called *ads* that reports the number of ads displayed and the income generated on a daily basis. In my opinion this represents a huge improvement. Finally, once you accumulate 100 dollars of income, WordAds sends the money to your PayPal account.

How much can one earn with WordAds? While this seems to be a very important question to many bloggers, I am just not the person to ask. WordAds is the only source of revenue that my blog generates (I tried the Amazon Affiliate program, but it didn’t work). In order to make a living out of blogging, my blog would need to generate millions of page views each month; currently, I get a few thousand. Hence, if one seeks earning advice, there are many blogs out there that propose fantastic strategies to earn money with blogging. In this post, I prefer to focus on some simple statistics. The average CPM (earnings per thousand impressions) was 0.685 (October 2017 to November 2018). That is, displaying a thousand ads generated an average income of 68.5 cents. Averaging 3 ads per view, I roughly earned 2 dollars per 1,000 views. However, the average CPM fluctuates considerably throughout the year. The following table displays the average CPM for each month since I was accepted to the program. Note that the average CPM was high between November and April and dropped during the summer months. Up to this day, the average CPM has not recovered from this dip. The fluctuations in CPM might be driven by various factors. For instance, a different composition of traffic (US traffic pays more than traffic from other countries) and a different willingness to pay (firms pay more for ads during certain periods than during others) are probably the most important factors explaining the fluctuation in CPM. If time permits, I will try to examine in greater detail what factors drive these fluctuations.

| Month | Average CPM |
| --- | --- |
| October 2017 | 0.54 |
| November | 0.89 |
| December | 0.96 |
| January | 0.78 |
| February | 0.74 |
| March | 0.71 |
| April | 0.71 |
| May | 0.69 |
| June | 0.55 |
| July | 0.46 |
| August | 0.58 |
| September | 0.68 |
| October 2018 | 0.57 |
| November | 0.67 |

The average CPM does not only fluctuate throughout the year, but also within a week. The table below reports the average CPM for each weekday. One can see that the average CPM varies substantially between the days of the week. It is lowest on Tuesday (0.57) and highest on Saturday (0.72). However, note that, while the table above uses monthly data available since the beginning of my WordAds affiliation in October 2017, the following table displays estimates based only on information from October 1, 2018 onward. Nonetheless, even though I have only a limited number of data points for each weekday (between 10 and 11), the construction of 95% confidence intervals (based on a t-distribution) indicates that the CPMs are statistically different from each other.

| Day | Average CPM | 95% Conf. Interval |
| --- | --- | --- |
| Monday | 0.64 | [0.59, 0.70] |
| Tuesday | 0.57 | [0.50, 0.63] |
| Wednesday | 0.64 | [0.57, 0.70] |
| Thursday | 0.66 | [0.54, 0.77] |
| Friday | 0.64 | [0.53, 0.74] |
| Saturday | 0.72 | [0.60, 0.84] |
| Sunday | 0.66 | [0.54, 0.79] |
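
An interval of the kind reported above can be computed from roughly 11 daily CPM values per weekday. A hedged Python sketch (the CPM values below are made up for illustration; 2.228 is the 0.975 quantile of the t-distribution with 10 degrees of freedom, i.e. n = 11 observations):

```python
from math import sqrt
from statistics import mean, stdev

def t_interval(values, t_quantile=2.228):
    """95% CI for the mean: x_bar +/- t * s / sqrt(n). The default
    quantile 2.228 is t(0.975) with 10 degrees of freedom."""
    m, s, n = mean(values), stdev(values), len(values)
    half = t_quantile * s / sqrt(n)
    return (m - half, m + half)

# 11 hypothetical daily CPM values for one weekday
cpm = [0.55, 0.71, 0.60, 0.68, 0.59, 0.74, 0.66, 0.58, 0.70, 0.63, 0.61]
print(t_interval(cpm))
```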

During the last year, traffic on my site slowed down considerably. In addition, the average CPM decreased during the year. In the remainder of this post, I will mention some possible factors that can explain the decrease in traffic and average earnings. Besides the fact that I started displaying ads during the last year, a couple of additional factors might have reduced traffic and income on my blog. First, AMP was activated on my blog. Second, I changed my theme. Finally, GDPR came into force.

The sole fact that I started displaying ads on my blog probably already reduced traffic on its own. The reason is that ads slow down the loading speed of a page. It is well known that search engines punish long loading times and lower the rank of sites that take a long time to load. Thus, displaying ads might have led search engines to divert less traffic to my site. This is indeed what I observe: my average Google rank decreased considerably during the last year.

AMP might be an additional factor behind the slowdown in traffic on my blog. I do not remember exactly when, but at some point AMP was activated on my blog. AMP is a library that translates web pages into mobile pages that are compelling, smooth, and load nearly instantaneously. The advantage is obvious: AMP pages load incredibly fast, and search engine rank improves as loading time decreases. Unfortunately, AMP is very bad at rendering LaTeX code. And, as I use a lot of math in my posts, most of my posts look very bad once transformed by AMP. Hence, even though AMP decreased the loading time of my blog posts, I am pretty sure that it also increased my bounce rate. Thus, on December 16, 2018, I decided to no longer use AMP.

In August of this year, I felt that my old theme was outdated. Thus, I changed the theme of my blog, switching from *twenty ten* to *twenty fourteen*. I am not sure by how much this change influenced the visibility of my website, but I suspect that a new theme comes with a different loading time and different custom settings. Hence, bots and crawlers have to re-evaluate at least part of my blog, and the SEO ranking drops temporarily.

Finally, in May 2018 the European Union put the new *General Data Protection Regulation* (*GDPR*) into place. The law basically aims to give individuals control over their personal data. Importantly, from a blogger’s perspective, GDPR considers cookies personal data, since cookies can identify an individual. The issue with not using cookies in advertising is that ads cannot be personalized. Hence, companies’ willingness to pay drops substantially, as they cannot select their audience as precisely as with cookies. Thus, even though GDPR does not reduce traffic on your site, it still decreases your income per ad displayed in the European Union and other countries that implemented GDPR. GDPR might also have reduced the average CPM on my blog. The following figure plots the monthly CPM over time and shows the average CPM before and after GDPR was introduced. Unfortunately, I cannot attribute the decrease in the mean to the introduction of GDPR, because the composition of viewers also changes significantly during the summer. I would need a lot more data to clearly identify the effect of GDPR on ad revenue.

In Julia, you can draw Gamma-distributed random numbers with the `rand()` function, specifying the Gamma distribution via the `Gamma(a,b)` command. The parameters a and b are the shape and scale parameters of the Gamma distribution. This article provides a more generic overview of how to generate random numbers in Julia.

The following code example generates the variable A, which contains 10 random numbers that follow a Gamma distribution with shape parameter 1 and scale parameter 2.

using Distributions
A = rand(Gamma(1,2),10)
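
For comparison (an aside, not from the original post), Python's standard library offers the same draw through `random.gammavariate(alpha, beta)`, where alpha is the shape and beta the scale:

```python
import random

random.seed(42)  # for reproducibility
# 10 draws from a Gamma distribution with shape 1 and scale 2
A = [random.gammavariate(1, 2) for _ in range(10)]
print(A)  # ten non-negative floats; the expected value is shape * scale = 2
```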

Y = C + I + G + NX

As one can see from the equation above, aggregate demand (Y) equals consumption (C) plus investment (I) plus government spending (G) plus net exports (NX), i.e. how much we sell abroad to other countries on net.
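
To make the identity concrete, here is a small numerical illustration (the figures are invented, purely for exposition):

```python
# hypothetical national accounts, in billions
C = 700   # consumption
I = 150   # investment
G = 200   # government spending
NX = -50  # net exports (importing more than exporting on net)

Y = C + I + G + NX  # aggregate demand
print(Y)  # -> 1000
```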

According to Keynesian theory, aggregate demand determines the amount of available expenditure in an economy. Now, why should one care about available expenditure? Well, in Keynesian economics, available expenditure determines the means available in an economy to sustain labor hires in a given period. That is, in the Keynesian model, available expenditure is what keeps people at work. Boldly speaking, the amount of expenditure defines the amount of money available to pay the wages of workers. This concept is particularly important during a recession. Assume, for instance, that a shock hits the economy and aggregate demand decreases. This implies that demand for firms’ products drops, so firms sell fewer products and earn less money. Hence, at the end of the month firms have less money available to pay their employees, meaning that firms will be forced to lay off some workers, and unemployment increases. Thus, in a Keynesian setting, a drop in aggregate demand implies a decrease in the means available in an economy, leading to fewer jobs and higher unemployment.

In order for the Keynesian aggregate demand-employment relationship to work, Keynesians rely on an important assumption: price stickiness. That is, Keynesian economics typically assumes that nominal wages are sticky. Now, what do sticky wages mean? In order to answer this question, think of the wage as the price of labor. In addition, assume for a second that the wage behaves just like any other price. In a typical market, if demand for a certain product falls, then the price of this product will fall as well. If this mechanism were true also for the labor market, a drop in aggregate demand would mean that the price of labor (the wage) decreases, not that people lose their jobs. However, wages are unlike many other prices: they do not always adjust so quickly. Hence, we say that wages are sticky or rigid. Why is that? Why do wages not adjust quickly? There are several reasons. First, there might be a long-term contract between the employer and the employee. Second, there may be a law, such as a minimum wage law, that prevents wages from falling below a certain threshold. Finally, worker morale might also contribute to wage stickiness. Especially when aggregate demand is rising, workers could demand higher wages, but often do not ask for a raise for morale reasons.

It is important to understand that wage stickiness has severe consequences for employment. Once the flow of aggregate demand expenditure slows down and firms cannot cut wages, firms need to either fire workers or exit the market, i.e. declare bankruptcy. Moreover, the reduction in workers triggers a second-round feedback loop: once fewer people are working, the flow of aggregate demand expenditure decreases even more, because lower employment means lower production and thus lower earnings to be consumed, resulting in less investment and less aggregate demand. Therefore, at the end of the month, firms have even less money to pay wages and will be forced to lay off even more workers.

Keep in mind that in a typical Keynesian scenario, if consumption and investment are falling, government spending usually ends up falling as well. The reason is that lower aggregate demand means that firms produce and sell less, so less tax revenue is generated. Consequently, unless the government borrows money, a reduction in aggregate demand also reduces the government’s ability to spend. Thus, as government spending is part of aggregate demand, there will be an additional negative shock to aggregate demand.

Did we already experience sharp drops in aggregate demand in the past? John Maynard Keynes wrote his book in the 1930s, right in the aftermath of the Great Depression. To date, the Great Depression represents the most prominent reduction in aggregate demand experienced in modern history. Starting in 1929, many banks failed and many depositors lost their money; note that this was still a time before governmental deposit guarantees. The money supply fell by about a third and the stock market crashed. This caused consumer spending to decrease, investment to drop, and aggregate demand to fall, and the sharp reduction in aggregate demand led to the Great Depression, with high levels of unemployment. More recently, we experienced yet another prominent drop in aggregate demand: although the Great Recession of 2008 was much less severe than the Great Depression, it also had a considerable Keynesian element.

3) Keynesian Economics in an AS-AD model

4) How to get out of a recession?

5) Drawbacks of Keynesian Economics
