
Commit 625d218 ("some fixes")
1 parent: 84ebe7e

File tree

287 files changed

+26038
-26770
lines changed


.texpadtmp/ITER.aux

+45
@@ -0,0 +1,45 @@
+\relax
+\providecommand\hyper@newdestlabel[2]{}
+\providecommand\BKM@entry[2]{}
+\providecommand\HyperFirstAtBeginDocument{\AtBeginDocument}
+\HyperFirstAtBeginDocument{\ifx\hyper@anchor\@undefined
+\global\let\oldcontentsline\contentsline
+\gdef\contentsline#1#2#3#4{\oldcontentsline{#1}{#2}{#3}}
+\global\let\oldnewlabel\newlabel
+\gdef\newlabel#1#2{\newlabelxx{#1}#2}
+\gdef\newlabelxx#1#2#3#4#5#6{\oldnewlabel{#1}{{#2}{#3}}}
+\AtEndDocument{\ifx\hyper@anchor\@undefined
+\let\contentsline\oldcontentsline
+\let\newlabel\oldnewlabel
+\fi}
+\fi}
+\global\let\hyper@last\relax
+\gdef\HyperFirstAtBeginDocument#1{#1}
+\providecommand\HyField@AuxAddToFields[1]{}
+\providecommand\HyField@AuxAddToCoFields[2]{}
+\BKM@entry{id=1,dest={636861707465722A2E32},srcline={191}}{5C3337365C3337375C303030505C303030725C303030655C303030665C303030615C303030635C30303065}
+\newlabel{preface}{{}{5}}
+\@writefile{toc}{\contentsline {chapter}{Preface}{5}{chapter*.2}\protected@file@percent }
+\newlabel{conventions-used-in-this-book}{{}{5}}
+\@writefile{toc}{\contentsline {subsubsection}{Conventions Used in this Book}{5}{section*.3}\protected@file@percent }
+\newlabel{acknowledgement}{{}{6}}
+\@writefile{toc}{\contentsline {subsubsection}{Acknowledgement}{6}{section*.4}\protected@file@percent }
+\BKM@entry{id=2,dest={636861707465722E31},srcline={220}}{5C3337365C3337375C303030495C3030306E5C303030745C303030725C3030306F5C303030645C303030755C303030635C303030745C303030695C3030306F5C3030306E}
+\@writefile{toc}{\contentsline {chapter}{\numberline {1}Introduction}{7}{chapter.1}\protected@file@percent }
+\@writefile{lof}{\addvspace {10\p@ }}
+\@writefile{lot}{\addvspace {10\p@ }}
+\newlabel{introduction}{{1}{7}}
+\BKM@entry{id=3,dest={73656374696F6E2E312E31},srcline={252}}{5C3337365C3337375C303030435C3030306F5C3030306C5C3030306F5C303030705C303030685C3030306F5C3030306E}
+\@writefile{toc}{\contentsline {section}{\numberline {1.1}Colophon}{9}{section.1.1}\protected@file@percent }
+\newlabel{colophon}{{1.1}{9}}
+\BKM@entry{id=4,dest={73656374696F6E2E312E32},srcline={436}}{5C3337365C3337375C303030415C3030305C3034305C303030565C303030655C303030725C303030795C3030305C3034305C303030535C303030685C3030306F5C303030725C303030745C3030305C3034305C303030495C3030306E5C303030745C303030725C3030306F5C303030645C303030755C303030635C303030745C303030695C3030306F5C3030306E5C3030305C3034305C303030745C3030306F5C3030305C3034305C303030615C3030306E5C303030645C3030305C3034305C303030525C303030535C303030745C303030755C303030645C303030695C3030306F}
+\@writefile{toc}{\contentsline {section}{\numberline {1.2}A Very Short Introduction to \texttt {R} and \emph {RStudio}}{13}{section.1.2}\protected@file@percent }
+\newlabel{a-very-short-introduction-to-and-rstudio}{{1.2}{13}}
+\newlabel{basics}{{1.2}{13}}
+\@writefile{toc}{\contentsline {subsubsection}{\texttt {R} Basics}{13}{section*.5}\protected@file@percent }
+\@writefile{lof}{\contentsline {figure}{\numberline {1.1}{\ignorespaces RStudio: the four panes}}{14}{figure.1.1}\protected@file@percent }
+\newlabel{fig:unnamed-chunk-8}{{1.1}{14}}
+\newlabel{vectors}{{1.2}{14}}
+\@writefile{toc}{\contentsline {subsubsection}{Vectors}{14}{section*.6}\protected@file@percent }
+\newlabel{functions}{{1.2}{15}}
+\@writefile{toc}{\contentsline {subsubsection}{Functions}{15}{section*.7}\protected@file@percent }

.texpadtmp/ITER.log

+825
Large diffs are not rendered by default.

.texpadtmp/ITER.synctex.gz

83.2 KB
Binary file not shown.

.texpadtmp/ITER.toc

+9
@@ -0,0 +1,9 @@
+\contentsline {chapter}{Preface}{5}{chapter*.2}%
+\contentsline {subsubsection}{Conventions Used in this Book}{5}{section*.3}%
+\contentsline {subsubsection}{Acknowledgement}{6}{section*.4}%
+\contentsline {chapter}{\numberline {1}Introduction}{7}{chapter.1}%
+\contentsline {section}{\numberline {1.1}Colophon}{9}{section.1.1}%
+\contentsline {section}{\numberline {1.2}A Very Short Introduction to \texttt {R} and \emph {RStudio}}{13}{section.1.2}%
+\contentsline {subsubsection}{\texttt {R} Basics}{13}{section*.5}%
+\contentsline {subsubsection}{Vectors}{14}{section*.6}%
+\contentsline {subsubsection}{Functions}{15}{section*.7}%

02-ch2.Rmd

+7-23
@@ -46,7 +46,6 @@ knitr::kable(pdfdata, format = my_output, caption = "PDF and CDF of a Dice Roll"
 
 We can easily plot both functions using `r ttcode("R")`. Since the probability equals $1/6$ for each outcome, we set up the vector `r ttcode("probability")` by using the function `r ttcode("rep()")` which replicates a given value a specified number of times.
 
-<div class="unfolded">
 ```{r, eval = T, message = F, warning = F, fig.align='center', fig.pos="h"}
 # generate the vector of probabilities
 probability <- rep(1/6, 6)
@@ -56,11 +55,9 @@ plot(probability,
 xlab = "outcomes",
 main = "Probability Distribution")
 ```
-</div>
 
 For the cumulative probability distribution we need the cumulative probabilities, i.e., we need the cumulative sums of the vector `r ttcode("probability")`. These sums can be computed using `r ttcode("cumsum()")`.
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # generate the vector of cumulative probabilities
 cum_probability <- cumsum(probability)
@@ -70,7 +67,6 @@ plot(cum_probability,
 xlab = "outcomes",
 main = "Cumulative Probability Distribution")
 ```
-</div>
 
 ### Bernoulli Trials {-}
 
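The dice-roll chunks in the hunks above build the PMF with `rep()` and the CDF with `cumsum()`. The same computation can be sketched in Python (NumPy is my substitution here, not part of the book's R code):

```python
import numpy as np

# fair die: probability 1/6 for each of the six outcomes (rep(1/6, 6) in R)
probability = np.full(6, 1 / 6)

# cumulative probabilities (cumsum(probability) in R)
cum_probability = np.cumsum(probability)

print(probability)       # six entries of 1/6
print(cum_probability)   # increases in steps of 1/6, ending at 1
```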
@@ -449,15 +445,15 @@ f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp{-(x - \mu)^2/(2 \sigma^2)}.
 For the standard normal distribution we have $\mu=0$ and $\sigma=1$. Standard normal variates are often denoted by $Z$. Usually, the standard normal PDF is denoted by $\phi$ and the standard normal CDF is denoted by $\Phi$. Hence,
 $$ \phi(c) = \Phi'(c) \ \ , \ \ \Phi(c) = P(Z \leq c) \ \ , \ \ Z \sim \mathcal{N}(0,1).$$ Note that the notation X $\sim$ Y reads as "X is distributed as Y". In `r ttcode("R")`, we can conveniently obtain densities of normal distributions using the function `r ttcode("dnorm()")`. Let us draw a plot of the standard normal density function using `r ttcode("curve()")` together with `r ttcode("dnorm()")`.
 
-<div class="unfolded">
+
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # draw a plot of the N(0,1) PDF
 curve(dnorm(x),
 xlim = c(-3.5, 3.5),
 ylab = "Density",
 main = "Standard Normal Density Function")
 ```
-</div>
+
 
 We can obtain the density at different positions by passing a vector to `r ttcode("dnorm()")`.
 
@@ -468,15 +464,14 @@ dnorm(x = c(-1.96, 0, 1.96))
 
 Similar to the PDF, we can plot the standard normal CDF using `r ttcode("curve()")`. We could use `r ttcode("dnorm()")` for this but it is much more convenient to rely on `r ttcode("pnorm()")`.
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # plot the standard normal CDF
 curve(pnorm(x),
 xlim = c(-3.5, 3.5),
 ylab = "Probability",
 main = "Standard Normal Cumulative Distribution Function")
 ```
-</div>
+
 
 We can also use `r ttcode("R")` to calculate the probability of events associated with a standard normal variate.
 
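For readers comparing toolkits, the `dnorm()` and `pnorm()` calls quoted above correspond to the density and CDF methods of SciPy's `norm` object (SciPy is my substitution for the book's R code):

```python
from scipy.stats import norm

# dnorm(x = c(-1.96, 0, 1.96)) in R: standard normal densities
densities = norm.pdf([-1.96, 0, 1.96])

# pnorm(1.96) in R: P(Z <= 1.96)
p = norm.cdf(1.96)

print(densities)
print(p)
```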
@@ -644,7 +639,6 @@ it holds that
 $$ Z_1^2+Z_2^2+Z_3^2 \sim \chi^2_3. \tag{2.3} $$
 Using the code below, we can display the PDF and the CDF of a $\chi^2_3$ random variable in a single plot. This is achieved by setting the argument `r ttcode("add = TRUE")` in the second call of `r ttcode("curve()")`. Further, we adjust the limits of both axes using `r ttcode("xlim")` and `r ttcode("ylim")` and choose different colors to make both functions better distinguishable. The plot is completed by adding a legend with help of `r ttcode("legend()")`.
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # plot the PDF
 curve(dchisq(x, df = 3),
@@ -666,15 +660,14 @@ legend("topleft",
 col = c("blue", "red"),
 lty = c(1, 1))
 ```
-</div>
+
 
 Since the outcomes of a $\chi^2_M$ distributed random variable are always positive, the support of the related PDF and CDF is $\mathbb{R}_{\geq0}$.
 
 As expectation and variance depend (solely!) on the degrees of freedom, the distribution's shape changes drastically if we vary the number of squared standard normals that are summed up. This relation is often depicted by overlaying densities for different $M$, see the <a href="https://en.wikipedia.org/wiki/Chi-squared_distribution">Wikipedia Article</a>.
 
 We reproduce this here by plotting the density of the $\chi_1^2$ distribution on the interval $[0,15]$ with `r ttcode("curve()")`. In the next step, we loop over degrees of freedom $M=2,...,7$ and add a density curve for each $M$ to the plot. We also adjust the line color for each iteration of the loop by setting `r ttcode("col = M")`. At last, we add a legend that displays degrees of freedom and the associated colors.
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # plot the density for M=1
 curve(dchisq(x, df = 1),
@@ -698,7 +691,7 @@ legend("topright",
 lty = 1,
 title = "D.F.")
 ```
-</div>
+
 
 Increasing the degrees of freedom shifts the distribution to the right (the mode becomes larger) and increases the dispersion (the distribution's variance grows).
 
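The claim that expectation and variance of the $\chi^2_M$ distribution depend only on the degrees of freedom ($E[X] = M$, $\mathrm{Var}(X) = 2M$) is easy to verify numerically; a small sketch using SciPy (my substitution for the book's R code):

```python
from scipy.stats import chi2

# E[X] = M and Var(X) = 2M for a chi-squared variable with M degrees of freedom
for M in range(1, 8):
    mean, var = chi2.stats(M, moments="mv")
    print(M, float(mean), float(var))
```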
@@ -722,7 +715,6 @@ A $t_M$ distributed random variable $X$ has an expectation if $M>1$ and it has a
 
 Let us plot some $t$ distributions with different $M$ and compare them to the standard normal distribution.
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # plot the standard normal density
 curve(dnorm(x),
@@ -756,7 +748,6 @@ legend("topright",
 col = 1:4,
 lty = c(2, 1, 1, 1))
 ```
-</div>
 
 The plot illustrates what has been said in the previous paragraph: as the degrees of freedom increase, the shape of the $t$ distribution comes closer to that of a standard normal bell curve. Already for $M=25$ we find little difference to the standard normal density. If $M$ is small, we find the distribution to have heavier tails than a standard normal, i.e., it has a "fatter" bell shape.
 
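The heavier tails for small $M$ and the convergence to the normal for growing $M$ can be quantified directly rather than read off a plot; a sketch with SciPy's `t` and `norm` (my substitution for the R code quoted above):

```python
from scipy.stats import norm, t

# tail probability P(T > 2): heavier tails for small M, approaching the normal tail
for M in (2, 5, 25):
    print(M, t.sf(2, df=M))

print("normal:", norm.sf(2))

# for M = 25 the density at zero is already close to the normal's
print(t.pdf(0, df=25), norm.pdf(0))
```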
@@ -777,7 +768,6 @@ pf(2, df1 = 3, df2 = 14, lower.tail = F)
 
 We can visualize this probability by drawing a line plot of the related density and adding a color shading with `r ttcode("polygon()")`.
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # define coordinate vectors for vertices of the polygon
 x <- c(2, seq(2, 10, 0.01), 10)
@@ -793,7 +783,7 @@ curve(df(x ,3 ,14),
 # draw the polygon
 polygon(x, y, col = "orange")
 ```
-</div>
+
 
 The $F$ distribution is related to many other distributions. An important special case encountered in econometrics arises if the denominator degrees of freedom are large such that the $F_{M,n}$ distribution can be approximated by the $F_{M,\infty}$ distribution which turns out to be simply the distribution of a $\chi^2_M$ random variable divided by its degrees of freedom $M$,
 
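The R call `pf(2, df1 = 3, df2 = 14, lower.tail = F)` quoted above computes $P(F_{3,14} > 2)$; the SciPy analogue is the survival function, and the large-$n$ approximation by $\chi^2_M/M$ can be checked the same way (SciPy is my substitution for the book's R code):

```python
from scipy.stats import chi2, f

# P(F > 2) for an F(3, 14) variable, as in pf(..., lower.tail = F)
p = f.sf(2, dfn=3, dfd=14)
print(p)

# for a huge denominator df, F(3, n) behaves like chi2_3 / 3, so
# P(F > 2) approaches P(chi2_3 / 3 > 2) = P(chi2_3 > 6)
print(f.sf(2, dfn=3, dfd=10**6), chi2.sf(6, df=3))
```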
@@ -894,7 +884,6 @@ VarS
 
 So the distribution of $S$ is known. It is also evident that its distribution differs considerably from the marginal distribution, i.e., the distribution of a single dice roll's outcome, $D$. Let us visualize this using bar plots.
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # divide the plotting area into one row with two columns
 par(mfrow = c(1, 2))
@@ -919,8 +908,6 @@ barplot(probability,
 space = 0,
 main = "Outcome of a Single Dice Roll")
 ```
-</div>
-
 
 Many econometric procedures deal with averages of sampled data. It is typically assumed that observations are drawn randomly from a larger, unknown population. As demonstrated for the sample function $S$, computing an average of a random sample has the effect that the average is a random variable itself. This random variable in turn has a probability distribution, called the sampling distribution. Knowledge about the sampling distribution of the average is therefore crucial for understanding the performance of econometric procedures.
 
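Assuming $S$ is the sum of two independent fair dice rolls, as in the book's example, its exact PMF, expectation, and variance can be reproduced by convolving the marginal distribution with itself (a NumPy sketch, my addition):

```python
import numpy as np

# marginal distribution of a single fair die (this is the assumption)
die = np.full(6, 1 / 6)

# PMF of S = D1 + D2 via convolution; the support of S is 2, ..., 12
pmf_S = np.convolve(die, die)
support = np.arange(2, 13)

ES = np.sum(support * pmf_S)                 # expectation of S
VarS = np.sum((support - ES) ** 2 * pmf_S)   # variance of S
print(ES, VarS)
```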
@@ -994,7 +981,6 @@ A straightforward approach to examine the distribution of univariate numerical d
 
 Using `r ttcode("curve()")`, we overlay the histogram with a red line, the theoretical density of a $\mathcal{N}(0, 0.1)$ random variable. Remember to use the argument `r ttcode("add = TRUE")` to add the curve to the current plot. Otherwise `r ttcode("R")` will open a new graphic device and discard the previous plot!^[*Hint:* `r ttcode("T")` and `r ttcode("F")` are alternatives for `r ttcode("TRUE")` and `r ttcode("FALSE")`.]
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # Plot the density histogram
 hist(sample.avgs,
@@ -1009,7 +995,6 @@ curve(dnorm(x, sd = 1/sqrt(n)),
 lwd = "2",
 add = T)
 ```
-</div>
 
 The sampling distribution of $\overline{Y}$ is indeed very close to that of a $\mathcal{N}(0, 0.1)$ distribution so the Monte Carlo simulation supports the theoretical claim.
 
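The Monte Carlo experiment above can be sketched in a few lines of NumPy (my substitution for the R chunk, assuming $n = 10$ so that the theoretical variance of the averages is $1/n = 0.1$, matching the $\mathcal{N}(0, 0.1)$ claim):

```python
import numpy as np

# sampling distribution of the mean of n = 10 standard normal draws;
# theory says the averages are N(0, 1/n) distributed
rng = np.random.default_rng(1)
n, reps = 10, 10000
sample_avgs = rng.standard_normal((reps, n)).mean(axis=1)

print(sample_avgs.mean())  # close to 0
print(sample_avgs.var())   # close to 1/n = 0.1
```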
@@ -1023,7 +1008,6 @@ To visualize the claim stated in equation (<a href="#mjx-eqn-2.3">2.3</a>), we p
 
 Again, we produce a density estimate for the distribution underlying our simulated data using a density histogram and overlay it with a line graph of the theoretical density function of the $\chi^2_3$ distribution.
 
-<div class="unfolded">
 ```{r, echo = T, eval = T, message = F, warning = F, fig.align='center'}
 # number of repetitions
 reps <- 10000
@@ -1052,7 +1036,7 @@ curve(dchisq(x, df = DF),
 col = "red",
 add = T)
 ```
-</div>
+
 
 ### Large Sample Approximations to Sampling Distributions {-}
 
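The simulation of equation (2.3) quoted above sums `DF = 3` squared standard normals over many repetitions; a compact NumPy sketch (my substitution for the R chunk) that checks the first two moments of the simulated draws against the $\chi^2_3$ values:

```python
import numpy as np

# sum of three squared standard normals follows a chi-squared distribution
# with 3 df, so the draws should have mean near 3 and variance near 6
rng = np.random.default_rng(42)
reps, DF = 10000, 3
draws = (rng.standard_normal((reps, DF)) ** 2).sum(axis=1)

print(draws.mean())  # close to E[chi2_3] = 3
print(draws.var())   # close to Var(chi2_3) = 6
```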