# Managing Data Frames with the `dplyr` package
```{r,echo=FALSE}
knitr::opts_chunk$set(comment = NA, prompt = TRUE, collapse = TRUE, tidy = FALSE)
options(width = 80)
```
[Watch a video of this chapter](https://youtu.be/aywFompr1F4)
## Data Frames
The *data frame* is a key data structure in statistics and in R. The basic structure of a data frame is that there is one observation per row and each column represents a variable, measure, feature, or characteristic of that observation. R has an internal implementation of data frames that is likely the one you will use most often. However, there are packages on CRAN that implement data frames via things like relational databases, which allow you to operate on very large data frames (but we won't discuss them here).
Given how central data frames are, it's important that we have good tools for dealing with them. R obviously has some built-in tools like the `subset()` function and the use of `[` and `$` operators to extract subsets of data frames. However, other operations, like filtering, re-ordering, and collapsing, can often be tedious in R, with syntax that is not very intuitive. The `dplyr` package is designed to mitigate a lot of these problems and to provide a highly optimized set of routines specifically for dealing with data frames.
## The `dplyr` Package
The `dplyr` package was developed by Hadley Wickham of RStudio and is an optimized and distilled version of his `plyr` package. The `dplyr` package does not provide any "new" functionality to R per se, in the sense that everything `dplyr` does could already be done with base R, but it *greatly* simplifies existing functionality in R.
One important contribution of the `dplyr` package is that it provides a "grammar" (in particular, verbs) for data manipulation and for operating on data frames. With this grammar, you can sensibly communicate what you are doing to a data frame in a way that other people can understand (assuming they also know the grammar). This is useful because it provides an abstraction for data manipulation that previously did not exist. Another useful contribution is that the `dplyr` functions are **very** fast, as many key operations are coded in C++.
## `dplyr` Grammar
Some of the key "verbs" provided by the `dplyr` package are
* `select`: return a subset of the columns of a data frame, using a flexible notation
* `filter`: extract a subset of rows from a data frame based on logical conditions
* `arrange`: reorder rows of a data frame
* `rename`: rename variables in a data frame
* `mutate`: add new variables/columns or transform existing variables
* `summarise` / `summarize`: generate summary statistics of different
variables in the data frame, possibly within strata
* `%>%`: the "pipe" operator is used to connect multiple verb actions together into a pipeline
The `dplyr` package has a number of its own data types that it takes advantage of. For example, there is a handy `print` method that prevents you from printing a lot of data to the console. Most of the time, these additional data types are transparent to the user and do not need to be worried about.
### Common `dplyr` Function Properties
All of the functions that we will discuss in this chapter have a few common characteristics (a short sketch follows the list). In particular,
1. The first argument is a data frame.
2. The subsequent arguments describe what to do with the data frame specified in the first argument, and you can refer to columns in the data frame directly without using the `$` operator (just use the column names).
3. The result of a function is a new data frame.
4. Data frames must be properly formatted and annotated for this to all be useful. In particular, the data must be [tidy](http://www.jstatsoft.org/v59/i10/paper). In short, there should be one observation per row, and each column should represent a feature or characteristic of that observation.
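A minimal sketch of what these conventions look like in practice (not run here; the `chicago` data frame and its `tmpd` column are introduced later in this chapter):
```{r,eval=FALSE}
## the first argument is a data frame; columns are referred to by bare name
hot <- filter(chicago, tmpd > 80)
## the result is itself a new data frame
class(hot)
```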
## Installing the `dplyr` package
The `dplyr` package can be installed from CRAN or from GitHub using the `devtools` package and the `install_github()` function. The GitHub repository will usually contain the latest updates to the package and the development version.
To install from CRAN, just run
```{r,eval=FALSE}
install.packages("dplyr")
```
To install from GitHub (with the `devtools` package already installed), you can run
```{r,eval=FALSE}
devtools::install_github("hadley/dplyr")
```
After installing the package it is important that you load it into your R session with the `library()` function.
```{r}
library(dplyr)
```
You may get some warnings when the package is loaded because there are functions in the `dplyr` package that have the same name as functions in other packages. For now you can ignore the warnings.
NOTE: If you ever run into a problem where R is getting confused over which function you mean to call, you can specify the *full name* of a function using the `::` operator. The full name is simply the name of the package in which the function is defined, followed by `::` and then the function name. For example, the `filter` function from the `dplyr` package has the full name `dplyr::filter`. Calling functions with their full name will resolve any confusion over which function was meant to be called.
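For example, both `dplyr` and the `stats` package provide a function named `filter`, so the fully qualified names make the intent unambiguous. A small illustration (the `dplyr::filter()` line assumes the `chicago` data frame loaded later in this chapter):
```{r,eval=FALSE}
dplyr::filter(chicago, tmpd > 80)  ## dplyr's row-subsetting verb
stats::filter(1:10, rep(1/3, 3))   ## stats' linear filtering function
```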
## `select()`
For the examples in this chapter we will be using a dataset containing air pollution and temperature data for the [city of Chicago](http://www.biostat.jhsph.edu/~rpeng/leanpub/rprog/chicago_data.zip) in the U.S. The dataset is available from my web site.
After unzipping the archive, you can load the data into R using the `readRDS()` function.
```{r}
chicago <- readRDS("data/chicago.rds")
```
You can see some basic characteristics of the dataset with the `dim()` and `str()` functions.
```{r}
dim(chicago)
str(chicago)
```
The `select()` function can be used to select columns of a data frame that you want to focus on. Often you'll have a large data frame containing "all" of the data, but any *given* analysis might only use a subset of variables or observations. The `select()` function allows you to get the few columns you might need.
Suppose we wanted to take the first 3 columns only. There are a few ways to do this. We could for example use numerical indices. But we can also use the names directly.
```{r}
names(chicago)[1:3]
subset <- select(chicago, city:dptp)
head(subset)
```
Note that the `:` normally cannot be used with names or strings, but inside the `select()` function you can use it to specify a range of variable names.
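As mentioned above, numeric column positions also work inside `select()`; a quick sketch of the equivalent call (not run):
```{r,eval=FALSE}
subset <- select(chicago, 1:3)  ## same first three columns, selected by position
head(subset)
```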
You can also *omit* variables using the `select()` function by prefixing them with a negative sign. With `select()` you can do
```{r,eval=FALSE}
select(chicago, -(city:dptp))
```
which indicates that we should include every variable *except* the variables `city` through `dptp`.
The equivalent code in base R would be
```{r,eval=FALSE}
i <- match("city", names(chicago))
j <- match("dptp", names(chicago))
head(chicago[, -(i:j)])
```
Not super intuitive, right?
The `select()` function also provides a special syntax that lets you specify variable names based on patterns. So, for example, if we wanted to keep every variable that ends with a "2", we could do
```{r}
subset <- select(chicago, ends_with("2"))
str(subset)
```
Or if we wanted to keep every variable that starts with a "d", we could do
```{r}
subset <- select(chicago, starts_with("d"))
str(subset)
```
You can also use more general regular expressions if necessary. See the help page (`?select`) for more details.
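For instance, the `matches()` helper takes a regular expression; a brief sketch (not run) that keeps every variable whose name contains a digit:
```{r,eval=FALSE}
subset <- select(chicago, matches("[0-9]"))
str(subset)
```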
## `filter()`
The `filter()` function is used to extract subsets of rows from a data frame. This function is similar to the existing `subset()` function in R but is quite a bit faster in my experience.
Suppose we wanted to extract the rows of the `chicago` data frame where the levels of PM2.5 are greater than 30 (which is a reasonably high level). We could do
```{r}
chic.f <- filter(chicago, pm25tmean2 > 30)
str(chic.f)
```
You can see that there are now only `r nrow(chic.f)` rows in the data frame, and the distribution of the `pm25tmean2` values is shown below.
```{r}
summary(chic.f$pm25tmean2)
```
We can place an arbitrarily complex logical sequence inside of `filter()`, so we could for example extract the rows where PM2.5 is greater than 30 *and* temperature is greater than 80 degrees Fahrenheit.
```{r}
chic.f <- filter(chicago, pm25tmean2 > 30 & tmpd > 80)
select(chic.f, date, tmpd, pm25tmean2)
```
Now there are only `r nrow(chic.f)` observations where both of those conditions are met.
## `arrange()`
The `arrange()` function is used to reorder rows of a data frame according to one of the variables/columns. Reordering rows of a data frame (while preserving corresponding order
of other columns) is normally a pain to do in R. The `arrange()` function simplifies the process quite a bit.
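For comparison, the base R approach uses `order()` together with row indexing, which is easy to get wrong; roughly:
```{r,eval=FALSE}
chicago <- chicago[order(chicago$date), ]
```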
Here we can order the rows of the data frame by date, so that the first row is the earliest (oldest) observation and the last row is the latest (most recent) observation.
```{r}
chicago <- arrange(chicago, date)
```
We can now check the first few rows
```{r}
head(select(chicago, date, pm25tmean2), 3)
```
and the last few rows.
```{r}
tail(select(chicago, date, pm25tmean2), 3)
```
Columns can be arranged in descending order too by using the special `desc()` operator.
```{r}
chicago <- arrange(chicago, desc(date))
```
Looking at the first three and last three rows shows the dates in descending order.
```{r}
head(select(chicago, date, pm25tmean2), 3)
tail(select(chicago, date, pm25tmean2), 3)
```
## `rename()`
Renaming a variable in a data frame in R is surprisingly hard to do! The `rename()` function is designed to make this process easier.
Here you can see the names of the first five variables in the `chicago` data frame.
```{r}
head(chicago[, 1:5], 3)
```
The `dptp` column is supposed to represent the dew point temperature and the `pm25tmean2` column provides the PM2.5 data. However, these names are pretty obscure or awkward and should probably be renamed to something more sensible.
```{r}
chicago <- rename(chicago, dewpoint = dptp, pm25 = pm25tmean2)
head(chicago[, 1:5], 3)
```
The syntax inside the `rename()` function is to have the new name on the left-hand side of the `=` sign and the old name on the right-hand side.
I leave it as an exercise for the reader to figure out how you do this in base R without `dplyr`.
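(For the curious, one possible base R approach, shown only for comparison and not run here, since the columns above have already been renamed:)
```{r,eval=FALSE}
## assign new names by matching on the original column names
names(chicago)[names(chicago) == "dptp"] <- "dewpoint"
names(chicago)[names(chicago) == "pm25tmean2"] <- "pm25"
```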
## `mutate()`
The `mutate()` function exists to compute transformations of variables in a data frame. Often, you want to create new variables that are derived from existing variables and `mutate()` provides a clean interface for doing that.
For example, with air pollution data, we often want to *detrend* the data by subtracting the mean from the data. That way we can look at whether a given day's air pollution level is higher than or less than average (as opposed to looking at its absolute level).
Here we create a `pm25detrend` variable that subtracts the mean from the `pm25` variable.
```{r}
chicago <- mutate(chicago, pm25detrend = pm25 - mean(pm25, na.rm = TRUE))
head(chicago)
```
There is also the related `transmute()` function, which does the same thing as `mutate()` but then *drops all non-transformed variables*.
Here we detrend the PM10 and ozone (O3) variables.
```{r}
head(transmute(chicago,
pm10detrend = pm10tmean2 - mean(pm10tmean2, na.rm = TRUE),
o3detrend = o3tmean2 - mean(o3tmean2, na.rm = TRUE)))
```
Note that there are only two columns in the transmuted data frame.
## `group_by()`
The `group_by()` function is used to generate summary statistics from the data frame within strata defined by a variable. For example, in this air pollution dataset, you might want to know what the average annual level of PM2.5 is. So the stratum is the year, and that is something we can derive from the `date` variable. In conjunction with the `group_by()` function we often use the `summarize()` function (or `summarise()` for some parts of the world).
The general operation here is a combination of splitting a data frame into separate pieces defined by a variable or group of variables (`group_by()`), and then applying a summary function across those subsets (`summarize()`).
First, we can create a `year` variable using `as.POSIXlt()`.
```{r}
chicago <- mutate(chicago, year = as.POSIXlt(date)$year + 1900)
```
Now we can create a separate data frame that splits the original data frame by year.
```{r}
years <- group_by(chicago, year)
```
Finally, we compute summary statistics for each year in the data frame with the `summarize()` function.
```{r}
summarize(years, pm25 = mean(pm25, na.rm = TRUE),
o3 = max(o3tmean2, na.rm = TRUE),
no2 = median(no2tmean2, na.rm = TRUE))
```
`summarize()` returns a data frame with `year` as the first column, and then the annual averages of `pm25`, `o3`, and `no2`.
In a slightly more complicated example, we might want to know what the average levels of ozone (`o3`) and nitrogen dioxide (`no2`) are within quintiles of `pm25`. A slicker way to do this would be through a regression model, but we can actually do this quickly with `group_by()` and `summarize()`.
First, we can create a categorical variable of `pm25` divided into quintiles.
```{r}
qq <- quantile(chicago$pm25, seq(0, 1, 0.2), na.rm = TRUE)
chicago <- mutate(chicago, pm25.quint = cut(pm25, qq))
```
Now we can group the data frame by the `pm25.quint` variable.
```{r}
quint <- group_by(chicago, pm25.quint)
```
Finally, we can compute the mean of `o3` and `no2` within quintiles of `pm25`.
```{r}
summarize(quint, o3 = mean(o3tmean2, na.rm = TRUE),
no2 = mean(no2tmean2, na.rm = TRUE))
```
From the table, it seems there isn't a strong relationship between `pm25` and `o3`, but there appears to be a positive correlation between `pm25` and `no2`. More sophisticated statistical modeling can help to provide precise answers to these questions, but a simple application of `dplyr` functions can often get you most of the way there.
## `%>%`
The pipeline operator `%>%` is very handy for stringing together multiple `dplyr` functions in a sequence of operations. Notice above that every time we wanted to apply more than one function, the code got buried in a sequence of nested function calls that is difficult to read, i.e.
```{r,eval=FALSE}
third(second(first(x)))
```
This nesting is not a natural way to think about a sequence of operations. The `%>%` operator allows you to string operations in a left-to-right fashion, i.e.
```{r,eval=FALSE}
first(x) %>% second %>% third
```
Take the example that we just did in the last section where we computed the mean of `o3` and `no2` within quintiles of `pm25`. There we had to
1. create a new variable `pm25.quint`
2. split the data frame by that new variable
3. compute the mean of `o3` and `no2` in the sub-groups defined by `pm25.quint`
That can be done with the following sequence in a single R expression.
```{r}
mutate(chicago, pm25.quint = cut(pm25, qq)) %>%
group_by(pm25.quint) %>%
summarize(o3 = mean(o3tmean2, na.rm = TRUE),
no2 = mean(no2tmean2, na.rm = TRUE))
```
This way we don't have to create a set of temporary variables along the way or create a massive nested sequence of function calls.
Notice in the above code that I pass the `chicago` data frame to the first call to `mutate()`, but then afterwards I do not have to pass the first argument to `group_by()` or `summarize()`. Once you travel down the pipeline with `%>%`, the first argument is taken to be the output of the previous element in the pipeline.
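In other words, `x %>% f(y)` is roughly equivalent to `f(x, y)`; a tiny illustration (not run):
```{r,eval=FALSE}
chicago %>% filter(pm25 > 30)  ## same as filter(chicago, pm25 > 30)
```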
Another example might be computing the average pollutant level by month. This could be useful to see if there are any seasonal trends in the data.
```{r}
mutate(chicago, month = as.POSIXlt(date)$mon + 1) %>%
group_by(month) %>%
summarize(pm25 = mean(pm25, na.rm = TRUE),
o3 = max(o3tmean2, na.rm = TRUE),
no2 = median(no2tmean2, na.rm = TRUE))
```
Here we can see that `o3` tends to be low in the winter months and high in the summer while `no2` is higher in the winter and lower in the summer.
## Summary
The `dplyr` package provides a concise set of operations for managing data frames. With these functions we can do a number of complex operations in just a few lines of code. In particular, we can often conduct the beginnings of an exploratory analysis with the powerful combination of `group_by()` and `summarize()`.
Once you learn the `dplyr` grammar there are a few additional benefits
* `dplyr` can work with other data frame "backends" such as SQL databases. There is an SQL interface for relational databases via the `DBI` package (a brief sketch follows this list)
* `dplyr` can be integrated with the `data.table` package for large fast tables
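A minimal sketch of the database backend (not run; it assumes the `dbplyr` and `RSQLite` packages are installed, and uses the `year` variable created earlier):
```{r,eval=FALSE}
library(DBI)
con <- dbConnect(RSQLite::SQLite(), ":memory:")  ## in-memory SQLite database
copy_to(con, chicago, "chicago")                 ## copy the data frame into the database
tbl(con, "chicago") %>%                          ## lazy table reference; dplyr verbs become SQL
        group_by(year) %>%
        summarize(pm25 = mean(pm25, na.rm = TRUE))
```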
The `dplyr` package is a handy way to both simplify and speed up your data frame management code. It's rare that you get such a combination at the same time!