The data.table R package is widely considered one of the fastest R packages for data manipulation. This tutorial walks through the data.table package with worked examples and practice questions.
Analysts often say that R does not cope well with big datasets (> 10 GB) because it is not memory efficient and loads everything into RAM. The data.table package was built to change that perception: it is designed to be concise and fast. Many benchmarks have compared dplyr with data.table, and data.table has consistently come out ahead; its performance also holds up well against Python's pandas package. On CRAN, more than 200 packages depend on data.table, which places it among the most depended-upon R packages.
The general form of data.table syntax is shown below :
DT[ i , j , by]
Here, i specifies the rows to subset, j the columns to select or the expression to compute, and by the variable(s) to group by. The data.table syntax is not restricted to these three parameters; other arguments (such as with, .SDcols, keyby and roll) can be added as well. These arguments will be explained in later parts of this tutorial.
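To see how the three parts work together, here is a minimal sketch on a small made-up table (the flights data used in the rest of this tutorial is loaded in the next section) :

library(data.table)
DT = data.table(grp = c("a", "a", "b"), val = c(10, 20, 30))
# i filters rows, j computes on columns, by groups
DT[val > 5, .(total = sum(val)), by = grp]
#    grp total
# 1:   a    30
# 2:   b    30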
How to Install and Load the data.table Package
install.packages("data.table")
# load required library
library(data.table)
In the data.table package, the fread() function is used to read data from your computer or from a web page. It is the equivalent of base R's read.csv() function, but much faster.
mydata = fread("https://github.com/arunsrinivasan/satrdays-workshop/raw/master/flights_2014.csv")

Describe Data
This dataset contains 253K observations and 17 columns. It holds information about flights' arrival and departure times, delays, cancellations and destinations in the year 2014.
nrow(mydata)
[1] 253316

ncol(mydata)
[1] 17

names(mydata)
 [1] "year"      "month"     "day"       "dep_time"  "dep_delay" "arr_time"  "arr_delay"
 [8] "cancelled" "carrier"   "tailnum"   "flight"    "origin"    "dest"      "air_time"
[15] "distance"  "hour"      "min"

head(mydata)
   year month day dep_time dep_delay arr_time arr_delay cancelled carrier tailnum flight
1: 2014     1   1      914        14     1238        13         0      AA  N338AA      1
2: 2014     1   1     1157        -3     1523        13         0      AA  N335AA      3
3: 2014     1   1     1902         2     2224         9         0      AA  N327AA     21
4: 2014     1   1      722        -8     1014       -26         0      AA  N3EHAA     29
5: 2014     1   1     1347         2     1706         1         0      AA  N319AA    117
6: 2014     1   1     1824         4     2145         0         0      AA  N3DEAA    119
   origin dest air_time distance hour min
1:    JFK  LAX      359     2475    9  14
2:    JFK  LAX      363     2475   11  57
3:    JFK  LAX      351     2475   19   2
4:    LGA  PBI      157     1035    7  22
5:    JFK  LAX      350     2475   13  47
6:    EWR  LAX      339     2454   18  24
The function is.data.table() checks whether the object is a data.table. If it is not a data.table, you can convert it into a data.table using the as.data.table() function.
is.data.table(mydata)
mydata = as.data.table(mydata)
Selecting or Keeping Columns
Suppose you need to select only the 'origin' column. You can use the code below -
dat1 = mydata[ , origin] # returns a vector
The above line of code returns a vector, not a data.table. To get the result in data.table format, run the code below :
dat1 = mydata[ , .(origin)] # returns a data.table

It can also be written the data.frame way :
dat1 = mydata[, c("origin")]

Keeping a column based on column position
dat2 = mydata[, 2]
In this code, we are selecting the second column from mydata.
Keeping Multiple Columns
dat3 = mydata[, .(origin, year, month, hour)]

Keeping multiple columns based on column position
dat4 = mydata[, c(2:4)]

Dropping a Column
Suppose you want to keep all the variables except one column, say 'origin'. It can easily be done by adding the ! sign (which implies negation in R).
dat5 = mydata[, !c("origin")]

Dropping Multiple Columns
dat6 = mydata[, !c("origin", "year", "month")]

Keeping variables that contain 'dep'
You can use the %like% operator to find a pattern. It is similar to base R's grepl() function, SQL's LIKE operator and SAS's CONTAINS function.
dat7 = mydata[, names(mydata) %like% "dep", with = FALSE]

Note : with = FALSE is needed here; without it, the expression returns a logical vector instead of the matching columns.
Rename Variables
You can rename variables with the setnames() function. In the following code, we are renaming the variable 'dest' to 'Destination'.
setnames(mydata, c("dest"), c("Destination"))
To rename multiple variables, simply supply vectors of old and new names on both sides :

setnames(mydata, c("dest","origin"), c("Destination", "origin.of.flight"))

Note : If you ran the previous command, 'dest' is already named 'Destination'. The examples that follow assume the original column names ('origin', 'dest').
Filtering Rows
The following code shows how you can subset rows. Suppose you are asked to find all the flights whose origin is 'JFK'.
# Filter based on one variable
dat8 = mydata[origin == "JFK"]

Select Multiple Values
dat9 = mydata[origin %in% c("JFK", "LGA")]

Apply Logical Operator : NOT
# Exclude Values
dat10 = mydata[!origin %in% c("JFK", "LGA")]

Filter based on Multiple variables
dat11 = mydata[origin == "JFK" & carrier == "AA"]
data.table uses a binary search algorithm on keyed columns, which makes filtering much faster.
Binary Search Algorithm

Binary search is an efficient algorithm for finding a value in a sorted list of values. It works by repeatedly halving the portion of the list that could contain the value, until the value you are searching for is found. Suppose you have the following values in a variable :
5, 10, 7, 20, 3, 13, 26
You are searching for the value 20 in the above list. Binary search first requires the values to be sorted (3, 5, 7, 10, 13, 20, 26); it then compares the target with the middle element and discards the half that cannot contain it, repeating until 20 is found.
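To make the idea concrete, here is a minimal sketch of binary search written in plain R (an illustration of the algorithm only, not the optimized C implementation data.table actually uses) :

binary_search = function(sorted_vec, target) {
  lo = 1
  hi = length(sorted_vec)
  while (lo <= hi) {
    mid = (lo + hi) %/% 2                  # middle position of the current range
    if (sorted_vec[mid] == target) return(mid)
    if (sorted_vec[mid] < target) lo = mid + 1 else hi = mid - 1
  }
  NA                                       # not found
}

x = sort(c(5, 10, 7, 20, 3, 13, 26))       # 3 5 7 10 13 20 26
binary_search(x, 20)                       # returns 6, the position of 20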
It is important to set a key in your dataset, which tells the system that the data is sorted by the key column. For example, suppose you have employees' names, addresses, salaries, designations, departments and employee IDs. We can use 'employee ID' as a key to search for a particular employee.
# Indexing (Set Keys)
setkey(mydata, origin)
Note : This sorts the data table by the column 'origin' and marks it as keyed.
How to filter when a key is set :
data12 = mydata[c("JFK", "LGA")]

Performance Comparison
You can compare the performance of the filtering process with and without a key :
system.time(mydata[origin %in% c("JFK", "LGA")])
system.time(mydata[c("JFK", "LGA")])
[Figure : Performance - With or without KEY (output of the two system.time() calls)]
If you compare the elapsed times above, setting a key makes filtering roughly twice as fast as filtering without one.
Indexing Multiple Columns
We can also set keys on multiple columns, as we do below for the columns 'origin' and 'dest'.
setkey(mydata, origin, dest)

Filtering while setting keys on Multiple Columns
# First key column 'origin' matches "JFK" and second key column 'dest' matches "MIA"
mydata[.("JFK", "MIA")]

It is equivalent to the following code :
mydata[origin == "JFK" & dest == "MIA"]

To identify the column(s) a data table is keyed by, use the key() function :
key(mydata)

Result : It returns origin and dest, as these are the columns set as keys.

Sorting Data
You can sort the data with the setorder() function. By default, it sorts in ascending order.
mydata01 = setorder(mydata, origin)

Sorting Data in descending order
mydata02 = setorder(mydata, -origin)

Sorting Data based on multiple variables
In this example, we tell R to reorder the data first by 'origin' in ascending order and then by 'carrier' in descending order.
mydata03 = setorder(mydata, origin, -carrier)
Adding Columns (Calculation on Rows)
You can create new columns from calculations on rows with the := operator, which modifies the data table by reference. In this example, we are subtracting the 'dep_delay' variable from the 'dep_time' variable to compute the scheduled departure time.
mydata[, dep_sch:=dep_time - dep_delay]

Adding Multiple Columns
mydata[, c("dep_sch","arr_sch"):=list(dep_time - dep_delay, arr_time - arr_delay)]
If you don't want to make changes (addition of columns) in the original data, you can first make a copy of it with the copy() function :
mydata_copy = copy(mydata)

IF THEN ELSE
The 'IF THEN ELSE' conditions are very popular for recoding values. In the data.table package, this can be done with the following methods :
The following code sets flag = 1 if min is less than 50, and flag = 0 otherwise.
Method 1 : mydata[, flag:= ifelse(min < 50, 1,0)]
Method 2 : mydata[, flag:= 1*(min < 50)]

How to write Sub Queries (like SQL)
We can use the DT[ ] [ ] [ ] format to chain operations in data.table. It is similar to sub-queries in SQL.
mydata[, dep_sch:=dep_time - dep_delay][,.(dep_time,dep_delay,dep_sch)]

First, we compute the scheduled departure time and then select only the relevant columns.
Summarize or Aggregate Columns
It's easy to summarize data with the data.table package. We can generate summary statistics for specific variables. In this case, we are calculating the mean, median, minimum and maximum values of the variable arr_delay.
mydata[, .(mean = mean(arr_delay, na.rm = TRUE),
median = median(arr_delay, na.rm = TRUE),
min = min(arr_delay, na.rm = TRUE),
max = max(arr_delay, na.rm = TRUE))]
To summarize multiple variables, we can simply write all the summary functions inside .( ). See the command below -
mydata[, .(mean(arr_delay), mean(dep_delay))]
If you need to calculate summary statistics for a larger list of variables, you can use the .SD and .SDcols operators. .SD stands for 'Subset of Data'.
mydata[, lapply(.SD, mean), .SDcols = c("arr_delay", "dep_delay")]
In this case, we are calculating mean of two variables - arr_delay and dep_delay.
Summarize all numeric Columns
mydata[, lapply(.SD, mean)]

Note : Character columns will return NA with a warning, so in practice you may want to restrict .SDcols to the numeric columns.

Summarize with multiple statistics
mydata[, sapply(.SD, function(x) c(mean=mean(x), median=median(x)))]
Summarize by group
The following code calculates the mean arrival delay for each unique value in the 'origin' column.
mydata[, .(mean_arr_delay = mean(arr_delay, na.rm = TRUE)), by = origin]
Instead of 'by', you can use the keyby= operator. It groups in the same way, but additionally sorts the result by the grouping variable and sets it as a key.
mydata[, .(mean_arr_delay = mean(arr_delay, na.rm = TRUE)), keyby = origin]

Summarize multiple variables by group 'origin'
mydata[, .(mean(arr_delay, na.rm = TRUE), mean(dep_delay, na.rm = TRUE)), by = origin]

Or it can be written like below -
mydata[, lapply(.SD, mean, na.rm = TRUE), .SDcols = c("arr_delay", "dep_delay"), by = origin]
Remove Duplicates
You can remove duplicate values with the unique() function. Suppose you want to delete duplicates based on a variable, say carrier.
setkey(mydata, "carrier")
unique(mydata)

To remove duplicates based on all the variables, remove the key first :
setkey(mydata, NULL)
unique(mydata)

Note : Setting the key to NULL is not required if no key is already set.
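In recent versions of data.table, you can skip the key juggling altogether, because unique() accepts a by argument directly :

unique(mydata, by = "carrier")   # one row per carrier, no key required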
Select FIRST and SECOND values from a group
The following command selects the first and second rows within each group defined by the variable 'carrier'.
mydata[, .SD[1:2], by=carrier]

Select LAST value from a group
The following code is used to extract the last row within each group defined by the carrier column in the mydata data.table.
mydata[, .SD[.N], by=carrier]
SQL-style Window Functions
In SQL, window functions are very useful for solving complex data problems. RANK OVER PARTITION is the most popular window function. It assigns a rank within each partition defined by one column, ordered by another column. It translates easily to data.table with the help of the frank() function, which is similar to base R's rank() function but much faster.

In this case, we are calculating the rank of the variable 'distance' within each value of 'carrier', assigning rank 1 to the highest 'distance'.
dt = mydata[, rank:=frank(-distance,ties.method = "min"), by=carrier]
Cumulative SUM by group
We can calculate the cumulative sum using the cumsum() function.
dat = mydata[, cum:=cumsum(distance), by=carrier]
Lag and Lead
The lag and lead of a variable can be calculated with the shift() function. Its syntax is as follows :
shift(variable_name, number_of_lags, type = c("lag", "lead"))
DT = data.table(A = 1:5)               # sample data; values assumed for illustration
DT[ , X := shift(A, 1, type="lag")]    # X : NA 1 2 3 4
DT[ , Y := shift(A, 1, type="lead")]   # Y : 2 3 4 5 NA
%between% and %like% Operators

The %between% operator selects values lying in an inclusive range :
DT = data.table(x=6:10)
DT[x %between% c(7,9)]

The %like% operator is mainly used to find all the values that match a pattern :
DT = data.table(Name=c("dep_time","dep_delay","arrival"))
DT[Name %like% "dep"]
Merging / Joining Data
Merging in data.table is very similar to the base R merge() function. The only difference is that data.table by default takes the common key variable as the primary key to merge two datasets, whereas data.frame takes the columns with common names.
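Sample Data

The datasets below are a minimal made-up pair for the join examples; any two data.tables sharing the column 'A' would do :

dt1 = data.table(A = c("a","b","c","d"), X = 1:4, key = "A")
dt2 = data.table(A = c("b","c","e"), Y = 5:7, key = "A")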
Inner Join
It returns all the matching observations in both the datasets.
merge(dt1, dt2, by="A")
Left Join
It returns all observations from the left dataset and the matched observations from the right dataset.
merge(dt1, dt2, by="A", all.x = TRUE)Right Join
It returns all observations from the right dataset and the matched observations from the left dataset.
merge(dt1, dt2, by="A", all.y = TRUE)
Full Join
It returns all rows from both datasets, matched wherever possible.
merge(dt1, dt2, all=TRUE)
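As a side note, data.table also has its own native join syntax, dt1[dt2], which keeps all rows of the table inside the brackets (similar to a right join). A quick sketch using the sample data above :

dt1[dt2, on = "A"]   # all rows of dt2, with matching X values from dt1 (NA where no match)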
Convert a data.table to a data frame
You can use the setDF() function to accomplish this task.
setDF(mydata)
Similarly, you can use the setDT() function to convert a data frame to a data.table.
set.seed(123)
X = data.frame(A=sample(3, 10, TRUE), B=sample(letters[1:3], 10, TRUE))
setDT(X, key = "A")

Reshape Data
The data.table package includes several useful functions that make data cleaning easy and smooth. To reshape or transpose data, you can use the dcast.data.table() and melt.data.table() functions. They are faster, more efficient implementations of the corresponding functions from the reshape2 package, with some added features.
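A minimal sketch of reshaping with a small made-up table (the column and value names here are illustrative) :

# wide to long
wide = data.table(id = 1:2, q1 = c(10, 20), q2 = c(30, 40))
long = melt(wide, id.vars = "id", variable.name = "quarter", value.name = "sales")

# long back to wide
dcast(long, id ~ quarter, value.var = "sales")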
Rolling Joins
data.table also supports rolling joins, which are commonly used for analyzing time series data. Very few R packages support this kind of join.
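A minimal sketch of a rolling join on made-up data : for each query time, roll = TRUE carries forward the most recent prior observation.

prices = data.table(time = c(1, 5, 10), price = c(100, 102, 101), key = "time")
queries = data.table(time = c(3, 7, 12), key = "time")
prices[queries, roll = TRUE]   # returns prices 100, 102, 101 for times 3, 7, 12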
Questions for Practice
Here are a few questions you can use to practice the data.table package in R :
Q1. Calculate total number of rows by month and then sort on descending order.
mydata[, .N, by = month] [order(-N)]
The .N operator is used to find the count of rows.
Q2. Find the top 3 months with the highest mean arrival delay.
mydata[, .(mean_arr_delay = mean(arr_delay, na.rm = TRUE)), by = month][order(-mean_arr_delay)][1:3]
Q3. Find the origins whose average total delay (arrival plus departure) is greater than 20 minutes.
mydata[, lapply(.SD, mean, na.rm = TRUE), .SDcols = c("arr_delay", "dep_delay"), by = origin][(arr_delay + dep_delay) > 20]
Q4. Extract the average arrival and departure delays for carrier == 'DL' by the 'origin' and 'dest' variables.
mydata[carrier == "DL", lapply(.SD, mean, na.rm = TRUE), by = .(origin, dest), .SDcols = c("arr_delay", "dep_delay")]
Q5. Pull the first value of 'air_time' by 'origin' and then sum the returned values that are greater than 300.
mydata[, .SD[1], .SDcols="air_time", by=origin][air_time > 300, sum(air_time)]
Endnote
This package provides a one-stop solution for data wrangling in R. It offers two main benefits : less coding and lower computing time. However, it is not the first choice of every R programmer; some prefer the dplyr package for its simplicity. I would recommend learning both packages. Check out the dplyr tutorial. If you are working with data smaller than about 1 GB, dplyr offers decent speed, though it is slower than data.table.
About Author:
Deepanshu founded ListenData with a simple objective - Make analytics easy to understand and follow. He has over 10 years of experience in data science. During his tenure, he worked with global clients in various domains like Banking, Insurance, Private Equity, Telecom and HR.
While I love having friends who agree, I only learn from those who don't