dplyr is an R package for working with structured data both in and outside of R. dplyr makes data manipulation for R users easy, consistent, and performant. With dplyr as an interface to manipulating Spark DataFrames, you can select, filter, and aggregate data, use window functions, perform joins across DataFrames, and collect results back into R.
Statements in dplyr can be chained together using pipes defined by the magrittr R package. dplyr also supports non-standard evaluation of its arguments. For more information on dplyr, see the introduction, a guide for connecting to databases, and a variety of vignettes.
You can read data into Spark DataFrames using the following functions:
| Function | Description |
|---|---|
| `spark_read_csv` | Reads a CSV file and provides a data source compatible with dplyr |
| `spark_read_json` | Reads a JSON file and provides a data source compatible with dplyr |
| `spark_read_parquet` | Reads a Parquet file and provides a data source compatible with dplyr |
Regardless of the format of your data, Spark supports reading data from a variety of different data sources. These include data stored on HDFS (`hdfs://` protocol), Amazon S3 (`s3n://` protocol), or local files available to the Spark worker nodes (`file://` protocol).
Each of these functions returns a reference to a Spark DataFrame which can be used as a dplyr table (`tbl`).
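As a quick illustration, here is a minimal sketch of reading a CSV into Spark; the local master and the file path are hypothetical:

```r
library(sparklyr)

# Connect to a local Spark instance (hypothetical setup)
sc <- spark_connect(master = "local")

# Any of the hdfs://, s3n:// or file:// protocols can be used in the path
flights_csv <- spark_read_csv(sc, name = "flights_csv",
                              path = "file:///tmp/flights.csv")
```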
This guide will demonstrate some of the basic data manipulation verbs of dplyr by using data from the `nycflights13` R package. This package contains data for all 336,776 flights departing New York City in 2013. It also includes useful metadata on airlines, airports, weather, and planes. The data comes from the US Bureau of Transportation Statistics, and is documented in `?nycflights13`.
Connect to the cluster and copy the flights data using the `copy_to` function. Caveat: the flight data in `nycflights13` is convenient for dplyr demonstrations because it is small, but in practice large data should rarely be copied directly from R objects.
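A minimal sketch of that step, reusing the `sc` connection from the sketch above (the table names are just illustrative):

```r
library(dplyr)
library(nycflights13)

# Copy the flights and airlines data frames into Spark and keep dplyr references to them
flights_tbl  <- copy_to(sc, flights, "flights")
airlines_tbl <- copy_to(sc, airlines, "airlines")
```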
Verbs are dplyr commands for manipulating data. When connected to a Spark DataFrame, dplyr translates the commands into Spark SQL statements. Remote data sources use exactly the same five verbs as local data sources. Here are the five verbs with their corresponding SQL commands:
- `select` ~ `SELECT`
- `filter` ~ `WHERE`
- `arrange` ~ `ORDER`
- `summarise` ~ aggregators: `sum`, `min`, `sd`, etc.
- `mutate` ~ operators: `+`, `*`, `log`, etc.
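As a hedged sketch, here is each verb applied to the `flights_tbl` reference created above (column names come from `nycflights13::flights`):

```r
select(flights_tbl, year, month, day, carrier, dep_delay)          # SELECT
filter(flights_tbl, dep_delay > 120)                                # WHERE
arrange(flights_tbl, desc(dep_delay))                               # ORDER
summarise(flights_tbl, mean_delay = mean(dep_delay, na.rm = TRUE))  # aggregation
mutate(flights_tbl, air_time_hours = air_time / 60)                 # derived column
```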
When working with databases, dplyr tries to be as lazy as possible:
- It never pulls data into R unless you explicitly ask for it.
- It delays doing any work until the last possible moment: it collects together everything you want to do and then sends it to the database in one step.
For example, take the following code:
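A sketch of the kind of pipeline meant here, defining the `c4` object referenced below:

```r
c1 <- filter(flights_tbl, month == 5, day == 17, carrier %in% c("UA", "AA", "DL"))
c2 <- select(c1, year, month, day, carrier, dep_delay, air_time, distance)
c3 <- mutate(c2, air_time_hours = air_time / 60)
c4 <- arrange(c3, year, month, day, carrier)
```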
This sequence of operations never actually touches the database. It's not until you ask for the data (e.g. by printing `c4`) that dplyr requests the results from the database.
You can use magrittr pipes to write cleaner syntax. Using the same example from above, you can write a much cleaner version like this:
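A piped sketch of the same pipeline:

```r
c4 <- flights_tbl %>%
  filter(month == 5, day == 17, carrier %in% c("UA", "AA", "DL")) %>%
  select(year, month, day, carrier, dep_delay, air_time, distance) %>%
  mutate(air_time_hours = air_time / 60) %>%
  arrange(year, month, day, carrier)
```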
The `group_by` function corresponds to the `GROUP BY` statement in SQL.
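For example, a grouped aggregation over the Spark table might look like this (a sketch using the `flights_tbl` reference from above):

```r
flights_tbl %>%
  group_by(carrier) %>%
  summarise(n_flights = n(),
            mean_dep_delay = mean(dep_delay, na.rm = TRUE))
```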
You can copy data from Spark into R's memory by using `collect()`. `collect()` executes the Spark query and returns the results to R for further analysis and visualization.
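A minimal sketch, building on the grouped summary above:

```r
carrier_delay <- flights_tbl %>%
  group_by(carrier) %>%
  summarise(mean_dep_delay = mean(dep_delay, na.rm = TRUE)) %>%
  collect()   # the query runs in Spark; the result is an ordinary tibble in R
```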
It's relatively straightforward to translate R code to SQL (or indeed to any programming language) when doing simple mathematical operations of the form you normally use when filtering, mutating and summarizing. dplyr knows how to convert a broad set of R functions to Spark SQL, including basic arithmetic operators, mathematical functions such as `log`, logical comparisons, and aggregators such as `sum`, `min` and `sd`.
dplyr supports Spark SQL window functions. Window functions are used in conjunction with `mutate` and `filter` to solve a wide range of problems. You can compare the dplyr syntax to the query it has generated by using `dbplyr::sql_render()`.
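As a sketch, the window function `min_rank()` can be used inside `filter` to keep the two largest departure delays per carrier, and `dbplyr::sql_render()` shows the SQL that would be sent to Spark:

```r
worst_delays <- flights_tbl %>%
  group_by(carrier) %>%
  filter(min_rank(desc(dep_delay)) <= 2)   # a window function evaluated per group

dbplyr::sql_render(worst_delays)
```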
It's rare that a data analysis involves only a single table of data. In practice, you'll normally have many tables that contribute to an analysis, and you need flexible tools to combine them. In dplyr, there are three families of verbs that work with two tables at a time:
- Mutating joins, which add new variables to one table from matching rows in another.
- Filtering joins, which filter observations from one table based on whether or not they match an observation in the other table.
- Set operations, which combine the observations in the data sets as if they were set elements.
All two-table verbs work similarly. The first two arguments are `x` and `y`, and provide the tables to combine. The output is always a new table with the same type as `x`.
The following statements are equivalent:
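As a hedged illustration, the two mutating joins below are equivalent, because `carrier` is the only column shared by the flights and airlines tables copied to Spark earlier:

```r
left_join(flights_tbl, airlines_tbl)                  # join key inferred from common columns
left_join(flights_tbl, airlines_tbl, by = "carrier")  # join key named explicitly
```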
You can use `sample_n()` and `sample_frac()` to take a random sample of rows: use `sample_n()` for a fixed number and `sample_frac()` for a fixed fraction.
It is often useful to save the results of your analysis or the tables that you have generated on your Spark cluster into persistent storage. The best option in many scenarios is to write the table out to a Parquet file using the `spark_write_parquet` function. For example:
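A minimal sketch, with a hypothetical HDFS path and `tbl` standing for a Spark DataFrame reference:

```r
spark_write_parquet(tbl, "hdfs://namenode.example.com:9000/data/flights")
```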
This will write the Spark DataFrame referenced by the `tbl` R variable to the given HDFS path. You can use the `spark_read_parquet` function to read the same table back into a subsequent Spark session:
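For example, again with the hypothetical path used above:

```r
tbl <- spark_read_parquet(sc, name = "flights",
                          path = "hdfs://namenode.example.com:9000/data/flights")
```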
You can also write data as CSV or JSON using the `spark_write_csv` and `spark_write_json` functions.
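A sketch of the equivalent CSV and JSON writes (hypothetical paths):

```r
spark_write_csv(tbl, "hdfs://namenode.example.com:9000/data/flights_csv")
spark_write_json(tbl, "hdfs://namenode.example.com:9000/data/flights_json")
```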
Many of Hive's built-in functions (UDF) and built-in aggregate functions (UDAF) can be called inside dplyr's `mutate` and `summarise`. The Hive Language Reference UDF page provides the list of available functions.
The following example uses the `datediff` and `current_date` Hive UDFs to compute the difference between `flight_date` and the current system date:
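The following sketch builds a `flight_date` string and then calls the Hive UDFs, which dplyr passes through to Spark SQL untranslated:

```r
flights_tbl %>%
  mutate(flight_date = paste(year, month, day, sep = "-"),
         days_since  = datediff(current_date(), flight_date)) %>%
  select(flight_date, days_since)
```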
I use R to access data held in Microsoft SQL Server databases on a daily basis. As a result of running into problems, I've realized I don't have a good understanding of the roles the different components, notably dplyr, dbplyr, odbc and DBI, each play in the process. This contributes to the fact that my efforts to resolve or mitigate issues are often an inefficient combination of Google searches and trial and error. Additionally, as I find or develop workarounds, I am unwilling to share them with others because I don't fully understand the cause of the issue and, as a result, I am not confident I have addressed the problem at the appropriate level.
Specifically, this is most motivated by the “Invalid descriptor index” error documented here.
I am writing what I learn to solidify my thinking and to help others who may experience the same challenge.
Given that the source of my motivation is encountering problems when working with data in Microsoft SQL Server databases, and that I prefer to use packages in the tidyverse, this investigation will be focused on how these packages work together to collect data from databases managed by Microsoft SQL Server.
I won’t be writing about other options like RODBC, RJDBC or database-specific packages.
I will also not include comments about database management systems (DBMS) other than Microsoft SQL Server.
While it's possible to generalize many of the concepts I write about here to other DBMSs, I will not explicitly call them out. There are plenty of resources that do that. I aim to stay focused on how these components interact in a tidyverse and Microsoft SQL Server environment, in the hopes it will help paint a simpler, clearer picture for others working in that same configuration.
Additionally, I almost never write data to a DBMS, and I suspect this is the case for many people working as analysts in Enterprise environments. In light of that I will be focused on how these components work together to extract data from SQL, and not how they write data to it.
dbplyr is the database back-end for dplyr. It does not need to be loaded explicitly; it is loaded by dplyr when working with data in a database.
dbplyr translates dplyr syntax into Microsoft SQL Server specific SQL code so dplyr can be used to retrieve data from a database system without the need to write SQL code.
dbplyr relies on the DBI and odbc packages as intermediaries for connections with a SQL Server database.
dplyr can also pass on explicitly written (not translated from dplyr) SQL code to DBI.
dbplyr generates, or captures, the SQL code that is then passed into the front-end of the database stack provided by DBI, odbc and the ODBC driver.
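A minimal sketch of that translation step, using dbplyr's built-in SQL Server simulation so no live connection is needed (the column names are made up):

```r
library(dplyr)
library(dbplyr)

orders <- lazy_frame(customer_id = 1L, amount = 1, con = simulate_mssql())

orders %>%
  group_by(customer_id) %>%
  summarise(total = sum(amount, na.rm = TRUE)) %>%
  show_query()   # prints the T-SQL dbplyr would generate for SQL Server
```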
DBI segments the connectivity to the SQL database into a “front-end” and a “back-end.”
DBI implements a standardized front-end to dbplyr, and the odbc package acts as a driver for DBI to interface with SQL Server.
An example of front-end functionality provided by DBI…
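A hedged sketch of what that front-end looks like in practice; the connection arguments, driver name, and table are all hypothetical and depend on your environment:

```r
library(DBI)

con <- dbConnect(odbc::odbc(),
                 Driver             = "ODBC Driver 17 for SQL Server",
                 Server             = "sql.example.com",
                 Database           = "analytics",
                 Trusted_Connection = "Yes")

dbListTables(con)                                         # enumerate available tables
df <- dbGetQuery(con, "SELECT TOP 10 * FROM dbo.orders")  # run SQL, return a data.frame
# dbDisconnect(con)  # close the connection when finished
```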
I think of the DBI package as the front end for an interactive user at the R console, a script, or a package into the other components, odbc and an ODBC driver, that make it possible to extract data from SQL Server.
The odbc package provides the DBI back-end to any ODBC driver connection, including those for Microsoft SQL Server.
This enables a connection to any database with ODBC drivers available.
I think of the odbc package as the “back-end” of DBI and the “front-end” into the ODBC driver.
Open Database Connectivity (ODBC) drivers are the last leg of the link between dplyr and SQL Server. They are what enable the odbc package to interface with SQL Server.
I think of the SQL Server ODBC driver as the “front-end” into SQL Server and, in the other direction, the last link back to the user, script or package.
User or package code -> DBI -> odbc -> SQL Server ODBC driver -> SQL Server
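Putting the whole stack together, a hedged end-to-end sketch, reusing the hypothetical `con` connection and `dbo.orders` table from the DBI example above:

```r
library(dplyr)

orders <- tbl(con, dbplyr::in_schema("dbo", "orders"))  # a lazy reference; nothing runs yet

order_counts <- orders %>%
  group_by(customer_id) %>%
  summarise(n_orders = n()) %>%
  collect()   # only now is the generated T-SQL sent through odbc and the ODBC driver
```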
I haven’t written anything new here, just focused it on the configuration I use day to day in the hopes it helps someone else.
Most was gleaned from the following and I’d recommend reviewing them for a broader perspective, and deeper insights into specific areas:
vignette('DBI', package = 'DBI')
vignette('dbplyr', package = 'dbplyr')