cpsvote
The United States Census Bureau has collected information about voter registration and turnout since 1964 as part of the Current Population Survey (CPS). The surveys are conducted every November during federal election years and are generally referred to as the Voting and Registration Supplement (VRS). The official website for the CPS VRS is located at https://www.census.gov/topics/public-sector/voting.html, and the "About" page provides more detailed information on the history of this data collection project (a 175-page document, "Design and Methodology: Current Population Survey", provides a very deep dive into the history and methodology of the CPS).
The CPS VRS is an important resource for understanding voter registration and voter turnout in the United States. The sample size, data quality, and consistency of the VRS make it an invaluable resource for comparing registration and turnout rates over time, within states, and among subgroups. Other high-quality national surveys like the American National Election Study or the General Social Survey have also been administered over decades, but their samples are not designed for state-level analyses, and neither includes the same kinds of questions about voter registration and voting that are part of the CPS VRS.
Anyone interested in American elections, voter registration, or voter turnout will want to learn how to use the CPS VRS, but the CPS can be challenging to use, especially for less experienced analysts.
The cpsvote package is designed to overcome these challenges. Using a single command or a small number of lines, the package downloads and reads the CPS VRS files, recodes and relabels key variables consistently across years, constructs the corrected turnout measures and weights described below, and integrates with the srvyr package, supplying the correct commands to weight the CPS VRS data as necessary. More details on these procedures are provided below; see also vignette("basics").
The content of the CPS VRS has changed over time, in some cases because of real-world changes in the legal environment - most notably the passage of the National Voter Registration Act (NVRA) in 1993 - and at other times to more clearly reflect changing voting behavior. The content of the survey has remained relatively consistent since 1996. As a result, many users rely only on the CPS from 1996 onward; this package provides data from 1994 onwards.
Even though the content has remained relatively constant since 1996, the Census has sometimes changed the location, names, and even the coding of individual variables. Anyone using multiple years of the CPS VRS must pay close attention to the coding choices made in each year.
We have made (what we consider) sensible decisions about which columns from the VRS get combined across disparate years, while also leaving you the opportunity to bring in additional columns or join years differently.
The cpsvote package does most of this recoding and relabeling work for you, while still retaining the data in its original format.
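For a quick look at what this harmonization produces, the bundled 10,000-row sample cps_allyears_10k (used again in the weighting examples below) can be inspected directly. The sketch below assumes the package's default column names (YEAR and cps_turnout); adjust if your columns differ.

library(cpsvote)
library(dplyr)

# inspect the harmonized columns in the bundled sample data
glimpse(cps_allyears_10k)

# tabulate the recoded turnout variable by survey year
# (YEAR and cps_turnout are assumed to be the default column names)
count(cps_allyears_10k, YEAR, cps_turnout)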
Most surveys provide a sample weight that allows the survey results to be generalized to the target population. Typically, survey weights are provided because the sampling design may have included survey strata, or there may have been oversampling applied to specific groups. Survey weights can also adjust for simple deviations between the sampled population and the target population, whether due to non-response or just as a result of randomization.
The CPS uses a particularly complex survey design. As described in the “Design and Methodology: Current Population Survey” document, “The CPS sample is a multistage stratified sample of approximately 72,000 assigned housing units from 824 sample areas designed to measure demographic and labor force characteristics of the civilian noninstitutionalized population 16 years of age and older.”
Critically, the CPS is designed to generate national and state estimates, and it samples households, not individuals. Together these mean that individual respondents are not sampled with equal probability: a Montanan living in a single-person household has a much higher probability of being sampled than a Californian living in a six-person household. The sample weight provided by the CPS adjusts your estimates to account for these different probabilities of selection, and it is needed to produce statistically valid estimates.
R has not historically made using survey weights very easy, but two packages have simplified the process. Thomas Lumley's survey package and his 2011 volume Complex Surveys: A Guide to Analysis Using R are the recommended sources for weighting survey data in R. The following code will create a weighted data object using survey:
cps_survey <- survey::svydesign(ids = ~1,                  # simple weights, no clusters
                                data = cps_allyears_10k,   # data set
                                weights = ~turnout_weight) # weight column
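As a minimal follow-up sketch, the design object created above can then be passed to the survey package's estimation functions; here cps_turnout is the recoded turnout variable discussed later in this document:

# weighted proportion reporting a vote, using the design object from above
survey::svymean(~I(cps_turnout == "YES"), cps_survey, na.rm = TRUE)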
A recently released package, srvyr, provides "dplyr-like syntax for summary statistics of survey data" (srvyr at CRAN). srvyr acts as a wrapper around survey and makes many of the commands easier to implement. The following code will create a weighted data object using srvyr:
cps_srvyr <- srvyr::as_survey_design(.data = cps_allyears_10k,  # data set
                                     weights = turnout_weight)  # weight column
In most of the examples provided in this documentation, we use the srvyr command syntax, but some survey examples are provided.

The cpsvote package provides detailed instructions for using srvyr to correctly weight your data, as the sketch below illustrates.
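For instance, a grouped estimate from the srvyr design object created above might look like the following sketch; the STATE and cps_turnout column names are assumptions for illustration:

library(srvyr)
library(dplyr)

# weighted turnout by state, using the srvyr design object from above
cps_srvyr %>%
  group_by(STATE) %>%
  summarize(turnout = survey_mean(cps_turnout == "YES", na.rm = TRUE))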
There are two final, and related, challenges to using the CPS VRS for estimating voter turnout. Both adjustments need to be made in order to produce statistically valid estimates of turnout. The cpsvote package provides turnout measures that incorporate the correct coding, and we document a method to adjust for biases in turnout.
First, the CPS has long used an “idiosyncratic” coding rule that has been recognized over time and was carefully documented by two scholars, Aram Hur and Christopher Achen, in a 2013 article titled “Coding Voter Turnout Responses in the Current Population Survey”.
The coding rule is not at all clear from the CPS documentation, and without correct coding, any turnout estimates that are produced will not match those in official Census communications.
In the CPS codebook, the variable is reported in this way (from the 2016 documentation):
| NAME | SIZE | DESCRIPTION | LOCATION |
|------|------|-------------|----------|
| PES1 | 2 | In any election, some people are not able to vote because they are sick or busy or have some other reason, and others do not want to vote. Did (you/name) vote in the election held on Tuesday, November X, XXXX? | 951-952 |

EDITED UNIVERSE: PRTAGE >= 18 and PRCITSHP = 1, 2, 3, or 4

VALID ENTRIES:
1 Yes
2 No
-1 Not in Universe
-2 Don't Know
-3 Refused
-9 No Response
The key decision made by Census staff has been to treat the three non-response categories ("Don't Know", "Refused", and "No Response") as "not voted", essentially adding these cases, along with the "No" responses, to the denominator of the turnout estimate; only "Not in Universe" cases are excluded. Because the CPS counts non-responses as "not voted," its coding scheme has been vulnerable to changing non-response rates over time.
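To make the rule concrete, here is a minimal sketch of the Census coding decision applied to a handful of hypothetical PES1 responses (the labels come from the codebook excerpt above; the cpsvote package applies this recoding for you in the cps_turnout variable):

library(dplyr)

# hypothetical PES1 responses, for illustration only
pes1 <- c("Yes", "No", "Don't Know", "Refused", "No Response", "Not in Universe")

census_turnout <- case_when(
  pes1 == "Yes"             ~ "YES",          # counted as voted
  pes1 == "Not in Universe" ~ NA_character_,  # excluded from the estimate
  TRUE                      ~ "NO"            # "No" and all non-responses counted as not voted
)

mean(census_turnout == "YES", na.rm = TRUE)   # turnout estimate under the Census rule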
Hur and Achen (2013) describe the problem in their abstract:
“The Voting and Registration Supplement to the Current Population Survey (CPS) employs a large sample size and has a very high response rate, and thus is often regarded as the gold standard among turnout surveys. In 2008, however, the CPS inaccurately estimated that presidential turnout had undergone a small decrease from 2004. We show that growing nonresponse plus a long-standing but idiosyncratic Census coding decision was responsible. We suggest that to cope with nonresponse and overreporting, users of the Voting Supplement sample should weight it to reflect actual state vote counts.”
The cpsvote package provides the original Census coding scheme in the cps_turnout variable that is produced by the default functions.
A related problem with the CPS turnout estimate, documented carefully by Professor Michael McDonald in a 2014 working paper and at the United States Elections Project's CPS Over-Report and Non-Response Bias Correction page, is that two biases have crept into the CPS over time - one from increasing non-response rates, the other from over-reporting of turnout (Michael McDonald, 2014, "What's Wrong with the CPS?", paper presented at the Annual Meeting of the American Political Science Association).
Hur and Achen suggest a complex post-stratification of the data that corrects for these biases:

"We recommend dropping all categories of missing turnout response, and then poststratifying the remaining CPS sample so that the survey turnout rate in each state matches the corresponding state VEP turnout."
The Hur-Achen post-stratification weight relies on the authoritative estimates of state-level turnout calculated by the United States Elections Project, directed by Professor Michael McDonald of the University of Florida. Professor McDonald helpfully provides guidance on this complex weighting procedure, and Stata code is available at his website: Commentary, Guidelines, and Stata Code.
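As a rough sketch of the reweighting idea (not the full procedure, which also drops missing turnout responses first), the ratio adjustment for a single state's respondents might look like this; vep_turnout stands in for the state's VEP turnout rate from the United States Elections Project, and the function name is made up for illustration:

# scale voters' weights so weighted turnout matches the state VEP rate,
# and scale non-voters' weights so the state's total weight is preserved
adjust_state_weights <- function(weight, voted, vep_turnout) {
  cps_rate <- weighted.mean(voted, weight)  # weighted CPS turnout in this state
  ifelse(voted,
         weight * vep_turnout / cps_rate,
         weight * (1 - vep_turnout) / (1 - cps_rate))
}

The cpsvote package ships the result of the full procedure as turnout_weight, so this adjustment does not need to be done by hand.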
We have integrated the Hur and Achen recoding into the cpsvote package, contained in the variable hurachen_turnout that is produced by the default functions. The original CPS weight is provided in the variable WEIGHT, and a weight adjusted for proper turnout coding and post-stratification for over-reporting is provided in the variable turnout_weight.
The cpsvote package creates an alternative survey weight based on current scientific research.
library(cpsvote)
library(srvyr)
library(dplyr)   # for summarize()/mutate() on the unweighted data frame

cps16 <- cps_load_basic(years = 2016, datadir = here::here('cps_data'))

# unweighted, using the census turnout coding
cps16_unweighted <- cps16 %>%
  summarize(type = "Unweighted",
            turnout = mean(cps_turnout == "YES", na.rm = TRUE))

# weighted, using the original weights and census turnout coding
cps16_censusweight <- cps16 %>%
  as_survey_design(weights = WEIGHT) %>%
  summarize(turnout = survey_mean(cps_turnout == "YES", na.rm = TRUE)) %>%
  mutate(type = "Census")

# weighted, using the modified weights and hur-achen turnout coding
cps16_hurachenweight <- cps16 %>%
  as_survey_design(weights = turnout_weight) %>%
  summarize(turnout = survey_mean(hurachen_turnout == "YES", na.rm = TRUE)) %>%
  mutate(type = "Hur & Achen")

turnout_estimates <- dplyr::bind_rows(cps16_unweighted,
                                      cps16_censusweight,
                                      cps16_hurachenweight) %>%
  dplyr::transmute('Method' = type,
                   'Turnout Estimate' = scales::percent(turnout, .1))

knitr::kable(turnout_estimates)
The table shows the overestimate of turnout using the Census method, because it fails to account for growing non-response bias. The Hur-Achen weighted estimate of 2016 turnout, 60.4%, is closest to the 60.1% estimate provided by the US Elections Project.
See vignette("basics")
for more information on using cpsvote
, and further examples of how to work with this data. See vignette("voting")
for more information and further examples of using the CPS for calculating voter turnout and other voting-related quantities.