cleanepi: Clean and standardize epidemiological data

cleanepi is an R package designed for cleaning, curating, and standardizing epidemiological data. It streamlines various data cleaning tasks that are typically expected when working with datasets in epidemiology.

Key functionalities of cleanepi include:

  1. Removing irregularities: It removes duplicated and empty rows and columns, as well as columns with constant values.

  2. Handling missing values: It replaces missing values with the standard NA format, ensuring consistency and ease of analysis.

  3. Ensuring data integrity: It checks that columns expected to hold unique identifiers (such as subject IDs) contain unique, correctly formatted values, thus maintaining data integrity and preventing duplicates.

  4. Date conversion: It offers functionality to convert character columns to Date format under specific conditions, enhancing data uniformity and facilitating temporal analysis. It also converts numbers written out in words into numeric values.

  5. Standardizing entries: It can standardize column entries into specified formats, promoting consistency across the dataset.

  6. Time span calculation: It calculates the time span between two elements of type Date (for example, date of birth and date of admission), providing valuable demographic insight for epidemiological analysis; a minimal illustration of the idea follows this list.
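
As a minimal illustration of what the time span calculation amounts to (plain base R shown here, not the cleanepi API; the dates are made-up examples):

# Time span in whole years between dates of birth and a reference date
dob       <- as.Date(c("1972-01-06", "1952-02-20"))
reference <- as.Date("2023-05-29")
floor(as.numeric(difftime(reference, dob, units = "days")) / 365.25)
#> [1] 51 71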

cleanepi operates on data frames or similar structures like tibbles, as well as linelist objects commonly used in epidemiological research. It returns the processed data in the same format, ensuring seamless integration into existing workflows. Additionally, it generates a comprehensive report detailing the outcomes of each cleaning task.

cleanepi is developed by the Epiverse-TRACE team at the Medical Research Council Unit The Gambia at the London School of Hygiene and Tropical Medicine.

Installation

cleanepi can be installed from CRAN using

install.packages("cleanepi")

The latest development version of cleanepi can be installed from GitHub.

if (!require("pak")) install.packages("pak")
pak::pak("epiverse-trace/cleanepi")
library(cleanepi)

Quick start

The main function in cleanepi is clean_data(), which internally calls almost all of the standard data cleaning functions, such as removal of empty and duplicated rows and columns and replacement of missing values. However, each function can also be called independently to perform a specific task; a hedged sketch of this is shown after the example below. This mechanism is explained in detail in the vignette. Below is a typical example of how to use the clean_data() function.

# READING IN THE TEST DATASET
test_data <- readRDS(system.file("extdata", "test_df.RDS",
                                 package = "cleanepi"))
study_id  event_name country_code country_name date.of.admission dateOfBirth date_first_pcr_positive_test sex
PS001P2   day 0      2            Gambia       01/12/2020        06/01/1972  Dec 01, 2020                 1
PS002P2   day 0      2            Gambia       28/01/2021        02/20/1952  Jan 01, 2021                 1
PS004P2-1 day 0      2            Gambia       15/02/2021        06/15/1961  Feb 11, 2021                 -99
PS003P2   day 0      2            Gambia       11/02/2021        11/11/1947  Feb 01, 2021                 1
P0005P2   day 0      2            Gambia       17/02/2021        09/26/2000  Feb 16, 2021                 2
PS006P2   day 0      2            Gambia       17/02/2021        -99         May 02, 2021                 2
PB500P2   day 0      2            Gambia       28/02/2021        11/03/1989  Feb 19, 2021                 1
PS008P2   day 0      2            Gambia       22/02/2021        10/05/1976  Sep 20, 2021                 2
PS010P2   day 0      2            Gambia       02/03/2021        09/23/1991  Feb 26, 2021                 1
PS011P2   day 0      2            Gambia       05/03/2021        02/08/1991  Mar 03, 2021                 2
# READING IN THE DATA DICTIONARY
test_dictionary <- readRDS(system.file("extdata", "test_dictionary.RDS",
                                       package = "cleanepi"))
options values grp orders
1       male   sex 1
2       female sex 2
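
The dictionary is an ordinary data frame with the four columns shown above. As a hedged sketch (not taken from the package documentation), the same dictionary could be built by hand like this:

# Building the same dictionary manually: codes 1 and 2 found in the "sex"
# column (grp) are mapped to "male" and "female"
test_dictionary <- data.frame(
  options = c("1", "2"),
  values  = c("male", "female"),
  grp     = c("sex", "sex"),
  orders  = c(1, 2)
)
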
# DEFINING THE CLEANING PARAMETERS
use_na                  <- list(target_columns = NULL, na_strings = "-99")
remove_duplicates       <- list(target_columns   = NULL)
standardize_dates       <- list(target_columns  = NULL,
                                error_tolerance = 0.4,
                                format          = NULL,
                                timeframe       = as.Date(c("1973-05-29",
                                                            "2023-05-29")),
                                modern_excel    = TRUE,
                                orders          = list(
                                  world_named_months = c("Ybd", "dby"),
                                  world_digit_months = c("dmy", "Ymd"),
                                  US_formats         = c("Omdy", "YOmd")
                                ))
standardize_subject_ids <- list(target_columns = "study_id",
                                prefix         = "PS",
                                suffix         = "P2",
                                range          = c(1, 100),
                                nchar          = 7)
remove_cte              <- list(cutoff = 1)
standardize_col_names   <- list(keep   = "date.of.admission",
                                rename = c(DOB = "dateOfBirth"))
to_numeric              <- list(target_columns = "sex",
                                lang           = "en")

params <- list(
  standardize_column_names = standardize_col_names,
  remove_constants         = remove_cte,
  replace_missing_values   = use_na, 
  remove_duplicates        = remove_duplicates,
  standardize_dates        = standardize_dates,
  standardize_subject_ids  = standardize_subject_ids,
  to_numeric               = to_numeric,
  dictionary               = test_dictionary
)
# PERFORMING THE DATA CLEANING
cleaned_data <- clean_data(
  data   = test_data,
  params = params
)
#> 
#> cleaning column names
#> replacing missing values with NA
#> removing the constant columns, empty rows and columns
#> removing duplicated rows
#> standardising date columns
#> checking subject IDs format
#> Warning: Detected incorrect subject ids at lines: 3, 5, 7
#> Use the correct_subject_ids() function to adjust them.
#> converting sex en into numeric
#> performing dictionary-based cleaning
study_id  date.of.admission DOB        date_first_pcr_positive_test sex
PS001P2   2020-12-01        06/01/1972 2020-12-01                   male
PS002P2   2021-01-28        02/20/1952 2021-01-01                   male
PS004P2-1 2021-02-15        06/15/1961 2021-02-11                   NA
PS003P2   2021-02-11        11/11/1947 2021-02-01                   male
P0005P2   2021-02-17        09/26/2000 2021-02-16                   female
PS006P2   2021-02-17        NA         2021-05-02                   female
PB500P2   2021-02-28        11/03/1989 2021-02-19                   male
PS008P2   2021-02-22        10/05/1976 2021-09-20                   female
PS010P2   2021-03-02        09/23/1991 2021-02-26                   male
PS011P2   2021-03-05        02/08/1991 2021-03-03                   female
# EXTRACT THE DATA CLEANING REPORT
report <- attr(cleaned_data, "report")
# DISPLAY THE DATA CLEANING REPORT
print_report(report)
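
Each cleaning operation can also be run on its own. The sketch below is illustrative only: the standalone helper functions are assumed to take the data plus the same arguments listed in the params list above, and the correction table passed to correct_subject_ids() (its from/to columns and the replacement IDs) is a hypothetical example based on the rows flagged in the warning. Check the package reference manual or the vignette for the exact interfaces.

# Hedged sketch: running individual cleaning steps (argument names assumed to
# mirror the entries of `params` above)
partly_cleaned <- replace_missing_values(data = test_data, na_strings = "-99")
partly_cleaned <- standardize_dates(
  data      = partly_cleaned,
  timeframe = as.Date(c("1973-05-29", "2023-05-29"))
)

# Hedged sketch: correcting the subject IDs flagged in the warning above. The
# correction_table argument, its from/to columns, and the replacement IDs are
# placeholders, not values from the package documentation.
corrections <- data.frame(
  from = c("PS004P2-1", "P0005P2", "PB500P2"),
  to   = c("PS004P2",   "PS005P2", "PS007P2")
)
cleaned_data <- correct_subject_ids(
  data             = cleaned_data,
  target_columns   = "study_id",
  correction_table = corrections
)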

Vignette

browseVignettes("cleanepi")

Lifecycle

This package is currently experimental, as defined by the RECON software lifecycle. This means that it is functional, but interfaces and functionalities may change over time, and testing and documentation may be incomplete.

Contributions

Contributions are welcome via pull requests.

Code of Conduct

Please note that the cleanepi project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Citing this package

citation("cleanepi")
#> To cite package 'cleanepi' in publications use:
#> 
#>   Mané K, Bah B, Ahadzie B, Mohammed N, Degoot A (2024). _cleanepi:
#>   Clean and Standardize Epidemiological Data_.
#>   doi:10.5281/zenodo.6532786 <https://doi.org/10.5281/zenodo.6532786>,
#>   <https://epiverse-trace.github.io/cleanepi/>.
#> 
#> A BibTeX entry for LaTeX users is
#> 
#>   @Manual{,
#>     title = {cleanepi: Clean and Standardize Epidemiological Data},
#>     author = {Karim Mané and Bubacarr Bah and Bankolé Ahadzie and Nuredin Mohammed and Abdoelnaser Degoot},
#>     year = {2024},
#>     doi = {10.5281/zenodo.6532786},
#>     url = {https://epiverse-trace.github.io/cleanepi/},
#>   }
