The irg package opts for a tabular calculation of the instantaneous rate of green-up (IRG) as opposed to a raster based approach. Sampling MODIS imagery is left up to the user and is a prerequisite for all functions. The main input (DT) for all functions is a data.table of an NDVI time series. The sampling unit (id) is flexible (a decision for the user), though we would anticipate points or polygons, or maybe a pixel. All functions leverage the speed of data.table to efficiently filter, scale and model NDVI time series, and calculate IRG.
Install from CRAN
# Install
install.packages('irg')
or R-universe
# Enable the robitalec universe
options(repos = c(
robitalec = 'https://robitalec.r-universe.dev',
CRAN = 'https://cloud.r-project.org'))
# Install
install.packages('irg')
irg depends on three packages (and stats):

- data.table for all tabular processing
- RcppRoll for fast rolling medians in filter_roll
- chk for internal checks

No external dependencies.
irg requires an NDVI time series in a data.table. Names can be different and specified at input, but the default names and required columns are: id, NDVI, SummaryQA, DayOfYear and yr (matching the example data shown below).
SummaryQA details: this is the MOD13Q1 pixel reliability summary flag, where 0 = good data, 1 = marginal data, 2 = snow/ice and 3 = cloudy.
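As a minimal sketch of the expected input structure (the values below are made up for illustration; the real example data is shown next):

library(data.table)

# Hypothetical one-row input using the default column names
DT <- data.table(
  id = 1L,
  NDVI = 0.65,
  SummaryQA = 0L,
  DayOfYear = 180L,
  yr = 2015L
)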
Let’s take a look at the example data.
library(irg)
library(data.table)
ndvi <- fread(system.file('extdata', 'sampled-ndvi-MODIS-MOD13Q1.csv', package = 'irg'))

# or look at the help page
?ndvi
#> No documentation for 'ndvi' in specified packages and libraries:
#> you could try '??ndvi'
id | NDVI | SummaryQA | DayOfYear | yr |
---|---|---|---|---|
5 | 0.8662 | 0 | 206 | 2015 |
6 | 0.8814 | 0 | 206 | 2015 |
0 | 0.8788 | 0 | 222 | 2015 |
1 | 0.8671 | 0 | 222 | 2015 |
2 | 0.8732 | 0 | 222 | 2015 |
3 | 0.8759 | 0 | 211 | 2015 |
If your data is a data.frame, convert it by reference:
# Pretend ndvi is a data.frame
DF <- as.data.frame(ndvi)
# Convert by reference
setDT(DF)
Though irg is not involved in the sampling step, it is important that the input data matches the package's expectations.
We used the incredible Google Earth Engine to sample MODIS NDVI (MOD13Q1.006). There are also R packages specific to MODIS (MODIStsp), for general purpose raster operations (raster), and an R interface to Earth Engine (rgee).
Update: we recently added the use_example_ee_script() function, which offers an example script for extracting NDVI in Earth Engine. There are two versions, one for sampling MODIS MOD13Q1 and another for sampling Landsat 8.
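As a minimal sketch (assuming the default arguments pick one of the two bundled scripts; the sensor-specific options are documented on the function's help page), the example script can be pulled up with:

# Sketch: pull up the bundled example Earth Engine sampling script
# See ?use_example_ee_script for the MODIS vs Landsat 8 versions
library(irg)
use_example_ee_script()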
Filtering steps in irg use a baseline ‘winterNDVI’ and upper quantile as described by Bischoff et al. (2012). These steps require multiple years of sampled NDVI for each id. In addition, make sure to include all samples throughout the year, leaving the filtering for irg.
There are 5 filtering functions, 2 scaling functions, 3 modeling functions and 2 IRG functions.
The irg::irg function is a wrapper for all steps: filtering, scaling, modeling and calculating IRG in one go. At this point, it only uses default options. Here are 5 rows from the result.
For options, head to the individual steps below.
out <- irg(ndvi)
id | yr | t | fitted | irg |
---|---|---|---|---|
0 | 2016 | 0.4000000 | 0.9405414 | 0.2230530 |
0 | 2016 | 0.4027397 | 0.9469233 | 0.2003355 |
0 | 2016 | 0.4054795 | 0.9526484 | 0.1796680 |
0 | 2016 | 0.4082192 | 0.9577767 | 0.1609208 |
0 | 2016 | 0.4109589 | 0.9623641 | 0.1439600 |
There are 5 filtering functions.
functions | arguments |
---|---|
filter_ndvi | DT |
filter_qa | DT, ndvi, qa, good |
filter_roll | DT, window, id, method |
filter_top | DT, probs, id |
filter_winter | DT, probs, limits, doy, id |
# Load data.table
library(data.table)
library(irg)
# Read in example data
ndvi <- fread(system.file('extdata', 'sampled-ndvi-MODIS-MOD13Q1.csv', package = 'irg'))
# Filter NDVI time series
filter_qa(ndvi, qa = 'SummaryQA', good = c(0, 1))
filter_winter(ndvi, probs = 0.025, limits = c(60L, 300L),
doy = 'DayOfYear', id = 'id')
filter_roll(ndvi, window = 3L, id = 'id', method = 'median')
filter_top(ndvi, probs = 0.925, id = 'id')
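Alternatively, the filter_ndvi wrapper listed in the table above chains the same filtering steps together; a minimal sketch, assuming its defaults match the example data's column names:

# Sketch: run all filtering steps at once with default arguments
# (an alternative to calling the individual filter_* functions above)
filter_ndvi(ndvi)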
Two scaling functions are used to scale the day of year column and the filtered NDVI time series between 0 and 1.
# Scale variables
scale_doy(ndvi, doy = 'DayOfYear')
scale_ndvi(ndvi)
Three functions are used to fit the NDVI time series to a double logistic curve, as described by Bischoff et al. (2012).
\[fitted = \frac{1}{1 + e^{\frac{xmidS - t}{scalS}}} - \frac{1}{1 + e^{\frac{xmidA - t}{scalA}}}\]
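For intuition, here is a small sketch evaluating this curve directly in R, where t is the scaled day of year; the parameter values below are purely illustrative, not fitted:

# Sketch: evaluate the double logistic curve for made-up parameter values
t <- seq(0, 1, by = 0.01)
xmidS <- 0.44; xmidA <- 0.80; scalS <- 0.05; scalA <- 0.01
fitted <- 1 / (1 + exp((xmidS - t) / scalS)) -
  1 / (1 + exp((xmidA - t) / scalA))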
Two options from this point are available: fitting NDVI and calculating IRG for observed data only, or for the full year.
To calculate for every day of every year, specify returns = 'models' in model_params, observed = FALSE in model_ndvi, and assign the output of model_ndvi.
# Guess starting parameters
model_start(ndvi, id = 'id', year = 'yr')
# Double logistic model parameters given starting parameters for nls
mods <- model_params(
  ndvi,
  returns = 'models',
  id = 'id', year = 'yr',
  xmidS = 'xmidS_start', xmidA = 'xmidA_start',
  scalS = 0.05,
  scalA = 0.01
)

# Fit double log to NDVI
fit <- model_ndvi(mods, observed = FALSE)
Alternatively, to calculate for the observed data only, specify returns = 'columns' in model_params and observed = TRUE in model_ndvi.
# Guess starting parameters
model_start(ndvi, id = 'id', year = 'yr')
# Double logistic model parameters given starting parameters for nls
model_params(
  ndvi,
  returns = 'columns',
  id = 'id', year = 'yr',
  xmidS = 'xmidS_start', xmidA = 'xmidA_start',
  scalS = 0.05,
  scalA = 0.01
)
# Fit double log to NDVI
model_ndvi(ndvi, observed = TRUE)
\[IRG = \frac{e^{\frac{t + xmidS}{scalS}}}{2 \, scalS \, e^{\frac{t + xmidS}{scalS}} + scalS \, e^{\frac{2t}{scalS}} + scalS \, e^{\frac{2 \, xmidS}{scalS}}}\]
Finally, calculate IRG:
# Calculate IRG for each day of the year
calc_irg(fit)
# or for observed data
calc_irg(ndvi)
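For intuition, the IRG expression above is the derivative of the spring (green-up) component of the double logistic curve with respect to scaled time. A minimal sketch evaluating it directly, reusing the illustrative (not fitted) parameter values from earlier:

# Sketch: evaluate the IRG expression by hand (illustrative parameters only)
t <- seq(0, 1, by = 0.01)
xmidS <- 0.44; scalS <- 0.05
irg_manual <- exp((t + xmidS) / scalS) /
  (2 * scalS * exp((t + xmidS) / scalS) +
     scalS * exp(2 * t / scalS) +
     scalS * exp(2 * xmidS / scalS))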