The airGR package implements semi-distributed modelling capabilities using a lag model between subcatchments. It allows chaining several lumped models together, as well as integrating anthropogenic influences such as reservoirs or withdrawals.
Here we explain how to implement the semi-distribution with airGR. For everyday use, however, it is easier to use the airGRiwrm package.
The RunModel_Lag documentation gives an example of simulating the influence of a reservoir in a lumped model. Run example(RunModel_Lag) to see it.
In this vignette, we show how to calibrate 2 subcatchments in series with a semi-distributed model consisting of 2 GR4J models. To do this, we compare 3 strategies for calibrating the downstream subcatchment:

- calibration with observed upstream flows;
- calibration with simulated upstream flows;
- calibration with simulated upstream flows and regularisation.
We finally compare these calibrations with a theoretical set of parameters. This comparison is based on the Kling-Gupta Efficiency computed on square-root-transformed discharges as the performance criterion.
We use an example data set from the package that unfortunately contains data for only one catchment.
Let’s imagine that this catchment of 360 km² is divided into 2 subcatchments:

- an upstream subcatchment of 180 km²;
- a downstream subcatchment of 180 km².
We consider that meteorological data are homogeneous over the whole catchment, so we use the same precipitation BasinObs$P and the same potential evapotranspiration BasinObs$E for the 2 subcatchments.
The observed flow at the downstream outlet is generated by assuming that the upstream flow reaches it with a constant delay of 2 days.
QObsDown <- (BasinObs$Qmm + c(0, 0, BasinObs$Qmm[1:(length(BasinObs$Qmm)-2)])) / 2
options(digits = 5)
summary(cbind(QObsUp = BasinObs$Qmm, QObsDown))
## QObsUp QObsDown
## Min. : 0.02 Min. : 0.02
## 1st Qu.: 0.39 1st Qu.: 0.41
## Median : 0.98 Median : 1.00
## Mean : 1.47 Mean : 1.47
## 3rd Qu.: 1.88 3rd Qu.: 1.91
## Max. :23.88 Max. :19.80
## NA's :802 NA's :820
With a delay of 2 days between the 2 gauging stations, the theoretical Velocity parameter should be equal to:
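A minimal sketch of this computation (assuming the 100 km distance between the two outlets used below and the 2-day delay stated above):

```r
# Theoretical velocity: distance between the two outlets divided by the travel time
# (assumed values: 100 km reach, 2-day delay, as stated in the text)
Velocity <- 100 * 1000 / (2 * 86400)  # [m] / [s] -> [m/s]
paste0("Velocity: ", round(Velocity, 3), " m/s")
```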
## [1] "Velocity: 0.579 m/s"
The operations are exactly the same as for a lumped GR4J model, so we follow the same steps as in the Get Started vignette.
InputsModelUp <- CreateInputsModel(FUN_MOD = RunModel_GR4J, DatesR = BasinObs$DatesR,
Precip = BasinObs$P, PotEvap = BasinObs$E)
Ind_Run <- seq(which(format(BasinObs$DatesR, format = "%Y-%m-%d") == "1990-01-01"),
which(format(BasinObs$DatesR, format = "%Y-%m-%d") == "1999-12-31"))
RunOptionsUp <- CreateRunOptions(FUN_MOD = RunModel_GR4J,
InputsModel = InputsModelUp,
IndPeriod_WarmUp = NULL, IndPeriod_Run = Ind_Run,
IniStates = NULL, IniResLevels = NULL)
## Warning in CreateRunOptions(FUN_MOD = RunModel_GR4J, InputsModel = InputsModelUp, : model warm up period not defined: default configuration used
## the year preceding the run period is used
# Error criterion is KGE computed on the root-squared discharges
InputsCritUp <- CreateInputsCrit(FUN_CRIT = ErrorCrit_KGE, InputsModel = InputsModelUp,
RunOptions = RunOptionsUp,
VarObs = "Q", Obs = BasinObs$Qmm[Ind_Run],
transfo = "sqrt")
CalibOptionsUp <- CreateCalibOptions(FUN_MOD = RunModel_GR4J, FUN_CALIB = Calibration_Michel)
OutputsCalibUp <- Calibration_Michel(InputsModel = InputsModelUp, RunOptions = RunOptionsUp,
InputsCrit = InputsCritUp, CalibOptions = CalibOptionsUp,
FUN_MOD = RunModel_GR4J)
## Grid-Screening in progress (0% 20% 40% 60% 80% 100%)
## Screening completed (81 runs)
## Param = 169.017, -0.020, 42.098, 2.384
## Crit. KGE[sqrt(Q)] = 0.8676
## Steepest-descent local search in progress
## Calibration completed (22 iterations, 244 runs)
## Param = 151.411, 0.443, 59.145, 2.423
## Crit. KGE[sqrt(Q)] = 0.8906
And see the result of the simulation:
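The run itself is not shown above; a sketch of it, using the airGR `RunModel_GR4J` function and the calibrated parameters `OutputsCalibUp$ParamFinalR` (the resulting `OutputsModelUp` object is reused later to build the upstream simulated flows):

```r
# Run the calibrated GR4J model on the upstream subcatchment
OutputsModelUp <- RunModel_GR4J(InputsModel = InputsModelUp,
                                RunOptions = RunOptionsUp,
                                Param = OutputsCalibUp$ParamFinalR)
# Standard airGR diagnostic plots against the observed discharge
plot(OutputsModelUp, Qobs = BasinObs$Qmm[Ind_Run])
```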
We need to create InputsModel objects completed with upstream information: upstream observed flows for the first calibration case, and upstream simulated flows for the other cases:
InputsModelDown1 <- CreateInputsModel(
FUN_MOD = RunModel_GR4J, DatesR = BasinObs$DatesR,
Precip = BasinObs$P, PotEvap = BasinObs$E,
Qupstream = matrix(BasinObs$Qmm, ncol = 1), # upstream observed flow
LengthHydro = 100, # distance between upstream catchment outlet & the downstream one [km]
BasinAreas = c(180, 180) # upstream and downstream areas [km²]
)
## Warning in CreateInputsModel(FUN_MOD = RunModel_GR4J, DatesR = BasinObs$DatesR,
## : 'Qupstream' contains NA values: model outputs will contain NAs
To use upstream simulated flows, we concatenate a vector containing the simulated flows for the entire simulation period (warm-up + run):
Qsim_upstream <- rep(NA, length(BasinObs$DatesR))
# Simulated flow during warm-up period (365 days before run period)
Qsim_upstream[Ind_Run[seq_len(365)] - 365] <- OutputsModelUp$RunOptions$WarmUpQsim
# Simulated flow during run period
Qsim_upstream[Ind_Run] <- OutputsModelUp$Qsim
InputsModelDown2 <- CreateInputsModel(
FUN_MOD = RunModel_GR4J, DatesR = BasinObs$DatesR,
Precip = BasinObs$P, PotEvap = BasinObs$E,
Qupstream = matrix(Qsim_upstream, ncol = 1), # upstream simulated flow
LengthHydro = 100, # distance between upstream catchment outlet & the downstream one [km]
BasinAreas = c(180, 180) # upstream and downstream areas [km²]
)
## Warning in CreateInputsModel(FUN_MOD = RunModel_GR4J, DatesR = BasinObs$DatesR,
## : 'Qupstream' contains NA values: model outputs will contain NAs
We calibrate the combination of the Lag model (for the upstream flow transfer) and the GR4J model (for the runoff of the downstream subcatchment):
RunOptionsDown <- CreateRunOptions(FUN_MOD = RunModel_GR4J,
InputsModel = InputsModelDown1,
IndPeriod_WarmUp = NULL, IndPeriod_Run = Ind_Run,
IniStates = NULL, IniResLevels = NULL)
## Warning in CreateRunOptions(FUN_MOD = RunModel_GR4J, InputsModel = InputsModelDown1, : model warm up period not defined: default configuration used
## the year preceding the run period is used
InputsCritDown <- CreateInputsCrit(FUN_CRIT = ErrorCrit_KGE, InputsModel = InputsModelDown1,
RunOptions = RunOptionsDown,
VarObs = "Q", Obs = QObsDown[Ind_Run],
transfo = "sqrt")
CalibOptionsDown <- CreateCalibOptions(FUN_MOD = RunModel_GR4J,
FUN_CALIB = Calibration_Michel,
IsSD = TRUE) # specify that it's a SD model
OutputsCalibDown1 <- Calibration_Michel(InputsModel = InputsModelDown1,
RunOptions = RunOptionsDown,
InputsCrit = InputsCritDown,
CalibOptions = CalibOptionsDown,
FUN_MOD = RunModel_GR4J)
## Grid-Screening in progress (0% 20% 40% 60% 80% 100%)
## Screening completed (243 runs)
## Param = 1.250, 169.017, -0.020, 42.098, 2.384
## Crit. KGE[sqrt(Q)] = 0.9399
## Steepest-descent local search in progress
## Calibration completed (32 iterations, 542 runs)
## Param = 0.804, 147.332, 0.291, 35.300, 4.551
## Crit. KGE[sqrt(Q)] = 0.9611
RunModel is then run in order to automatically combine the GR4J and Lag models.
OutputsModelDown1 <- RunModel(InputsModel = InputsModelDown2,
RunOptions = RunOptionsDown,
Param = OutputsCalibDown1$ParamFinalR,
FUN_MOD = RunModel_GR4J)
The performance of the model in validation is then:
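The criterion below is obtained by applying the airGR `ErrorCrit_KGE` function to this validation run (a sketch; `KGE_down1` is reused in the final comparison table):

```r
# KGE computed on the validation run (simulated upstream flows,
# parameters calibrated with observed upstream flows)
KGE_down1 <- ErrorCrit_KGE(InputsCrit = InputsCritDown,
                           OutputsModel = OutputsModelDown1)
```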
## Crit. KGE[sqrt(Q)] = 0.8940
## SubCrit. KGE[sqrt(Q)] cor(sim, obs, "pearson") = 0.8945
## SubCrit. KGE[sqrt(Q)] sd(sim)/sd(obs) = 0.9897
## SubCrit. KGE[sqrt(Q)] mean(sim)/mean(obs) = 1.0009
We now calibrate the model with the InputsModel object previously created, in which the observed upstream flow is substituted by the simulated one:
OutputsCalibDown2 <- Calibration_Michel(InputsModel = InputsModelDown2,
RunOptions = RunOptionsDown,
InputsCrit = InputsCritDown,
CalibOptions = CalibOptionsDown,
FUN_MOD = RunModel_GR4J)
## Grid-Screening in progress (0% 20% 40% 60% 80% 100%)
## Screening completed (243 runs)
## Param = 1.250, 169.017, -0.020, 83.096, 2.384
## Crit. KGE[sqrt(Q)] = 0.8827
## Steepest-descent local search in progress
## Calibration completed (37 iterations, 591 runs)
## Param = 0.330, 165.670, 0.273, 26.311, 3.857
## Crit. KGE[sqrt(Q)] = 0.9031
The regularisation follows the method proposed by de Lavenne et al. (2019).
As a priori parameter set, we use the calibrated parameter set of the upstream catchment and the theoretical velocity:
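A sketch of this a priori parameter set, combining the theoretical `Velocity` computed above with the calibrated upstream GR4J parameters (this `ParamDownTheo` vector is reused further below):

```r
# A priori parameter set: theoretical velocity + upstream GR4J parameters
ParamDownTheo <- c(Velocity, OutputsCalibUp$ParamFinalR)
```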
The Lavenne criterion is initialised with the a priori parameter set and the KGE value of the upstream basin.
IC_Lavenne <- CreateInputsCrit_Lavenne(InputsModel = InputsModelDown2,
RunOptions = RunOptionsDown,
Obs = QObsDown[Ind_Run],
AprParamR = ParamDownTheo,
AprCrit = OutputsCalibUp$CritFinal)
The Lavenne criterion is used instead of the KGE for the calibration with regularisation:
OutputsCalibDown3 <- Calibration_Michel(InputsModel = InputsModelDown2,
RunOptions = RunOptionsDown,
InputsCrit = IC_Lavenne,
CalibOptions = CalibOptionsDown,
FUN_MOD = RunModel_GR4J)
## Grid-Screening in progress (0% 20% 40% 60% 80% 100%)
## Screening completed (243 runs)
## Param = 1.250, 169.017, -0.020, 83.096, 2.384
## Crit. Composite = 0.8926
## Steepest-descent local search in progress
## Calibration completed (26 iterations, 482 runs)
## Param = 0.520, 149.905, 0.443, 58.557, 2.462
## Crit. Composite = 0.9116
## Formula: sum(0.86 * KGE[sqrt(Q)], 0.14 * GAPX[ParamT])
The KGE is then calculated for performance comparisons:
OutputsModelDown3 <- RunModel(InputsModel = InputsModelDown2,
RunOptions = RunOptionsDown,
Param = OutputsCalibDown3$ParamFinalR,
FUN_MOD = RunModel_GR4J)
KGE_down3 <- ErrorCrit_KGE(InputsCritDown, OutputsModelDown3)
## Crit. KGE[sqrt(Q)] = 0.8983
## SubCrit. KGE[sqrt(Q)] cor(sim, obs, "pearson") = 0.9102
## SubCrit. KGE[sqrt(Q)] sd(sim)/sd(obs) = 0.9542
## SubCrit. KGE[sqrt(Q)] mean(sim)/mean(obs) = 1.0130
The calibration with observed upstream flow overestimates this parameter, while the calibrations with simulated upstream flow underestimate it:
mVelocity <- matrix(c(Velocity,
OutputsCalibDown1$ParamFinalR[1],
OutputsCalibDown2$ParamFinalR[1],
OutputsCalibDown3$ParamFinalR[1]),
ncol = 1,
dimnames = list(c("theoretical",
"calibrated with observed upstream flow",
"calibrated with simulated upstream flow",
"calibrated with sim upstream flow and regularisation"),
c("Velocity parameter")))
knitr::kable(mVelocity)
|                                                      | Velocity parameter |
|:-----------------------------------------------------|-------------------:|
| theoretical                                          |              0.579 |
| calibrated with observed upstream flow               |              0.804 |
| calibrated with simulated upstream flow              |              0.330 |
| calibrated with sim upstream flow and regularisation |              0.520 |
Theoretically, the parameters of the downstream GR4J model should be the same as the upstream one with the velocity as extra parameter:
OutputsModelDownTheo <- RunModel(InputsModel = InputsModelDown2,
RunOptions = RunOptionsDown,
Param = ParamDownTheo,
FUN_MOD = RunModel_GR4J)
KGE_downTheo <- ErrorCrit_KGE(InputsCritDown, OutputsModelDownTheo)
## Crit. KGE[sqrt(Q)] = 0.8976
## SubCrit. KGE[sqrt(Q)] cor(sim, obs, "pearson") = 0.9082
## SubCrit. KGE[sqrt(Q)] sd(sim)/sd(obs) = 0.9562
## SubCrit. KGE[sqrt(Q)] mean(sim)/mean(obs) = 1.0121
comp <- matrix(c(0, OutputsCalibUp$ParamFinalR,
rep(OutputsCalibDown1$ParamFinalR, 2),
OutputsCalibDown2$ParamFinalR,
OutputsCalibDown3$ParamFinalR,
ParamDownTheo),
ncol = 5, byrow = TRUE)
comp <- cbind(comp, c(OutputsCalibUp$CritFinal,
OutputsCalibDown1$CritFinal,
KGE_down1$CritValue,
OutputsCalibDown2$CritFinal,
KGE_down3$CritValue,
KGE_downTheo$CritValue))
colnames(comp) <- c("Velocity", paste0("X", 1:4), "KGE(√Q)")
rownames(comp) <- c("Calibration of the upstream subcatchment",
"Calibration 1 with observed upstream flow",
"Validation 1 with simulated upstream flow",
"Calibration 2 with simulated upstream flow",
"Calibration 3 with simulated upstream flow and regularisation",
"Validation theoretical set of parameters")
knitr::kable(comp)
|                                                               | Velocity |  X1 |    X2 |   X3 |   X4 | KGE(√Q) |
|:--------------------------------------------------------------|---------:|----:|------:|-----:|-----:|--------:|
| Calibration of the upstream subcatchment                      |    0.000 | 151 | 0.443 | 59.1 | 2.42 |   0.891 |
| Calibration 1 with observed upstream flow                     |    0.804 | 147 | 0.291 | 35.3 | 4.55 |   0.961 |
| Validation 1 with simulated upstream flow                     |    0.804 | 147 | 0.291 | 35.3 | 4.55 |   0.894 |
| Calibration 2 with simulated upstream flow                    |    0.330 | 166 | 0.273 | 26.3 | 3.86 |   0.903 |
| Calibration 3 with simulated upstream flow and regularisation |    0.520 | 150 | 0.443 | 58.6 | 2.46 |   0.898 |
| Validation theoretical set of parameters                      |    0.579 | 151 | 0.443 | 59.1 | 2.42 |   0.898 |
Even if the calibration with observed upstream flows gives a better performance criterion, once validated with simulated upstream flows its performance is quite similar to the one obtained by calibrating directly with simulated upstream flows. The theoretical parameter set also gives an equivalent performance, though still slightly below that of calibration 2. Regularisation achieves a performance similar to the calibration with simulated flows, with the big advantage of producing parameters closer to the theoretical ones (especially the Velocity parameter).