trimr is an R package that implements the most commonly used response time (RT) trimming methods, allowing the user to go from a raw data file to a finalised data file ready for inferential statistical analysis.
The trimming functions fall broadly into three families (with the function names for each method implemented in trimr given in parentheses): the absolute value criterion (absoluteRT), the standard deviation criterion (sdTrim), and the recursive / moving criterion methods of Van Selst and Jolicoeur (1994) (nonRecursive, modifiedRecursive, and hybridRecursive).
Of course, you will want to use trimr on your own data. Below are some demonstrations of how to use trimr, based on data that ships with the package. In previous versions of trimr your data had to contain columns with the same names used below; you can now specify your own column names. The defaults are the old required names, so no code written for previous versions of trimr is broken by this change. Your data can contain other columns, which trimr will simply ignore.
trimr ships with some example data, "exampleData", that the user can explore the trimming functions with. This data is simulated (i.e., not real) and contains data from 32 subjects. It comes from a task-switching experiment in which RT and accuracy were recorded for two experimental conditions: Switch, where the task switched from the previous trial, and Repeat, where the task repeated from the previous trial. (If you have data from a factorial design, i.e., the condition codes are spread over more than one column, please see the final section of this vignette for how to handle this in trimr.)
# load the trimr package
library(trimr)
# activate the data
data(exampleData)
# look at the top of the data
head(exampleData)
## participant condition rt accuracy
## 1 1 Switch 1660 1
## 2 1 Switch 913 1
## 3 1 Repeat 2312 1
## 4 1 Repeat 754 1
## 5 1 Switch 3394 1
## 6 1 Repeat 930 1
The exampleData consists of 4 columns: participant (the participant identifier), condition (the experimental condition, Switch or Repeat), rt (the response time, here in milliseconds), and accuracy (the accuracy of the response).
If these columns have different names from the defaults, the names can be specified, for example:
# perform the trimming
trimmedData <- absoluteRT(data = exampleData,
                          pptVar = "id", condVar = "cond", rtVar = "RT",
                          accVar = "correct", minRT = 150, maxRT = 2000,
                          digits = 0)
RTs can be logged in milliseconds (as here) or in seconds (e.g., 0.657), and the digits argument controls the number of decimal places the trimmed data are rounded to.
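For instance, a call along these lines would work for data logged in seconds; the data frame exampleDataSecs here is hypothetical, the point being only that the criteria must be given in the same unit as the RTs:
# hypothetical data frame with RTs logged in seconds rather than milliseconds;
# the criteria are then also given in seconds, and digits controls rounding
trimmedData <- absoluteRT(data = exampleDataSecs, minRT = 0.15, maxRT = 2,
                          digits = 3)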
The absolute value criterion is the simplest of all the trimming methods available (except, of course, for having no trimming at all). An upper and a lower criterion are set, and any response time that falls outside these limits is removed. The function that performs this trimming method in trimr is called absoluteRT.
In this function, the user declares lower and upper criteria for RT trimming (the minRT and maxRT arguments, respectively); RTs outside these criteria are removed. Note that these criteria must be in the same unit as the RTs logged in the data frame being used. The function also has some other important arguments: omitErrors (should error trials be removed before the trimming commences?), returnType (should the mean RT per condition, "mean", or the trimmed trial-level data, "raw", be returned?), and digits (the number of decimal places to round to).
In this first example, let’s trim the data using criteria of RTs less than 150 milliseconds and greater than 2,000 milliseconds, with error trials removed before trimming commences. Let’s also return the mean RTs for each condition, and round the data to zero decimal places.
# perform the trimming
trimmedData <- absoluteRT(data = exampleData, minRT = 150, maxRT = 2000,
                          digits = 0)
# look at the top of the data
head(trimmedData)
## participant Switch Repeat
## 1 1 901 742
## 2 10 793 669
## 3 11 1054 943
## 4 12 880 662
## 5 13 914 773
## 6 14 1000 817
Note that trimr returns a data frame with one row per participant in the data file (logged in the participant column) and a separate column for each experimental condition in the data.
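Because the data are now in wide format, they can be passed straight to an inferential test. As a quick illustration (one of many possible analyses, and not part of trimr itself), a paired-samples t-test could compare the two trimmed condition means:
# compare the trimmed Switch and Repeat means across participants
t.test(trimmedData$Switch, trimmedData$Repeat, paired = TRUE)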
If the user wishes to receive back trial-level data, change the "returnType" argument to "raw":
# perform the trimming
trimmedData <- absoluteRT(data = exampleData, minRT = 150, maxRT = 2000,
                          returnType = "raw", digits = 0)
# look at the top of the data
head(trimmedData)
## participant condition rt accuracy
## 1 1 Switch 1660 1
## 2 1 Switch 913 1
## 4 1 Repeat 754 1
## 6 1 Repeat 930 1
## 7 1 Switch 1092 1
## 11 1 Repeat 708 1
Now, the data frame returned is in the same shape as the initial data file, but rows containing trimmed RTs are removed.
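One quick check you might run on this raw output, for example, is how many trials the trimming removed (error trials plus RTs outside the criteria):
# number of rows dropped from the original data by the trimming
nrow(exampleData) - nrow(trimmedData)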
This trimming method uses a standard deviation multiplier as the upper criterion for RT removal (users still need to enter a lower-bound manually). For example, this method can be used to trim all RTs 2.5 standard deviations above the mean RT. This trimming can be done per condition (e.g., 2.5 SDs above the mean of each condition), per participant (e.g., 2.5 SDs above the mean of each participant), or per condition per participant (e.g., 2.5 SDs above the mean of each participant for each condition).
In this function, the user declares a lower bound on RT trimming (e.g., 150 milliseconds) and an upper bound in standard deviations. The number of standard deviations used is set by the sd argument. How it is applied depends on the values the user passes to two other important arguments: perCondition and perParticipant.
Note that if both are set to TRUE, the trimming will occur per participant per condition (e.g., if sd is set to 2.5, the function will trim RTs 2.5 SDs above the mean RT of each participant for each condition).
In this example, let’s trim RTs faster than 150 milliseconds, and greater than 3 SDs above the mean of each participant, and return the mean RTs:
# trim the data
trimmedData <- sdTrim(data = exampleData, minRT = 150, sd = 3,
                      perCondition = FALSE, perParticipant = TRUE,
                      returnType = "mean", digits = 0)
# look at the top of the data
head(trimmedData)
## participant Switch Repeat
## 1 1 1042 775
## 2 10 779 666
## 3 11 1082 964
## 4 12 871 652
## 5 13 914 773
## 6 14 1034 827
Now, let’s trim per condition per participant:
# trim the data
trimmedData <- sdTrim(data = exampleData, minRT = 150, sd = 3,
                      perCondition = TRUE, perParticipant = TRUE,
                      returnType = "mean", digits = 0)
# look at the top of the data
head(trimmedData)
## participant Switch Repeat
## 1 1 1099 742
## 2 10 784 660
## 3 11 1079 968
## 4 12 874 644
## 5 13 911 776
## 6 14 1038 827
Three functions in this family implement the trimming methods proposed and discussed by Van Selst and Jolicoeur (1994): nonRecursive, modifiedRecursive, and hybridRecursive. Van Selst and Jolicoeur noted that the outcome of many trimming methods is influenced by the sample size (i.e., the number of trials) being considered, thus potentially producing bias. For example, even if RTs are drawn from identical positively skewed distributions, a "per condition per participant" SD procedure (see sdTrim above) produces a higher mean estimate for small sample sizes than for larger sample sizes. This bias was shown to be removed when a "moving criterion" (MC) was used, whereby the SD used for trimming is adapted to the sample size being considered.
The non-recursive method proposed by Van Selst and Jolicoeur (1994) is very similar to the standard deviation method outlined above, except that the user does not specify the SD to use as the upper bound. Instead, the SD used for the upper bound is determined by the sample size of the RTs being passed to the trimming function, with larger SDs used for larger sample sizes. Also, the function only trims per participant per condition.
The nonRecursive function checks the sample size of the data being passed to it and looks up the SD criterion required for that sample size. The criteria are stored in a data file contained in trimr called linearInterpolation. Should the user wish to see this data file (it never needs to be accessed directly), type:
# load the data
data(linearInterpolation)
# show the first 20 rows (there are 100 in total)
linearInterpolation[1:20, ]
## sampleSize modifiedRecursive nonRecursive
## 1 4 8.000 1.4580
## 2 5 6.200 1.6800
## 3 6 5.300 1.8410
## 4 7 4.800 1.9610
## 5 8 4.475 2.0500
## 6 9 4.250 2.1200
## 7 10 4.110 2.1700
## 8 11 4.000 2.2200
## 9 12 3.920 2.2460
## 10 13 3.850 2.2740
## 11 14 3.800 2.3100
## 12 15 3.750 2.3260
## 13 16 3.728 2.3390
## 14 17 3.706 2.3520
## 15 18 3.684 2.3650
## 16 19 3.662 2.3780
## 17 20 3.640 2.3910
## 18 21 3.631 2.3948
## 19 22 3.622 2.3986
## 20 23 3.613 2.4024
Notice there are two columns of criteria. The nonRecursive function only looks in the nonRecursive column; the other column is used by the modifiedRecursive function, discussed below. If the sample size of the current set of data is 16 RTs (for example), the function will use an upper SD criterion of 2.339 and will then proceed much like the sdTrim function.
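For example, the criterion used for a given sample size can be looked up directly from this data frame (using the column names shown above):
# the SD criterion the non-recursive method uses for a sample of 16 RTs
linearInterpolation$nonRecursive[linearInterpolation$sampleSize == 16]
## [1] 2.339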
Note that this function can only return the mean trimmed RTs (i.e., there is no "returnType" argument for this function).
# trim the data
trimmedData <- nonRecursive(data = exampleData, minRT = 150, digits = 0)
# see the top of the data
head(trimmedData)
## participant Switch Repeat
## 1 1 1053 732
## 2 10 779 652
## 3 11 1066 960
## 4 12 871 638
## 5 13 900 766
## 6 14 1018 817
The modifiedRecursive function is more involved than the nonRecursive function. It performs the trimming in cycles. It first temporarily removes the slowest RT from the distribution; the mean of the remaining sample is then calculated, and the cut-off value is set a certain number of SDs around that mean, with the SD value determined by the current sample size. In this procedure, the required SD decreases as sample size increases (cf. the nonRecursive method, where the SD increases with sample size; see the linearInterpolation data file above); see Van Selst and Jolicoeur (1994) for justification.
The temporarily removed RT is then returned to the sample, and the fastest and slowest RTs are then compared to the cut-off, and removed if they fall outside. This process is then repeated until no outliers remain, or until the sample size drops below four. The SD used for the cut-off is thus dynamically altered based on the sample size of each cycle of the procedure, rather than static like the nonRecursive method.
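To make the cycle concrete, here is a minimal sketch of the procedure just described, operating on a single vector of RTs. It is an illustration only, not the package's internal implementation; lookUpSD and modifiedRecursiveSketch are hypothetical helpers, and the criterion for a given sample size is simply taken from the modifiedRecursive column of linearInterpolation shown above:
# illustrative sketch of one participant's RTs in one condition being trimmed
# with a moving criterion (assumes linearInterpolation is loaded, as above)
lookUpSD <- function(n) {
  linearInterpolation$modifiedRecursive[linearInterpolation$sampleSize == n]
}

modifiedRecursiveSketch <- function(rts) {
  repeat {
    n <- length(rts)
    if (n < 4) break                      # stop once the sample is too small
    temp <- rts[-which.max(rts)]          # temporarily drop the slowest RT
    criterion <- lookUpSD(n)              # SD multiplier for this sample size
    lower <- mean(temp) - criterion * sd(temp)
    upper <- mean(temp) + criterion * sd(temp)
    # return the removed RT, then test the fastest and slowest RTs
    removed <- FALSE
    if (max(rts) > upper) { rts <- rts[-which.max(rts)]; removed <- TRUE }
    if (min(rts) < lower) { rts <- rts[-which.min(rts)]; removed <- TRUE }
    if (!removed) break                   # no outliers remain
  }
  rts
}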
# trim the data
trimmedData <- modifiedRecursive(data = exampleData, minRT = 150, digits = 0)
# see the top of the data
head(trimmedData)
## participant Switch Repeat
## 1 1 1047 717
## 2 10 779 647
## 3 11 1075 931
## 4 12 871 638
## 5 13 911 763
## 6 14 1014 799
Van Selst and Jolicoeur (1994) reported slight opposing trends for the non-recursive and modified-recursive trimming methods (see page 648, footnote 2). They therefore suggested, in passing, that a "hybrid-recursive" method might balance these opposing trends. The hybrid-recursive method simply takes the average of the non-recursive and the modified-recursive methods.
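To illustrate the principle (this is not how hybridRecursive is implemented internally), the result can be approximated by averaging the per-cell means returned by the two functions used above; nonRec, modRec, and hybrid are just variable names introduced for this sketch:
# average the non-recursive and modified-recursive means for each cell
nonRec <- nonRecursive(data = exampleData, minRT = 150, digits = 0)
modRec <- modifiedRecursive(data = exampleData, minRT = 150, digits = 0)
hybrid <- nonRec
hybrid[, c("Switch", "Repeat")] <-
  (nonRec[, c("Switch", "Repeat")] + modRec[, c("Switch", "Repeat")]) / 2
head(hybrid)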
# trim the data
trimmedData <- hybridRecursive(data = exampleData, minRT = 150, digits = 0)
# see the top of the data
head(trimmedData)
## participant Switch Repeat
## 1 1 1050 724
## 2 10 779 649
## 3 11 1071 946
## 4 12 871 638
## 5 13 906 764
## 6 14 1016 808
In the example data that ships with trimr, the RT data comes from just two conditions (Switch vs. Repeat), which are coded in the column “condition”. However, in experimental psychology, factorial designs are prevalent, where RT data comes from more than one independent variable, with each IV having multiple levels. How can trimr deal with this format?
First, let's reshape the exampleData set into the form data might take from a factorial design. Let there be two IVs, each with two levels: taskSequence (Switch vs. Repeat) and reward (Reward vs. NoReward).
The taskSequence factor codes whether the task switched or repeated from the previous trial (as before). The reward factor codes whether or not the participant was presented with a reward on the current trial (assigned randomly here). Let's reshape our data frame to match this fictitious experimental scenario:
# get the example data that ships with trimr
data(exampleData)
# pass it to a new variable
newData <- exampleData
# add a column called "taskSequence" that logs whether the current task was a
# repetition or a switch trial (which is currently coded in the "condition")
# column
newData$taskSequence <- newData$condition
# add a column called "reward" that logs whether the participant received a
# reward or not. Fill it with random entries, just for example. This uses R's
# "sample" function
newData$reward <- sample(c("Reward", "NoReward"), nrow(newData),
                         replace = TRUE)
# delete the "condition" column
newData <- subset(newData, select = -condition)
# now let's look at our new data
head(newData)
## participant rt accuracy taskSequence reward
## 1 1 1660 1 Switch NoReward
## 2 1 913 1 Switch Reward
## 3 1 2312 1 Repeat Reward
## 4 1 754 1 Repeat NoReward
## 5 1 3394 1 Switch Reward
## 6 1 930 1 Repeat NoReward
This now looks like how data typically arrive from a factorial design. To get trimr to work on this, we need to create a new column called "condition" and fill it with the combined levels of all factors in the design. For example, if the first trial in our newData has taskSequence = Switch and reward = NoReward, we would like the condition entry for this trial to read "Switch_NoReward". This is simple to do using R's "paste" function. (Note that this code can be adapted to deal with any number of factors.)
# add a new column called "condition", and fill it with information from both
# columns that code for our factors
newData$condition <- paste(newData$taskSequence, "_", newData$reward, sep = "")
# let's again look at the data
head(newData)
## participant rt accuracy taskSequence reward condition
## 1 1 1660 1 Switch NoReward Switch_NoReward
## 2 1 913 1 Switch Reward Switch_Reward
## 3 1 2312 1 Repeat Reward Repeat_Reward
## 4 1 754 1 Repeat NoReward Repeat_NoReward
## 5 1 3394 1 Switch Reward Switch_Reward
## 6 1 930 1 Repeat NoReward Repeat_NoReward
Now we can pass this data frame to trimr, and it will work perfectly.
# trim the data
trimmedData <- sdTrim(newData, minRT = 150, sd = 2.5)
# check it worked
head(trimmedData)
## participant Switch_NoReward Switch_Reward Repeat_Reward Repeat_NoReward
## 1 1 1077.165 1015.703 763.527 701.676
## 2 10 779.555 767.363 651.778 659.562
## 3 11 1085.456 1053.368 953.772 959.271
## 4 12 873.323 864.487 625.261 653.768
## 5 13 921.102 890.580 766.724 759.025
## 6 14 1036.329 989.832 821.045 812.476
Van Selst, M. & Jolicoeur, P. (1994). A solution to the effect of sample size on outlier elimination. Quarterly Journal of Experimental Psychology, 47 (A), 631–650.