In this article we describe the indexing operator for torch tensors and how it compares to the R indexing operator for arrays.
Torch’s indexing semantics are closer to numpy’s than to R’s, so you will find many similarities between this article and the indexing section of the numpy documentation.
Single element indexing for 1-D tensors works mostly as expected. Like R, it is 1-based. Unlike R, though, it accepts negative indices for indexing from the end of the tensor. (In R, negative indices are used to remove elements.)
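For instance, a minimal sketch (any 1-D tensor works here):

```r
library(torch)

x <- torch_tensor(1:10)
x[1]   # first element (1-based, as in R)
x[-1]  # last element (negative indices count from the end, unlike R)
```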
You can also subset matrices and higher-dimensional arrays using the same syntax:
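```r
# illustrative sketch: a 3x4 matrix holding the values 1..12
x <- torch_tensor(matrix(1:12, ncol = 4, byrow = TRUE))
x[1, 1]   # first row, first column
x[2, -1]  # second row, last column
```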
Note that if one indexes a multidimensional tensor with fewer indices than dimensions, torch’s behaviour differs from R’s, which flattens the array. In torch, the missing indices are considered complete slices:
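```r
# a sketch with a 3-D tensor: x[1] behaves like x[1, , ]
x <- torch_randn(2, 3, 4)
x[1]$shape  # [3, 4]: the two missing indices are complete slices
```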
It is possible to slice and stride arrays to extract sub-arrays of the same number of dimensions, but of different sizes than the original. This is best illustrated by a few examples:
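```r
# illustrative sketch: contiguous slices of a length-10 tensor
x <- torch_tensor(1:10)
x[2:5]   # elements 2 through 5 (inclusive, as in R)
x[4:10]  # elements 4 through 10
```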
You can also use the `1:10:2` syntax, which means: in the range from 1 to 10, take every second item. For example:
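```r
# a minimal sketch: every second element over the range 1..10
x <- torch_tensor(1:10)
x[1:10:2]  # elements 1, 3, 5, 7, 9
```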
Another special syntax is `N`, meaning the size of the specified dimension:
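```r
# a sketch using N; here N resolves to 10, the length of x
x <- torch_tensor(1:10)
x[2:N]  # from the second element up to the last
```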
Note: the slicing behavior relies on non-standard evaluation. It requires that the slicing expression (e.g. `1:10:2`) be passed directly to `[`, rather than being evaluated to an R vector first.
To allow dynamic indices, you can create a slice using the `slc` function. For example, `x[slc(start = 1, end = 5, step = 2)]` is equivalent to `x[1:5:2]`:
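```r
# a sketch: the step is an ordinary R value, so it can be computed at runtime
x <- torch_tensor(1:10)
step <- 2
x[slc(start = 1, end = 5, step = step)]  # same result as x[1:5:2]
```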
Like in R, you can take all elements in a dimension by leaving an index empty.
Consider a matrix:
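```r
# illustrative sketch: a 3x4 matrix, reused in the next two snippets
x <- torch_tensor(matrix(1:12, ncol = 4, byrow = TRUE))
```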
The following syntax will give you the first row:
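```r
x[1, ]  # row 1, all columns
```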
And this would give you the first 2 columns:
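```r
x[, 1:2]  # all rows, columns 1 and 2
```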
By default, when indexing by a single integer, the corresponding dimension is dropped rather than kept as a singleton dimension:
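```r
# a sketch with a 3x4 matrix: indexing row 1 drops the first dimension
x <- torch_tensor(matrix(1:12, ncol = 4, byrow = TRUE))
x[1, ]$shape  # [4]
```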
You can optionally use the `drop = FALSE` argument to avoid dropping the dimension:
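```r
x[1, , drop = FALSE]$shape  # [1, 4]: the singleton dimension is kept
```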
It’s possible to add a new dimension to a tensor using index-like syntax:
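```r
# a sketch using the newaxis sugar
x <- torch_tensor(1:10)
x[, newaxis]$shape           # [10, 1]
x[, newaxis, newaxis]$shape  # [10, 1, 1]
```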
You can also use `NULL` instead of `newaxis`:
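```r
x[, NULL]$shape  # [10, 1], same as x[, newaxis]
```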
Sometimes we don’t know how many dimensions a tensor has, but we do know what to do with the last available dimension, or the first one. To subsume all the others, we can use `..`:
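```r
# a sketch with a 3-D tensor; .. fills in the unspecified dimensions
z <- torch_randn(5, 5, 5)
z[1, ..]$shape  # same as z[1, , ]: [5, 5]
z[.., 1]$shape  # same as z[, , 1]: [5, 5]
```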
Vector indexing is also supported, but care must be taken regarding performance: in general, it is much less performant than slice-based indexing.
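```r
# a minimal sketch of vector indexing, following R semantics
x <- torch_tensor(1:10)
x[c(1, 3, 1)]  # elements 1, 3, and 1 again
```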
Note: starting from version 0.5.0, vector indexing in torch follows R semantics; prior to that, the behavior was similar to numpy’s advanced indexing. To use the old behavior, consider using `torch_index`, `torch_index_put` or `torch_index_put_`.
You can also use boolean vectors, for example:
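```r
# a sketch: a logical vector with the same length as the dimension
x <- torch_tensor(1:4)
x[c(TRUE, FALSE, TRUE, FALSE)]  # elements 1 and 3
```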
The above examples also work if the index is a long or boolean tensor instead of an R vector. It’s also possible to index with multi-dimensional boolean tensors:
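```r
# a sketch: a boolean mask with the same shape as x selects elements
x <- torch_randn(2, 3)
x[x > 0]  # a 1-D tensor containing the positive elements of x
```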