Animl comprises a variety of machine learning tools for analyzing ecological data. The package includes a set of functions to classify subjects within camera trap field data and can handle both images and videos.
Below are the steps required for automatic identification of animals within camera trap images or videos.
First, build the file manifest of a given directory.
```r
library(animl)

imagedir <- "examples/TestData"

# Create save-file placeholders and working directories
WorkingDirectory(imagedir, globalenv())

# Read exif data for all images within the base directory
files <- build_file_manifest(imagedir, out_file = filemanifest, exif = TRUE)

# Process videos, extract frames for ID
allframes <- extract_frames(files, out_dir = vidfdir, out_file = imageframes,
                            frames = 2, parallel = TRUE,
                            workers = parallel::detectCores())
```
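Before moving on, it can be worth a quick sanity check that the manifest contains everything you expect; base R is sufficient (exact column names may vary between animl versions):

```r
# Quick look at the manifest produced above
nrow(allframes)   # total count of still images plus extracted video frames
head(allframes)   # first few rows, including the file paths
```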
This produces a dataframe of images, including frames extracted from any videos, to be fed into the classifier. The authors recommend a two-step approach: first run Microsoft's 'MegaDetector' object detector to identify potential animals, then apply a second classification model trained on the species of interest.
More info on MegaDetector.
```r
# Load the MegaDetector model
md_py <- megadetector("/mnt/machinelearning/megaDetector/md_v5a.0.0.pt")

# Obtain crop information for each image
mdraw <- detect_MD_batch(md_py, allframes)

# Add crop information to dataframe
mdresults <- parse_MD(mdraw, manifest = allframes, out_file = detections)
```
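MegaDetector groups detections into a small set of categories (animal, person, vehicle). As a quick sketch, assuming the parsed results store this in a column named category (check your animl version), you can tally what MD found before cropping:

```r
# Count detections per MegaDetector category
table(mdresults$category)
```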
Then feed the crops into the classifier. We recommend only classifying crops identified by MD as animals.
```r
# Pull out animal crops
animals <- get_animals(mdresults)

# Pull out crops with human, vehicle, and empty predictions
empty <- get_empty(mdresults)

model_file <- "/Models/Southwest/v3/southwest_v3.pt"
class_list <- "/Models/Southwest/v3/southwest_v3_classes.csv"

# Load the classifier model
southwest <- load_model(model_file, class_list)

# Obtain species predictions
animals <- predict_species(animals, southwest[[1]], southwest[[2]], raw = FALSE)

# Recombine animal detections with remaining detections
manifest <- rbind(animals, empty)
```
If your data includes videos or sequences, we recommend using the sequenceClassification algorithm. This requires the raw output of the prediction algorithm.
```r
classes <- southwest[[2]]$Code

# Sequence Classification
pred <- predict_species(animals, southwest[[1]], southwest[[2]], raw = TRUE)
manifest <- sequenceClassification(animals, empty = empty, pred, classes,
                                   "Station", emptyclass = "empty")
```
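At this point the manifest carries one prediction per image or sequence; a common last step is to write it out for review. A minimal example using base R (the file name here is arbitrary):

```r
# Save the labeled manifest for downstream review
write.csv(manifest, file = "predictions.csv", row.names = FALSE)
```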
The Conservation Technology Lab has several models available for use.
We recommend running animl on a computer with a dedicated GPU. Animl also depends on exiftool for accessing file metadata.

animl depends on Python and, when installed via CRAN, will install its Python package dependencies if they are not already available. However, we recommend setting up a conda environment using the provided config file.
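For example, assuming the config file shipped with the repository is named environment.yml and defines an environment called animl (check the repo for the actual names), setup would look like:

```bash
conda env create -f environment.yml
conda activate animl
```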
The R version of animl depends on its Python counterpart, animl-py, to handle the machine learning.
Next, install animl-py in your preferred Python environment (such as conda) using pip:
```bash
pip install animl
```
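A quick way to confirm the package is importable from the environment animl-r will use (assuming the module name matches the package name):

```bash
python -c "import animl"
```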
Animl-r can be installed through CRAN:
```r
install.packages('animl')
```
Animl-r can also be installed by downloading this repo, opening the animl.Rproj file in RStudio and selecting Build -> Install Package.
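Alternatively, assuming the repository slug is conservationtechlab/animl (verify against the actual repo URL), it can be installed from GitHub directly:

```r
# install.packages("remotes")  # if not already installed
remotes::install_github("conservationtechlab/animl")
```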
Kyra Swanson
Mathias Tobler
Edgar Navarro
Josh Kessler
Jon Kohler