birdnetR integrates BirdNET, a state-of-the-art deep learning classifier for automated (bird) sound identification, into an R workflow. The package simplifies the analysis of (large) audio datasets from bioacoustic projects, allowing researchers to easily apply machine learning techniques, even without a background in computer science.
birdnetR is an R wrapper around the birdnet Python package. It provides the core functionality to analyze audio using the pre-trained BirdNET model or a custom classifier, and to predict bird species occurrence based on location and week of the year. It does not, however, include all the advanced features available in the BirdNET Analyzer; for advanced applications, such as training and validating custom classifiers, users should use the BirdNET Analyzer directly. birdnetR is under active development, and changes may affect existing workflows.
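As a taste of the location- and week-based prediction mentioned above, here is a minimal sketch. It assumes the species range ("meta") model interface, birdnet_model_meta() and predict_species_at_location_and_time(); check the package reference for the exact function and argument names.

# Load the package
library(birdnetR)

# Initialize the species range model (assumed interface)
meta_model <- birdnet_model_meta()

# Species likely to occur near Ithaca, NY in the 20th week of the year
# (coordinates and week chosen purely for illustration)
species <- predict_species_at_location_and_time(meta_model, latitude = 42.5, longitude = -76.45, week = 20)
head(species)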
Install the released version from CRAN:
install.packages("birdnetR")
or install the development version from GitHub with:
pak::pak("birdnet-team/birdnetR")
Note
Python dependencies are installed on demand, meaning they are installed the first time you use them. This results in a longer initial setup.
This is a simple example using the tflite BirdNET model to predict species in an audio file.

# Load the package
library(birdnetR)

# Initialize a BirdNET model
model <- birdnet_model_tflite()

# Path to the audio file (replace with your own file path)
audio_path <- system.file("extdata", "soundscape.mp3", package = "birdnetR")

# Predict species within the audio file
predictions <- predict_species_from_audio_file(model, audio_path)

# Get most probable prediction within each time interval
get_top_prediction(predictions)
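The returned detections can be post-processed like a regular data frame. A minimal sketch, assuming the result contains a confidence column (check the returned object's column names):

# Keep only detections above a confidence threshold (column name assumed)
confident <- subset(predictions, confidence >= 0.5)

# Save the filtered detections for later analysis
write.csv(confident, "detections.csv", row.names = FALSE)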
Feel free to use birdnetR for your acoustic analyses and research. If you do, please cite as:
@article{kahl2021birdnet,
title={BirdNET: A deep learning solution for avian diversity monitoring},
author={Kahl, Stefan and Wood, Connor M and Eibl, Maximilian and Klinck, Holger},
journal={Ecological Informatics},
volume={61},
pages={101236},
year={2021},
publisher={Elsevier}
}
Please ensure you review and adhere to the specific license terms provided with each model. Note that educational and research purposes are considered non-commercial use cases.
This project is supported by Jake Holshuh (Cornell class of ’69) and The Arthur Vining Davis Foundations. Our work in the K. Lisa Yang Center for Conservation Bioacoustics is made possible by the generosity of K. Lisa Yang to advance innovative conservation technologies to inspire and inform the conservation of wildlife and habitats.
The development of BirdNET is supported by the German Federal Ministry of Education and Research through the project “BirdNET+” (FKZ 01|S22072). The German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety contributes through the “DeepBirdDetect” project (FKZ 67KI31040E). In addition, the Deutsche Bundesstiftung Umwelt supports BirdNET through the project “RangerSound” (project 39263/01).
BirdNET is a joint effort of partners from academia and industry. Without these partnerships, this project would not have been possible. Thank you!