localLLM: Running Local LLMs with 'llama.cpp' Backend

The 'localLLM' package provides R bindings to the 'llama.cpp' library for running large language models. The package uses a lightweight architecture in which the C++ backend library is downloaded at runtime rather than bundled with the package. Features include text generation, reproducible generation, and parallel inference.
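A minimal sketch of typical usage follows. The function names install_localLLM(), model_load(), and generate() are assumptions used for illustration, not confirmed API; consult the reference manual below for the actual interface.

    library(localLLM)

    # Assumed helper: downloads the C++ llama.cpp backend at runtime,
    # reflecting the package's design of not bundling the backend.
    install_localLLM()

    # Hypothetical loader for a GGUF model file and a generation call;
    # a fixed seed illustrates the reproducible-generation feature.
    model <- model_load("path/to/model.gguf")
    generate(model, prompt = "Hello, world!", seed = 42)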

Version: 1.0.1
Depends: R (≥ 3.6.0)
Imports: Rcpp (≥ 1.0.14), tools, utils
Suggests: testthat (≥ 3.0.0), covr
Published: 2025-10-15
DOI: 10.32614/CRAN.package.localLLM
Author: Eddie Yang [aut], Yaosheng Xu [aut, cre]
Maintainer: Yaosheng Xu <xu2009 at purdue.edu>
BugReports: https://github.com/EddieYang211/localLLM/issues
License: MIT + file LICENSE
URL: https://github.com/EddieYang211/localLLM
NeedsCompilation: yes
SystemRequirements: C++17, libcurl (optional, for model downloading)
CRAN checks: localLLM results

Documentation:

Reference manual: localLLM.html, localLLM.pdf

Downloads:

Package source: localLLM_1.0.1.tar.gz
Windows binaries: r-devel: localLLM_1.0.1.zip, r-release: localLLM_1.0.1.zip, r-oldrel: localLLM_1.0.1.zip
macOS binaries: r-release (arm64): localLLM_1.0.1.tgz, r-oldrel (arm64): localLLM_1.0.1.tgz, r-release (x86_64): localLLM_1.0.1.tgz, r-oldrel (x86_64): localLLM_1.0.1.tgz
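
The released version can also be installed directly from CRAN in the usual way:

    install.packages("localLLM")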

Linking:

Please use the canonical form https://CRAN.R-project.org/package=localLLM to link to this page.
