
tidyllm: Tidy Integration of Large Language Models

A tidy interface for integrating large language model (LLM) APIs such as 'Claude', 'OpenAI', 'Groq', 'Mistral' and local models via 'Ollama' into R workflows. The package supports text- and media-based interactions, interactive message histories, batch request APIs, and a tidy, pipeline-oriented interface for streamlined integration into data workflows. Web services are available at <https://www.anthropic.com>, <https://openai.com>, <https://groq.com>, <https://mistral.ai/> and <https://ollama.com>.
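As a rough illustration of the pipeline-oriented interface described above, the sketch below builds a message and pipes it to a provider. Function names (`llm_message()`, `claude()`, `last_reply()`) follow the package documentation, but exact arguments may differ between versions, and a valid API key (here assumed to be `ANTHROPIC_API_KEY` in the environment) is required.

```r
library(tidyllm)

# Build a message history and send it to the Claude API in one pipe.
# Assumes the ANTHROPIC_API_KEY environment variable is set.
conversation <- llm_message("Summarise the mtcars dataset in one sentence.") |>
  claude()

# Extract the assistant's most recent reply from the message history.
last_reply(conversation)
```

The same message history can be piped to a different provider function (e.g. for 'Ollama' or 'Mistral') to switch backends without restructuring the workflow.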

Version: 0.3.0
Depends: R (≥ 4.2.0)
Imports: S7 (≥ 0.2.0), base64enc, glue, jsonlite, curl, httr2, lubridate, purrr, rlang, stringr, grDevices, pdftools, tibble, cli, png, lifecycle
Suggests: knitr, rmarkdown, testthat (≥ 3.0.0), tidyverse, httptest2, httpuv
Published: 2024-12-08
DOI: 10.32614/CRAN.package.tidyllm
Author: Eduard Brüll [aut, cre], Jia Zhang [ctb]
Maintainer: Eduard Brüll <eduard.bruell at zew.de>
BugReports: https://github.com/edubruell/tidyllm/issues
License: MIT + file LICENSE
URL: https://edubruell.github.io/tidyllm/
NeedsCompilation: no
Materials: README NEWS
CRAN checks: tidyllm results

Documentation:

Reference manual: tidyllm.pdf
Vignettes: Get Started (source, R code)

Downloads:

Package source: tidyllm_0.3.0.tar.gz
Windows binaries: r-devel: tidyllm_0.3.0.zip, r-release: tidyllm_0.3.0.zip, r-oldrel: tidyllm_0.3.0.zip
macOS binaries: r-release (arm64): tidyllm_0.3.0.tgz, r-oldrel (arm64): tidyllm_0.3.0.tgz, r-release (x86_64): tidyllm_0.3.0.tgz, r-oldrel (x86_64): tidyllm_0.3.0.tgz
Old sources: tidyllm archive

Linking:

Please use the canonical form https://CRAN.R-project.org/package=tidyllm to link to this page.
