tidyllm is an R package designed to access various large language model APIs, including Claude, ChatGPT, Groq, Mistral, and local models via Ollama. Built for simplicity and functionality, it helps you generate text, analyze media, and integrate model feedback into your data workflows with ease.
To install tidyllm from CRAN, use:
install.packages("tidyllm")
Or for the development version from GitHub:
# Install devtools if not already installed
if (!requireNamespace("devtools", quietly = TRUE)) {
  install.packages("devtools")
}
devtools::install_github("edubruell/tidyllm")
Here’s a quick example using tidyllm to describe an image with the Claude model and then follow up with a local open-source model:
library("tidyllm")
# Describe an image with Claude
conversation <- llm_message("Describe this image",
                            .imagefile = here("image.png")) |>
  claude()

# Use the description to query further with a local model via ollama
conversation |>
  llm_message("Based on the previous description,
               what could the research in the figure be about?") |>
  ollama(.model = "gemma2")
For more examples and advanced usage, check the Get Started vignette.
Please note: To use tidyllm, you need either an installation of ollama or an active API key for one of the supported providers (e.g., Claude, ChatGPT). See the Get Started vignette for setup instructions.
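As a minimal sketch of the setup step (the exact environment-variable names below are common conventions and should be verified against the Get Started vignette), API keys are typically supplied as environment variables before sending the first message:

```r
# Set provider API keys for the current R session
# (variable names are assumptions; check the Get Started vignette)
Sys.setenv(ANTHROPIC_API_KEY = "your-claude-key")
Sys.setenv(OPENAI_API_KEY    = "your-openai-key")

# Confirm a key is visible to the session
nzchar(Sys.getenv("ANTHROPIC_API_KEY"))
```

For a persistent setup, the same variables can instead be placed in your `.Renviron` file so they are loaded automatically at startup.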
For detailed instructions and advanced features, see the package vignettes and function documentation.
We welcome contributions! Feel free to open issues or submit pull requests on GitHub.
This project is licensed under the MIT License - see the LICENSE file for details.