The development version of tidyllm reflects ongoing updates in the GitHub repository. Milestone versions are incremented when significant new features, improvements, or breaking changes are introduced.
New CRAN release. Largest changes compared to 0.1.0:

Major Features:

- Batch Request Support: Added support for batch requests with both the Anthropic and OpenAI APIs, enabling large-scale request handling.
- Schema Support: Improved structured outputs in JSON mode with advanced `.json_schema` handling in `openai()`, enhancing support for well-defined JSON responses.
- Azure OpenAI Integration: Introduced the `azure_openai()` function for accessing the Azure OpenAI service, with full support for rate-limiting and batch operations tailored to Azure’s API structure.
- Embedding Model Support: Added embedding generation functions for the OpenAI, Ollama, and Mistral APIs, supporting message content and media embedding.
- Mistral API Integration: The new `mistral()` function provides full support for Mistral models hosted in the EU, including rate-limiting and streaming capabilities.
- PDF Batch Processing: Introduced the `pdf_page_batch()` function, which processes PDFs page by page, allowing users to define page-specific prompts for detailed analysis.
- Support for OpenAI-compatible APIs: Introduced a `.compatible` argument (and flexible url and path) in `openai()` to allow compatibility with third-party OpenAI-compatible APIs (see the sketch after this list).
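As a rough illustration of the compatible-API feature, the sketch below points `openai()` at a self-hosted OpenAI-compatible server. The `.compatible` argument and the idea of a flexible url and path come from the notes above; the exact `.url`/`.path` argument names, the endpoint, and the model name are assumptions for illustration.

```r
library(tidyllm)

# Hypothetical: route an openai() request to a third-party
# OpenAI-compatible endpoint (.url/.path argument names assumed).
reply <- llm_message("Say hello in one sentence.") |>
  openai(
    .compatible = TRUE,                      # relax OpenAI-specific checks
    .url        = "http://localhost:8080",   # base URL of the server
    .path       = "/v1/chat/completions",    # chat-completions route
    .model      = "my-local-model"
  )
```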
Improvements:

- API Format Refactoring: Complete refactor of `to_api_format()` to reduce code duplication, simplify API format generation, and improve maintainability.
- Improved Error Handling: Enhanced input validation and error messaging for all API functions, making troubleshooting easier.
- Rate-Limiting Enhancements: Updated rate limiting to use `httr2::req_retry()`, leveraging 429 response headers for more accurate request management (see the sketch after this list).
- Expanded Testing: Added comprehensive tests for API functions using `httptest2`, covering rate-limiting, batch processing, error handling, and schema validation.
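For context on the rate-limiting change: `httr2::req_retry()` treats responses such as HTTP 429 as transient and, by default, waits as long as the server's Retry-After header requests before retrying. A minimal standalone sketch (the endpoint is a placeholder, not part of tidyllm):

```r
library(httr2)

# Retry transient failures (e.g. 429) up to five times,
# honoring the server's Retry-After header between attempts.
req <- request("https://api.example.com/v1/chat/completions") |>
  req_retry(max_tries = 5)

# resp <- req_perform(req)
```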
Breaking Changes:

- Redesigned Reply Functions: `get_reply()` was split into `get_reply()` for text outputs and `get_reply_data()` for structured outputs, improving type stability compared to the earlier function, whose output depended on a `.json` argument (see the migration sketch after this list).
- Deprecation of `chatgpt()`: The `chatgpt()` function has been deprecated in favor of `openai()` for feature alignment and improved consistency. Users should migrate to `openai()` to take advantage of the new features and enhancements.
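A minimal migration sketch based on these notes, assuming tidyllm's pipe-style usage where provider functions take a message object:

```r
library(tidyllm)

conversation <- llm_message("List three R packages as JSON.") |>
  openai()   # previously chatgpt(); same pipe-style usage

# Before: get_reply(conversation, .json = TRUE)  # output type varied
# Now the two cases are separate, type-stable functions:
reply_text <- get_reply(conversation)       # always text
reply_data <- get_reply_data(conversation)  # always parsed structured data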
Minor Updates and Bug Fixes:

- Expanded PDF Support in `llm_message()`: `llm_message()` now supports specifying a range of pages in a PDF by passing a list with `filename`, `start_page`, and `end_page`, so users can extract and process specific pages of a document (see the sketch after this list).
- New `ollama_download_model()` function to download models from the Ollama API. It supports a streaming mode that shows a live progress bar while the model downloads.
- All sequential chat API functions now support streaming.
- Enhanced Input Validation: All API functions now have improved input validation, ensuring better alignment with API documentation.
- Improved Error Handling: More human-readable error messages for failed API requests; `openai()`, `ollama()`, and `claude()` now return more informative error messages when API calls fail, helping with debugging and troubleshooting.
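A sketch of the page-range feature: the list fields `filename`, `start_page`, and `end_page` come from the notes above, while the `.pdf` argument name is an assumption for illustration.

```r
library(tidyllm)

# Hypothetical .pdf argument name; the list fields are the
# documented interface for selecting a page range.
msg <- llm_message(
  "Summarize the methods section.",
  .pdf = list(
    filename   = "paper.pdf",
    start_page = 4,
    end_page   = 7
  )
)

# reply <- msg |> claude()
```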
- Advanced JSON Mode in `openai()`: The `openai()` function now supports advanced `.json_schema`s, allowing structured output in JSON mode for more precise responses (see the sketch after this list).
- Reasoning Model Support: Support for OpenAI's o1 reasoning models has been added, with better handling of system prompts in the `openai()` function.
- Streaming Callbacks Refactored: Since the streaming callback format for OpenAI, Mistral, and Groq is nearly identical, the three now rely on the same callback function.
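A hedged sketch of schema-constrained output: the `.json_schema` argument name comes from the notes, but the exact schema format `openai()` expects is an assumption (a plain JSON Schema as a nested list here).

```r
library(tidyllm)

# A JSON Schema describing the reply shape (list format assumed).
package_schema <- list(
  name   = "package_info",
  schema = list(
    type       = "object",
    properties = list(
      name    = list(type = "string"),
      purpose = list(type = "string")
    ),
    required = list("name", "purpose")
  )
)

conversation <- llm_message("Describe the jsonlite package.") |>
  openai(.json_schema = package_schema)

# Structured replies can then be pulled as data:
info <- get_reply_data(conversation)
```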
- New `ollama_embedding()` function to generate embeddings using the Ollama API.
- New `openai_embedding()` function to generate embeddings using the OpenAI API.
- New `mistral_embedding()` function to generate embeddings using the Mistral API (see the embedding sketch after this list).
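A minimal embedding sketch: the function name is from the notes above, while the model name and the exact return shape are assumptions.

```r
library(tidyllm)

# Embed a short text via the OpenAI embeddings endpoint.
# The .model value is an assumption for illustration.
embeddings <- llm_message("tidyllm makes LLM APIs tidy.") |>
  openai_embedding(.model = "text-embedding-3-small")

# embeddings is expected to hold one numeric vector per embedded item.
```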
- New `pdf_page_batch()` function, which processes PDF files page by page, extracting text and converting each page into an image, with either a general prompt or page-specific prompts. The function generates a list of `LLMMessage` objects that can be sent to an API and work with the batch-API functions in tidyllm (see the sketch after this list).
- New `mistral()` function to use Mistral models on La Plateforme, served from the EU, with rate-limiting and streaming support.
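A sketch of the batch workflow: `pdf_page_batch()` and its page-by-page behavior come from the notes, but the argument names and the batch-sending helper are assumptions.

```r
library(tidyllm)

# Hypothetical argument names (.pdf, .general_prompt) for illustration.
pages <- pdf_page_batch(
  .pdf            = "report.pdf",
  .general_prompt = "Summarize this page in two sentences."
)

# pages is a list of LLMMessage objects, one per PDF page,
# ready for the batch-API helpers (function name assumed):
# batch <- send_openai_batch(pages)
```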
- New message retrieval functions: `last_user_message()` pulls the last message the user sent, `get_reply()` gets the assistant reply at a given index of assistant messages, and `get_user_message()` gets the user message at a given index of user messages.
- New `.dry_run` argument, allowing users to generate an `httr2` request object for easier debugging and inspection (see the sketch after this list).
- Added `httptest2`-based tests with mock responses for all API functions, covering both basic functionality and rate-limiting.
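A sketch of `.dry_run`: the argument comes from the notes above, and `httr2::req_dry_run()` is the standard httr2 way to print a request without sending it.

```r
library(tidyllm)

# Build the request without sending it (.dry_run from the notes above).
req <- llm_message("Ping?") |>
  openai(.dry_run = TRUE)

# Inspect what would be sent over the wire:
httr2::req_dry_run(req)
```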
- Image Support in `groq()`: The `groq()` function now supports images sent via `llm_message()` (see the sketch after this list).
- JSON Mode: JSON mode is now more widely supported across all API functions, allowing for structured outputs where APIs support them. The `.json` argument is now passed only to API functions, specifying how the API should respond, and is no longer needed in `last_reply()`.
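A sketch of sending an image to `groq()`; the `.imagefile` argument name is an assumption for illustration.

```r
library(tidyllm)

# Hypothetical .imagefile argument name; groq() image support
# is the documented feature.
description <- llm_message(
  "What does this chart show?",
  .imagefile = "sales_chart.png"
) |>
  groq()
```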
- Improved `last_reply()` Behavior: The behavior of `last_reply()` has changed. It now automatically handles JSON replies by parsing them into structured data, falling back to raw text in case of errors. The `.json` argument is no longer used; you can still force raw text replies, even for JSON output, using the `.raw` argument (see the sketch after this list).
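A closing sketch of the new behavior, assuming a conversation that asked the API for JSON output:

```r
library(tidyllm)

conversation <- llm_message("Return a JSON object with fields a and b.") |>
  openai(.json = TRUE)

parsed <- last_reply(conversation)               # parsed structured data
raw    <- last_reply(conversation, .raw = TRUE)  # force the raw text reply
```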