
tidyllm Development Version Overview

The development version of tidyllm reflects the ongoing updates in the GitHub repository.

Versioning Policy

Milestone versions are incremented when significant new features, improvements, or breaking changes are introduced.

Version 0.2.0

A new CRAN release. The largest changes compared to 0.1.0:

Major Features:

- Batch Request Support: Added support for batch requests with both the Anthropic and OpenAI APIs, enabling large-scale request handling.
- Schema Support: Improved structured outputs in JSON mode with advanced .json_schema handling in openai(), enhancing support for well-defined JSON responses.
- Azure OpenAI Integration: Introduced the azure_openai() function for accessing the Azure OpenAI service, with full support for rate-limiting and batch operations tailored to Azure's API structure.
- Embedding Model Support: Added embedding-generation functions for the OpenAI, Ollama, and Mistral APIs, supporting message content and media embedding.
- Mistral API Integration: The new mistral() function provides full support for Mistral models hosted in the EU, including rate-limiting and streaming capabilities.
- PDF Batch Processing: Introduced the pdf_page_batch() function, which processes PDFs page by page and lets users define page-specific prompts for detailed analysis.
- Support for OpenAI-Compatible APIs: Introduced a .compatible argument (and flexible url and path handling) in openai() to allow compatibility with third-party OpenAI-compatible APIs.
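As an illustration of the new schema support, here is a minimal sketch. The prompt, the schema contents, and the field names are invented for the example; the only documented pieces are the .json_schema argument to openai() and the get_reply_data() accessor, and running it requires a valid OpenAI API key:

```r
library(tidyllm)

# Hypothetical JSON schema describing the structured reply we want
# (the "person" schema and its fields are invented for this example)
person_schema <- list(
  name = "person",
  schema = list(
    type = "object",
    properties = list(
      name = list(type = "string"),
      age  = list(type = "integer")
    ),
    required = list("name", "age")
  )
)

# Send a message and ask openai() to constrain the reply to the schema
reply <- llm_message("Extract the person from: 'Ada, aged 36.'") |>
  openai(.json_schema = person_schema)

# Structured outputs are then retrieved as parsed data
get_reply_data(reply)
```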

Improvements:

- API Format Refactoring: Complete refactor of to_api_format() to reduce code duplication, simplify API format generation, and improve maintainability.
- Improved Error Handling: Enhanced input validation and error messages for all API functions, making troubleshooting easier.
- Rate-Limiting Enhancements: Updated rate limiting to use httr2::req_retry(), leveraging 429 response headers for more accurate request management.
- Expanded Testing: Added comprehensive tests for API functions using httptest2, covering rate-limiting, batch processing, error handling, and schema validation.

Breaking Changes:

- Redesigned Reply Functions: get_reply() was split into get_reply() for text outputs and get_reply_data() for structured outputs, improving type stability compared to the earlier function, whose output type depended on a .json argument.
- Deprecation of chatgpt(): The chatgpt() function has been deprecated in favor of openai() for feature alignment and improved consistency.
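The split accessors can be sketched as follows; the prompt is invented, the conversation object comes from a prior sequential call, and running the snippet requires an API key:

```r
library(tidyllm)

# Assumed prior call producing a conversation object
chat <- llm_message("Reply with a short greeting.") |>
  openai()

get_reply(chat)        # always a character reply: the assistant's text
# get_reply_data(chat) # parsed structured output, for JSON-mode requests
```

Separating the two means each accessor has a stable return type, rather than one function switching between text and parsed data based on an argument.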

Minor Updates and Bug Fixes:

- Expanded PDF Support in llm_message(): Allows extraction of specific page ranges from PDFs, improving flexibility in document handling.
- New ollama_download_model() function to download models from the Ollama API.
- All sequential chat API functions now support streaming.
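The expanded PDF support might be used as sketched below; the file name is invented, and the exact shape of the page-range specification is an assumption based on the feature description rather than the documented signature:

```r
library(tidyllm)

# Attach only pages 2 to 4 of a (hypothetical) PDF to a message
# (the list format for the page range is an assumption)
msg <- llm_message(
  "Summarise the attached pages.",
  .pdf = list(filename = "report.pdf", start_page = 2, end_page = 4)
)
```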

Version 0.1.11

Major Features

Improvements

Version 0.1.10

Breaking Changes

Improvements

Version 0.1.9

Major Features

Breaking Changes

Improvements


Version 0.1.8

Major Features

Improvements


Version 0.1.7

Major Features


Version 0.1.6

Major Features


Version 0.1.5

Major Features

Improvements


Version 0.1.4

Major Features

Improvements


Version 0.1.3

Major Features

Breaking Changes


Version 0.1.2

Improvements


Version 0.1.1

Major Features

Breaking Changes
