llmhelper

Badges: R-CMD-check, CRAN status, r-universe, Lifecycle: experimental

Overview

llmhelper provides a unified and user-friendly interface for interacting with Large Language Models (LLMs) in R. Key features:

- A single client interface for OpenAI-compatible APIs (OpenAI, DeepSeek) and local Ollama models
- Plain-text or schema-constrained JSON responses
- Prompt construction from templates
- Interactive JSON schema generation
- Ollama model management (list, download, delete)
- Connection diagnostics

Installation

From CRAN (once available)

install.packages("llmhelper")

From GitHub

# install.packages("pak")
pak::pak("Zaoqu-Liu/llmhelper")

From R-universe

install.packages("llmhelper", repos = "https://Zaoqu-Liu.r-universe.dev")

Quick Start

Setting up an LLM Provider

library(llmhelper)

# OpenAI
openai_client <- llm_provider(
  base_url = "https://api.openai.com/v1/chat/completions",
  api_key = Sys.getenv("OPENAI_API_KEY"),
  model = "gpt-4o-mini"
)

# Ollama (local)
ollama_client <- llm_ollama(
  model = "qwen2.5:1.5b-instruct",
  auto_download = TRUE
)

# DeepSeek
deepseek_client <- llm_provider(
  base_url = "https://api.deepseek.com/v1/chat/completions",
  api_key = Sys.getenv("DEEPSEEK_API_KEY"),
  model = "deepseek-chat"
)
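Because llm_provider() targets the OpenAI chat-completions request format, any OpenAI-compatible endpoint should work the same way. A minimal sketch; the URL and model name below are placeholders for a hypothetical self-hosted server, not something shipped with the package:

# Hypothetical self-hosted OpenAI-compatible server (e.g., vLLM or LiteLLM)
local_client <- llm_provider(
  base_url = "http://localhost:8000/v1/chat/completions",
  api_key = "unused-for-local",
  model = "my-local-model"
)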

Getting Responses

# Simple text response
response <- get_llm_response(
  prompt = "What is machine learning?",
  llm_client = openai_client,
  max_words = 100
)

# Structured JSON response
schema <- list(
  name = "analysis_result",
  schema = list(
    type = "object",
    properties = list(
      summary = list(type = "string", description = "Brief summary"),
      key_points = list(
        type = "array",
        items = list(type = "string"),
        description = "Main key points"
      ),
      confidence = list(type = "number", description = "Confidence score 0-1")
    ),
    required = c("summary", "key_points", "confidence")
  )
)

json_response <- get_llm_response(
  prompt = "Analyze the benefits of R programming",
  llm_client = openai_client,
  json_schema = schema
)
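The exact return type of a structured response is not shown here; assuming json_response arrives as a JSON string, it can be parsed into an R list with jsonlite:

# Parse the structured output (assumes a JSON string is returned)
parsed <- jsonlite::fromJSON(json_response)
parsed$summary
parsed$key_points
parsed$confidence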

Using Prompt Templates

template <- "
Analyze the following dataset: {dataset_name}
Focus on: {focus_area}
Output format: {output_format}
"

prompt <- build_prompt(
  template = template,
  dataset_name = "iris",
  focus_area = "species classification",
  output_format = "bullet points"
)
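The built prompt can then be passed to any configured client:

analysis <- get_llm_response(
  prompt = prompt,
  llm_client = openai_client
)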

Interactive JSON Schema Generation

result <- generate_json_schema(
  description = "A user profile with name, email, and preferences",
  llm_client = openai_client
)

# Use the generated schema
final_schema <- extract_schema_only(result)
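The extracted schema can then be supplied as json_schema when requesting a response:

profile <- get_llm_response(
  prompt = "Create a sample user profile",
  llm_client = openai_client,
  json_schema = final_schema
)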

Managing Ollama Models

# List available models
ollama_list_models()

# Download a new model
ollama_download_model("llama3.2:1b")

# Delete a model
ollama_delete_model("old-model:latest")

Diagnostics

# Debug connection issues
diagnose_llm_connection(
  base_url = "https://api.openai.com/v1/chat/completions",
  api_key = Sys.getenv("OPENAI_API_KEY"),
  model = "gpt-4o-mini"
)

Main Functions

Function                     Description
llm_provider()               Create an OpenAI-compatible LLM provider
llm_ollama()                 Create an Ollama LLM provider
get_llm_response()           Get text or JSON responses from an LLM
build_prompt()               Build prompts from templates
set_prompt()                 Create prompt objects with system/user messages
generate_json_schema()       Interactively generate JSON schemas
diagnose_llm_connection()    Debug connection issues
ollama_list_models()         List available Ollama models
ollama_download_model()      Download Ollama models
ollama_delete_model()        Delete Ollama models
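set_prompt() is the one function above without an example elsewhere in this README. A minimal sketch; the argument names system and user are assumptions based only on the description in the table:

# Hypothetical usage of set_prompt(); argument names are assumed
msg <- set_prompt(
  system = "You are a helpful data analyst.",
  user = "Summarize the iris dataset in two sentences."
)
response <- get_llm_response(prompt = msg, llm_client = openai_client)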

Environment Variables

Set your API keys as environment variables, either in your .Renviron file or with Sys.setenv() before using the package:

# For the current R session:
Sys.setenv(OPENAI_API_KEY = "your-openai-key")
Sys.setenv(DEEPSEEK_API_KEY = "your-deepseek-key")
Sys.setenv(LLM_API_KEY = "your-default-key")
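In .Renviron, the same keys are set as plain KEY=value lines (no R syntax):

OPENAI_API_KEY=your-openai-key
DEEPSEEK_API_KEY=your-deepseek-key
LLM_API_KEY=your-default-key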

Requirements

Citation

citation("llmhelper")

License

GPL (>= 3)

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Author

Zaoqu Liu (liuzaoqu@163.com) - ORCID: 0000-0002-0452-742X
