
Introduction to PacketLLM

Introduction

Welcome to PacketLLM! This package provides a seamless way to interact with OpenAI’s Large Language Models (LLMs) such as GPT-4o directly within your RStudio environment. It runs as an RStudio Gadget, offering a familiar chat interface without the need to switch between applications.

With PacketLLM, you can:

  - Generate code and explanations.
  - Analyze text from uploaded files (.R, .pdf, .docx).
  - Manage multiple conversations in tabs.
  - Customize model behavior through settings.

This vignette will guide you through the essential setup and basic usage.

Prerequisites: OpenAI API Key

PacketLLM requires an OpenAI API key. If you do not have one yet, you can create it in your OpenAI account settings; note that API usage is billed by OpenAI, separately from this package.
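A common convention for R packages that talk to OpenAI is to read the key from the OPENAI_API_KEY environment variable. As a sketch, assuming PacketLLM follows that convention, you can set the variable for the current session like this:

```r
# Set the key for the current R session only.
# (Assumption: PacketLLM reads the conventional OPENAI_API_KEY variable.)
Sys.setenv(OPENAI_API_KEY = "your-api-key-here")

# Check that the key is now visible to R
nchar(Sys.getenv("OPENAI_API_KEY")) > 0
```

To avoid re-entering the key every session, you can instead add a line of the form OPENAI_API_KEY=your-api-key-here to your user-level .Renviron file and restart R. Never commit the key to version control.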

Installation

Install PacketLLM from GitHub:

# Uncomment the next line if you need the remotes package
# install.packages("remotes")
remotes::install_github("AntoniCzolgowski/PacketLLM")

Launching the Chat Gadget

Once installed and your API key is set, launch the gadget from the R console:

library(PacketLLM)
run_llm_chat_app()

This command should be executed in an interactive RStudio session.
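If you source this call from a script rather than typing it at the console, it can help to guard it so the gadget is only launched in an interactive session. A minimal sketch:

```r
library(PacketLLM)

# Only launch the gadget when R is running interactively,
# e.g. inside an RStudio session; do nothing in batch mode
if (interactive()) {
  run_llm_chat_app()
}
```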

Understanding the Interface

The PacketLLM interface consists of:

  - A tab bar for switching between open conversations.
  - A chat area showing the conversation history.
  - A text input with a Send button for your prompts.
  - A + button for attaching files as context.
  - New Conversation and Settings buttons.

Basic Workflow

  1. Launch the App: Run run_llm_chat_app().
  2. Start Chatting: Type your question or prompt.
  3. Send Message: Click the Send button.
  4. Wait for Response: The app will show a processing indicator while waiting for the model’s reply.

Optional steps:

  - Add File Context: Click the + button to attach files before sending.
  - Start a New Conversation: Click New Conversation for a fresh session.
  - Adjust Settings: Use Settings to change model options before sending the first message.

Exploring Further

Experiment with different models, system prompts, and file attachments to enhance your workflow with PacketLLM. For issues or suggestions, please visit the GitHub Issues page.
