llm

package module
v1.2.0
Published: Dec 20, 2025 License: AGPL-3.0 Imports: 24 Imported by: 0

README

LLM Package

The LLM (Large Language Model) package provides a flexible, extensible interface for integrating various AI language models into the RoastMyContract application.

Overview

This package offers a unified interface for interacting with different LLM providers, including:

  • OpenAI (GPT models)
  • Google Gemini
  • Google Vertex AI
  • Anthropic (Claude models)
  • Mock implementation for testing

The design follows the adapter pattern, allowing easy integration of new LLM providers without modifying existing code.

Key Features

  • Provider Agnostic: Use any supported LLM provider through a consistent interface
  • Extensible: Easily add new LLM providers by implementing the LlmInterface
  • Configurable: Fine-tune model parameters like temperature, token limits, etc.
  • Multiple Output Formats: Support for text, JSON, and image generation
  • Fallback Mechanism: Automatic fallback to alternative providers if the primary one fails

Usage Examples

Basic Usage
// Create an LLM instance with a specific provider
llmEngine, err := llm.TextModel(llm.ProviderOpenAI, llm.LlmOptions{
    ApiKey: os.Getenv("OPENAI_API_KEY"),
    Model:  "gpt-4",
})
if err != nil {
    // Handle error
}

// Generate text
response, err := llmEngine.GenerateText(
    "You are a helpful assistant.", // System prompt
    "What is a contract?",          // User message
)
Using Factory Functions
// Create a JSON model
llmEngine, err := llm.JSONModel(llm.ProviderGemini, llm.LlmOptions{
    ApiKey:      os.Getenv("GEMINI_API_KEY"),
    Model:       "gemini-pro",
    Temperature: 0.3,
})

// Generate JSON
jsonResponse, err := llmEngine.GenerateJSON(
    "You are a legal assistant that analyzes contracts.",
    "Analyze this contract clause: ...",
    llm.LlmOptions{
        Temperature: 0.3, // Override default temperature
    },
)
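
GenerateJSON returns the JSON document as a string. A typical follow-up is to unmarshal it into a struct; the ClauseAnalysis type below is purely illustrative and not part of this package:

// ClauseAnalysis is a hypothetical target struct for the example above
type ClauseAnalysis struct {
    Risk    string `json:"risk"`
    Summary string `json:"summary"`
}

var analysis ClauseAnalysis
if err := json.Unmarshal([]byte(jsonResponse), &analysis); err != nil {
    // Handle malformed or unexpected JSON
}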
Custom Configuration
// Create an LLM with custom options using NewLLM directly
llmEngine, err := llm.NewLLM(llm.LlmOptions{
    Provider:     llm.ProviderVertex,
    ProjectID:    "my-gcp-project",
    Region:       "europe-west1",
    Model:        "gemini-2.5-flash",
    MaxTokens:    8192,
    Temperature:  0.7,
    Verbose:      true,
    OutputFormat: llm.OutputFormatJSON,
})

// Or use factory functions for convenience
llmEngine, err := llm.TextModel(llm.ProviderVertex, llm.LlmOptions{
    ProjectID:   "my-gcp-project",
    Region:      "europe-west1",
    Model:       "gemini-2.5-flash",
    MaxTokens:   8192,
    Temperature: 0.7,
})

Adding a New Provider

To add a new LLM provider:

  1. Create a new file yourprovider_implementation.go
  2. Implement the LlmInterface interface
  3. Register your provider in the init function

Example:

// Define your provider type
type YourProviderLLM struct {
    options LlmOptions
    // Your provider-specific fields
}

// Implement the LlmInterface methods
func (y *YourProviderLLM) Generate(systemPrompt, userMessage string, options ...LlmOptions) (string, error) {
    // Merge options
    opts := y.options
    if len(options) > 0 {
        opts = mergeOptions(opts, options[0])
    }
    // Your implementation
}

// Register your provider in init()
func init() {
    RegisterProvider(ProviderYourProvider, func(options LlmOptions) (LlmInterface, error) {
        return newYourProviderImplementation(options)
    })
}

Interface

The core interface that all LLM providers must implement:

type LlmInterface interface {
    // GenerateText generates a text response
    GenerateText(systemPrompt string, userPrompt string, options ...LlmOptions) (string, error)

    // GenerateJSON generates a JSON response
    GenerateJSON(systemPrompt string, userPrompt string, options ...LlmOptions) (string, error)

    // GenerateImage generates an image from a prompt
    GenerateImage(prompt string, options ...LlmOptions) ([]byte, error)

    // GenerateEmbedding generates embeddings for the given text
    GenerateEmbedding(text string) ([]float32, error)

    // Deprecated: Generate is the core method for generating content
    Generate(systemPrompt string, userMessage string, options ...LlmOptions) (string, error)
}

Configuration Options

The LlmOptions struct provides configuration options for LLM requests:

Option           Description
------           -----------
Provider         Which LLM provider to use
ApiKey           API key for the provider
ProjectID        Project ID (for Vertex AI)
Region           Region (for Vertex AI)
Model            Model name to use
MaxTokens        Maximum tokens to generate
Temperature      Controls randomness (0.0-1.0)
Verbose          Enable verbose logging
OutputFormat     Desired output format
ProviderOptions  Provider-specific options

Best Practices

  1. Error Handling: Always check for errors when calling LLM methods
  2. Fallback Mechanism: Implement fallbacks to handle provider outages (a sketch follows this list)
  3. Prompt Engineering: Craft clear system prompts for better results
  4. Token Management: Be mindful of token limits for large inputs
  5. Environment Variables: Store API keys in environment variables, not in code
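
A minimal sketch of practice #2, assuming OpenAI as the primary provider and Gemini as the fallback:

systemPrompt := "You are a helpful assistant."
userMessage := "What is a contract?"

primary, err := llm.TextModel(llm.ProviderOpenAI, llm.LlmOptions{
    ApiKey: os.Getenv("OPENAI_API_KEY"),
    Model:  "gpt-4",
})
if err != nil {
    // Handle error
}

response, err := primary.GenerateText(systemPrompt, userMessage)
if err != nil {
    // Primary provider failed; retry once against the fallback provider
    fallback, ferr := llm.TextModel(llm.ProviderGemini, llm.LlmOptions{
        ApiKey: os.Getenv("GEMINI_API_KEY"),
        Model:  "gemini-pro",
    })
    if ferr == nil {
        response, err = fallback.GenerateText(systemPrompt, userMessage)
    }
}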

Provider-Specific Notes

OpenAI
  • Supports GPT-3.5 and GPT-4 models
  • Requires an OpenAI API key
  • Image generation returns decoded image bytes (PNG) rather than URLs
Gemini
  • Supports Gemini Pro models
  • Requires a Google API key
Vertex AI
  • Supports Gemini models on Google Cloud
  • Requires GCP project ID and region
  • Credentials can be supplied in several ways (see the sketch after these notes):
    1. Set LlmOptions.ProviderOptions["credentials_json"] with the raw service-account JSON string (or []byte).
    2. Point LlmOptions.ProviderOptions["credentials_file"] to the path of a service-account JSON file.
    3. Configure environment variables:
      • VERTEXAI_CREDENTIALS_JSON containing the JSON credentials.
      • VERTEXAI_CREDENTIALS_FILE with a path to the credentials file.
      • GOOGLE_APPLICATION_CREDENTIALS (standard ADC path) as a fallback.
    4. When none of the above are provided, Application Default Credentials will be used.
  • The legacy vertexapicredentials.json file is no longer required; use the options above instead.
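
A minimal sketch of option 2 above, pointing the client at a service-account file (the path is a placeholder):

llmEngine, err := llm.TextModel(llm.ProviderVertex, llm.LlmOptions{
    ProjectID: "my-gcp-project",
    Region:    "europe-west1",
    Model:     llm.GEMINI_MODEL_2_5_FLASH,
    ProviderOptions: map[string]any{
        "credentials_file": "/path/to/service-account.json", // placeholder path
    },
})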
Anthropic
  • Supports Claude models
  • Requires an Anthropic API key

Factory Functions

The package provides convenient factory functions:

  • TextModel(provider, options) - Creates an LLM configured for text output
  • JSONModel(provider, options) - Creates an LLM configured for JSON output
  • ImageModel(provider, options) - Creates an LLM configured for image generation

All factory functions require both a provider and an options parameter:

// Text model
textLLM, err := llm.TextModel(llm.ProviderOpenAI, llm.LlmOptions{
    ApiKey: os.Getenv("OPENAI_API_KEY"),
    Model:  "gpt-4",
})

// JSON model
jsonLLM, err := llm.JSONModel(llm.ProviderGemini, llm.LlmOptions{
    ApiKey: os.Getenv("GEMINI_API_KEY"),
    Model:  "gemini-pro",
})

// Image model
imageLLM, err := llm.ImageModel(llm.ProviderOpenAI, llm.LlmOptions{
    ApiKey: os.Getenv("OPENAI_API_KEY"),
    Model:  "dall-e-3",
})

Testing

The package includes a mock implementation for testing:

Using Mock Responses in Tests

You can easily test your code by providing mock responses:

// Create a mock LLM with a default response
mockLLM, _ := llm.NewLLM(llm.LlmOptions{
    Provider:     llm.ProviderMock,
    MockResponse: "This is a mock response",
})

// Or provide per-call mock responses
response, _ := mockLLM.GenerateText(
    "system prompt",
    "user message",
    llm.LlmOptions{
        MockResponse: "Specific response for this test case",
    },
)

// The mock will return the provided response
fmt.Println(response) // Output: Specific response for this test case

The mock will return the first non-empty MockResponse it finds, checking in this order:

  1. The options passed to the specific function call
  2. The options used when creating the LLM client


Similar Projects

  • llm (https://github.com/sashabaranov/go-openai)
  • gollm (https://github.com/teilomillet/gollm)
  • fantasy (https://github.com/charmbracelet/fantasy)

Documentation


Constants

const GEMINI_MODEL_1_5_FLASH = "gemini-1.5-flash" // supported but now old
const GEMINI_MODEL_1_5_PRO = "gemini-1.5-pro" // supported but now old
const GEMINI_MODEL_2_0_FLASH_EXP_IMAGE_GENERATION = "gemini-2.0-flash-exp-image-generation"
const GEMINI_MODEL_2_0_FLASH = "gemini-2.0-flash-001"
const GEMINI_MODEL_2_0_FLASH_LITE = "gemini-2.0-flash-lite-001"
const GEMINI_MODEL_2_5_FLASH = "gemini-2.5-flash"
const GEMINI_MODEL_2_5_FLASH_LITE = "gemini-2.5-flash-lite"
const GEMINI_MODEL_2_5_PRO = "gemini-2.5-pro"
const GEMINI_MODEL_3_0_IMAGEN = "imagen-3.0-generate-002" // not supported
const OPENROUTER_MODEL_CODESTRAL_EMBED_2505 = "mistralai/codestral-embed-2505"

Mistral Codestral Embedding 2505 Input $0.15/M Output $0.00/M

const OPENROUTER_MODEL_DEEPSEEK_V3_1 = "deepseek/deepseek-chat-v3.1"

DeepSeek DeepSeek V3.1 Input $0.20/M Output $0.80/M Context 163,840

const OPENROUTER_MODEL_GEMINI_2_5_FLASH = "google/gemini-2.5-flash"

Google Gemini 2.5 Flash Input $0.30/M Output $2.50/M Context 1,048,576

const OPENROUTER_MODEL_GEMINI_2_5_FLASH_IMAGE = "google/gemini-2.5-flash-image"

Google Gemini 2.5 Flash Image Input $0.30/M Output $2.50/M Context 1,048,576

const OPENROUTER_MODEL_GEMINI_2_5_FLASH_LITE = "google/gemini-2.5-flash-lite"

Google Gemini 2.5 Flash Lite Input $0.10/M Output $0.40/M Context 1,048,576

const OPENROUTER_MODEL_GEMINI_2_5_PRO = "google/gemini-2.5-pro"

Google Gemini 2.5 Pro Input $1.25/M Output $10/M Context 1,048,576

const OPENROUTER_MODEL_GEMINI_3_PRO_PREVIEW = "google/gemini-3-pro-preview"

Google Gemini 3 Pro Preview Input $2/M Output $12/M Context 1,048,576

const OPENROUTER_MODEL_GEMINI_EMBED_001 = "google/gemini-embedding-001"

Google Gemini Embedding 001 Input $0.15/M Output $0.00/M

const OPENROUTER_MODEL_GEMMA_3_12B_IT = "google/gemma-3-12b-it"

Google Gemma 3 12B Input $0.048/M Output $0.193/M Context 96,000

const OPENROUTER_MODEL_GEMMA_3_27B_IT = "google/gemma-3-27b-it"

Google Gemma 3 27B Input $0.067/M Output $0.267/M Context 96,000

const OPENROUTER_MODEL_GPT_4_1_NANO = "openai/gpt-4.1-nano"

OpenAI GPT-4.1 Nano Input $0.10/M Output $0.40/M Context 1,047,576

const OPENROUTER_MODEL_GPT_5_1 = "openai/gpt-5.1"

OpenAI GPT-5.1 Input $1.25/M Output $10.00/M Context 400,000 Web Search: $10/K

const OPENROUTER_MODEL_GPT_5_IMAGE = "openai/gpt-5-image"

OpenAI GPT-5 Image Input $10.00/M Output $10/M Context 1,048,576

const OPENROUTER_MODEL_GPT_5_IMAGE_MINI = "openai/gpt-5-image-mini"

OpenAI GPT-5 Image Mini Input $2.50/M Output $2/M Context 1,048,576

const OPENROUTER_MODEL_GPT_5_NANO = "openai/gpt-5-nano"

OpenAI GPT-5 Nano Input $0.05/M Output $0.40/M Context 400,000

const OPENROUTER_MODEL_GPT_OSS_120B = "openai/gpt-oss-120b"

OpenAI GPT-OSS-120B Input $0.072/M Output $0.28/M Context 131,000

const OPENROUTER_MODEL_GPT_OSS_20B = "openai/gpt-oss-20b"

OpenAI GPT-OSS-20B Input $0.04/M Output $0.15/M Context 131,000

const OPENROUTER_MODEL_MISTRAL_EMBED_2312 = "mistralai/mistral-embed-2312"

Mistral Mistral Embedding 2312 Input $0.10/M Output $0.00/M

const OPENROUTER_MODEL_MISTRAL_MEDIUM_3_1 = "mistralai/mistral-medium-3.1"

Mistral Mistral Medium 3.1 Input $0.40/M Output $2/M Context 131,072

const OPENROUTER_MODEL_MISTRAL_NEMO = "mistralai/mistral-nemo"

Mistral Mistral Nemo Input $0.01/M Output $0.04/M Context 131,072

const OPENROUTER_MODEL_O4_MINI = "openai/o4-mini"

OpenAI O4 Mini Input $1.10/M Output $4.40/M Context 200,000

const OPENROUTER_MODEL_QWEN_3_235B_A22B_INSTRUCT_2507 = "qwen/qwen3-235b-a22b-2507"

Qwen Qwen3 235B A22B Instruct 2507 Input $0.078/M Output $0.312/M Context 262,144

const OPENROUTER_MODEL_QWEN_3_30B_A3B = "qwen/qwen3-30b-a3b"

Qwen Qwen3 30B A3B Input $0.02/M Output $0.08/M Context 40,960

const OPENROUTER_MODEL_QWEN_3_EMBEDDING_0_6B = "qwen/qwen3-embedding-0.6b"

Qwen Qwen3 Embedding 0.6B Input $0.01/M Output $0.00/M

const OPENROUTER_MODEL_TEXT_EMBEDDING_3_LARGE = "openai/text-embedding-3-large"

OpenAI Text Embedding 3 Large Input $0.13/M Output $0.00/M

const OPENROUTER_MODEL_TEXT_EMBEDDING_3_SMALL = "openai/text-embedding-3-small"

OpenAI Text Embedding 3 Small Input $0.02/M Output $0.00/M

const OPENROUTER_MODEL_TEXT_EMBEDDING_ADA_002 = "openai/text-embedding-ada-002"

OpenAI Text Embedding Ada 002 Input $0.10/M Output $0.00/M

Variables

This section is empty.

Functions

func CountTokens

func CountTokens(text string) int

CountTokens provides a simple approximation of token counting. Note: this is a basic implementation and is not accurate for all models; production code should use model-specific tokenizers.

func EstimateMaxTokens

func EstimateMaxTokens(promptTokens, contextWindowSize int) int

EstimateMaxTokens estimates the maximum number of tokens that could be generated given the model's context window size and the prompt length
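
A short sketch combining the two helpers to budget a request; the 128,000-token context window is an assumed value for the chosen model:

prompt := "Analyze this contract clause: ..."
promptTokens := llm.CountTokens(prompt) // rough approximation only

// Leave the remainder of the context window for the completion
maxTokens := llm.EstimateMaxTokens(promptTokens, 128000)

llmEngine, err := llm.TextModel(llm.ProviderOpenAI, llm.LlmOptions{
    ApiKey:    os.Getenv("OPENAI_API_KEY"),
    Model:     "gpt-4",
    MaxTokens: maxTokens,
})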

func RegisterCustomProvider added in v0.2.0

func RegisterCustomProvider(name string, factory LlmFactory)

RegisterCustomProvider registers a custom LLM provider
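
For example, registering a factory under a custom name; the "acme" name and newAcmeLLM constructor are hypothetical:

func init() {
    llm.RegisterCustomProvider("acme", func(options llm.LlmOptions) (llm.LlmInterface, error) {
        return newAcmeLLM(options) // hypothetical constructor
    })
}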

func RegisterProvider added in v0.2.0

func RegisterProvider(provider Provider, factory LlmFactory)

RegisterProvider registers a new LLM provider factory

Types

type AgentInterface added in v0.2.0

type AgentInterface interface {
	// SetRole sets the role of the agent
	// i.e. "You are a helpful assistant"
	SetRole(role string)

	// GetRole returns the role of the agent
	GetRole() string

	// SetTask sets the task for the agent
	// i.e. "Your task is to write a book about self-improvement"
	SetTask(task string)

	// GetTask returns the task of the agent
	GetTask() string

	// Execute runs the agent and returns the response
	Execute() (response string, err error)
}

AgentInterface defines the core interface that all agents must implement
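
A minimal sketch of a type satisfying AgentInterface by delegating to an LlmInterface; the SimpleAgent type is illustrative, not part of the package:

type SimpleAgent struct {
    role   string
    task   string
    engine llm.LlmInterface
}

func (a *SimpleAgent) SetRole(role string) { a.role = role }
func (a *SimpleAgent) GetRole() string     { return a.role }
func (a *SimpleAgent) SetTask(task string) { a.task = task }
func (a *SimpleAgent) GetTask() string     { return a.task }

// Execute uses the role as the system prompt and the task as the user prompt
func (a *SimpleAgent) Execute() (string, error) {
    return a.engine.GenerateText(a.role, a.task)
}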

type LlmFactory added in v0.2.0

type LlmFactory func(options LlmOptions) (LlmInterface, error)

LlmFactory is a function type that creates a new LLM instance; it returns (LlmInterface, error).

type LlmInterface added in v0.2.0

type LlmInterface interface {
	// GenerateText generates a text response from the LLM based on the given prompt
	GenerateText(systemPrompt string, userPrompt string, options ...LlmOptions) (string, error)

	// GenerateJSON generates a JSON response from the LLM based on the given prompt
	GenerateJSON(systemPrompt string, userPrompt string, options ...LlmOptions) (string, error)

	// GenerateImage generates an image from the LLM based on the given prompt
	GenerateImage(prompt string, options ...LlmOptions) ([]byte, error)

	// Deprecated: Generate generates a response from the LLM based on the given prompt and options
	Generate(systemPrompt string, userMessage string, options ...LlmOptions) (string, error)

	// GenerateEmbedding generates embeddings for the given text
	GenerateEmbedding(text string) ([]float32, error)
}

LlmInterface is an interface for making LLM API calls
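
A short sketch comparing two texts via GenerateEmbedding; the provider, model, environment variable, and cosine helper are all assumptions made for illustration:

// cosine computes cosine similarity between two vectors (illustrative helper)
func cosine(x, y []float32) float64 {
    var dot, nx, ny float64
    for i := range x {
        dot += float64(x[i]) * float64(y[i])
        nx += float64(x[i]) * float64(x[i])
        ny += float64(y[i]) * float64(y[i])
    }
    return dot / (math.Sqrt(nx) * math.Sqrt(ny))
}

embedder, err := llm.NewLLM(llm.LlmOptions{
    Provider: llm.ProviderOpenRouter,
    ApiKey:   os.Getenv("OPENROUTER_API_KEY"),
    Model:    llm.OPENROUTER_MODEL_TEXT_EMBEDDING_3_SMALL,
})
if err != nil {
    // Handle error
}

a, _ := embedder.GenerateEmbedding("termination clause")
b, _ := embedder.GenerateEmbedding("contract cancellation terms")
fmt.Printf("similarity: %.3f\n", cosine(a, b))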

func ImageModel

func ImageModel(provider Provider, options LlmOptions) (LlmInterface, error)

ImageModel creates an LLM model for image output

func JSONModel

func JSONModel(provider Provider, options LlmOptions) (LlmInterface, error)

JSONModel creates an LLM model for JSON output

func NewLLM added in v0.2.0

func NewLLM(options LlmOptions) (LlmInterface, error)

NewLLM creates a new LLM instance based on the provider specified in options

func TextModel

func TextModel(provider Provider, options LlmOptions) (LlmInterface, error)

TextModel creates an LLM model for text output

type LlmOptions added in v0.2.0

type LlmOptions struct {
	// Provider specifies which LLM provider to use
	Provider Provider

	// MockResponse, if not empty, will be returned by the mock implementation
	// instead of making an actual API call. This is useful for testing.
	MockResponse string `json:"-"`

	// ApiKey specifies the API key for the LLM provider
	ApiKey string

	// ProjectID specifies the project ID for the LLM (used by Vertex AI)
	ProjectID string

	// Region specifies the region for the LLM (used by Vertex AI)
	Region string

	// Model specifies the LLM model to use
	Model string

	// MaxTokens specifies the maximum number of tokens to generate
	MaxTokens int

	// Temperature controls the randomness of the response
	// A higher temperature (e.g., 0.8) makes the output more random and creative,
	// while a lower temperature (e.g., 0.2) makes the output more focused and deterministic.
	Temperature float64

	// Verbose controls whether to log detailed information
	Verbose bool

	// Logger specifies a logger to use for error logging
	Logger *slog.Logger

	// OutputFormat specifies the output format from the LLM
	OutputFormat OutputFormat

	// Additional options specific to the LLM provider
	ProviderOptions map[string]any
}

type OutputFormat

type OutputFormat string

OutputFormat specifies the desired output format from the LLM

const (
	OutputFormatText     OutputFormat = "text"
	OutputFormatJSON     OutputFormat = "json"
	OutputFormatXML      OutputFormat = "xml"
	OutputFormatYAML     OutputFormat = "yaml"
	OutputFormatEnum     OutputFormat = "enum"
	OutputFormatImagePNG OutputFormat = "image/png"
	OutputFormatImageJPG OutputFormat = "image/jpeg"
)
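
The factory functions cover only text, JSON, and image output; a minimal sketch selecting another format directly via NewLLM, assuming the chosen provider honors it:

llmEngine, err := llm.NewLLM(llm.LlmOptions{
    Provider:     llm.ProviderGemini,
    ApiKey:       os.Getenv("GEMINI_API_KEY"),
    Model:        llm.GEMINI_MODEL_2_5_FLASH,
    OutputFormat: llm.OutputFormatYAML,
})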

type Provider

type Provider string

Provider represents an LLM provider type

const (
	ProviderOpenAI     Provider = "openai"
	ProviderGemini     Provider = "gemini"
	ProviderVertex     Provider = "vertex"
	ProviderMock       Provider = "mock"
	ProviderAnthropic  Provider = "anthropic"
	ProviderOpenRouter Provider = "openrouter"
	ProviderCustom     Provider = "custom"
)

Supported LLM providers
