llm

package module
v0.2.5
Published: Feb 18, 2026 License: MIT Imports: 12 Imported by: 0

README

llm

llm is a small Go library and CLI for building powerful agents across providers.

You can think of this library as a pluggable agent harness that executes the actions coming back from the models. It's conceptually similar to Claude Code or Codex, but easy to run across models and in server-side environments.

It's based on the Agent Loop described in Unrolling the Codex agent loop.

Features

  • Providers: OpenAI, Anthropic, Gemini, Ollama (more welcome!)
  • Streaming responses
  • High-level, recursive, concurrent tool calling
  • Thinking/reasoning controls (none, low, medium, high)
  • Sandboxing: containers (docker/podman) and local
  • Curated model metadata (e.g. knowledge cutoff, context window, reasoning support)

Install

CLI:

go install github.com/matthewmueller/llm/cmd/llm@latest

Library:

go get github.com/matthewmueller/llm

Programmatic Usage

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/matthewmueller/llm"
	"github.com/matthewmueller/llm/providers/anthropic"
	"github.com/matthewmueller/llm/providers/openai"
	"github.com/matthewmueller/logs"
)

func main() {
	ctx := context.Background()
	logger := logs.Default()

	// Keep a reference to the provider so we can pass its name to Chat below.
	provider := openai.New(logger, os.Getenv("OPENAI_API_KEY"))

	client := llm.New(
		logger,
		provider,
		anthropic.New(logger, os.Getenv("ANTHROPIC_API_KEY")),
	)

	add := llm.Func("add", "Add two numbers", func(ctx context.Context, in struct {
		A int `json:"a" description:"First number" is:"required"`
		B int `json:"b" description:"Second number" is:"required"`
	}) (int, error) {
		return in.A + in.B, nil
	})

	for event, err := range client.Chat(
		ctx,
		provider.Name(),
		llm.WithModel("gpt-5-mini-2025-08-07"),
		llm.WithThinking(llm.ThinkingLow),
		llm.WithMessage(llm.UserMessage("Use add to add 20 and 22, then answer briefly.")),
		llm.WithTool(add),
	) {
		if err != nil {
			log.Fatal(err)
		}
		if event.Thinking != "" {
			fmt.Print(event.Thinking)
		}
		fmt.Print(event.Content)
	}
}

For testing purposes, llm also ships with a CLI.

CLI Usage (experimental)

--provider (or LLM_PROVIDER) is required.

Set up one provider:

export LLM_PROVIDER=openai
export OPENAI_API_KEY=...
export LLM_MODEL=gpt-5-mini-2025-08-07

List models:

llm models

One-shot prompt:

llm "Explain CAP theorem in 3 bullets"

Interactive REPL:

llm

Thinking level:

llm -t low "Plan a weekend trip to Portland"

Provider env vars:

  • openai: OPENAI_API_KEY
  • anthropic: ANTHROPIC_API_KEY
  • gemini: GEMINI_API_KEY
  • ollama: OLLAMA_HOST (defaults to http://localhost:11434)

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type ChatRequest

type ChatRequest struct {
	Model    string
	Thinking Thinking
	Tools    []*ToolSchema
	Messages []*Message
}

type ChatResponse

type ChatResponse struct {
	Role       string    `json:"role,omitzero"`
	Content    string    `json:"content,omitzero"`  // Content chunk
	Thinking   string    `json:"thinking,omitzero"` // Thinking/reasoning content (if any)
	ToolCall   *ToolCall `json:"tool_call,omitzero"`
	ToolCallID string    `json:"tool_call_id,omitzero"` // For tool results, the ID of the tool call being responded to
	Usage      *Usage    `json:"usage,omitzero"`        // Token usage metadata (if available)
	Done       bool      `json:"done,omitzero"`         // True when response is complete
}

ChatResponse represents a streaming response from the chat API

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client manages providers

func New

func New(providers ...Provider) *Client

New creates a new Client

func (*Client) Chat

func (c *Client) Chat(ctx context.Context, provider string, options ...Option) iter.Seq2[*ChatResponse, error]

Chat sends a chat request to the appropriate provider

func (*Client) Model added in v0.1.4

func (c *Client) Model(ctx context.Context, provider, model string) (*Model, error)

func (*Client) Models

func (c *Client) Models(ctx context.Context, providers ...string) (models []*Model, err error)

Models returns a filtered list of available models

type Config

type Config struct {
	Log *slog.Logger
	// Provider string
	Model    string
	Thinking Thinking
	Tools    []Tool
	Messages []*Message
	MaxSteps int
}

type ErrMultipleModels

type ErrMultipleModels struct {
	Provider string
	Name     string
	Matches  []*Model
}

func (*ErrMultipleModels) Error

func (e *ErrMultipleModels) Error() string

type Message

type Message struct {
	Role       string    `json:"role,omitzero"`
	Content    string    `json:"content,omitzero"`
	Thinking   string    `json:"thinking,omitzero"`     // For chain-of-thought / thinking content
	ToolCall   *ToolCall `json:"tool_call,omitzero"`    // For assistant messages that invoke a tool
	ToolCallID string    `json:"tool_call_id,omitzero"` // For tool results, the ID of the tool call being responded to
}

Message represents a chat message

func AssistantMessage

func AssistantMessage(content string) *Message

AssistantMessage creates an assistant message

func SystemMessage

func SystemMessage(content string) *Message

SystemMessage creates a system message

func UserMessage

func UserMessage(content string) *Message

UserMessage creates a user message

type Model

type Model struct {
	Provider string     // Provider name
	ID       string     // Model identifier
	Meta     *ModelMeta // Model metadata (nil if not available)
}

Model represents an available model

type ModelMeta added in v0.2.4

type ModelMeta struct {
	DisplayName     string    // Human-friendly name for the model (if available)
	KnowledgeCutoff time.Time // Zero time if unknown
	ContextWindow   int       // Maximum context window in tokens
	MaxOutputTokens int       // Maximum output tokens (if known)
	HasReasoning    bool      // Whether the model supports chain-of-thought / reasoning
}

Manually curated information about the model

type Option

type Option func(*Config)

func WithMaxSteps

func WithMaxSteps(max int) Option

WithMaxSteps sets the maximum number of steps in a turn

func WithMessage

func WithMessage(messages ...*Message) Option

WithMessage sets the initial conversation history

func WithModel

func WithModel(model string) Option

WithModel sets the model for the agent

func WithThinking

func WithThinking(level Thinking) Option

WithThinking sets the extended thinking level. Supported values: ThinkingNone, ThinkingLow, ThinkingMedium, ThinkingHigh. Default is ThinkingMedium if not specified.

func WithTool

func WithTool(tools ...Tool) Option

WithTool adds a tool to the agent

type Provider

type Provider interface {
	Name() string
	Model(ctx context.Context, id string) (*Model, error)
	Models(ctx context.Context) ([]*Model, error)
	Chat(ctx context.Context, req *ChatRequest) iter.Seq2[*ChatResponse, error]
}

Provider interface

type Thinking

type Thinking string

Thinking represents the level of extended thinking/reasoning

const (
	ThinkingNone   Thinking = "none"   // Disable thinking
	ThinkingLow    Thinking = "low"    // Low thinking budget
	ThinkingMedium Thinking = "medium" // Medium thinking budget
	ThinkingHigh   Thinking = "high"   // High thinking budget
)

type Tool

type Tool interface {
	Schema() *ToolSchema
	Run(ctx context.Context, in json.RawMessage) (out []byte, err error)
}

Tool interface - high-level typed tool definition

func Func

func Func[In, Out any](name, description string, run func(ctx context.Context, in In) (Out, error)) Tool

Func creates a typed tool with automatic JSON marshaling

type ToolCall

type ToolCall struct {
	ID               string          `json:"id,omitzero"`
	Name             string          `json:"name,omitzero"`
	Arguments        json.RawMessage `json:"arguments,omitzero"`
	ThoughtSignature []byte          `json:"thought_signature,omitzero"`
}

ToolCall represents a tool invocation from the model

type ToolFunction

type ToolFunction struct {
	Name        string
	Description string
	Parameters  *ToolFunctionParameters
}

ToolFunction defines the function details for a tool

type ToolFunctionParameters

type ToolFunctionParameters struct {
	Type       string
	Properties map[string]*ToolProperty
	Required   []string
}

ToolFunctionParameters defines the parameters schema for a tool

type ToolProperty

type ToolProperty struct {
	Type        string
	Description string
	Enum        []string
	Items       *ToolProperty
}

ToolProperty defines a single property in the tool schema

type ToolSchema

type ToolSchema struct {
	Type     string
	Function *ToolFunction
}

ToolSchema defines a tool's JSON schema specification

type Usage added in v0.2.5

type Usage struct {
	InputTokens       int `json:"input_tokens,omitzero"`
	OutputTokens      int `json:"output_tokens,omitzero"`
	TotalTokens       int `json:"total_tokens,omitzero"`
	CachedInputTokens int `json:"cached_input_tokens,omitzero"`
	ReasoningTokens   int `json:"reasoning_tokens,omitzero"`
}

Usage represents token usage for a single model response.

Directories

Path Synopsis
cmd
llm command
internal
cli
env
providers
tool
