Tool Reference

Complete documentation for Lumen's tool system.

Overview

Tools are typed interfaces to external services. At the language level, a tool is:

  • A qualified name (e.g., llm.chat, http.get)
  • Typed input (record of named arguments)
  • Typed output
  • Declared effects
  • Automatic trace events
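
A minimal sketch tying these pieces together, previewing the declaration, binding, and cell syntax covered in the sections below (the greet cell is illustrative):

lumen
use tool llm.chat as Chat        # qualified name with an alias
bind effect llm to Chat          # declared effect

cell greet(name: String) -> String / {llm}
  # typed input and output; the call is traced automatically
  return Chat(prompt: name)
end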

Declaring Tools

lumen
use tool llm.chat as Chat
use tool http.get as Fetch
use tool postgres.query as DbQuery

The as keyword creates an alias for use in your code.

Calling Tools

Tools are called like functions with named arguments:

lumen
let response = Chat(prompt: "Hello")
let data = Fetch(url: "https://api.example.com")
let results = DbQuery(sql: "SELECT * FROM users")

Tool Types

LLM Tools

Tool         | Input                            | Output      | Effects
-------------|----------------------------------|-------------|--------
llm.chat     | prompt, model, temperature, etc. | String      | llm
llm.embed    | text, model                      | list[Float] | llm
llm.complete | prompt, max_tokens               | String      | llm
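
The embedding and completion tools use the same named-argument call style (a sketch; the model name and token limit are illustrative):

lumen
use tool llm.embed as Embed
use tool llm.complete as Complete

let vector = Embed(text: "vector databases", model: "text-embedding-3-small")  # model name is illustrative
let summary = Complete(prompt: "Summarize the release notes", max_tokens: 128)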

HTTP Tools

Tool        | Input              | Output | Effects
------------|--------------------|--------|--------
http.get    | url, headers       | Json   | http
http.post   | url, body, headers | Json   | http
http.put    | url, body, headers | Json   | http
http.delete | url, headers       | Json   | http
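
A sketch of a write call; the URL and body values are illustrative, and the exact argument types are defined by the tool's input schema:

lumen
use tool http.post as Post

let reply = Post(url: "https://api.example.com/items", body: "name=widget")  # illustrative payload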

Database Tools

Tool             | Input       | Output     | Effects
-----------------|-------------|------------|--------
postgres.query   | sql, params | list[Json] | db
postgres.execute | sql, params | Int        | db
redis.get        | key         | String     | db
redis.set        | key, value  | Null       | db
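
A parameterized query sketch; the list-literal syntax for params and the $1 placeholder style are assumptions here and depend on the provider:

lumen
use tool postgres.query as DbQuery

let rows = DbQuery(sql: "SELECT * FROM users WHERE id = $1", params: [42])  # placeholder style assumed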

Filesystem Tools

Tool      | Input         | Output       | Effects
----------|---------------|--------------|--------
fs.read   | path          | String       | fs
fs.write  | path, content | Null         | fs
fs.list   | path          | list[String] | fs
fs.delete | path          | Null         | fs
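
A write-then-read sketch using the path and content arguments from the table above (the file name is illustrative):

lumen
use tool fs.write as WriteFile
use tool fs.read as ReadFile

let done = WriteFile(path: "notes.txt", content: "hello")   # returns Null
let text = ReadFile(path: "notes.txt")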

Tool Schema

Every tool has a schema:

lumen
# Input schema defines accepted arguments
input schema:
  prompt: String (required)
  model: String (default: "gpt-4")
  temperature: Float (default: 0.7)
  max_tokens: Int (default: 1024)

# Output schema defines return type
output schema:
  content: String
  tokens_used: Int
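
Because only prompt is required, the defaulted arguments can be omitted or overridden at the call site (a sketch using the Chat alias declared earlier):

lumen
let quick = Chat(prompt: "Hello")                                     # uses the model, temperature, and max_tokens defaults
let tuned = Chat(prompt: "Hello", temperature: 0.2, max_tokens: 256)  # overrides two of the defaults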

Effect Binding

Map effects to tools:

lumen
use tool llm.chat as Chat
bind effect llm to Chat

# Now Chat calls produce {llm} effect
cell ask(question: String) -> String / {llm}
  return Chat(prompt: question)
end

Multiple Tools with Same Effect

lumen
use tool llm.chat as Chat
use tool llm.embed as Embed

bind effect llm to Chat
bind effect llm to Embed

# Both produce {llm} effect
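
A cell that calls both aliases therefore needs only the single {llm} effect in its signature (a sketch; a local let binding inside a cell is assumed to be allowed):

lumen
cell answer(question: String) -> String / {llm}
  let vector = Embed(text: question)   # e.g. for retrieval; not used further in this sketch
  return Chat(prompt: question)
end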

Tool Aliases

Create multiple aliases with different constraints:

lumen
use tool llm.chat as FastChat
use tool llm.chat as SmartChat

grant FastChat model "gpt-3.5-turbo" max_tokens 256
grant SmartChat model "gpt-4o" max_tokens 4096
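
Each alias can then be used where its constraints fit, assuming both are bound to the llm effect as shown above (the cell names are illustrative):

lumen
cell triage(ticket: String) -> String / {llm}
  return FastChat(prompt: ticket)    # small, fast model for short answers
end

cell draft_reply(ticket: String) -> String / {llm}
  return SmartChat(prompt: ticket)   # larger budget for the final response
end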

Tool Providers

Tools are backed by providers configured in lumen.toml:

toml
[providers]
llm.chat = "openai-compatible"

[providers.config.openai-compatible]
base_url = "https://api.openai.com/v1"
api_key_env = "OPENAI_API_KEY"

Provider Interface

Every provider implements:

rust
trait ToolProvider {
    fn name() -> String;
    fn version() -> String;
    fn schema() -> ToolSchema;
    fn call(input: Json) -> Result<Json, ToolError>;
    fn effects() -> Vec<EffectKind>;
}

Provider Capabilities

Capability       | Description
-----------------|-------------------------
TextGeneration   | Basic text generation
Chat             | Multi-turn conversation
Embedding        | Text embeddings/vectors
Vision           | Image input processing
ToolUse          | Function/tool calling
StructuredOutput | JSON schema output
Streaming        | Streaming responses

Custom Tools

Define custom tool interfaces:

lumen
use tool mycompany.analyze as Analyze

grant Analyze
  timeout_ms 5000
  version "v2"

cell process(data: String) -> Analysis / {external}
  return Analyze(input: data, format: "json")
end

Error Types

Error                  | Description
-----------------------|--------------------------------
NotFound               | Tool not registered
InvalidArgs            | Missing or malformed arguments
ExecutionFailed        | General execution error
RateLimit              | Rate limit exceeded
AuthError              | Authentication failure
ModelNotFound          | Model not available
Timeout                | Request timed out
ProviderUnavailable    | Provider service down
OutputValidationFailed | Schema mismatch

Tracing

All tool calls are automatically traced:

lumen
# Automatic trace includes:
# - Tool name
# - Input arguments
# - Output value
# - Duration
# - Provider identity
# - Status (success/failure)

View traces:

bash
lumen trace show <run-id>

Best Practices

  1. Always use grants — Constrain tool behavior
  2. Bind effects — Enable provenance tracking
  3. Handle errors — Tools can fail
  4. Set timeouts — Prevent hanging
  5. Use typed results — Parse and validate outputs
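
A sketch that combines the grant, effect-binding, and timeout practices in one place (the alias, model, and limits are illustrative):

lumen
use tool llm.chat as Chat
bind effect llm to Chat              # provenance tracking via the {llm} effect

grant Chat
  model "gpt-4o-mini"                # illustrative model constraint
  max_tokens 512
  timeout_ms 10000                   # prevent hanging calls

cell summarize(text: String) -> String / {llm}
  return Chat(prompt: text)
end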
