Anthropic Provider
Overview
The Anthropic provider connects to Claude models including Sonnet, Opus, and Haiku. It supports advanced features like extended thinking, prompt caching, and tool use.
Source: crates/goose/src/providers/anthropic.rs
Configuration
Environment Variables
ANTHROPIC_API_KEY (string, required) - Your Anthropic API key
ANTHROPIC_HOST (string, default: "https://api.anthropic.com") - API endpoint URL (for proxy or alternative endpoints)
Setup
# Configure using the CLI
goose configure
# Or set environment variables
export ANTHROPIC_API_KEY="your-api-key"
export ANTHROPIC_HOST="https://api.anthropic.com" # optional
Supported Models
Claude 4.6
claude-opus-4-6 - Most capable model
claude-sonnet-4-6 - Balanced performance
Claude 4.5 (Default)
claude-sonnet-4-5 (default) - Latest Sonnet with improved reasoning
claude-sonnet-4-5-20250929 - Dated version
claude-haiku-4-5 (fast model) - Fastest Claude 4.5
claude-haiku-4-5-20251001 - Dated version
claude-opus-4-5 - Most capable 4.5 model
claude-opus-4-5-20251101 - Dated version
Claude 4.0 (Legacy)
claude-sonnet-4-0
claude-sonnet-4-20250514 - Dated version
claude-opus-4-0
claude-opus-4-20250514 - Dated version
Context Limit: 200,000 tokens for all models
Documentation: https://docs.anthropic.com/en/docs/about-claude/models
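Since all models share the 200,000-token window, a rough pre-flight estimate can catch oversized prompts before a request is sent. The sketch below uses the common ~4-characters-per-token heuristic for English text; it is an approximation for illustration, not the tokenizer the API uses, and the exact token counts come back in the response's usage fields.

```rust
// Rough pre-flight check against the 200k-token context window.
// Assumes ~4 characters per token, a coarse English-text heuristic.
const CONTEXT_LIMIT_TOKENS: usize = 200_000;

fn estimate_tokens(text: &str) -> usize {
    // Integer ceiling of len / 4.
    (text.len() + 3) / 4
}

fn fits_in_context(system: &str, history: &[&str]) -> bool {
    let total = estimate_tokens(system)
        + history.iter().map(|m| estimate_tokens(m)).sum::<usize>();
    total <= CONTEXT_LIMIT_TOKENS
}
```

A caller might run `fits_in_context(system_prompt, &message_texts)` before dispatching a request and trim history when it returns false.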
Usage
Basic Usage
use goose::message::Message;
use goose::model::ModelConfig;
use goose::providers::create;

// Create with the default model (claude-sonnet-4-5)
let model_config = ModelConfig::new("claude-sonnet-4-5")?;
let provider = create("anthropic", model_config, vec![]).await?;

// Stream a response
let messages = vec![Message::user().with_text("Hello, Claude!")];
let stream = provider.stream(
    &provider.get_model_config(),
    "session-123",
    "You are a helpful assistant.",
    &messages,
    &[],
).await?;
Custom Configuration
use goose::model::ModelConfig;

let model_config = ModelConfig::new("claude-opus-4-6")?
    .with_temperature(0.7)
    .with_max_tokens(4096)
    .with_top_p(0.9);
let provider = create("anthropic", model_config, vec![]).await?;
Using Fast Models
The Anthropic provider automatically configures claude-haiku-4-5 as the fast model:
// Use the fast model for quick operations
let (response, usage) = provider.complete_fast(
    "session-123",
    "You are a helpful assistant.",
    &messages,
    &[],
).await?;
Advanced Features
Extended Thinking
Claude 3.7 Sonnet supports extended thinking mode:
let model_config = ModelConfig::new("claude-3-7-sonnet-20250219")?
    .with_extended_thinking(true);
When extended thinking is enabled, the provider automatically adds the output-128k-2025-02-19 beta header; the token-efficient-tools-2025-02-19 header is applied to all Claude 3.7 Sonnet models:
anthropic-beta: output-128k-2025-02-19
anthropic-beta: token-efficient-tools-2025-02-19
Prompt Caching
Anthropic supports prompt caching to reduce costs on repeated content:
// Check if caching is supported
if provider.supports_cache_control().await {
    // Caching is automatically applied to system prompts
    // and conversation history in long sessions
}
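For reference, the underlying Anthropic Messages API expresses a cached system prompt as a content block carrying a cache_control marker. The payload below is an illustrative fragment showing that shape, not necessarily byte-for-byte what the provider emits:

```json
{
  "model": "claude-sonnet-4-5",
  "max_tokens": 1024,
  "system": [
    {
      "type": "text",
      "text": "You are a helpful assistant.",
      "cache_control": { "type": "ephemeral" }
    }
  ],
  "messages": [
    { "role": "user", "content": "Hello, Claude!" }
  ]
}
```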
Streaming Responses
All Anthropic models support streaming:
use futures::StreamExt;

let mut stream = provider.stream(
    &model_config,
    "session-123",
    "You are a helpful assistant.",
    &messages,
    &tools,
).await?;

while let Some(result) = stream.next().await {
    let (message, usage) = result?;
    if let Some(msg) = message {
        // Process the streaming message chunk
        println!("Received: {:?}", msg);
    }
}
Tool Calling
Claude excels at tool use (function calling):
use rmcp::model::Tool;

let tools = vec![
    Tool {
        name: "get_weather".into(),
        description: Some("Get weather for a location".into()),
        input_schema: serde_json::json!({
            "type": "object",
            "properties": {
                "location": {"type": "string"},
            },
            "required": ["location"],
        }),
    },
];

let stream = provider.stream(
    &model_config,
    "session-123",
    "You are a helpful assistant.",
    &messages,
    &tools,
).await?;
Implementation Details
impl ProviderDef for AnthropicProvider {
    fn metadata() -> ProviderMetadata {
        ProviderMetadata::with_models(
            "anthropic",
            "Anthropic",
            "Claude and other models from Anthropic",
            "claude-sonnet-4-5",
            models,
            "https://docs.anthropic.com/en/docs/about-claude/models",
            vec![
                ConfigKey::new("ANTHROPIC_API_KEY", true, true, None, true),
                ConfigKey::new(
                    "ANTHROPIC_HOST",
                    true,
                    false,
                    Some("https://api.anthropic.com"),
                    false,
                ),
            ],
        )
    }
}
The provider uses the Messages API format:
{
  "model": "claude-sonnet-4-5",
  "max_tokens": 4096,
  "system": "You are a helpful assistant.",
  "messages": [
    {
      "role": "user",
      "content": "Hello, Claude!"
    }
  ],
  "stream": true
}
Authentication
Uses API key authentication via the x-api-key header:
let auth = AuthMethod::ApiKey {
    header_name: "x-api-key".to_string(),
    key: api_key,
};
API version is specified via header:
anthropic-version: 2023-06-01
The provider dynamically adds beta headers for certain models:
fn get_conditional_headers(&self) -> Vec<(&str, &str)> {
    let mut headers = Vec::new();
    if self.model.model_name.starts_with("claude-3-7-sonnet-") {
        if thinking_type(&self.model) == ThinkingType::Enabled {
            headers.push(("anthropic-beta", "output-128k-2025-02-19"));
        }
        headers.push(("anthropic-beta", "token-efficient-tools-2025-02-19"));
    }
    headers
}
Error Handling
The provider handles Anthropic-specific errors:
match provider.stream(...).await {
    Ok(stream) => { /* handle the stream */ },
    Err(ProviderError::Authentication(msg)) => {
        eprintln!("Invalid API key: {}", msg);
    },
    Err(ProviderError::RateLimited { retry_after }) => {
        eprintln!("Rate limited, retry after: {:?}", retry_after);
    },
    Err(ProviderError::ContextLengthExceeded(msg)) => {
        eprintln!("Context too long: {}", msg);
    },
    Err(e) => eprintln!("Error: {}", e),
}
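When a RateLimited error carries a retry_after hint, honoring it and otherwise backing off exponentially is a common pattern. This is a minimal sketch of that policy, not the provider's built-in retry logic; the function name and cap are illustrative choices.

```rust
use std::time::Duration;

// Pick how long to wait before retry attempt `attempt` (0-based).
// A server-supplied Retry-After hint always wins; otherwise double
// from 1 second, capped at 60 seconds.
fn backoff_delay(attempt: u32, retry_after: Option<Duration>) -> Duration {
    if let Some(hint) = retry_after {
        return hint;
    }
    let secs = 1u64 << attempt.min(6); // 1, 2, 4, ... (shift kept small)
    Duration::from_secs(secs.min(60))
}
```

A retry loop would call this with the `retry_after` pulled out of `ProviderError::RateLimited` and sleep for the returned duration before trying again.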
Fetching Available Models
// Get all available models from the API
let models = provider.fetch_supported_models().await?;
// Get recommended models (filtered and sorted)
let recommended = provider.fetch_recommended_models().await?;
Cost Tracking
The provider returns usage information with each response:
let (message, usage) = provider.complete(...).await?;
println!("Model: {}", usage.model);
println!("Input tokens: {:?}", usage.usage.input_tokens);
println!("Output tokens: {:?}", usage.usage.output_tokens);
println!("Total tokens: {:?}", usage.usage.total_tokens);
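These usage fields make back-of-envelope cost tracking straightforward. The per-million-token rates in this sketch are placeholders passed in by the caller, not actual Anthropic pricing; look up current rates before relying on the numbers.

```rust
// Estimate request cost from usage token counts and per-million-token
// rates. Rates are caller-supplied placeholders, not real pricing.
fn estimate_cost_usd(
    input_tokens: u64,
    output_tokens: u64,
    input_rate_per_mtok: f64,
    output_rate_per_mtok: f64,
) -> f64 {
    (input_tokens as f64 / 1_000_000.0) * input_rate_per_mtok
        + (output_tokens as f64 / 1_000_000.0) * output_rate_per_mtok
}
```

For example, with hypothetical rates of $3/MTok in and $15/MTok out, a call with 1,200 input and 350 output tokens would be priced at `estimate_cost_usd(1_200, 350, 3.0, 15.0)`.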
See Also