# Declarative Providers

Declarative providers allow you to add support for new LLM providers without writing Rust code. Simply define the provider’s configuration in a JSON file.
## What are Declarative Providers?
Declarative providers are JSON configuration files that define:
- Provider metadata (name, description)
- API endpoint configuration
- Supported models and their capabilities
- Authentication requirements
- Protocol format (OpenAI-compatible, etc.)
Goose automatically loads these configurations and creates fully functional providers.
## Configuration File Structure

Declarative providers are located in `crates/goose/src/providers/declarative/`.

### Basic Structure
```json
{
  "name": "provider_id",
  "engine": "openai",
  "display_name": "Provider Display Name",
  "description": "Brief description of the provider",
  "api_key_env": "PROVIDER_API_KEY",
  "base_url": "https://api.provider.com/v1/chat/completions",
  "models": [
    {
      "name": "model-name",
      "context_limit": 128000,
      "max_tokens": 4096
    }
  ],
  "supports_streaming": true
}
```
## Field Reference

### Required Fields

#### `name` (string)
- Unique identifier for the provider
- Used in configuration and CLI
- Must be lowercase, alphanumeric with underscores

#### `engine` (string)
- Protocol/format the provider uses
- Currently supported: `"openai"`
- Determines how requests are formatted

#### `display_name` (string)
- Human-readable name shown in UIs
- Can include spaces and special characters

```json
"display_name": "Groq (d)"
```

#### `description` (string)
- Brief description of the provider
- Shown in provider selection UIs

```json
"description": "Fast inference with Groq hardware"
```

#### `api_key_env` (string)
- Environment variable name for the API key
- Convention: `PROVIDER_API_KEY` format

```json
"api_key_env": "GROQ_API_KEY"
```

#### `base_url` (string)
- API endpoint URL
- For OpenAI-compatible providers, include the full path to the chat completions endpoint

```json
"base_url": "https://api.groq.com/openai/v1/chat/completions"
```

#### `models` (array)
- List of supported models (see Model Configuration below)

#### `supports_streaming` (boolean)
- Whether the provider supports streaming responses
- Most modern providers support this

```json
"supports_streaming": true
```
## Model Configuration
Each model in the `models` array defines:
```json
{
  "name": "model-identifier",
  "context_limit": 131072,
  "max_tokens": 32768,
  "input_token_cost": 0.0000025,
  "output_token_cost": 0.00001
}
```
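Note that `context_limit` and `max_tokens` interact: tokens reserved for output leave less room for the prompt. A small arithmetic sketch (Goose's exact context-management policy may differ; this only illustrates the relationship):

```python
# Example figures from the model configuration above.
context_limit = 131072  # total context window, in tokens
max_tokens = 32768      # tokens reserved for the model's output

# Rough prompt budget once the output reservation is subtracted.
prompt_budget = context_limit - max_tokens
print(prompt_budget)  # 98304
```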
### Model Fields

#### `name` (string, required)
- Model identifier used in API requests
- Exact string the provider expects

#### `context_limit` (integer, required)
- Maximum context window size in tokens
- Used for context management

#### `max_tokens` (integer, required)
- Maximum output tokens per request
- Used to limit response length

#### `input_token_cost` (float, optional)
- Cost per input token in USD
- Used for cost estimation

#### `output_token_cost` (float, optional)
- Cost per output token in USD
- Used for cost estimation
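The field rules above are easy to check before rebuilding. Below is a small standalone validation script; it is a sketch based only on the field reference in this page, not part of Goose itself, so treat the rules it encodes as assumptions:

```python
import json
import sys

# Required fields per the reference above.
REQUIRED_PROVIDER_FIELDS = {
    "name", "engine", "display_name", "description",
    "api_key_env", "base_url", "models", "supports_streaming",
}
REQUIRED_MODEL_FIELDS = {"name", "context_limit", "max_tokens"}

def validate_provider(config):
    """Return a list of problems; an empty list means the config looks well-formed."""
    errors = [f"missing field: {f}"
              for f in sorted(REQUIRED_PROVIDER_FIELDS - config.keys())]
    name = config.get("name", "")
    if name and not all(c.islower() or c.isdigit() or c == "_" for c in name):
        errors.append("name must be lowercase alphanumeric with underscores")
    for i, model in enumerate(config.get("models", [])):
        for f in sorted(REQUIRED_MODEL_FIELDS - model.keys()):
            errors.append(f"models[{i}] missing field: {f}")
    return errors

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as fh:
        problems = validate_provider(json.load(fh))
    print("\n".join(problems) or "OK")
```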
## Complete Examples

### Groq Provider
```json
{
  "name": "groq",
  "engine": "openai",
  "display_name": "Groq (d)",
  "description": "Fast inference with Groq hardware",
  "api_key_env": "GROQ_API_KEY",
  "base_url": "https://api.groq.com/openai/v1/chat/completions",
  "models": [
    {
      "name": "llama-3.3-70b-versatile",
      "context_limit": 131072,
      "max_tokens": 32768
    },
    {
      "name": "llama-3.1-8b-instant",
      "context_limit": 131072,
      "max_tokens": 131072
    },
    {
      "name": "qwen/qwen3-32b",
      "context_limit": 131072,
      "max_tokens": 40960
    }
  ],
  "supports_streaming": true
}
```
### Mistral Provider
```json
{
  "name": "mistral",
  "engine": "openai",
  "display_name": "Mistral AI (d)",
  "description": "Mistral AI models",
  "api_key_env": "MISTRAL_API_KEY",
  "base_url": "https://api.mistral.ai/v1/chat/completions",
  "models": [
    {
      "name": "mistral-large-latest",
      "context_limit": 131072,
      "max_tokens": 32768,
      "input_token_cost": 0.000002,
      "output_token_cost": 0.000006
    },
    {
      "name": "mistral-small-latest",
      "context_limit": 32768,
      "max_tokens": 8192,
      "input_token_cost": 0.0000002,
      "output_token_cost": 0.0000006
    },
    {
      "name": "codestral-latest",
      "context_limit": 32768,
      "max_tokens": 8192,
      "input_token_cost": 0.0000002,
      "output_token_cost": 0.0000006
    }
  ],
  "supports_streaming": true
}
```
### DeepSeek Provider
```json
{
  "name": "deepseek",
  "engine": "openai",
  "display_name": "DeepSeek (d)",
  "description": "DeepSeek AI models",
  "api_key_env": "DEEPSEEK_API_KEY",
  "base_url": "https://api.deepseek.com/v1/chat/completions",
  "models": [
    {
      "name": "deepseek-chat",
      "context_limit": 65536,
      "max_tokens": 8192,
      "input_token_cost": 0.00000014,
      "output_token_cost": 0.00000028
    },
    {
      "name": "deepseek-reasoner",
      "context_limit": 65536,
      "max_tokens": 8192,
      "input_token_cost": 0.00000055,
      "output_token_cost": 0.0000022
    }
  ],
  "supports_streaming": true
}
```
### Local Model Provider
```json
{
  "name": "lmstudio",
  "engine": "openai",
  "display_name": "LM Studio (d)",
  "description": "Local models via LM Studio",
  "api_key_env": "LMSTUDIO_API_KEY",
  "base_url": "http://localhost:1234/v1/chat/completions",
  "models": [
    {
      "name": "local-model",
      "context_limit": 8192,
      "max_tokens": 2048
    }
  ],
  "supports_streaming": true
}
```
## Creating a New Declarative Provider

### 1. Create Configuration File

Create a new JSON file in `crates/goose/src/providers/declarative/`:

```bash
touch crates/goose/src/providers/declarative/myprovider.json
```
### 2. Define Configuration
```json
{
  "name": "myprovider",
  "engine": "openai",
  "display_name": "My Provider",
  "description": "Custom LLM provider",
  "api_key_env": "MYPROVIDER_API_KEY",
  "base_url": "https://api.myprovider.com/v1/chat/completions",
  "models": [
    {
      "name": "my-model-1",
      "context_limit": 128000,
      "max_tokens": 4096
    }
  ],
  "supports_streaming": true
}
```
### 3. Rebuild Goose

The provider is automatically loaded at build time; rebuild with `cargo build`.

### 4. Set the API Key

```bash
export MYPROVIDER_API_KEY=your-api-key
```

### 5. Configure Goose

```bash
goose configure --provider myprovider --model my-model-1
```

### 6. Start Using

Run `goose session` to start using the new provider.
## OpenAI-Compatible APIs

Many providers offer OpenAI-compatible APIs. For these:
- Set `"engine": "openai"`
- Use the provider’s base URL
- Ensure model names match what the provider expects
Examples:
- Together AI
- Fireworks AI
- Anyscale Endpoints
- Modal
- Replicate (with OpenAI compatibility)
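With `"engine": "openai"`, requests follow the standard chat completions shape. As a rough illustration of what such a request body looks like for the Groq example above (the exact fields Goose sends are determined by its OpenAI engine, so treat this as a sketch, not Goose's actual implementation):

```python
import json

# Illustrative chat completions body for an OpenAI-compatible provider.
payload = {
    "model": "llama-3.3-70b-versatile",                  # a "models" entry's name
    "messages": [{"role": "user", "content": "Hello"}],  # conversation so far
    "max_tokens": 32768,                                 # model's "max_tokens"
    "stream": True,                                      # "supports_streaming": true
}
body = json.dumps(payload)
# POSTed to base_url with header: Authorization: Bearer $GROQ_API_KEY
print(body)
```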
## Advanced Configuration

### Multiple Model Variants

Include all model variants with accurate limits:
```json
"models": [
  {
    "name": "model-small",
    "context_limit": 8192,
    "max_tokens": 2048,
    "input_token_cost": 0.0000001,
    "output_token_cost": 0.0000002
  },
  {
    "name": "model-medium",
    "context_limit": 32768,
    "max_tokens": 4096,
    "input_token_cost": 0.0000005,
    "output_token_cost": 0.000001
  },
  {
    "name": "model-large",
    "context_limit": 131072,
    "max_tokens": 8192,
    "input_token_cost": 0.000002,
    "output_token_cost": 0.000004
  }
]
```
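Accurate per-model limits and costs let tooling choose a variant automatically. A hypothetical helper (not a Goose feature) that picks the cheapest variant whose context window fits a request:

```python
# The variant list from the example above (costs in USD per input token).
models = [
    {"name": "model-small",  "context_limit": 8192,   "input_token_cost": 0.0000001},
    {"name": "model-medium", "context_limit": 32768,  "input_token_cost": 0.0000005},
    {"name": "model-large",  "context_limit": 131072, "input_token_cost": 0.000002},
]

def cheapest_fitting(models, needed_tokens):
    """Name of the cheapest model whose context window covers the request, or None."""
    fitting = [m for m in models if m["context_limit"] >= needed_tokens]
    if not fitting:
        return None
    return min(fitting, key=lambda m: m["input_token_cost"])["name"]

print(cheapest_fitting(models, 20_000))  # model-medium
```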
### Special Model Naming

Some providers use prefixes or special characters:
```json
"models": [
  {
    "name": "meta-llama/llama-4-maverick-17b-128e-instruct",
    "context_limit": 131072,
    "max_tokens": 8192
  },
  {
    "name": "openai/gpt-oss-120b",
    "context_limit": 131072,
    "max_tokens": 65536
  }
]
```
### Cost Tracking

Include accurate pricing for cost estimation:
```json
{
  "name": "gpt-4o",
  "context_limit": 128000,
  "max_tokens": 16384,
  "input_token_cost": 0.0000025,
  "output_token_cost": 0.00001
}
```
Costs are per token in USD. Goose will:
- Track token usage
- Estimate costs per request
- Show cumulative costs
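A worked example with the pricing shown above (the request sizes are made up):

```python
# Per-token prices from the config (USD).
input_token_cost = 0.0000025
output_token_cost = 0.00001

# Hypothetical request: 12,000 prompt tokens, 800 completion tokens.
prompt_tokens, completion_tokens = 12_000, 800

estimate = prompt_tokens * input_token_cost + completion_tokens * output_token_cost
print(f"${estimate:.4f}")  # $0.0380  (0.03 input + 0.008 output)
```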
## Validation

### Testing Your Provider

After creating a declarative provider:
```bash
# Build
cargo build

# Test configuration
goose configure --provider myprovider

# Test basic completion
goose session
> Hello, can you hear me?
```
### Common Issues

**Provider not found**
- Ensure the JSON file is in `crates/goose/src/providers/declarative/`
- Rebuild: `cargo build`
- Check the provider name matches the filename (without `.json`)

**Authentication failed**
- Verify the environment variable name matches `api_key_env`
- Check the API key is set: `echo $MYPROVIDER_API_KEY`
- Ensure the API key format is correct

**Model not found**
- Verify the model name exactly matches the provider’s API
- Check provider documentation for exact model identifiers

**Context limit errors**
- Reduce `context_limit` if requests fail
- Check the provider’s actual model limits
- Some providers report limits differently
## Limitations
Declarative providers currently:
- Only support OpenAI-compatible APIs
- Cannot implement custom OAuth flows
- Cannot handle complex authentication schemes
- Cannot customize request/response transformation
For providers requiring these features, implement a custom provider.
## Contributing Declarative Providers

To add a provider to Goose:

1. Create the JSON configuration file
2. Test thoroughly
3. Document model capabilities accurately
4. Submit a PR to the Goose repository
5. Include example usage in the PR description

See the Contributing Guide for details.
## Existing Declarative Providers

Goose includes these declarative providers:
- Groq - Fast inference
- Mistral - Mistral AI models
- DeepSeek - DeepSeek models
- Cerebras - Cerebras inference
- Moonshot - Moonshot AI
- Kimi - Kimi models
- LM Studio - Local model hosting
- OVHcloud - OVHcloud AI endpoints
See `crates/goose/src/providers/declarative/` for complete configurations.
## Next Steps