Agents are the core orchestration component of Goose. They manage the conversation flow between users, AI models, and tools, coordinating complex multi-step tasks autonomously.
## What is an Agent?

An agent in Goose is a stateful orchestrator that:

- Maintains conversation context and history
- Invokes AI models with appropriate prompts and tools
- Executes tool calls returned by the model
- Manages permissions and security checks
- Handles errors and retries
- Coordinates with subagents for complex workflows
```rust
// Core agent structure (simplified)
pub struct Agent {
    provider: SharedProvider,                 // AI model
    extension_manager: Arc<ExtensionManager>, // Available tools
    prompt_manager: Mutex<PromptManager>,     // System prompts
    config: AgentConfig,                      // Configuration
    retry_manager: RetryManager,              // Error handling
    // ...
}
```
## Agent Lifecycle

The typical lifecycle of an agent interaction:

### 1. Initialization

When an agent is created, it sets up its prompt manager, permission channels, and retry logic:
```rust
// From crates/goose/src/agents/agent.rs
impl Agent {
    pub async fn new(
        config: AgentConfig,
        provider: SharedProvider,
        extension_manager: Arc<ExtensionManager>,
    ) -> Result<Arc<Self>> {
        // Initialize prompt manager with system prompts
        let prompt_manager = PromptManager::new();

        // Set up permission handling
        let (confirmation_tx, confirmation_rx) = mpsc::channel(32);

        // Initialize retry manager
        let retry_manager = RetryManager::new();

        // Create agent instance
        Ok(Arc::new(Self {
            provider,
            extension_manager,
            config,
            prompt_manager: Mutex::new(prompt_manager),
            confirmation_tx,
            confirmation_rx: Mutex::new(confirmation_rx),
            retry_manager,
            // ...
        }))
    }
}
```
### 2. Message Processing

When a user sends a message:

1. **Build Context**: Gather conversation history, system prompts, and available tools
2. **Model Invocation**: Send to the AI provider with streaming enabled
3. **Stream Processing**: Handle chunks as they arrive (text or tool calls)
4. **Tool Execution**: Execute any tools the model requests
5. **Continue Loop**: Feed tool results back to the model
6. **Completion**: Return the final response to the user
```rust
// Simplified message processing flow (pseudocode)
pub async fn reply(
    &self,
    session: &Session,
    messages: Vec<Message>,
) -> Result<impl Stream<Item = AgentEvent>> {
    let mut conversation = session.conversation.clone();
    let tools = self.extension_manager.get_tools().await?;

    loop {
        // Get model response
        let mut stream = self.provider.complete(
            system_prompt,
            conversation.messages(),
            tools.clone(),
        ).await?;

        // Process response stream
        let mut made_tool_calls = false;
        while let Some(chunk) = stream.next().await {
            match chunk {
                ProviderMessage::Text(text) => {
                    // Stream text to user
                    yield AgentEvent::MessageChunk(text);
                }
                ProviderMessage::ToolUse(tool) => {
                    // Execute tool and feed the result back
                    let result = self.execute_tool(&tool).await?;
                    conversation.add_tool_result(result);
                    made_tool_calls = true;
                    // Continue loop to get the model's next response
                }
                ProviderMessage::Done => break,
            }
        }

        // No tool calls means the model has produced its final answer
        if !made_tool_calls {
            break;
        }
    }
}
```
### 3. Tool Execution

When the model requests a tool:
```rust
async fn execute_tool(&self, tool_request: &ToolRequest) -> Result<ToolResult> {
    // 1. Security check
    let permission = self.check_permission(&tool_request).await?;
    if permission.denied() {
        return Ok(ToolResult::denied());
    }

    // 2. Call extension
    let result = self.extension_manager
        .call_tool(
            &tool_request.name,
            tool_request.arguments.clone(),
        )
        .await?;

    // 3. Return result to conversation
    Ok(result)
}
```
## Agent Configuration

Agents can be configured through multiple mechanisms:

### Session Config

```rust
pub struct SessionConfig {
    pub working_directory: PathBuf,       // Where the agent operates
    pub extensions: Vec<ExtensionConfig>, // Available tools
    pub system_prompt: Option<String>,    // Override default prompt
    pub max_turns: Option<u32>,           // Limit conversation length
    pub temperature: Option<f32>,         // Model creativity
}
```
### Recipe-Based Config

Recipes provide pre-configured agent behaviors:

```yaml
# research-assistant.yaml
title: Research Assistant
instructions: |
  You are a research assistant that helps gather and synthesize information.
  Use web search and file reading tools to find relevant data.
  Always cite your sources.
extensions:
  - type: builtin
    name: developer
  - type: stdio
    name: brave-search
    cmd: npx
    args: ["-y", "@modelcontextprotocol/server-brave-search"]
settings:
  goose_model: claude-sonnet-4-20250514
  max_turns: 30
  temperature: 0.2
```
### Runtime Config

```rust
// Override settings per session
let config = AgentConfig {
    session_manager,
    permission_manager,
    scheduler_service: None,
    goose_mode: GooseMode::Chat,
    disable_session_naming: false,
    goose_platform: GoosePlatform::GooseCli,
};
```
## Subagents

Subagents are independent agent instances spawned to handle specific sub-tasks. This enables:

- **Parallel execution**: Multiple tasks running simultaneously
- **Context isolation**: Prevent context window overflow
- **Specialized behaviors**: Different instructions per task
- **Composed workflows**: Break complex tasks into manageable pieces

### Creating Subagents

Subagents can be created in two ways:
#### 1. Ad-hoc Subagents

Create subagents on the fly with custom instructions:

```yaml
prompt: |
  To analyze this codebase:
  1. Spawn a subagent to analyze the frontend:
     subagent(instructions: "List all React components and their props")
  2. Spawn another for the backend:
     subagent(instructions: "Document all API endpoints")
  3. Synthesize findings from both into a report.
```
The model uses the subagent tool:

```json
{
  "name": "subagent",
  "arguments": {
    "instructions": "List all React components and their props",
    "settings": {
      "model": "gpt-4o-mini",
      "max_turns": 10
    }
  }
}
```
#### 2. Sub-recipes

Predefined subagent templates:

```yaml
# main-recipe.yaml
title: Code Analysis Workflow
sub_recipes:
  - name: "find_files"
    path: "./sub-recipes/file-finder.yaml"
    description: "Locate relevant files"
  - name: "analyze_code"
    path: "./sub-recipes/code-analyzer.yaml"
    sequential_when_repeated: true
prompt: |
  First, find relevant files:
  subagent(subrecipe: "find_files", parameters: {"pattern": "*.rs"})
  Then analyze each file found.
```
### Subagent Architecture

#### Subagent Implementation
```rust
// From crates/goose/src/agents/subagent_handler.rs
pub async fn run_subagent_task(
    params: SubagentRunParams,
) -> Result<String> {
    // Create isolated agent instance
    let agent = Agent::new(
        params.config,
        provider,
        extension_manager,
    ).await?;

    // Create new session for the subagent
    let session = SessionManager::create_session(
        SessionType::SubAgent,
        params.recipe,
    ).await?;

    // Execute task with isolated context
    let messages = agent.reply(
        &session,
        vec![Message::user(params.task_config.instructions)],
    ).await?;

    // Return summary or full conversation
    if params.return_last_only {
        extract_last_message(&messages)
    } else {
        extract_all_text(&messages)
    }
}
```
#### Parallel Subagent Execution

Multiple subagent calls in one model response run in parallel:

```json
[
  { "name": "subagent", "id": "1", "args": { "instructions": "Task A" } },
  { "name": "subagent", "id": "2", "args": { "instructions": "Task B" } },
  { "name": "subagent", "id": "3", "args": { "instructions": "Task C" } }
]
```

```rust
// The agent executes all three concurrently
let results = futures::future::join_all(
    tool_calls.iter().map(|call| execute_subagent(call))
).await;
```
Subagent Best Practices
Use summary mode by default
Subagents return concise summaries to the parent, preventing context overflow: // Default behavior (summary mode)
subagent ( instructions : "Analyze file.rs" )
// Returns: "The file contains 3 functions..."
// Full conversation mode (use sparingly)
subagent ( instructions : "..." , summary : false )
// Returns: Complete message history
**Scope extensions appropriately.** Give subagents only the tools they need:

```yaml
prompt: |
  subagent(
    instructions: "Read all markdown files",
    extensions: ["developer"]  # Read-only access
  )
```
**Use `sequential_when_repeated` for stateful tasks.** Prevent parallel execution when tasks have side effects:

```yaml
sub_recipes:
  - name: "database_migration"
    path: "./migrations.yaml"
    sequential_when_repeated: true  # Don't run migrations in parallel
```
**Choose the right model per task.** Use cheaper, faster models for simple subagent tasks:

```yaml
prompt: |
  # Simple file counting - use a fast model
  subagent(
    instructions: "Count files in src/",
    settings: {model: "gpt-4o-mini", max_turns: 3}
  )
  # Complex analysis - use a powerful model
  subagent(
    instructions: "Analyze security vulnerabilities",
    settings: {model: "claude-sonnet-4-20250514"}
  )
```
## Agent Modes

Goose supports different operational modes:

```rust
pub enum GooseMode {
    Chat,    // Interactive conversation mode
    Execute, // Task execution mode (recipes)
}
```
### Chat Mode

- Interactive back-and-forth conversation
- User can interrupt and provide feedback
- Agent asks for clarification when needed
- Suitable for exploratory tasks

### Execute Mode

- Recipe-driven execution
- Runs to completion with minimal interaction
- Uses the `final_output` tool to return structured results
- Suitable for automated workflows
## Turn Management

Agents limit conversation length to prevent runaway execution:

```rust
const DEFAULT_MAX_TURNS: u32 = 1000;

// In the agent loop (simplified)
let mut turn = 0;
loop {
    if turn >= max_turns {
        return Err(anyhow!("Max turns exceeded"));
    }
    let response = provider.complete(...).await?;
    if response.is_final {
        break;
    }
    turn += 1;
}
```
The turn limit can be configured via:

- Recipe `settings.max_turns`
- Session config
- Subagent parameters
## Context Management

Agents automatically compact conversation history when approaching token limits:

```rust
// From crates/goose/src/context_mgmt/
const DEFAULT_COMPACTION_THRESHOLD: f64 = 0.75;

if (token_count as f64) > (context_limit as f64) * DEFAULT_COMPACTION_THRESHOLD {
    // Compact older messages while preserving:
    // - System prompt
    // - Recent messages
    // - Important context
    messages = compact_messages(messages, context_limit);
}
```
## Error Handling and Retries

Agents include retry logic with exponential backoff for transient failures:

```rust
// From crates/goose/src/agents/retry.rs
pub struct RetryManager {
    max_retries: u32,
    backoff: ExponentialBackoff,
}

impl RetryManager {
    pub async fn retry_with_backoff<F, Fut, T>(
        &self,
        operation: F,
    ) -> Result<T>
    where
        F: Fn() -> Fut,
        Fut: Future<Output = Result<T>>,
    {
        let mut attempts = 0;
        loop {
            match operation().await {
                Ok(result) => return Ok(result),
                Err(e) if is_retryable(&e) && attempts < self.max_retries => {
                    attempts += 1;
                    sleep(self.backoff.next_delay()).await;
                }
                Err(e) => return Err(e),
            }
        }
    }
}
```
Retryable errors include:

- Network timeouts
- Rate limiting (HTTP 429)
- Server errors (HTTP 5xx)
- Transient provider errors
## Security and Permissions

Agents enforce security policies before tool execution:

```rust
async fn check_permission(
    &self,
    tool_request: &ToolRequest,
) -> Result<PermissionCheckResult> {
    // 1. Check configured permission level
    let level = self.config.permission_manager
        .get_permission_level(&tool_request.name);

    match level {
        PermissionLevel::Allow => Ok(PermissionCheckResult::Allowed),
        PermissionLevel::Deny => Ok(PermissionCheckResult::Denied),
        PermissionLevel::Confirm => {
            // 2. Request user confirmation
            let confirmation = self.request_confirmation(&tool_request).await?;
            Ok(PermissionCheckResult::from(confirmation))
        }
    }
}
```
See Sessions for session isolation and Extensions for tool sandboxing.
## Next Steps

- **Providers**: Learn how agents interact with AI models
- **Extensions**: Understand the tools agents can use
- **Recipes**: Create pre-configured agent behaviors
- **Sessions**: Manage agent state and history