
Sessions

Create sessions, send prompts, stream responses, and manage conversation history


Each session is bound to a workspace directory. The Agent creates sessions via agent.session(workspace, options?). Sessions hold their own LLM client, conversation history, and tool context.

The generation APIs — send() and stream() — send prompts to the LLM and return responses. The agent loop handles tool execution automatically.
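The loop the SDK runs internally can be pictured, independent of the library, as: send the conversation to the LLM, execute any tool call it requests, append the result, and repeat until the model answers in plain text. A minimal illustration (the stub `llm` and `tools` here are hypothetical stand-ins, not the real API):

```python
# Illustrative agent loop: not the real a3s-code internals, just the general shape.

def run_agent_loop(llm, tools, history):
    """Repeatedly call the LLM, executing tool calls until it answers in text."""
    while True:
        reply = llm(history)                      # one LLM turn
        history.append(reply)
        if reply["type"] == "text":               # final answer: loop ends
            return reply["text"]
        # Otherwise the model asked for a tool; run it and feed the result back.
        output = tools[reply["tool"]](reply["args"])
        history.append({"type": "tool_result", "tool": reply["tool"], "output": output})

# Tiny stub LLM: asks for one grep call, then answers.
def stub_llm(history):
    if any(m.get("type") == "tool_result" for m in history):
        return {"type": "text", "text": "main.rs defines fn main"}
    return {"type": "tool_call", "tool": "grep", "args": "fn main"}

tools = {"grep": lambda pattern: f"src/main.rs: {pattern}"}
print(run_agent_loop(stub_llm, tools, [{"type": "user", "text": "Where is main?"}]))
# → main.rs defines fn main
```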

Create Session

use a3s_code_core::{Agent, SessionOptions};

let agent = Agent::new("agent.hcl").await?;

// Default session (uses config's default model)
let session = agent.session("/my-project", None)?;

// Session with model override
let session = agent.session("/my-project", Some(
    SessionOptions::new()
        .with_model("openai/gpt-4o")
))?;

const { Agent } = require('@a3s-lab/code');

const agent = await Agent.create('agent.hcl');

// Default session
const session = agent.session('/my-project');

// Session with model override
const session = agent.session('/my-project', {
  model: 'openai/gpt-4o',
});

from a3s_code import Agent

agent = Agent.create("agent.hcl")

# Default session
session = agent.session("/my-project")

# Session with model override
session = agent.session("/my-project", model="openai/gpt-4o")

SessionOptions

The model must be defined in the agent's config file under providers. The format is provider_name/model_id.

Global skill_dirs and agent_dirs are set in the agent config. Per-session dirs merge with the global ones — see Skills for details.
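The provider_name/model_id convention splits on the first slash, so any further slashes stay inside the model id. A small illustrative parser (not part of the SDK):

```python
# Illustrative helper, not part of the a3s-code API: split "provider_name/model_id".

def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split a model reference on the first slash only."""
    provider, sep, model = ref.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected provider_name/model_id, got {ref!r}")
    return provider, model

print(parse_model_ref("openai/gpt-4o"))   # → ('openai', 'gpt-4o')
```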

System Prompt Customization (Slot-Based)

Customize the agent's behavior without overriding its core agentic capabilities. Four named slots let you inject a custom personality while preserving the tool usage strategy, autonomous behavior, and completion criteria:

| Slot | Position | Behavior |
| --- | --- | --- |
| role | Before core | Replaces default "You are A3S Code..." identity |
| guidelines | After core | Appended as ## Guidelines section |
| response_style | Replaces section | Replaces default ## Response Format |
| extra | End | Freeform instructions (backward-compatible) |

use a3s_code_core::{SessionOptions, SystemPromptSlots};

let session = agent.session("/my-project", Some(
    SessionOptions::new().with_prompt_slots(SystemPromptSlots {
        role: Some("You are a senior Rust developer.".into()),
        guidelines: Some("Use clippy. No unwrap(). Prefer Result.".into()),
        response_style: Some("Be concise. Use bullet points.".into()),
        extra: Some("This project uses tokio and axum.".into()),
    })
))?;

const session = agent.session('/my-project', {
  role: 'You are a senior Rust developer.',
  guidelines: 'Use clippy. No unwrap(). Prefer Result.',
  responseStyle: 'Be concise. Use bullet points.',
  extra: 'This project uses tokio and axum.',
});

from a3s_code import SessionOptions

opts = SessionOptions()
opts.role = "You are a senior Rust developer."
opts.guidelines = "Use clippy. No unwrap(). Prefer Result."
opts.response_style = "Be concise. Use bullet points."
opts.extra = "This project uses tokio and axum."
session = agent.session("/my-project", options=opts)

See Prompt Slots Example for full multi-language examples.

Error Recovery & Resilience

Three layers of error recovery protect long-running sessions:

// Rust — individual controls
let session = agent.session(".", Some(
    SessionOptions::new()
        .with_parse_retries(3)          // bail after 3 consecutive parse errors
        .with_tool_timeout(30_000)      // 30s per tool
        .with_circuit_breaker(5)        // retry LLM up to 5 times
))?;

// Rust — sensible bundle (parse=2, timeout=2min, circuit_breaker=3)
let session = agent.session(".", Some(
    SessionOptions::new().with_resilience_defaults()
))?;
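The circuit-breaker layer can be pictured, independent of the SDK, as a bounded retry around the LLM call. A generic sketch of the idea, not the actual implementation:

```python
# Generic bounded-retry sketch of the circuit-breaker idea; not a3s-code internals.

def call_with_circuit_breaker(call, max_attempts: int):
    """Retry `call` up to max_attempts times, then give up with the last error."""
    last_error = None
    for _attempt in range(max_attempts):
        try:
            return call()
        except Exception as err:   # a real breaker would match only LLM-level errors
            last_error = err
    raise RuntimeError(f"circuit breaker tripped after {max_attempts} attempts") from last_error

# Fails twice, then succeeds: stays under a limit of 5.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_circuit_breaker(flaky, 5))   # → ok
```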

Send (Non-Streaming)

let result = session.send("What files handle authentication?").await?;
println!("{}", result.text);
println!("Tools: {}, Tokens: {}", result.tool_calls_count, result.usage.total_tokens);

const result = await session.send('What files handle authentication?');
console.log(result.text);
console.log(`Tools: ${result.toolCallsCount}, Tokens: ${result.totalTokens}`);

result = session.send("What files handle authentication?")
print(result.text)
print(f"Tools: {result.tool_calls_count}, Tokens: {result.total_tokens}")

Send with Attachments (Vision)

Send image attachments alongside text prompts. This requires a vision-capable model (e.g. Claude Sonnet or GPT-4o).

use a3s_code_core::Attachment;

// From file (auto-detects media type from extension)
let image = Attachment::from_file("screenshot.png")?;

// Or from bytes
let image = Attachment::jpeg(raw_bytes);

let result = session.send_with_attachments(
    "What's in this screenshot?",
    &[image],
    None,
).await?;
println!("{}", result.text);

const fs = require('node:fs/promises');

const image = await fs.readFile('screenshot.png');
const result = await session.sendWithAttachments(
  "What's in this screenshot?",
  [{ data: image, mediaType: 'image/png' }],
);
console.log(result.text);

from a3s_code import Attachment

image = Attachment.from_file("screenshot.png")
result = session.send_with_attachments(
    "What's in this screenshot?",
    [image],
)
print(result.text)

Supported image formats: JPEG, PNG, GIF, WebP.
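Auto-detecting the media type from the file extension, as Attachment::from_file does, can be sketched with the standard library (illustrative only, not the SDK's own code):

```python
# Illustrative media-type lookup for the supported formats; not the SDK's own code.
import mimetypes

SUPPORTED = {"image/jpeg", "image/png", "image/gif", "image/webp"}

def media_type_for(path: str) -> str:
    """Guess the media type from the extension and reject unsupported formats."""
    guessed, _ = mimetypes.guess_type(path)
    if guessed not in SUPPORTED:
        raise ValueError(f"unsupported image format for {path!r}")
    return guessed

print(media_type_for("screenshot.png"))   # → image/png
```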

Streaming variant:

let (rx, handle) = session.stream_with_attachments(
    "Describe this diagram",
    &[Attachment::from_file("diagram.png")?],
    None,
).await?;

Tool Image Output

Tools can return images alongside text output. When a tool returns images, they are included as multi-modal content blocks in the tool result message sent to the LLM.

// In a custom tool implementation
async fn execute(&self, args: &Value, ctx: &ToolContext) -> Result<ToolOutput> {
    let screenshot_bytes = take_screenshot().await?;
    Ok(ToolOutput::success("Screenshot captured")
        .with_images(vec![Attachment::png(screenshot_bytes)]))
}

Stream

use a3s_code_core::AgentEvent;

// AgentEvent is #[non_exhaustive] — always include a wildcard arm
let (mut rx, _handle) = session.stream("Refactor the auth module").await?;
while let Some(event) = rx.recv().await {
    match event {
        AgentEvent::TextDelta { text } => print!("{text}"),
        AgentEvent::ToolStart { name, .. } => println!("\n🔧 {name}"),
        AgentEvent::End { text, usage } => {
            println!("\n✅ Done: {} tokens", usage.total_tokens);
            break;
        }
        _ => {} // required: AgentEvent is #[non_exhaustive]
    }
}

// Returns an EventStream — use for await...of or call .next() manually
const stream = await session.stream('Refactor the auth module');
for await (const event of stream) {
  if (event.type === 'text_delta') process.stdout.write(event.text);
  if (event.type === 'tool_start') console.log(`\n🔧 ${event.toolName}`);
  if (event.type === 'tool_end') console.log(`  → ${event.toolOutput?.slice(0, 100)}`);
}

// Or iterate manually with .next()
const stream2 = await session.stream('Explain src/main.rs');
while (true) {
  const { value, done } = await stream2.next();
  if (done) break;
  if (value.type === 'text_delta') process.stdout.write(value.text);
}

# Sync iteration (works without an event loop)
for event in session.stream("Refactor the auth module"):
    if event.event_type == "text_delta":
        print(event.text, end="", flush=True)
    elif event.event_type == "tool_start":
        print(f"\n🔧 {event.tool_name}")
    elif event.event_type == "tool_end":
        print(f"  → {event.tool_output[:100]}")

# Async iteration (inside async def)
async for event in session.stream("Refactor the auth module"):
    if event.event_type == "text_delta":
        print(event.text, end="", flush=True)
    elif event.event_type == "end":
        print(f"\nDone — {event.total_tokens} tokens")
        break

Conversation History (Rust)

Maintain multi-turn conversations by passing history:

use a3s_code_core::llm::{ContentBlock, Message};

let history = vec![
    Message::user("What's in src/?"),
    Message {
        role: "assistant".to_string(),
        content: vec![ContentBlock::Text {
            text: "The src/ directory contains main.rs and lib.rs.".to_string(),
        }],
        reasoning_content: None,
    },
];

// Continue the conversation
let result = session.send_with_history(&history, "Now explain main.rs").await?;

Direct Tool Execution

Call tools directly without going through the LLM:

session.read_file("src/main.rs").await?;
session.bash("cargo test").await?;
session.glob("**/*.rs").await?;
session.grep("fn main").await?;
session.tool("write", serde_json::json!({"file_path": "x.rs", "content": "..."})).await?;

await session.readFile('src/main.rs');
await session.bash('cargo test');
await session.glob('**/*.rs');
await session.grep('fn main');
await session.tool('write', { file_path: 'x.rs', content: '...' });

session.read_file("src/main.rs")
session.bash("cargo test")
session.glob("**/*.rs")
session.grep("fn main")
session.tool("write", {"file_path": "x.rs", "content": "..."})

Configuration

See Providers & Configuration for the full HCL config reference, including all fields, the env() function, and queue and search configuration.

Return Types

AgentResult


TokenUsage


AgentEvent

AgentEvent is #[non_exhaustive] — always include a wildcard arm when matching in Rust.

Event variants fall into five groups:

- Agent lifecycle
- Tool execution
- HITL and permissions
- Context and memory
- Tasks, subagents, and lane queue

API Reference

Agent


AgentSession


SessionOptions


AgentResponse

