# Sessions
Create sessions, send prompts, stream responses, and manage conversation history
Each session is bound to a workspace directory. The `Agent` creates sessions via `agent.session(workspace, options?)`. Each session holds its own LLM client, conversation history, and tool context.

The generation APIs, `send()` and `stream()`, send prompts to the LLM and return responses. The agent loop handles tool execution automatically.
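As a mental model, the agent loop looks roughly like this (a toy sketch with a fake LLM and tool registry, not the library's implementation):

```python
def agent_loop(llm, tools, prompt):
    """Toy agent loop: call the LLM, run any tool it requests,
    feed the output back, and repeat until it answers in plain text."""
    history = [("user", prompt)]
    while True:
        reply = llm(history)                          # ask the model
        if reply.get("tool") is None:
            return reply["text"]                      # final answer
        output = tools[reply["tool"]](reply["args"])  # execute the tool
        history.append(("tool", output))              # feed the result back

# Fake LLM: requests a tool once, then answers.
def fake_llm(history):
    if any(role == "tool" for role, _ in history):
        return {"tool": None, "text": "done"}
    return {"tool": "echo", "args": "hi"}

print(agent_loop(fake_llm, {"echo": lambda a: a}, "test"))  # → done
```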
## Create Session
```typescript
import { Agent } from '@a3s-lab/code';

const agent = await Agent.create('agent.hcl');

// Default session
const session = agent.session('/my-project');

// Session with model override
const customSession = agent.session('/my-project', {
  model: 'openai/gpt-4o',
});
```

```python
from a3s_code import Agent

agent = Agent.create("agent.hcl")

# Default session
session = agent.session("/my-project")

# Session with model override
session = agent.session("/my-project", model="openai/gpt-4o")
```

### SessionOptions
The `model` must be defined in the agent's config file under `providers`. The format is `provider_name/model_id`.

Global `skill_dirs` and `agent_dirs` are set in the agent config. Per-session dirs merge with the global ones — see Skills for details.
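To illustrate the `provider_name/model_id` format, here is a small helper (hypothetical, not part of the SDK) that splits a model reference:

```python
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split 'provider_name/model_id' on the first slash only,
    since a model id may itself contain '/'."""
    provider, sep, model_id = ref.partition("/")
    if not sep or not provider or not model_id:
        raise ValueError(f"expected 'provider/model' format, got {ref!r}")
    return provider, model_id

print(split_model_ref("openai/gpt-4o"))  # → ('openai', 'gpt-4o')
```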
## System Prompt Customization (Slot-Based)
Customize the agent's behavior without overriding core agentic capabilities. Four named slots let you inject custom personality while preserving tool usage strategy, autonomous behavior, and completion criteria:
| Slot | Position | Behavior |
|---|---|---|
| `role` | Before core | Replaces the default "You are A3S Code..." identity |
| `guidelines` | After core | Appended as a `## Guidelines` section |
| `response_style` | Replaces section | Replaces the default `## Response Format` section |
| `extra` | End | Freeform instructions (backward-compatible) |
```typescript
const session = agent.session('/my-project', {
  role: 'You are a senior Rust developer.',
  guidelines: 'Use clippy. No unwrap(). Prefer Result.',
  responseStyle: 'Be concise. Use bullet points.',
  extra: 'This project uses tokio and axum.',
});
```

```python
from a3s_code import SessionOptions

opts = SessionOptions()
opts.role = "You are a senior Rust developer."
opts.guidelines = "Use clippy. No unwrap(). Prefer Result."
opts.response_style = "Be concise. Use bullet points."
opts.extra = "This project uses tokio and axum."

session = agent.session("/my-project", options=opts)
```

See Prompt Slots Example for full multi-language examples.
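The slot layout above can be sketched as a composition function (hypothetical; the real core prompt and template are internal to the library):

```python
def compose_system_prompt(core_identity, core_body, default_response_format,
                          role=None, guidelines=None,
                          response_style=None, extra=None):
    """Assemble a system prompt from the four slots described above."""
    parts = [role or core_identity]      # 'role' replaces the default identity
    parts.append(core_body)              # core agentic instructions always survive
    if guidelines:
        parts.append("## Guidelines\n" + guidelines)
    parts.append(response_style or default_response_format)
    if extra:
        parts.append(extra)              # freeform, appended at the end
    return "\n\n".join(parts)

prompt = compose_system_prompt(
    "You are A3S Code...", "<core instructions>", "## Response Format\n...",
    role="You are a senior Rust developer.",
    extra="This project uses tokio and axum.",
)
print("senior Rust developer" in prompt)  # → True
```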
## Error Recovery & Resilience
Three layers of error recovery protect long-running sessions:
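As a mental model, the LLM retry layer resembles a simple circuit breaker: after a threshold of consecutive failures it stops retrying and surfaces the error. A minimal sketch (not the library's implementation):

```python
def call_with_breaker(fn, threshold=5):
    """Retry fn() up to `threshold` times; raise once the breaker trips."""
    last_error = None
    for _attempt in range(threshold):
        try:
            return fn()
        except Exception as e:
            last_error = e  # count the failure and retry
    raise RuntimeError(
        f"circuit breaker tripped after {threshold} attempts"
    ) from last_error

# Fails twice, then succeeds — stays under the threshold.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_breaker(flaky))  # → ok
```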
```typescript
// Individual controls
const session = agent.session('.', {
  maxParseRetries: 3,         // bail after 3 consecutive parse errors
  toolTimeoutMs: 30000,       // 30s per tool
  circuitBreakerThreshold: 5, // retry LLM up to 5 times
});
```

```python
# Individual controls
session = agent.session(".",
    max_parse_retries=3,          # bail after 3 consecutive parse errors
    tool_timeout_ms=30000,        # 30s per tool
    circuit_breaker_threshold=5,  # retry LLM up to 5 times
)
```

## Send (Non-Streaming)
```typescript
const result = await session.send('What files handle authentication?');

console.log(result.text);
console.log(`Tools: ${result.toolCallsCount}, Tokens: ${result.totalTokens}`);
```

```python
result = session.send("What files handle authentication?")

print(result.text)
print(f"Tools: {result.tool_calls_count}, Tokens: {result.total_tokens}")
```

## Send with Attachments (Vision)
Send image attachments alongside text prompts. Requires a vision-capable model (Claude Sonnet, GPT-4o).
```typescript
import { promises as fs } from 'node:fs';

const image = await fs.readFile('screenshot.png');
const result = await session.sendWithAttachments(
  "What's in this screenshot?",
  [{ data: image, mediaType: 'image/png' }],
);
console.log(result.text);
```

```python
from a3s_code import Attachment

image = Attachment.from_file("screenshot.png")
result = session.send_with_attachments(
    "What's in this screenshot?",
    [image],
)
print(result.text)
```

Supported image formats: JPEG, PNG, GIF, WebP.
## Tool Image Output
Tools can return images alongside text output. When a tool returns images, they are included as multi-modal content blocks in the tool result message sent to the LLM.
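For illustration, a tool result carrying an image might look like the following content-block shape (the field names here are assumptions, not the SDK's exact schema):

```python
import base64

png_bytes = b"\x89PNG..."  # truncated placeholder for real image bytes

# Hypothetical shape of a multi-modal tool result message
tool_result = {
    "role": "tool",
    "content": [
        {"type": "text", "text": "Rendered the chart to chart.png"},
        {
            "type": "image",
            "media_type": "image/png",
            "data": base64.b64encode(png_bytes).decode("ascii"),
        },
    ],
}
print(tool_result["content"][1]["type"])  # → image
```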
## Stream
```typescript
// EventStream has a .next() method — use a while loop
const stream = await session.stream('Refactor the auth module');
while (true) {
  const { value, done } = await stream.next();
  if (done || !value) break;
  if (value.type === 'text_delta') process.stdout.write(value.text);
  if (value.type === 'tool_start') console.log(`\n🔧 ${value.toolName}`);
  if (value.type === 'tool_end') console.log(`  → ${value.toolOutput?.slice(0, 100)}`);
}

// For multiple prompts, create separate stream instances
const stream2 = await session.stream('Explain src/main.rs');
while (true) {
  const { value, done } = await stream2.next();
  if (done || !value) break;
  if (value.type === 'text_delta') process.stdout.write(value.text);
}
```

```python
# Sync iteration (works without an event loop)
for event in session.stream("Refactor the auth module"):
    if event.event_type == "text_delta":
        print(event.text, end="", flush=True)
    elif event.event_type == "tool_start":
        print(f"\n🔧 {event.tool_name}")
    elif event.event_type == "tool_end":
        print(f"  → {event.tool_output[:100]}")

# Async iteration (inside async def)
async for event in session.stream("Refactor the auth module"):
    if event.event_type == "text_delta":
        print(event.text, end="", flush=True)
    elif event.event_type == "end":
        print(f"\nDone — {event.total_tokens} tokens")
        break
```

## BTW — Ephemeral Side Questions
The `btw()` method asks a side question without adding it to the conversation history. See Commands & Scheduling for the full reference.
```typescript
const btw = await session.btw('What is the capital of France?');
console.log(btw.answer);      // "Paris"
console.log(btw.totalTokens); // token usage — question not added to history
```

```python
btw = session.btw("What is the capital of France?")
print(btw.answer)        # "Paris"
print(btw.total_tokens)  # token usage — question not added to history
```

## Cancel Ongoing Operation
Call `session.cancel()` to interrupt an in-progress `send()` or `stream()` call. LLM streaming and any running tool execution stop as soon as possible.
```typescript
// Cancel from another async context while send() is running
const sendPromise = session.send('Write a 10,000 line program');

// Cancel after 5 seconds
setTimeout(() => {
  const cancelled = session.cancel();
  console.log('Cancelled:', cancelled); // true
}, 5000);

await sendPromise; // resolves with partial result
```

```python
import threading
import time

# Cancel from another thread while send() is running
def cancel_after_delay():
    time.sleep(5)
    cancelled = session.cancel()
    print(f"Cancelled: {cancelled}")  # True

t = threading.Thread(target=cancel_after_delay)
t.start()
result = session.send("Write a 10,000 line program")  # returns partial result
t.join()
```

`cancel()` returns `true` if an operation was in progress and was cancelled, `false` if nothing was running.
Cancellation is cooperative — the operation stops at the next LLM streaming chunk boundary or tool-execution checkpoint. The `send()` / `stream()` call returns normally with whatever partial result was accumulated.
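The cooperative model can be sketched with a flag checked at each chunk boundary (a toy illustration, not the library's mechanism):

```python
import threading

def consume_stream(chunks, cancel: threading.Event):
    """Accumulate chunks, checking the cancel flag at each boundary;
    return whatever partial result was gathered before cancellation."""
    collected = []
    for chunk in chunks:
        if cancel.is_set():
            break  # stop at the next boundary, keep the partial output
        collected.append(chunk)
    return "".join(collected)

cancel = threading.Event()
def gen():
    yield "partial "
    yield "result"
    cancel.set()       # cancellation arrives mid-stream
    yield " never seen"

print(consume_stream(gen(), cancel))  # → partial result
```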
## Conversation History (Rust)
Maintain multi-turn conversations by passing history:
## Direct Tool Execution
Call tools directly without going through the LLM:
```typescript
await session.readFile('src/main.rs');
await session.bash('cargo test');
await session.glob('**/*.rs');
await session.grep('fn main');
await session.tool('write', { file_path: 'x.rs', content: '...' });
```

```python
session.read_file("src/main.rs")
session.bash("cargo test")
session.glob("**/*.rs")
session.grep("fn main")
session.tool("write", {"file_path": "x.rs", "content": "..."})
```

## Slash Commands & Scheduled Tasks
Sessions intercept slash commands before they reach the LLM. Built-in commands include `/help`, `/model`, `/cost`, `/clear`, `/compact`, `/tools`, `/mcp`, `/loop`, `/cron-list`, and `/cron-cancel`. You can also register custom commands.
See Commands & Scheduling for the full reference including BTW, /loop syntax, scheduler API, and custom command registration.
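The interception described above can be sketched as a dispatch step that runs before a prompt reaches the LLM (hypothetical; the built-in command handlers are internal to the library):

```python
def dispatch(prompt, commands, fallback_llm):
    """Route '/name args' prompts to a registered handler;
    everything else falls through to the LLM."""
    if prompt.startswith("/"):
        name, _, args = prompt[1:].partition(" ")
        if name in commands:
            return commands[name](args)
    return fallback_llm(prompt)

commands = {"help": lambda args: "Available commands: /help, /model, /cost"}

print(dispatch("/help", commands, lambda p: "LLM: " + p))
print(dispatch("hi there", commands, lambda p: "LLM: " + p))  # → LLM: hi there
```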
## Configuration
See Providers & Configuration for the full HCL config reference, including all fields, the `env()` function, and queue and search configuration.
## Return Types

### AgentResult
### TokenUsage
### AgentEvent

`AgentEvent` is `#[non_exhaustive]` — always include a wildcard arm when matching in Rust.
Agent lifecycle:
Tool execution:
HITL and permissions:
Context and memory:
Tasks, subagents, and lane queue:
## API Reference

### Agent
### AgentSession
### SessionOptions
### AgentResponse