API reference
TroveFiles gives AI agents a filesystem and a shell — six endpoints, one Bearer token.
Quickstart
Three lines and your agent can save files, read them back, and run commands.
# pip install trove-sdk
from trove_sdk import TroveClient
client = TroveClient(api_key="trove-sk-...", namespace="my-agent")
# Write a file
client.write("workspace/notes.md", "# Notes")
# Run any shell command — returns stdout, raises TroveExecError on
# non-zero exit. Use exec_detailed() if you'd rather inspect the
# exit code than catch.
print(client.exec("ls -la workspace/"))
CLI
tail -f for your agent. Stream every file write, shell command, and snapshot to your terminal as it happens.
Install
uv tool install "trove-sdk[cli]"
# or: pip install "trove-sdk[cli]"
Log in
trove login
# Opening your browser to authorize this CLI…
#
# Code: ABCD-1234
# URL: https://trovefiles.dev/cli?code=ABCD-1234
#
# Confirm the code in your browser, then approve.
# .....
# saved profile 'default' (trove-sk-abc1…3a7f) workspace=ws-...
# Skip the browser (CI / headless):
trove login --api-key trove-sk-... # explicit key
echo $TROVE_KEY | trove login # piped from stdin
trove login --no-browser # paste at the prompt
trove whoami
# profile : default
# workspace : ws-...
# api key : trove-sk-...3a7f
Watch your agent
trove tail
# tailing ws-... (Ctrl-C to stop)
# 14:22:01 file.written customer-acme workspace/notes/research.md (1.2KB)
# 14:22:03 exec.completed customer-acme ls -la workspace/notes/ exit=0
# 14:22:14 exec.completed customer-acme pytest tests/ exit=1
# 14:22:16 file.written customer-acme workspace/notes/parser_fix.py (820B)
Filter what you watch
trove tail --namespace customer-acme # one customer's events
trove tail --types file.written # only writes
trove tail --types file.written,exec.completed
trove tail --since 1h # last hour, then keep streaming
trove tail --json # one JSON object per line
Browse the backlog
trove events list --limit 50
trove events list --types exec.completed --json
trove events list --namespace customer-acme
Multiple workspaces
trove login --save-as staging # browser flow, saved as 'staging'
trove --profile staging tail
trove --profile prod events list --limit 5
Pipe into anything
--json emits one event per line. Same firehose the dashboard reads, addressable from your shell — pipe into jq, ship to Slack, forward to Datadog. The CLI is just a thin wrapper over the events API; anything you can do in tail you can do in curl.
# Pipe into jq for projection
trove tail --json | jq -r '.data.path // .data.command'
# Forward to Slack
trove tail --types exec.completed --json | while read line; do
  exit_code=$(echo "$line" | jq -r .data.exit_code)
  [ "$exit_code" != "0" ] && curl -s "$SLACK_WEBHOOK" -d "$line"
done
Authentication
Two headers prove who is calling and which namespace they're touching. Set them once on a client — never again.
- Authorization: Bearer <key> — your API key. Hashed server-side — cannot be retrieved. Revoke and reissue if lost.
- X-Namespace — required on filesystem endpoints. Scoped keys auto-default to their namespace — the header can be omitted.
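For callers outside the SDK, the same two headers go on every raw HTTP request. A minimal stdlib-only sketch; the api.trovefiles.dev host is taken from the CI example later in this document, and build_request is an illustrative helper, not an SDK function:

```python
import urllib.request

def build_request(api_key: str, namespace: str, path: str, body: bytes) -> urllib.request.Request:
    """Attach the two auth headers TroveFiles expects to a raw request."""
    return urllib.request.Request(
        url=f"https://api.trovefiles.dev{path}",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",  # who is calling
            "X-Namespace": namespace,              # which namespace they touch
            "Content-Type": "application/json",
        },
    )

req = build_request("trove-sk-demo", "customer-acme", "/write", b'{"path": "workspace/a.md"}')
```

urllib.request.urlopen(req) would then send it; the SDK does the equivalent on every call.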
Namespaces
Multi-tenant agents need hard isolation. A namespace is a top-level directory per agent, customer, or session — auto-created on first write.
- Pattern: ^[A-Za-z0-9_-]{1,128}$
- Auto-created on first write — no provisioning needed
- Agent always sees its root as workspace/
- Scoped keys are hard-isolated — cross-namespace access returns 403
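The pattern above is easy to enforce client-side before a request ever leaves your backend. A sketch with a hypothetical valid_namespace helper:

```python
import re

# Same pattern the server enforces: 1-128 chars of [A-Za-z0-9_-]
NAMESPACE_RE = re.compile(r"^[A-Za-z0-9_-]{1,128}$")

def valid_namespace(name: str) -> bool:
    """Reject anything the server would refuse, including traversal attempts."""
    return NAMESPACE_RE.fullmatch(name) is not None

valid_namespace("customer-acme")    # True
valid_namespace("../other-tenant")  # False — '.' and '/' aren't in the alphabet
```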
POST /v1/exec
Models already know the standard Unix tools — awk, jq, grep, pdftotext. Hand them a shell, they'll figure out the rest. No command whitelist. Returns a JSON envelope with exit_code, stdout, stderr, and duration_ms.
# Simple case: returns stdout as a string. Raises TroveExecError on
# non-zero exit (carries exit_code, stdout, stderr) so a failing
# command never silently looks like normal output.
output = client.exec('grep -r "TODO" workspace/')
print(output)
# Want to inspect the exit code without an exception? Use exec_detailed —
# returns ExecResult(exit_code, stdout, stderr, duration_ms).
result = client.exec_detailed("pytest tests/")
if result.exit_code != 0:
    print("failures on stderr:", result.stderr)
Output rewriting
Absolute paths in stdout and stderr are rewritten to be relative to workspace/ so internal paths never leak to your agent.
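The rewrite amounts to mapping the sandbox's internal mount point back to the workspace/ prefix. A sketch of the idea only — the real mount path is internal and undocumented, so internal_root here is purely hypothetical:

```python
def rewrite_paths(output: str, internal_root: str) -> str:
    """Map internal absolute paths back to the workspace/-relative view."""
    return output.replace(internal_root.rstrip("/") + "/", "workspace/")

# With a hypothetical mount point:
rewrite_paths("wrote /mnt/ns-acme/notes.md", "/mnt/ns-acme")
# → "wrote workspace/notes.md"
```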
POST /write
The most common operation an agent does. Atomic write of UTF-8 text — for binary or anything over 10 MB, use PUT /files instead.
result = client.write("workspace/notes.md", "# Notes\n...")
print(result.path, result.size_bytes)
PUT /files/{path}
Binary stuff: PDFs, images, audio, build artifacts — anything up to 100 MB. Streams raw bytes; no JSON wrapper.
with open("report.pdf", "rb") as f:
    result = client.upload("workspace/report.pdf", f)
print(result.size_bytes)
POST /delete
Delete a file or directory. Recursive. Permanent — the only way back is a snapshot you took beforehand.
client.delete("workspace/notes.md")
Persistent shell context
Each exec runs in a fresh shell. The filesystem is the only thing that carries between calls — anything that lives only in shell state is gone. Two patterns close that gap: exec_chain for one-off multi-step flows, and init.sh for setup that should apply to every command.
What persists between exec calls
| Persists | Doesn't persist |
|---|---|
| Files in workspace/ | cwd from a prior cd |
| init.sh prelude (re-runs every call) | Env vars exported inside an exec |
| Snapshots | Background processes |
| | Activated venvs (use init.sh instead) |
Three rules of thumb. Deterministic setup (cwd, venv, env vars) goes in init.sh. Computed state that one step produces and the next step needs across calls goes in a file — export FOO=$(...) in one exec is gone in the next. Multi-step flows that share state can run as one exec_chain.
exec_chain — multi-step in one shell
Joins commands with && server-side and runs them as one exec, so cd / export / shell variables hold for the whole chain. Short-circuits on the first non-zero exit. The 30-second wall clock applies to the chain as a whole — for longer flows, write progress to files so a retry can resume.
# Multi-step within ONE shell — cwd and shell variables hold for the chain.
result = client.exec_chain([
    "cd workspace/data",
    "TOKEN=$(curl -s https://api.example.com/token)",
    'curl -H "Authorization: $TOKEN" https://api.example.com/feed -o feed.json',
])
# Stops on first non-zero exit, just like shell &&. 30s wall clock applies
# to the whole chain.
# Separate exec calls — TOKEN is gone between them. Persist via a file:
client.exec("curl -s https://api.example.com/token > workspace/.token")
client.exec('curl -H "Authorization: $(cat workspace/.token)" ... -o workspace/feed.json')
init.sh — setup that runs before every call
# Without init.sh — every command repeats the setup
client.exec("cd workspace/data && source .venv/bin/activate && python analyze.py")
client.exec("cd workspace/data && source .venv/bin/activate && pytest tests/")
# With init.sh — set the prelude once, run cleanly forever
client.set_init("""
cd workspace/data
source .venv/bin/activate
""")
client.exec("python analyze.py") # cwd, venv, env all carry over
client.exec("pytest tests/")      # same context — no re-setup
It's just a file at workspace/.trove/init.sh
/exec reads it and sources it into the same shell as your command, so cd, export, shell functions, and an activated venv all carry over. Snapshots include it; webhook events fire when it changes; namespace isolation holds.
Manage it
client.set_init("cd workspace/data\nexport DATE=2026-05-06\n")
client.get_init() # → the script text, or None if unset
client.clear_init() # → True if removed, False if never set
Each /exec still gets a fresh shell — only the prelude carries over, not state from prior commands. Errors in the prelude (a bad cd, missing file) write to stderr but don't block the user command. Avoid exit statements in the script — they terminate the shell before your command runs.
Cross-session orientation
Each new agent session starts with amnesia: no idea what files are around, what the previous instance was working on, or where it got stuck. One call (composed from existing endpoints — no new server contract) rolls up recent files, the active init.sh, and the previous session's handoff note into a packet you pipe straight into the model's system prompt.
# At the end of a session — leave a handoff note for the next instance.
# It's just a markdown file at a known path; the runtime doesn't parse it.
client.write("workspace/.trove/agent.md", """## What I learned
- Salesforce OAuth needs the 'api' scope, not 'read'
- Cache primed at workspace/.cache/q3.json — reuse, don't recompute
""")
# Next session — one call returns recent files, the active init.sh, and the
# previous session's handoff note. Pipe straight into the system prompt.
bs = client.bootstrap()
system_prompt += bs.as_system_prompt_block()
# <workspace>
# namespace: alice
# files: 12; last edited 2026-05-07T20:00:00Z
# recent: workspace/data.csv (3.4KB), workspace/report.md (140B), ...
# init.sh: cd workspace/data; source .venv/bin/activate
# last_session: |
# ## What I learned
# - Salesforce OAuth needs the 'api' scope, not 'read'
# ...
# </workspace>
It's just two files at known paths
bootstrap() composes a recursive list_dir with reads of two convention paths: workspace/.trove/init.sh (the sourced shell prelude — already documented above) and workspace/.trove/agent.md (the cross-session handoff note). The async client fans those reads out concurrently. No new endpoint to call directly; works against any server version.
Leaving the handoff note
Write workspace/.trove/agent.md with the normal client.write(...) — there is no dedicated method. The runtime doesn't parse the file; pick whatever format the receiving agent expects (markdown, JSON, free-form). On the next bootstrap() the file shows up as bs.agent_memory and inside the rendered prompt block as a last_session: | block.
When (not) to call it
Check bs.file_count == 0 to detect a cold start and skip the "here's your previous work" framing.
Snapshots
An agent that can't roll back is a tightrope walker without a net. Snapshots are your safety net — point-in-time tarballs of a namespace, restorable with one call. Retained for 30 days.
Endpoints: /v1/snapshots (create, list), /v1/snapshots/{id}/restore, /v1/snapshots/{id} (delete)
Take a checkpoint
# Take a checkpoint before a risky operation
snap = client.create_snapshot(label="before-migration")
print(snap.snapshot_id, snap.size_bytes)
# → snap-b1bde15ffe82b60a 1284
List, restore, delete
# List checkpoints — newest first
for s in client.list_snapshots():
    print(s.snapshot_id, s.label, s.created_at)
# Restore — wipes namespace, extracts the tarball back
files_restored = client.restore_snapshot("snap-b1bde15ffe82b60a")
print(f"{files_restored} files restored")
# Delete a snapshot when you no longer need it
client.delete_snapshot("snap-b1bde15ffe82b60a")
How it works
Snapshot operations emit snapshot.created and snapshot.restored webhook events so your backend can audit recovery actions.
When to snapshot
Take a checkpoint before any risky operation, or listen for file.deleted via webhooks and snapshot from your backend.
Daily auto-backups
Automatic daily snapshots carry an auto- snapshot-id prefix and a label like auto-daily-2026-05-02. Manual snapshots you create yourself (id prefix snap-) are never touched by the pruner.
From the dashboard
Available in trove-sdk (Python ≥ 0.2.2), the dashboard, and the cURL examples above.
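The snapshot-before-a-risky-operation advice above folds naturally into a context manager. checkpoint is a hypothetical helper name, built only on the documented create_snapshot / restore_snapshot calls:

```python
from contextlib import contextmanager

@contextmanager
def checkpoint(client, label: str):
    """Snapshot on entry; restore the namespace if the block raises."""
    snap = client.create_snapshot(label=label)
    try:
        yield snap
    except Exception:
        client.restore_snapshot(snap.snapshot_id)  # wipe + extract tarball back
        raise
```

Used as: with checkpoint(trove, "before-migration"): trove.exec("python workspace/migrate.py"). The failed operation's exception still propagates after the restore, so your error handling stays intact.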
Agent integration
Agents get one tool — bash — and use the Unix commands the model already knows. No custom API, no routing decisions.
import anthropic
from trove_sdk import TroveClient
trove = TroveClient(api_key="trove-sk-...", namespace="session-123")
client = anthropic.Anthropic()
tools = [{
    "name": "bash",
    "description": "Run a shell command in the agent's filesystem.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]
def run_agent(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        resp = client.messages.create(
            model="claude-opus-4-7",
            max_tokens=4096,
            tools=tools,
            messages=messages,
        )
        if resp.stop_reason == "end_turn":
            return next(b.text for b in resp.content if hasattr(b, "text"))
        for block in resp.content:
            if block.type == "tool_use" and block.name == "bash":
                # exec_detailed gives us a structured result so a non-zero
                # exit doesn't get blended into stdout when we hand it back
                # to the model.
                r = trove.exec_detailed(block.input["command"])
                tool_output = r.stdout if r.exit_code == 0 else (
                    f"[exit {r.exit_code}]\n{r.stderr}".rstrip()
                )
                messages += [
                    {"role": "assistant", "content": resp.content},
                    {"role": "user", "content": [{
                        "type": "tool_result",
                        "tool_use_id": block.id,
                        "content": tool_output,
                        "is_error": r.exit_code != 0,
                    }]},
                ]
Why one tool, not five
Reading a file is cat workspace/foo.md, listing is ls workspace/, searching is grep -rn TODO workspace/. Separate tools add tokens and another routing decision to get wrong.
Multi-tenancy
One admin key mints scoped keys per customer from your backend. Each customer's agent is hard-isolated to its own namespace — no cross-tenant access, ever.
from trove_sdk import TroveAdminClient, TroveClient
# ── On customer signup (your backend, never the browser) ──────────────────────
admin = TroveAdminClient(
    api_key=TROVE_ADMIN_KEY,  # your long-lived admin key
    workspace_id=TROVE_WORKSPACE_ID,
)
key = admin.create_key(
    f"customer-{customer_id}",
    namespace=f"customer-{customer_id}",
)
# Store key.key_id in your DB — you'll need it to revoke
# Give key.api_key to the backend service running this customer's agent
# ── Customer agent ────────────────────────────────────────────────────────────
# Scoped key auto-defaults X-Namespace — namespace arg here is optional
trove = TroveClient(api_key=customer_key, namespace=f"customer-{customer_id}")
trove.write("workspace/memory/prefs.md", "prefers bullet points")
trove.exec("cat workspace/memory/prefs.md")
# ── On churn / account deletion ───────────────────────────────────────────────
admin.revoke_key(stored_key_id)
# Any request using that key immediately returns 401
Security model
The admin key never leaves your backend. Scoped keys can touch only their own namespace — cross-namespace requests return 403 — and a revoked key fails with 401 immediately.
Manage keys via API
from trove_sdk import TroveAdminClient
admin = TroveAdminClient(api_key="trove-sk-admin-...", workspace_id="ws-...")
# Mint a scoped key for a new customer
key = admin.create_key("customer-acme", namespace="customer-acme")
print(key.api_key) # store this — shown once
Revoke on churn
admin.revoke_key("key-...")
Per-session sandboxes
A production-grade pattern for agent runtimes: each session gets its own namespace and a throwaway scoped key. Three keys, three roles, hard isolation between sessions.
| Key | Where it lives | What it does |
|---|---|---|
| scope: admin | Backend secrets manager | Mints/revokes keys; manages webhooks. Cannot touch the filesystem. |
| scope: workspace + namespace | The agent process for one session | Read/write its own namespace. Cross-namespace requests return 403. |
| scope: workspace, no namespace | Backend ops jobs | Walks every namespace. Used for billing rollups, capacity, abuse detection. |
# provision.py — backend, holds the admin key
import os
from trove_sdk import TroveAdminClient
admin = TroveAdminClient(
    api_key=os.environ["TROVE_ADMIN_KEY"],  # never leaves your backend
    workspace_id=os.environ["TROVE_WORKSPACE_ID"],
)
def start_session(session_id: str) -> dict:
    """Mint a throwaway key bound to this session's namespace."""
    namespace = f"session-{session_id}"
    key = admin.create_key(name=f"agent:{session_id}", namespace=namespace)
    # Persist key.key_id with your session record — you'll need it to revoke
    return {"namespace": namespace, "key_id": key.key_id, "api_key": key.api_key}

def end_session(key_id: str) -> None:
    """Revoke. In-flight requests with this key now return 401."""
    admin.revoke_key(key_id)
Why three keys, not one
Full runnable example with CLI walkthrough: python/examples/sessions
Webhooks
TroveFiles POSTs a signed JSON event to your endpoint whenever activity happens in your workspace. Use an admin key to manage webhooks.
Don't want to set up an HTTP endpoint? trove tail in the CLI streams the same events straight to your terminal.
Create & manage
from trove_sdk import TroveAdminClient
admin = TroveAdminClient(api_key="trove-sk-admin-...", workspace_id="ws-...")
# Subscribe to specific events, scoped to a namespace
webhook = admin.create_webhook(
    "https://api.yourapp.com/trove-events",
    events=["file.written", "exec.completed"],
    namespace="customer-acme",  # optional — omit for all namespaces
    description="Notify on file writes",  # optional label
)
print(webhook.signing_secret) # save this — not shown again
# Or subscribe to everything (including future event types)
webhook = admin.create_webhook("https://...", events=["*"])
# List and delete
hooks = admin.list_webhooks()
admin.delete_webhook(webhook.webhook_id)
# Send a test event to confirm delivery
result = admin.test_webhook(webhook.webhook_id)
print(result.ok, result.status)
Event types
Use events: ["*"] to subscribe to all events including future ones.
| Event type | Fired when |
|---|---|
file.written | File created or updated via /write or PUT /files |
file.deleted | File or directory deleted via /delete |
exec.completed | Shell command finished via /exec |
snapshot.created | Namespace snapshotted |
snapshot.restored | Namespace restored from a snapshot |
workspace.created | Workspace provisioned |
key.created | API key minted |
key.revoked | API key revoked |
Event payload
Every event has the same envelope:
{
  "id": "evt-...",
  "type": "file.written",
  "api_version": "2025-01-01",
  "workspace_id": "ws-...",
  "namespace": "customer-acme",
  "created_at": "2025-04-30T12:00:00Z",
  "actor": { "key_id": "key-...", "key_name": "customer-acme" },
  "data": { ... }
}
Verify signatures
Every delivery includes X-Trove-Signature: t=<unix>,v1=<hmac_sha256_hex>. Algorithm: HMAC-SHA256(secret, "{t}.{raw_body}"). Tolerance: 5 minutes. Pass the raw bytes — JSON re-serialization invalidates the signature.
from trove_sdk import verify_webhook, WebhookSignatureError
# FastAPI
from fastapi import FastAPI, Request, HTTPException
import os
app = FastAPI()
@app.post("/trove-events")
async def receive(request: Request):
    body = await request.body()  # must be raw bytes — do not parse first
    try:
        event = verify_webhook(
            secret=os.environ["TROVE_WEBHOOK_SECRET"],
            body=body,
            signature_header=request.headers["x-trove-signature"],
        )
    except WebhookSignatureError:
        raise HTTPException(status_code=400, detail="Bad signature")
    if event.type == "file.written":
        print(f"File written in {event.namespace}: {event.data}")
    elif event.type == "exec.completed":
        print(f"Exec finished: {event.data}")
    return {"ok": True}
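Receivers without the SDK can do the same check in a few lines of standard library. A sketch of what verify_webhook computes, following the algorithm stated above (the SDK's actual internals may differ):

```python
import hashlib
import hmac
import time

def verify_signature(secret: str, raw_body: bytes, header: str, tolerance: int = 300) -> bool:
    """Recompute HMAC-SHA256(secret, "{t}.{raw_body}") and compare it against
    the v1 value from X-Trove-Signature, rejecting stale timestamps."""
    parts = dict(p.split("=", 1) for p in header.split(","))
    t, candidate = parts["t"], parts["v1"]
    if abs(time.time() - int(t)) > tolerance:
        return False  # outside the 5-minute tolerance window
    expected = hmac.new(secret.encode(), t.encode() + b"." + raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison — never use == on signatures
    return hmac.compare_digest(expected, candidate)
```

Note it hashes the raw bytes exactly as delivered; any JSON re-serialization before hashing breaks the signature, as the docs warn.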
Examples
Full working examples in the trove-sdk repo — each seeds a workspace and runs a tool-calling agent backed by Claude.
GitHub Actions
Pre-load files for an agent run by syncing a folder from CI. One PUT per file, scoped to a per-build namespace.
name: Sync build to TroveFiles
on: [push]
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - name: Upload to TroveFiles
        env:
          TROVE_KEY: ${{ secrets.TROVE_KEY }}
        run: |
          find ./build -type f | while read f; do
            curl --fail -X PUT \
              -H "Authorization: Bearer $TROVE_KEY" \
              -H "X-Namespace: ci-${{ github.run_id }}" \
              --data-binary "@$f" \
              "https://api.trovefiles.dev/files/${f#./build/}"
          done
Limits
| Limit | Value |
|---|---|
| Exec timeout | 30s |
| POST /write size | 10 MB |
| PUT /files size | 100 MB |
| Namespace pattern | [A-Za-z0-9_-]{1,128} |
| Path traversal (..) | rejected |
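A client-side pre-check against these limits avoids a round trip that is doomed to fail. choose_write_endpoint is an illustrative helper, not part of the SDK:

```python
WRITE_LIMIT = 10 * 1024 * 1024    # POST /write: UTF-8 text up to 10 MB
UPLOAD_LIMIT = 100 * 1024 * 1024  # PUT /files: raw bytes up to 100 MB

def choose_write_endpoint(content: bytes) -> str:
    """Route a payload to the endpoint whose limits it fits."""
    if len(content) > UPLOAD_LIMIT:
        raise ValueError("payload exceeds the 100 MB PUT /files limit")
    try:
        content.decode("utf-8")
    except UnicodeDecodeError:
        return "PUT /files"  # binary always goes through /files
    return "POST /write" if len(content) <= WRITE_LIMIT else "PUT /files"
```

Small UTF-8 text takes the /write fast path; anything binary or oversized falls through to PUT /files, mirroring the guidance in the /write and /files sections above.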
Get an API key
You've read the manual — time to ship something.