
Enterprise — zero to deployed, the Declaragent way

The point of Declaragent isn't "here's another YAML file." It's that you describe what you want in English and Declaragent writes the YAML. The REPL you talk to is itself an agent — same runtime, same tools, same audit chain, different system prompt — and its job is to scaffold, configure, and operate your fleet.

This walkthrough is validated line-by-line against shipped code. Every command should work as written on @declaragent/cli@0.7.6. When a feature is narrower than you'd expect (the builder writes stdio-only MCP servers, channels are user-global, not per-agent), we say so.

Time: ~60 min end-to-end. ~30 if you already have Docker and an Anthropic API key.

What you'll build

A two-agent orders fleet:

  • concierge — webhook-triggered RPC client that delegates to peers
  • pr-reviewer — RPC server with a typed review-pr capability

Plus, along the way: a Vault secret provider, one stdio MCP server, a user-global Slack channel, per-tool rate limits, controlPlane.auth, hash-chained audit shipped to Elastic, the /fleet graph topology render, and a regression fixture of the builder conversation itself that CI can replay.

Prerequisites

Tool | Why
--- | ---
Bun ≥ 1.1 | Runtime
Docker | Vault + Elastic + Grafana
Anthropic API key | The LLM provider
A TTY | The REPL is an Ink TUI; CI shells fall back to flag-driven init

Step 1 — Install and paste a key (~1 min)

bun add -g @declaragent/cli@latest
declaragent auth login anthropic

auth login anthropic runs a paste-an-API-key TUI (auth-login.tsx:51). Anthropic doesn't expose OAuth for API access, so the CLI prompts for a key you generated at console.anthropic.com/settings/keys, masks it with *, and persists to ~/.declaragent/config.json.

Paste your API key for Anthropic — Claude (native API):
Get one at https://console.anthropic.com/settings/keys
› ****************************************************
✓ saved anthropic sk-ant-••••••••aB3X
→ /Users/you/.declaragent/config.json
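
The persisted file is not documented verbatim here; a plausible sketch of its shape (field names are assumptions, not the shipped schema):

```json
{
  "providers": {
    "anthropic": {
      "apiKey": "sk-ant-…",
      "addedAt": "2025-01-15T10:42:00Z"
    }
  }
}
```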

OpenRouter (auth login openrouter) is the one preset with real browser PKCE, if you prefer that. Then:

declaragent auth status

Step 2 — Meet the builder (~1 min)

mkdir orders-workspace && cd orders-workspace
declaragent

The REPL boots with a 4-line banner (banner.tsx:26):

╭───╮ Declaragent v0.7.6
│ d │ anthropic/claude-opus-4-6 · default
╰───╯ ~/orders-workspace
an agent that builds agents — same @declaragent/core

>

That > prompt is an agent. It has Read, Write, Edit, Bash, Glob, Grep, and the 15 Declara* builder tools — DeclaraProposeChange, DeclaraApplyChange, DeclaraAddSkill, DeclaraAddSource, DeclaraAddChannel, DeclaraAddMCP, DeclaraAddPlugin, DeclaraAddPeer, DeclaraAddSecret, DeclaraFleetAdd, DeclaraAuthPlaybook, DeclaraAuditVerify, DeclaraEventsTail, DeclaraDlqShow, DeclaraFleetStatus (builder/index.ts). It writes no files until you /yes a proposal.

Quick orientation:

> /help # every slash command
> /rules # permission mode + rules
> /scope # directory the builder may write into
> /cost # token ledger (in / out / cache / $)

Step 3 — Scaffold the fleet (~2 min)

fleet-starter is a fleet shape (two agents under a shared fleet.yaml), not a single-agent init template — you compose it from init --fleet plus two fleet add calls. Let the builder do the typing:

> Scaffold a two-agent fleet called "orders": one rpc-client as the
concierge, one rpc-server as the pr-reviewer.

Builder:
Proposal pr_01 — Scaffold the orders fleet
1. [runCommand] declaragent init --fleet orders --out .
preview:
./fleet.yaml (new, empty peers block)
./.env.example
2. [runCommand] declaragent fleet add --template rpc-client --id concierge
preview:
./agents/concierge/{agent.yaml, event-sources.yaml, rpc-peers.yaml, skills/}
3. [runCommand] declaragent fleet add --template rpc-server --id pr-reviewer
preview:
./agents/pr-reviewer/{agent.yaml, capabilities.yaml, event-sources.yaml, skills/}

Type /yes to apply, /no to cancel, or /edit <n> <replacement> to revise step n.

> /yes

What just happened:

  • DeclaraProposeChange registered proposal pr_01 and blocked the model's turn (propose-change.ts:42).
  • The REPL listener rendered the proposal (proposals.ts:360).
  • You typed /yes — the registry resolves the proposal to { confirmed: true }.
  • The next model turn calls DeclaraApplyChange, which runs the three runCommand steps in order.

Now:

> /diff

/diff runs git diff scoped to the builder's current scope root. The builder did not commit anything — it wrote files. Any apply can be reversed with /undo, which does a scoped git checkout to the HEAD captured at apply time (builder/undo.ts).
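
The mechanics are plain git. A self-contained sketch of the scoped-checkout pattern in a throwaway repo (paths are illustrative, not the builder's actual implementation):

```shell
# Set up a disposable repo standing in for the scope root.
cd "$(mktemp -d)" && git init -q && git config user.email demo@example.com && git config user.name demo
mkdir -p agents && echo "v1" > agents/agent.yaml
git add -A && git commit -qm baseline
APPLY_HEAD=$(git rev-parse HEAD)       # HEAD captured at apply time
echo "v2" > agents/agent.yaml          # the apply mutates files in scope
git checkout "$APPLY_HEAD" -- agents/  # scoped undo: only the scope root reverts
cat agents/agent.yaml                  # prints v1
```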

Step 4 — Revise in flight with /edit (~1 min)

/edit <n> <replacement> overwrites the description of step n in the active proposal (proposals.ts:51). It does not add new steps and does not change the step's kind or payload — it lets you clarify what you're confirming before you /yes.

> Add a skill to the concierge that summarises customer tickets.

Builder:
Proposal pr_02 — Ticket-summary skill on concierge
1. [addSkill] agents/concierge/skills/ticket-summary.md
preview:
---
name: summarise-ticket
triggers: [support_ticket]
---
Fetch the ticket, extract customer + severity + repro, reply ≤ 6 lines.

Type /yes / /no / /edit <n> <replacement>.

> /edit 1 Also log the ticket-id into a scratch skill so the pr-reviewer can cite it

Builder:
Updated step 1 description. Re-rendering:
1. [addSkill] Also log the ticket-id into a scratch skill so the pr-reviewer can cite it

> /yes

One step gets applied, with the refined description in the audit chain. If you wanted a different second step (say, "add a source too"), you'd tell the builder that in plain English and it would propose pr_03.

Step 5 — Wire a Vault secret provider (~3 min)

DeclaraAddSecret reserves an env-var slot without ever seeing the value (add-secret.ts:1-24). For a non-env provider like Vault, the builder verifies secrets.yaml already declares that provider — if not, it tells you to declare it first. So there are two sub-steps: scaffold secrets.yaml, then add the ref.

Bring Vault up in another terminal:

docker run --rm -d --name orders-vault -p 8200:8200 \
-e VAULT_DEV_ROOT_TOKEN_ID=dev-root-token hashicorp/vault:1.18
export VAULT_ADDR=http://localhost:8200 VAULT_TOKEN=dev-root-token

Then, in the REPL:

> Wire Vault as the default secret provider. Point the concierge's
anthropic-key at a Vault ref, and seed Vault with my current
ANTHROPIC_API_KEY.

Builder:
Proposal pr_03 — Vault provider + seeded ref
1. [editFile] secrets.yaml (new, fleet root)
preview:
version: 1
default: vault-dev
providers:
vault-dev:
type: vault
address: ${env:VAULT_ADDR}
auth: { method: token, token: ${env:VAULT_TOKEN} }
defaultTtlMs: 300000
2. [runCommand] Seed Vault with ANTHROPIC_API_KEY
preview:
vault kv put secret/orders/anthropic-key value="$ANTHROPIC_API_KEY"
3. [addSecret] anthropic-key → vault-dev:secret/data/orders/anthropic-key#value
preview:
.env.example: + # vault-dev:secret/data/orders/anthropic-key#value (no local env var needed)

> /yes

Five things to know about DeclaraAddSecret:

  1. It never receives the secret value — the secret-guard redacts any pasted credential before the model reads it (secret-guard.ts). BuilderSecretLeakError is raised if one slips through a Write.
  2. For env: refs it derives a stable env-var name (e.g. DECLARA_ORDERS_ANTHROPIC_KEY) and appends it to .env.example. For non-env refs (like vault:) it skips the env-var synthesis — the resolver pulls at runtime.
  3. It does not rewrite agent.yaml. You reference the secret from your agent config via ${vault:secret/data/orders/anthropic-key#value} — either hand-authored or proposed as an editFile step.
  4. Shipped providers: vault, aws-sm, gcp-sm, k8s, env (secrets/config-loader.ts:95). secrets.yaml declares which providers exist; individual refs resolve inline at runtime.
  5. Rotation is a first-class verb:
    declaragent secrets rotate 'vault:secret/data/orders/anthropic-key#value' \
    --reason "90-day schedule"
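
Refs share a provider:path#key shape. A minimal parser sketch to illustrate that shape (the shipped resolver does more, and these names are assumptions):

```typescript
type SecretRef = { provider: string; path: string; key?: string };

// Parse "vault:secret/data/orders/anthropic-key#value" into its parts.
function parseSecretRef(ref: string): SecretRef {
  const colon = ref.indexOf(":");
  if (colon < 0) throw new Error(`not a secret ref: ${ref}`);
  const provider = ref.slice(0, colon);
  const rest = ref.slice(colon + 1);
  const hash = rest.indexOf("#");
  return hash < 0
    ? { provider, path: rest }
    : { provider, path: rest.slice(0, hash), key: rest.slice(hash + 1) };
}

const ref = parseSecretRef("vault:secret/data/orders/anthropic-key#value");
console.log(ref.provider, ref.path, ref.key);
// vault secret/data/orders/anthropic-key value
```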

Step 6 — Add an MCP server (~2 min)

The builder tool DeclaraAddMCP supports stdio transport only today (add-mcp.ts:59). The CLI verb declaragent mcp add supports stdio + http + sse + http-streamable with OAuth PKCE (mcp-cli.ts) — use that directly when you need a remote server. All MCP servers live in user-global ~/.declaragent/mcp-servers.json (add-mcp.ts:1-20), shared across every agent on the machine.

A concrete stdio example:

> Register a filesystem MCP server rooted at the fleet directory.

Builder:
Proposal pr_04 — Register filesystem MCP server (stdio)
1. [addMCP] name: fs-orders, transport: stdio
preview:
command: npx
args: ['-y', '@modelcontextprotocol/server-filesystem', '/Users/you/orders-workspace']
writes: ~/.declaragent/mcp-servers.json

> /yes

Tools from this server appear as mcp__fs-orders__<tool> after the REPL restarts. For an HTTP MCP server with OAuth (e.g. the GitHub MCP), drop to the shell:

declaragent mcp add gh --transport http --url https://mcp.github.com
declaragent mcp login gh # browser PKCE flow

Step 7 — Install a plugin (~1 min)

Same shape — user-global, stored in ~/.declaragent/plugins.json, and DeclaraAddPlugin takes a local path, not an npm package name (add-plugin.ts:16). The builder reads the plugin's plugin.json manifest, records the declared permissions as consentedPermissions (your /yes is the consent), and registers it.

If you have a plugin checked out locally:

> Install the plugin at ../declaragent-plugin-github.

Builder:
Proposal pr_05 — Install plugin from path
1. [addPlugin] path: ../declaragent-plugin-github
preview:
plugin: @acme/declaragent-plugin-github
permissions: [repo:read, pr:write]
writes: ~/.declaragent/plugins.json

> /yes

The conversational flow tops out here today. Installing a published npm plugin is still a bun add @declaragent/plugin-foo you run yourself; the builder doesn't shell out to npm.

Step 8 — Add a channel (Slack) (~2 min)

DeclaraAddChannel supports slack, telegram, discord, whatsapp (builder/types.ts:223) and writes to user-global ~/.declaragent/channels.json (JSON, not YAML) (add-channel.ts:1-20). The "user-global" part is worth internalising — it means one Slack config serves every agent on your machine.

The DeclaraAuthPlaybook tool walks you through the Slack OAuth + Socket Mode setup interactively:

> Wire up Slack for the concierge.

Builder:
I'll run the Slack auth playbook first, then register the channel.

Proposal pr_06 — Slack OAuth playbook + channel registration
1. [runCommand] declaragent auth playbook slack
preview:
Opens a browser to create a Slack app, guides you through
Socket Mode setup, and captures SLACK_BOT_TOKEN + SLACK_APP_TOKEN
into ~/.declaragent/.env (never into the transcript).
2. [addChannel] slack
preview:
id: orders-slack
type: slack
botToken: ${env:SLACK_BOT_TOKEN}
appToken: ${env:SLACK_APP_TOKEN}
writes: ~/.declaragent/channels.json

> /yes

Step 9 — Inspect what you just built (~1 min)

Before running anything:

> /fleet graph

Emits Mermaid (fleet-graph-cli.ts:177 — format is literally graph LR\n from-->|label| to):
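
For the fleet built above, the output is roughly (edge label depends on your peer wiring, so treat this as illustrative):

```mermaid
graph LR
concierge-->|review-pr| pr-reviewer
```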

Swap the format:

> /fleet graph dot
> /fleet graph json

Shell verbs:

declaragent fleet validate # schema + peer-graph dry-run
declaragent fleet capabilities # aggregated capability table
declaragent capabilities gen --peer pr-reviewer --out generated/
# ^ emits generated/pr-reviewer.ts with typed Request/Response — check into the caller's repo
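
The generated file isn't reproduced in this guide, but the idea is one typed request/response pair per capability. A hypothetical sketch of what generated/pr-reviewer.ts might look like (field names are illustrative, not the generator's output):

```typescript
// Hypothetical sketch of a generated capability client type pair.
export interface ReviewPrRequest {
  url: string;       // PR to review
  focus?: string[];  // optional areas to emphasise
}

export interface ReviewPrResponse {
  verdict: "approve" | "request-changes" | "comment";
  summary: string;
}

// Callers get compile-time safety on both sides of the RPC:
const req: ReviewPrRequest = { url: "https://github.com/declaragent/declaragent/pull/61" };
console.log(req.url.includes("/pull/")); // true
```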

Step 10 — Run the fleet (~1 min)

The builder cannot deploy. DEFAULT_DEPLOY_DENY_RULES (builder/index.ts:156) blocks Bash:declaragent deploy* and Bash:declaragent fleet deploy*. It can start the local foreground runner:

> Start the fleet.

Builder:
Proposal pr_07 — Run fleet locally (foreground)
1. [runCommand] declaragent fleet run
preview:
binds webhook port (defaults configured per agent's event-sources.yaml)
binds :9464 (metrics) by default; override with DECLARAGENT_METRICS_PORT

> /yes

fleet run is foreground. There is no --detach flag — only --agent <id> (repeatable) to subset which agents boot (index.tsx:904). For a background daemon, use the single-agent declaragent up -d per agent, or run the fleet under a supervisor (systemd / tmux / Docker).

From a second shell, drive it:

declaragent fleet status
curl -sX POST http://localhost:8787/webhook/concierge \
-H 'Content-Type: application/json' \
-d '{"user":"demo","text":"review https://github.com/declaragent/declaragent/pull/61"}'
declaragent events list --last 20

Or from inside the REPL (using the builder's observability tools):

> Tail the last 20 events.

Builder: [calls DeclaraEventsTail with {last: 20}]
(rendered as a tabular summary)

Step 11 — Safety you run under

  • Scope root sandbox — Write / Edit / Bash paths outside the scope root raise BuilderScopeError. /scope prints the root. A tool can opt in via confirmOutsideScope: true — surfaces in the proposal preview.
  • Plan mode — /mode plan registers proposals but DeclaraApplyChange short-circuits. Nothing gets written. Useful for "show me what you would do."
  • Undo — /undo reverts the last apply via scoped git checkout to the HEAD captured before writing (builder/undo.ts). Requires git.
  • History — /history 10 renders the last 10 builder actions from the hash-chained audit — same rows exported to your SIEM.
  • Deploy deny — only a human can run declaragent deploy or declaragent fleet deploy. Enforced at the permission gate, not a config toggle.
  • Secret redaction — secret-guard.ts patterns scrub API keys, tokens, and private keys from text before the LLM sees it. The model literally cannot leak what it cannot read.
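
The redaction pass is simple enough to sketch. These patterns are illustrative, not the shipped set in secret-guard.ts:

```typescript
// Illustrative redaction pass, not the shipped pattern set.
const SECRET_PATTERNS: RegExp[] = [
  /sk-ant-[A-Za-z0-9_-]{8,}/g,    // Anthropic-style API keys
  /xox[bap]-[A-Za-z0-9-]{10,}/g,  // Slack tokens
  /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
];

function redact(text: string): string {
  return SECRET_PATTERNS.reduce((t, re) => t.replace(re, "[REDACTED]"), text);
}

console.log(redact("key is sk-ant-abcdefghijklmnop, ok?"));
// key is [REDACTED], ok?
```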

Step 12 — Enterprise hardening (~5 min)

The SIEM, control-plane-auth, and rate-limit blocks go into each agent.yaml under keys the zod schemas already recognise. Ask the builder to do it:

> Add to both agents:
1. audit.export → Elastic at http://localhost:9200, index declaragent-audit
2. tools.rateLimit on concierge: Bash rps=2 burst=4, RequestAgent rps=10 burst=20
3. controlPlane.auth disabled for the local demo, but bound to 127.0.0.1:9464
4. mcp.supervised: all (already the default; be explicit)

Then run `fleet audit-rpc --strict` and apply any suggested auth blocks.

Builder:
Proposal pr_08 — Enterprise hardening (5 items)
1. [editFile] agents/*/agent.yaml: audit.export (kind=elastic, basic auth from env)
2. [editFile] agents/concierge/agent.yaml: tools.rateLimit
3. [editFile] agents/*/agent.yaml: controlPlane.auth.enabled=false, bind=127.0.0.1:9464
4. [editFile] agents/*/agent.yaml: mcp.supervised: all
5. [runCommand] declaragent fleet audit-rpc --strict --json

> /yes

Step 5 exits non-zero if any peer lacks an auth: block (fleet-audit-rpc-cli.ts). If gaps exist, the builder follows up with --suggest-enable diffs (peer auth providers are oidc and oauth2-client — peers-loader.ts:140). Go deeper: SIEM audit export, zero-trust RPC migration.
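
The rps/burst semantics in item 2 map naturally onto a token bucket. A minimal sketch of the behaviour being requested (illustrative only, not the shipped limiter):

```typescript
// Token bucket: refill at `rps` tokens/sec, cap at `burst`.
class TokenBucket {
  private tokens: number;
  private last: number;
  constructor(private rps: number, private burst: number, now = 0) {
    this.tokens = burst;
    this.last = now;
  }
  tryAcquire(now: number): boolean {
    this.tokens = Math.min(this.burst, this.tokens + (now - this.last) * this.rps);
    this.last = now;
    if (this.tokens >= 1) { this.tokens -= 1; return true; }
    return false;
  }
}

// Bash at rps=2 burst=4: four immediate calls pass, the fifth is throttled.
const bash = new TokenBucket(2, 4);
console.log([0, 0, 0, 0, 0].map(() => bash.tryAcquire(0)));
// [ true, true, true, true, false ]
```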

Step 13 — Record the conversation as a CI fixture (~1 min)

BUILDER_RECORD=1 declaragent

Every user turn, every assistant turn, every tool call, and every tool result is appended as JSONL to <fixturesDir>/recorded-<iso-timestamp>.jsonl with fsync (recording-provider.ts:397). The default fixturesDir is caller-controlled; set BUILDER_RECORD_OUT=./tests/fixtures/scaffold.jsonl to pin it.

Replay in CI: packages/cli/src/builder/__tests__/ contains fixture-replay tests that walk the JSONL, assert the assistant's tool calls match, and fail the build if the model's step choices drift. No live LLM call required — the fixture captures the full decision trace. This is how the repo's 34 shipped builder-backlog items stay green.
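
The replay idea can be sketched in a few lines. This mirrors the mechanism (walk the JSONL, extract the tool-call sequence, compare against a baseline), not the shipped harness; the row shape is an assumption:

```typescript
// Each JSONL line is one recorded turn; tool calls carry a name + input.
type Turn =
  | { role: "user" | "assistant"; text: string }
  | { role: "tool_call"; name: string; input: unknown };

function toolCallNames(jsonl: string): string[] {
  return jsonl
    .split("\n")
    .filter((l) => l.trim().length > 0)
    .map((l) => JSON.parse(l) as Turn)
    .filter((t): t is Extract<Turn, { role: "tool_call" }> => t.role === "tool_call")
    .map((t) => t.name);
}

const fixture = [
  JSON.stringify({ role: "user", text: "Scaffold the orders fleet" }),
  JSON.stringify({ role: "tool_call", name: "DeclaraProposeChange", input: {} }),
  JSON.stringify({ role: "tool_call", name: "DeclaraApplyChange", input: {} }),
].join("\n");

// CI compares this sequence against the recorded baseline and fails on drift.
console.log(toolCallNames(fixture));
// [ 'DeclaraProposeChange', 'DeclaraApplyChange' ]
```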

Step 14 — Observe (~3 min)

Grafana dashboard

Importable JSON + prewired alertmanager rules. Full recipe: Grafana dashboard import.

# from a declaragent repo clone
curl -X POST http://admin:admin@localhost:3000/api/dashboards/db \
-H 'Content-Type: application/json' \
-d "$(jq '{dashboard: ., overwrite: true,
inputs:[{name:"DS_PROMETHEUS",type:"datasource",pluginId:"prometheus",value:"Prometheus"}]
}' docs/grafana/declaragent-fleet-dashboard.json)"

Shell verbs

declaragent events list --last 50 --correlation <id>
declaragent audit verify --json # hash-chain integrity
declaragent fleet status --history # deploy / rotation / rollback timeline
declaragent dlq list --kind dispatch
declaragent secrets list --provider vault-dev --json
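
Hash-chaining means each audit row commits to its predecessor, so tampering with any row breaks every row after it. A sketch of the invariant that audit verify checks (field names are assumptions, not the shipped row format):

```typescript
import { createHash } from "node:crypto";

type AuditRow = { action: string; prevHash: string; hash: string };

const rowHash = (action: string, prevHash: string): string =>
  createHash("sha256").update(prevHash + action).digest("hex");

function appendRow(chain: AuditRow[], action: string): AuditRow[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  return [...chain, { action, prevHash, hash: rowHash(action, prevHash) }];
}

// Verify: every row's hash recomputes and links to its predecessor.
function verifyChain(chain: AuditRow[]): boolean {
  return chain.every((row, i) => {
    const expectPrev = i === 0 ? "genesis" : chain[i - 1].hash;
    return row.prevHash === expectPrev && row.hash === rowHash(row.action, row.prevHash);
  });
}

let chain: AuditRow[] = [];
chain = appendRow(chain, "apply pr_01");
chain = appendRow(chain, "apply pr_02");
console.log(verifyChain(chain)); // true
chain[0].action = "tampered";
console.log(verifyChain(chain)); // false
```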

REPL slash

> /cost # $ spent this session
> /history 20 # last 20 builder actions from audit

Step 15 — Deploy (~5 min)

Deploy deny rules mean you do this part. Two paths, both auto-generated.

A. declaragent deploy gcp-cloud-run (one agent at a time)

cd agents/concierge
declaragent deploy gcp-cloud-run --project my-gcp --region us-central1

Generates .declaragent/deploy/:

  • Dockerfile — multi-stage Bun build (EXPOSE 8787 9464 — deploy-dockerfile.ts:15)
  • service.yaml — Cloud Run spec with secret refs rewritten as Secret Manager bindings; containerPort: 8787 for webhook, 9464 for metrics (deploy-service-yaml.ts:132)
  • .dockerignore + README.md with the three gcloud commands

You run gcloud builds submit and gcloud run services replace yourself — the CLI deliberately stops short. Recipe: Deploy to GCP Cloud Run.

B. declaragent fleet render (any Kubernetes)

declaragent fleet render --target k8s --format helm --out ./deploy/chart
# or --format kustomize

Deterministic, offline, snapshot-testable. Secrets are stubbed; you wire External Secrets Operator for real creds. Full recipe: GitOps with ArgoCD or Flux.

Either way, after the real deploy, production rollouts use:

declaragent fleet deploy --canary --canary-wait-ms 120000

First host goes, 120-second soak, then the rest. Rollback on probe failure.

Honest gaps

Things that are not conversational today — you'll drop to the CLI or hand-edited YAML:

  • Full secrets.yaml scaffold. DeclaraAddSecret doesn't synthesise the providers block; the editFile step in Step 5 above does.
  • HTTP MCP servers from the REPL. DeclaraAddMCP is stdio-only; declaragent mcp add --transport http + mcp login run from the shell.
  • npm-published plugins. DeclaraAddPlugin takes local paths; bun add is yours to run.
  • Deploy. By design. DEFAULT_DEPLOY_DENY_RULES is a feature — the builder cannot ship code to prod.
  • Canary traffic weighting. fleet deploy --canary is first-host-then-rest, not 5% → 50% → 100%. Layer Argo Rollouts for traffic shifting.
  • Human-in-the-loop tool approval. No needsApproval: true flag on tools yet. Wire through an MCP server that calls your approval system.

Cleanup

# In the REPL
> /exit

# In the shell
docker rm -f orders-vault
rm -rf orders-workspace

Where to go next

Next | What it adds
--- | ---
Build an agent through conversation | Single-agent deep dive on the builder toolkit
GitOps with ArgoCD or Flux | Full Argo / Flux Application + ExternalSecrets wiring
SIEM audit export | Splunk / Elastic / Datadog with back-pressure
Zero-trust RPC migration | Operational walkthrough for the 0.8.0 flip
Cross-host fleet on Kafka | fleet.yaml#hosts[] fan-out
Grafana dashboard import | Dashboard JSON + alertmanager rules

And the design docs that drive the builder: