AI Agents
If you are operating Hooksbase from a coding agent, terminal automation, or LLM-driven workflow, start with the CLI. It is the most stable public surface for non-browser automation because it gives you predictable JSON output, profile management, and a narrower operational contract than scraping the dashboard.
Available through
Agents should prefer deterministic terminal and HTTP surfaces, then stop after re-reading the target state.
CLI
Preferred. Use profiles, environment variables, --json output, limit/cursor paging, and explicit mutations.
Public API
Available. Use raw HTTP only for documented public route families that the CLI does not wrap.
Dashboard
Not applicable. Agents should not scrape or automate the browser UI. Use dashboard docs only to understand human workflows.
TypeScript SDK
Available. Use the SDK when the agent is embedded in a Node.js or TypeScript runtime.
Why agents should start with the CLI
The CLI is the default control plane for external agents because it already solves three problems that agents otherwise have to reinvent:
- credential resolution through profiles, environment variables, or explicit flags
- stable machine-readable output through --json
- common operator actions such as webhook listing, delivery inspection, replay, DLQ export, schedule lifecycle, and ingest send
Use the TypeScript SDK when you are embedding Hooksbase inside your own Node.js or TypeScript runtime. Use the CLI when the agent is orchestrating shell commands, scripts, or local terminal workflows.
Auth model
- Preferred credential (project API key): give the agent a project admin or write key depending on the actions it needs. Admin keys unlock projects get and admin-only Public API routes.
- CLI profile: store durable credentials with hooksbase auth login when the host environment should remember them across runs.
- Ephemeral runs: for one-shot automation, pass --base-url and --api-key directly, or provide HOOKSBASE_BASE_URL, HOOKSBASE_API_KEY, and optionally HOOKSBASE_PROFILE.
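For the ephemeral pattern, a minimal sketch: the base URL and key below are placeholders, not real values, and nothing is persisted to a profile.

```shell
# One-shot run: resolve credentials from the environment, persist nothing.
# Both values are placeholders -- substitute your real host and project API key.
export HOOKSBASE_BASE_URL="https://your-hooksbase-host.example"
export HOOKSBASE_API_KEY="<project-api-key>"

# then, for example: hooksbase auth profiles --validate --json
```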
Recommended agent loop
For most agent workflows, use this sequence:
- Validate credentials and discover the current profile state.
- Read the current project and webhook inventory.
- Inspect deliveries, replay jobs, or DLQ rows one page at a time.
- Execute one explicit mutation such as replay, re-drive, schedule create, or ingest send.
- Re-read the mutated surface and stop.
That pattern keeps the agent grounded in server truth and avoids wide, unbounded crawls.
Minimal agent-safe loop
hooksbase auth profiles --validate --json
hooksbase projects get --json
hooksbase webhooks list --limit 20 --json
hooksbase deliveries list --limit 20 --json
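Every step in that loop returns JSON, and the agent should parse it with a strict parser rather than scraping table output. A defensive sketch, with the caveat that the data and next_cursor field names below are assumptions about the JSON shape, so inspect real output before relying on them:

```shell
# Stand-in for: page=$(hooksbase webhooks list --limit 20 --json)
# The field names below are hypothetical -- verify them against real output.
page='{"data":[{"id":"wh_123"},{"id":"wh_456"}],"next_cursor":null}'

# Extract the webhook ids with a strict JSON parser instead of grep/awk
printf '%s' "$page" | python3 -c 'import json,sys
for row in json.load(sys.stdin)["data"]:
    print(row["id"])'
```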
Core command patterns
Use these patterns as the defaults for terminal agents:
Inventory and inspection
hooksbase projects get --json
hooksbase templates list --json
hooksbase webhooks get wh_123 --json
hooksbase deliveries get del_123 --json
hooksbase dlq get dlq_123 --json
hooksbase usage show --from 1740000000000 --to 1740086400000 --json
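The list commands above page with --limit and --cursor. One way to walk a collection one page at a time is sketched below; the next_cursor field name is a guess about the --json shape, so check real output first.

```shell
# Hypothetical paging loop over deliveries. Assumes each --json page exposes
# a next-cursor field; the name "next_cursor" is a guess -- check real output.
cursor=""
while :; do
  if [ -z "$cursor" ]; then
    page=$(hooksbase deliveries list --limit 20 --json)
  else
    page=$(hooksbase deliveries list --limit 20 --cursor "$cursor" --json)
  fi
  printf '%s\n' "$page"            # hand each page to the caller
  cursor=$(printf '%s' "$page" | python3 -c 'import json,sys
print(json.load(sys.stdin).get("next_cursor") or "")')
  [ -z "$cursor" ] && break        # stop when the server stops paging
done
```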
Explicit mutations
hooksbase deliveries replay del_123 --json
hooksbase dlq redrive dlq_123 --json
hooksbase schedules create wh_123 --file ./schedule.json --json
hooksbase webhooks rotate-signing-secret wh_123 --json
hooksbase ingest send --ingest-url https://hooks.hooksbase.com/v1/ingest/hook_123 --ingest-secret whsec_... --file ./payload.json --json
hooksbase ingest send --webhook-name orders --ingest-secret whsec_... --file ./payload.json --json
Operational guidance:
- prefer --json for every agent-invoked command
- use --limit and --cursor instead of assuming full collection reads
- use the CLI for explicit remote state changes, not inferred local bookkeeping
- use idempotency keys on ingest sends whenever the agent may retry
- prefer --webhook-name when the agent already has project API key access and should resolve the current ingest URL automatically
- stop after one mutation unless the user explicitly asks for a broader batch
- re-read the same webhook, delivery, schedule, DLQ entry, or operation job before deciding whether the mutation worked
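Put together, a single mutation under that guidance looks like the sketch below, using the same placeholder IDs as the examples above.

```shell
# One explicit mutation...
hooksbase deliveries replay del_123 --json

# ...then re-read the same resource to confirm against server truth, and stop.
hooksbase deliveries get del_123 --json
```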
When to fall back to raw HTTP
The CLI does not currently wrap every customer-facing public route.
Use raw HTTP when the agent needs:
- HTTP pack catalog reads
- destinations and routing
- payload transforms
- metrics or backlog
- email allowlist management
- event drain management
- operator alerting
- file retrieval helpers
- bulk replay, bulk DLQ recovery, or bulk-operation polling
Use the dashboard when the agent request touches browser-only product workflows:
- project creation or deletion
- team membership and invitations
- billing checkout or provider management
- onboarding wizard state and validation
That separation is intentional. Raw HTTP and the CLI are public automation surfaces; the dashboard is a product UI for signed-in humans.
Common mistakes
- Scraping the dashboard instead of using the CLI or raw HTTP.
- Running list commands without --json and then trying to parse table output.
- Giving the agent a write key and expecting admin-only project reads to work.
- Letting the agent perform broad destructive actions without first re-reading the target resource.
- Falling back to undocumented browser routes for dashboard-only workflows.