Deliveries And Replay
Deliveries are the durable record of what Hooksbase accepted, where it routed the event, which attempts were made, and whether the work ultimately succeeded, is still retrying, or moved into the dead-letter path.
Auth model

| Name | Type | Description |
| --- | --- | --- |
| Public API | project API key | Delivery history, replay, replay jobs, DLQ inspection, and usage are project-authenticated Public API routes. |
| Dashboard | session auth | The dashboard adds body previews, per-project delivery summaries, and live-updated delivery and DLQ views for operators. |
| CLI / SDK | | Both first-party clients cover the core delivery, replay, and DLQ paths. |
Delivery model
A delivery records the accepted source payload plus the persisted destination snapshot that Hooksbase will use for retries and replay. That snapshot is what makes retries and replays deterministic: even if you change the webhook's destination, routing, or transform after the fact, existing deliveries keep using the config that was in effect when they were accepted.
Delivery list filters can narrow by:
- webhook
- status
- provider, provider source ID, provider event type, and provider verification state
- source kind (`original` or `replay`)
- source delivery
- error code
- response status
- idempotency key
- time range
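Assembling a filtered list call might look like this. The query parameter names here are assumptions inferred from the filter list above, not confirmed API parameter names:

```python
from urllib.parse import urlencode

def build_deliveries_url(base: str, filters: dict) -> str:
    # Drop unset filters so only explicit ones reach the query string.
    params = {k: v for k, v in filters.items() if v is not None}
    return f"{base}/v1/deliveries?{urlencode(params)}"

url = build_deliveries_url("https://api.hooksbase.com", {
    "webhookId": "wh_123",   # assumed parameter name
    "status": "failed",
    "sourceKind": "replay",  # assumed parameter name
    "errorCode": None,       # unset filters are omitted
})
```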
Use `GET /v1/deliveries/{id}` when you need the canonical delivery plus all recorded attempts.
The dashboard layers richer reads on top of the Public API:
- delivery body previews prefer the dispatched payload when a transform ran
- embedded `fileRef.url` values are refreshed so stale signed file URLs keep downloading in the UI
- delivery summaries give a time-bucketed operational view without replacing the canonical delivery rows
Replay and DLQ lifecycle
1. Accepted delivery: Hooksbase stores the source payload, provider metadata, resolved destination snapshot, and dispatch snapshot when a transform runs.
2. Attempts and retries: attempts use the persisted dispatch payload and destination snapshot until the delivery succeeds or reaches a terminal failure.
3. DLQ entry: terminal failures create a dead-letter entry linked to the failed delivery and its retained source payload.
4. Replay or re-drive: replay creates a new delivery from the retained payload artifacts. DLQ re-drive is replay from the dead-letter entry.
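The lifecycle above can be reduced to a toy state machine. Retry policy, states, and the DLQ entry shape are all simplified stand-ins:

```python
def run_delivery(attempt_outcomes: list[bool], max_attempts: int = 3) -> dict:
    """Walk one delivery through its attempts.

    attempt_outcomes simulates per-attempt success/failure; a real
    system would be driven by destination responses and a backoff
    schedule instead.
    """
    for outcome in attempt_outcomes[:max_attempts]:
        if outcome:
            return {"status": "succeeded", "dlq_entry": None}
    # Terminal failure: create a dead-letter entry linked to the delivery.
    return {"status": "dead_lettered",
            "dlq_entry": {"reason": "max attempts reached"}}

assert run_delivery([False, True])["status"] == "succeeded"
assert run_delivery([False, False, False])["dlq_entry"] is not None
```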
Replay and replay jobs
Replay creates a new delivery linked back to the original via `replay_of_delivery_id`. Hooksbase reuses the retained payload artifacts instead of recomputing them from today's webhook config.
Operationally that means:
- replay uses the original source payload, saved destination snapshot, provider metadata, and saved transformed dispatch snapshot when one exists
- replay gets a new delivery ID, replay job ID, and sequence number
- replay fails with `409` when the required retained payload has expired
- replay volume and backlog still enforce quota admission
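Those replay semantics can be sketched locally. The record shapes, store, and error type are illustrative; only the `replay_of_delivery_id` link and the expired-payload rejection come from the behavior described above:

```python
import itertools

_seq = itertools.count(1)

class PayloadExpired(Exception):
    """Stands in for the 409 the API returns when the retained payload is gone."""

def replay(original: dict, payload_store: dict) -> dict:
    if original["payload_key"] not in payload_store:
        raise PayloadExpired("retained payload has expired")
    return {
        "id": f"del_replay_{next(_seq)}",         # new delivery ID
        "replay_of_delivery_id": original["id"],  # link back to the original
        "payload": payload_store[original["payload_key"]],
        "source_kind": "replay",
    }

store = {"p1": {"order_id": 42}}
new = replay({"id": "del_1", "payload_key": "p1"}, store)
assert new["replay_of_delivery_id"] == "del_1"
assert new["source_kind"] == "replay"
```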
Bulk replay and bulk DLQ re-drive require Starter+ and create async bulk-operation jobs over a frozen snapshot of matching rows. Poll `GET /v1/bulk-operations/{id}` for counts and per-item status. Idempotency keys are supported when creating the job, so repeated submissions with the same key return the same operation instead of duplicating work.
Create a bulk replay job
```shell
curl https://api.hooksbase.com/v1/deliveries/bulk-replay \
  -H "Authorization: Bearer swk_..." \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: replay-failed-orders-2026-04-23" \
  -d '{
    "filters": {
      "webhookId": "wh_123",
      "status": "failed",
      "from": 1740000000000,
      "to": 1740086400000
    },
    "maxItems": 100
  }'
```
Poll the bulk operation
```shell
curl https://api.hooksbase.com/v1/bulk-operations/bulk_123 \
  -H "Authorization: Bearer swk_..."
```
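A client-side polling loop over that endpoint might look like the sketch below. The `fetch` callable stands in for the authenticated GET request, and the terminal status names are assumptions:

```python
import time

def poll_bulk_operation(fetch, op_id: str,
                        interval: float = 2.0, timeout: float = 60.0) -> dict:
    """Poll until the bulk operation reaches a terminal status.

    fetch(op_id) is assumed to return a dict with at least a "status"
    field; "completed" and "failed" are assumed terminal states.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        op = fetch(op_id)
        if op["status"] in ("completed", "failed"):
            return op
        time.sleep(interval)
    raise TimeoutError(f"bulk operation {op_id} did not finish in {timeout}s")

# Simulated responses: two in-progress polls, then completion.
responses = iter([{"status": "running"}, {"status": "running"},
                  {"status": "completed", "succeeded": 98, "failed": 2}])
result = poll_bulk_operation(lambda _id: next(responses), "bulk_123", interval=0.0)
assert result["succeeded"] == 98
```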
Body previews and files
Files appear most often on deliveries created by form ingest and email ingest. The important behavior is:
- payload JSON can contain `fileRef: { key, url }` pointers
- public signed URLs are temporary; the authenticated file route remains the durable access path
- free-tier projects keep file metadata only and do not persist file objects
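The temporary-vs-durable split suggests a fallback pattern when consuming `fileRef` values. The helper, the freshness check, and the exact route shape are illustrative:

```python
def resolve_file_url(file_ref: dict, webhook_id: str, delivery_id: str,
                     index: int, url_is_fresh) -> str:
    """Prefer the signed URL while it is still valid, else fall back to
    the authenticated per-delivery file route, which stays durable.

    url_is_fresh is a caller-supplied predicate (e.g. checking the
    signed URL's expiry); its implementation is not specified here.
    """
    if url_is_fresh(file_ref["url"]):
        return file_ref["url"]
    return f"/v1/webhooks/{webhook_id}/deliveries/{delivery_id}/files/{index}"

ref = {"key": "files/abc", "url": "https://cdn.example.com/signed?exp=0"}
url = resolve_file_url(ref, "wh_1", "del_1", 0, url_is_fresh=lambda u: False)
assert url == "/v1/webhooks/wh_1/deliveries/del_1/files/0"
```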
Related file routes:
- `GET /v1/files/{signedToken}`
- `GET /v1/webhooks/{webhookId}/deliveries/{deliveryId}/files/{index}`
DLQ, analytics, and live updates
Each of these adjacent areas has its own dedicated guide, but operationally they all build on delivery history:
- Dead-letter queue for terminal failures, export, and re-drive
- Files and retention for signed downloads, replay behavior, and retention
- Delivery analytics for metrics, backlog, usage, and summaries
- Dashboard live updates for auto-refreshed delivery and DLQ views in the UI
Related routes:
- `GET /v1/deliveries`
- `GET /v1/deliveries/{id}`
- `POST /v1/deliveries/{id}/replay`
- `GET /v1/deliveries/{id}/replay-jobs`
- `GET /v1/replay-jobs/{id}`
- `GET /v1/dlq`
- `GET /v1/dlq/{id}`
- `POST /v1/dlq/{id}/re-drive`
- `GET /v1/dlq/export`
- `POST /v1/deliveries/bulk-replay`
- `POST /v1/dlq/bulk-re-drive`
- `GET /v1/bulk-operations/{id}`
- `GET /v1/webhooks/{id}/metrics`
- `GET /v1/webhooks/{id}/backlog`
- `GET /v1/usage`
Common mistakes
- Expecting replay to use the webhook's current transform or routing config.
- Waiting too long to replay after payload retention has expired.
- Confusing DLQ re-drive with in-place retry. It always creates a new delivery.
- Treating delivery analytics as the source of quota truth. Quota enforcement is separate.
- Assuming a stale signed `fileRef.url` means the file object is gone. The dashboard refreshes those URLs and the authenticated file route remains available.