
An AI agent is a program that uses a large language model (LLM) to take goal-directed actions in a loop — observing inputs, deciding on the next step, calling tools, and updating its state until the goal is met.
If you've used a chatbot, you've used an LLM. An agent is a layer on top: the LLM is the brain, but the agent is the body that perceives, acts, and persists.
How does an AI agent work?
Every AI agent has the same four parts:
- A model — an LLM (Claude, GPT, Gemini, an open model) that takes the current state and the next input and returns a decision.
- Tools — the actions the agent can take. HTTP calls, database writes, sending emails, searching the web, calling other agents.
- Memory — what the agent remembers across the loop. Could be a short-lived context window, a vector database, a key-value store, or a structured database row.
- An event source — what triggers the agent to run. A webhook, an email, a form, a cron tick, a user message, a tool result from the previous step.
The loop is straightforward: the agent receives an event, the model decides which tool to use (or whether to stop), the tool runs, the result is fed back into the model, and the loop repeats until the model decides the goal is met.
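The loop above can be sketched in a few lines. Everything here is a stand-in: `call_model` fakes an LLM that immediately decides to stop, and `TOOLS` is a toy registry — a real agent would swap in an actual model SDK and real tools.

```python
# Hypothetical stand-in for an LLM call: takes the conversation state,
# returns a decision. This fake model just echoes the event and stops.
def call_model(state: list[dict]) -> dict:
    return {"action": "stop", "result": state[-1]["content"]}

# Toy tool registry; real tools would be HTTP calls, DB writes, etc.
TOOLS = {
    "search": lambda args: f"results for {args['query']}",
}

def run_agent(event: str) -> str:
    state = [{"role": "user", "content": event}]   # memory for this run
    while True:
        decision = call_model(state)               # model picks the next step
        if decision["action"] == "stop":           # goal met: exit the loop
            return decision["result"]
        tool = TOOLS[decision["action"]]           # otherwise run the chosen tool...
        result = tool(decision.get("args", {}))
        state.append({"role": "tool", "content": result})  # ...and feed the result back
```

The shape is the same regardless of framework: decide, act, observe, repeat.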
The agent is rarely the interesting code. The interesting code is the system around it — the part that brings events in reliably, persists state correctly, and ships results to the right place.
Types of AI agents
Most agents you'll build fall into one of these shapes:
- Task agents — receive a single event, run to completion, exit. (Triage this email. Enrich this lead. Draft this release note.)
- Long-running agents — receive a goal, run for minutes or hours, calling tools as needed. (Research this competitor. Migrate this column. Investigate this incident.)
- Conversational agents — interact with a user over multiple turns, often with tool use mixed in. (Customer support, internal Q&A.)
- Background agents — wake up on a schedule or threshold, check state, take action only if needed. (Monitor metrics. Reconcile balances. Re-index search.)
- Multi-agent systems — multiple specialized agents pass work between each other through a coordinator or shared event bus.
Hooksbase exists for the non-conversational shapes — task, long-running, background, and multi-agent — because those are the agents that depend on events from the outside world to function.
AI agent vs AI workflow
The terms get used interchangeably, so it's worth being precise:
- An AI workflow is a fixed sequence of steps with one or more LLM calls inside it. The path is deterministic; the model fills in the variable parts.
- An AI agent decides the path itself. The model picks which tool to call, in what order, and when to stop.
A workflow is easier to build, easier to test, and cheaper to run. An agent is more flexible but less predictable. Most production systems are a mix — a workflow with one or two agentic steps inside.
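The difference shows up directly in code. In this sketch, `summarize`, `translate`, and `decide` are hypothetical stubs standing in for LLM calls — the point is only who owns the control flow.

```python
# Hypothetical LLM-backed helpers (stubbed for illustration).
def summarize(text: str) -> str:
    return f"summary({text})"

def translate(text: str) -> str:
    return f"translate({text})"

# Workflow: the path is fixed in code; the model only fills in content.
def workflow(doc: str) -> str:
    return translate(summarize(doc))

# Agent: the model chooses the path at runtime. This stub "model"
# summarizes once, then decides the goal is met.
def decide(state: str) -> str:
    return "summarize" if "summary(" not in state else "stop"

def agent(doc: str) -> str:
    state = doc
    while (step := decide(state)) != "stop":
        state = {"summarize": summarize, "translate": translate}[step](state)
    return state
```

In the workflow, the call order is visible in the source and testable in isolation; in the agent, the order only exists at runtime, which is exactly what makes it flexible and harder to predict.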
What AI agents need from infrastructure
Building the model-and-tools part of an agent is now straightforward. Frameworks like the Claude Agent SDK, the OpenAI Agents SDK, LangGraph, and CrewAI handle the model loop. You can wire up an agent in an afternoon.
The hard part is the event infrastructure underneath:
- Reliable triggers. When a webhook comes in and the agent isn't ready, what happens? Retry? Queue? Drop? Without a delivery layer, the answer is usually "drop, then debug later."
- Idempotent ingest. When the same HTTP event arrives twice, the agent shouldn't run twice. An `Idempotency-Key`, honored before the agent starts, solves this when the producer sends one.
- Deterministic replay. When the agent fails (timeout, model error, tool failure), you need to re-run the exact same input. That means the original payload — and any transformation applied to it — has to be persisted.
- Provider verification. When the trigger is a supported provider such as Stripe or GitHub, you need the signature verified before the agent runs. A forged event spending tokens is a bad day.
- Observability. When a customer asks "why didn't your agent do anything?", you need to be able to answer in seconds, not days.
These are infrastructure problems, not agent problems. They are also the problems most agent teams end up rebuilding from scratch — usually around the time the third customer churns over a silent failure.
Where Hooksbase fits
Hooksbase is the event layer for AI agents. It accepts HTTP, email, and form events on every tier (plus scheduled cron on Starter+), verifies supported provider events after ingest auth when configured, routes them through programmable rules, and applies optional Starter+ payload transforms. It then delivers them to your agent — or any HTTP endpoint, queue, or object store allowed by your tier — with retries, idempotency when keys are supplied, FIFO ordering on Pro+, deterministic replay while payloads are retained, a DLQ, and tiered delivery history.
You don't need any of it to prototype an agent. You need it once the agent has to stay reliable in front of customers.
Related reading: What is agentic AI? for the broader concept, How to build an AI agent with reliable event triggers for a step-by-step build, or Event infrastructure for AI agents for the architectural argument.