# Creating an Agent
To create a simple Agent, all you need is a `name`, a `system` prompt, and a `model`. All configuration options are detailed in the `createAgent` reference.

Here is a simple agent created using the `createAgent` function:
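A minimal sketch, assuming an OpenAI model; the agent's name, system prompt, and model choice are illustrative:

```typescript
import { createAgent, openai } from "@inngest/agent-kit";

// A minimal Agent: a name, a system prompt, and a model.
const codeWriter = createAgent({
  name: "Code writer",
  system:
    "You are an expert TypeScript programmer. " +
    "Given a task, write clear, idiomatic TypeScript code.",
  model: openai({ model: "gpt-4o" }),
});
```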
While `system` prompts can be static strings, they are more powerful when defined as dynamic callbacks that can add additional context at runtime.

Agents are executed by calling `run()` with a user prompt. This performs an inference call to the model with the system prompt as the first message and the input as the user message.
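As a sketch, running an agent might look like this (the prompt text and the `output` field are illustrative; check the `run()` reference for the exact return shape):

```typescript
// Run the agent with a user prompt; the system prompt is sent as
// the first message and this input as the user message.
const result = await codeWriter.run(
  "Write a function that deduplicates an array of numbers."
);

// The result contains the messages produced by the inference call.
console.log(result.output);
```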
When including your Agent in a Network, a `description` is required. Learn more about using Agents in Networks here.

## Adding tools
Tools are functions that extend the capabilities of an Agent. Along with the prompt (see `run()`), Tools are included in calls to the language model through features like OpenAI's "function calling" or Claude's "tool use."

Tools are defined using the `createTool` function and are passed to agents via the `tools` parameter:
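A hedged sketch, assuming a Zod schema for the tool's parameters; the tool name, schema, and handler body are illustrative:

```typescript
import { createAgent, createTool, openai } from "@inngest/agent-kit";
import { z } from "zod";

// A Tool the model can choose to call. The description helps the
// model decide when this tool is appropriate.
const listCharges = createTool({
  name: "list_charges",
  description: "Returns all charges for a given user ID.",
  parameters: z.object({
    userId: z.string(),
  }),
  handler: async ({ userId }) => {
    // Illustrative only: fetch charges from your own system here.
    return [{ userId, amount: 1000, currency: "usd" }];
  },
});

const billingAgent = createAgent({
  name: "Billing assistant",
  system: "You help users understand their billing history.",
  model: openai({ model: "gpt-4o" }),
  tools: [listCharges],
});
```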
When `run()` is called, any tool that the model decides to call is immediately executed before the output is returned. Read the "How Agents work" section for additional information.
Learn more about Tools in this guide.
## How Agents work
Agents themselves are relatively simple. When you call `run()`, several steps happen:
1. **Preparing the prompts**: The initial messages are created using the `system` prompt, the `run()` user prompt, and Network State, if the agent is part of a Network. For added control, you can dynamically modify the Agent's prompts before the next step using the `onStart` lifecycle hook.
2. **Inference call**: An inference call is made to the provided `model` using Inngest's `step.ai`. `step.ai` automatically retries on failure and caches the result for durability. The result is parsed into an `InferenceResult` object that contains all messages, tool calls, and the raw API response from the model. To modify the result prior to calling tools, use the optional `onResponse` lifecycle hook.
3. **Tool calling**: If the model decides to call one of the available `tools`, the Tool is automatically called. After tool calling is complete, the `onFinish` lifecycle hook is called with the updated `InferenceResult`. This enables you to modify or inspect the output of the called tools.
4. **Complete**: The result is returned to the caller.
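The four steps above can be sketched as pseudocode. Names like `prepareMessages`, `stepAiInfer`, and `callTools` are illustrative, not the library's actual internals:

```typescript
// Illustrative pseudocode of run(), not the real implementation.
async function run(userPrompt: string) {
  // 1. Preparing the prompts (onStart may modify them first)
  let messages = await onStart(
    prepareMessages(system, userPrompt, networkState)
  );

  // 2. Inference call via step.ai (retried and cached for durability),
  //    parsed into an InferenceResult; onResponse can modify it
  let result = await stepAiInfer(model, messages);
  result = await onResponse(result);

  // 3. Tool calling, then onFinish with the updated InferenceResult
  if (result.toolCalls.length > 0) {
    result = await onFinish(await callTools(result));
  }

  // 4. Complete: the result is returned to the caller
  return result;
}
```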
## Lifecycle hooks

Agent lifecycle hooks can be used to intercept and modify how an Agent works, enabling dynamic control over the system. They are passed to `createAgent` via the `lifecycle` options object. For example, you can:

- Dynamically alter prompts using Network State or the Network's history.
- Parse the output of the model after an inference call.
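A hedged sketch of the `lifecycle` option; the exact hook argument and return shapes here are assumptions, so consult the lifecycle reference for the real signatures:

```typescript
import { createAgent, openai } from "@inngest/agent-kit";

const classifier = createAgent({
  name: "Classifier",
  system: "Classify incoming support tickets by urgency.",
  model: openai({ model: "gpt-4o" }),
  lifecycle: {
    // Runs before the inference call; can adjust the prompts.
    onStart: async ({ prompt }) => {
      // Illustrative: prepend extra context at runtime here.
      return { prompt, stop: false };
    },
    // Runs after the inference call, before any tools are invoked.
    onResponse: async ({ result }) => {
      // Illustrative: inspect or modify the InferenceResult here.
      return result;
    },
  },
});
```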
## System prompts

An Agent's system prompt can be defined as a string or an async callback. When Agents are part of a Network, the Network State is passed as an argument, allowing you to create dynamic prompts, or instructions, based on history or the outputs of other Agents.

### Dynamic system prompts
Dynamic system prompts are very useful in agentic workflows: when multiple models are called in a loop, prompts can be adjusted based on Network State from other call outputs.
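As a sketch, a dynamic system prompt defined as an async callback might look like this; the state key `summary` is illustrative, and the exact callback signature should be checked against the reference:

```typescript
import { createAgent, openai } from "@inngest/agent-kit";

const reviewer = createAgent({
  name: "Reviewer",
  // An async callback instead of a static string: it can read
  // Network State at runtime to build the prompt.
  system: async ({ network }) => {
    // Illustrative state key; your Network defines its own state.
    const summary = network?.state.kv.get("summary");
    return summary
      ? `You review code. Prior summary from other agents: ${summary}`
      : "You review code.";
  },
  model: openai({ model: "gpt-4o" }),
});
```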
### Static system prompts

Agents may also have static system prompts, which are more useful for simpler use cases.

## Using Agents in Networks
Agents are most powerful when combined into Networks. Networks include state and routers to create stateful workflows that enable Agents to work together to accomplish larger goals.

### Agent descriptions
Similar to how Tools have a `description` that enables an LLM to decide when to call them, Agents also have a `description` parameter. This is required when using Agents within Networks. Here is an example of an Agent with a description:
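A sketch of an Agent with a `description`; the wording and model choice are illustrative:

```typescript
import { createAgent, openai } from "@inngest/agent-kit";

const supportAgent = createAgent({
  name: "Customer support",
  // Required when this Agent is used within a Network: routing
  // logic uses it to decide when to call this Agent.
  description: "Answers questions about billing, refunds, and invoices.",
  system: "You are a helpful customer support agent.",
  model: openai({ model: "gpt-4o" }),
});
```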