Create agents to accomplish specific tasks with tools inside a network.

Agents are defined with a `name`, a `system` prompt, and a `model`. All configuration options are detailed in the `createAgent` reference.

Here is a simple agent created using the `createAgent` function:
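A minimal sketch of such an agent (the `openai` model helper and its options are assumptions for illustration; `createAgent`, `name`, `system`, and `model` come from this page):

```typescript
// Hypothetical sketch, assuming @inngest/agent-kit exports `createAgent`
// and an `openai` model helper shaped like this.
import { createAgent, openai } from "@inngest/agent-kit";

const agent = createAgent({
  name: "Code writer", // the agent's name
  system: "You are an expert TypeScript programmer.", // system prompt
  model: openai({ model: "gpt-4o" }), // the model to call
});

// Running the agent performs an inference call with the system prompt
// as the first message and the input as the user message.
const result = await agent.run("Write a function that reverses a string.");
```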
While `system` prompts can be static strings, they are more powerful when they are dynamic system prompts defined as callbacks that can add additional context at runtime.

Agents are executed by calling `run()` with a user prompt. This performs an inference call to the model with the `system` prompt as the first message and the input as the user message.
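To illustrate the dynamic system prompt idea, the callback is simply a function that assembles the prompt string at runtime. The context shape below is hypothetical, not AgentKit's actual signature:

```typescript
// Hypothetical context shape for illustration only; AgentKit passes its
// own context object (e.g. network state) to the callback.
interface PromptContext {
  userName?: string;
}

// A static system prompt is a fixed string; a dynamic one is a callback
// that folds runtime context into the prompt before each run.
const dynamicSystemPrompt = async (ctx: PromptContext): Promise<string> => {
  const base = "You are a helpful assistant.";
  return ctx.userName ? `${base} Address the user as ${ctx.userName}.` : base;
};
```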
When an Agent is used within a Network, a `description` is required. Learn more about using Agents in Networks here.

When an Agent runs (via `run()`), Tools are included in calls to the language model through features like OpenAI's "function calling" or Claude's "tool use."
Tools are defined using the `createTool` function and are passed to agents via the `tools` parameter:
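A sketch of a tool definition and its attachment to an agent. Only `createTool` and the `tools` parameter come from this page; the option names (`parameters` as a Zod schema, `handler`) and the `openai` helper are assumptions:

```typescript
// Hypothetical sketch of the createTool API; option names other than
// `createTool` and `tools` are assumptions.
import { createAgent, createTool, openai } from "@inngest/agent-kit";
import { z } from "zod";

const listCharges = createTool({
  name: "list_charges",
  description: "Returns the user's charges for a given date range.",
  parameters: z.object({
    userId: z.string(),
  }),
  handler: async ({ userId }) => {
    // Look up and return data; the result is passed back to the model.
    return [{ userId, amount: 1000 }];
  },
});

const agent = createAgent({
  name: "Support agent",
  system: "You help users with billing questions.",
  model: openai({ model: "gpt-4o" }),
  tools: [listCharges], // tools are passed via the `tools` parameter
});
```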
When `run()` is called, any tool that the model decides to call is immediately executed before the output is returned. Read the "How Agents work" section for additional information.

Learn more about Tools in this guide.
When you call `run()`, several steps happen:

1. **Preparing the prompts**: The prompts sent to the model are built from the `system` prompt, the user prompt passed to `run()`, and Network State, if the agent is part of a Network. Prompts can be customized using the `onStart` lifecycle hook.
2. **Inference call**: An inference call is made to the `model` using Inngest's `step.ai`. `step.ai` automatically retries on failure and caches the result for durability. The result is parsed into an `InferenceResult` object that contains all messages, tool calls, and the raw API response from the model. The response can be inspected or modified using the `onResponse` lifecycle hook.
3. **Tool calling**: If the model decides to call one of the agent's `tools`, the Tool is automatically called. Afterwards, the `onFinish` lifecycle hook is called with the updated `InferenceResult`. This enables you to modify or inspect the output of the called tools.
4. **Complete**: The result of the run is returned as the final `InferenceResult`.

Lifecycle hooks are passed to `createAgent` via the `lifecycle` options object.
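The hooks above might be wired up as follows. This is a sketch only: the hook names (`onStart`, `onResponse`, `onFinish`) and the `lifecycle` option come from this page, but the exact hook signatures and return shapes are assumptions:

```typescript
// Hypothetical sketch; hook signatures are assumptions.
import { createAgent, openai } from "@inngest/agent-kit";

const agent = createAgent({
  name: "Summarizer",
  system: "Summarize the given text.",
  model: openai({ model: "gpt-4o" }),
  lifecycle: {
    // Called before the inference call; prompts can be customized here.
    onStart: (args) => args,
    // Called with the parsed InferenceResult after the model responds.
    onResponse: (args) => args,
    // Called with the updated InferenceResult after tool calls complete.
    onFinish: (args) => args,
  },
});
```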
Similar to how Tools have a `description` that enables an LLM to decide when to call them, Agents also have a `description` parameter. This is required when using Agents within Networks. Here is an example of an Agent with a description:
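For instance (a sketch; the `openai` model helper and its options are assumptions):

```typescript
import { createAgent, openai } from "@inngest/agent-kit";

// The `description` lets a Network decide when to hand work to this
// agent, much like a tool description guides the LLM's tool choice.
const mathAgent = createAgent({
  name: "Math agent",
  description: "Answers questions that require arithmetic or algebra.",
  system: "You are a careful mathematician. Show your work.",
  model: openai({ model: "gpt-4o" }),
});
```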