AI Agents can be a difficult topic to understand and to differentiate from RAG, AI workflows, Agentic workflows, and related patterns. This guide provides a definition of AI Agents with practical examples, inspired by the Building effective agents manifesto from Anthropic.

From AI Workflows to AI Agents

The first AI applications were chat-based experiences offering different ways to interact with existing LLM APIs. Because an LLM's knowledge is limited to its training data (ex: OpenAI's knowledge cutoff of September 2021), it quickly became clear that this was not enough to build practical AI applications.

Workflows: enriching LLM knowledge

This is where AI applications started to introduce workflows to power patterns such as RAG: Retrieval Augmented Generation.

RAG enabled AI applications to retrieve information from external sources and forward it as context to LLMs, resulting in more accurate and relevant responses.
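
To make the pattern concrete, here is a minimal RAG sketch. The `search_documents` and `llm` functions are hypothetical placeholders standing in for a real vector store and a real LLM provider call; they are not part of any specific SDK:

```python
# Minimal RAG sketch: retrieve relevant documents, then pass them to the LLM
# as context. `search_documents` and `llm` are hypothetical placeholders.

def search_documents(query: str, top_k: int = 3) -> list[str]:
    """Return the top_k most relevant document chunks for the query."""
    # A real application would query a vector database (embeddings +
    # similarity search). Here we fake it with a tiny in-memory knowledge base.
    knowledge_base = {
        "release process": "Deploys are triggered from the main branch via CI.",
        "support hours": "Support is available weekdays, 9am to 5pm CET.",
    }
    return [text for topic, text in knowledge_base.items() if topic in query.lower()][:top_k]

def llm(prompt: str) -> str:
    """Placeholder for a call to your LLM provider's completion API."""
    return f"(model answer based on: {prompt[:60]}...)"

def answer_with_rag(question: str) -> str:
    # 1. Retrieve external knowledge relevant to the question.
    context = "\n".join(search_documents(question))
    # 2. Forward it to the LLM as additional context.
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return llm(prompt)

print(answer_with_rag("What is our release process?"))
```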

The introduction of workflows in AI applications enabled additional patterns:

  • Prompt chaining: one of many Prompt Engineering patterns to improve LLM accuracy and reduce hallucinations.
  • Tool calling: enabling LLMs to call provided functions to perform specific tasks or retrieve fresh information (see the sketch after this list).
  • External databases for memory and embeddings: enabling LLMs to store and retrieve information from external databases in an efficient way.
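
As an illustration of the tool-calling pattern, here is a minimal sketch. The `llm_pick_tool` function is a hypothetical stand-in for the model returning a structured tool call (most providers expose this as function or tool calling in their APIs); the tools themselves are also placeholders:

```python
import json
from datetime import date

# Tools the LLM is allowed to call. In a real application these would be
# described to the model (name, parameters, description) in the API request.
def get_current_date() -> str:
    return date.today().isoformat()

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # placeholder for a real weather API call

TOOLS = {"get_current_date": get_current_date, "get_weather": get_weather}

def llm_pick_tool(question: str) -> dict:
    """Hypothetical stand-in for the model choosing a tool and its arguments."""
    if "weather" in question.lower():
        return {"name": "get_weather", "arguments": {"city": "Paris"}}
    return {"name": "get_current_date", "arguments": {}}

def answer_with_tools(question: str) -> str:
    call = llm_pick_tool(question)
    result = TOOLS[call["name"]](**call["arguments"])
    # In a full workflow, the tool result is sent back to the LLM
    # so it can produce the final, user-facing answer.
    return f"Tool {call['name']} returned: {json.dumps(result)}"

print(answer_with_tools("What is the weather in Paris today?"))
```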

Reasoning and Action: the birth of Agentic Workflows

Relying on tools, embeddings, and prompt engineering greatly helped improve LLM accuracy and reduce hallucinations. Still, these patterns were not enough to tackle complex problems or to eliminate hallucinations entirely (ex: the Air Canada chatbot misinformation case).

In October 2022, a research paper titled “ReAct: Synergizing Reasoning and Acting in Language Models” was published. This paper introduced a new approach to developing LLM applications by combining reasoning and action-taking in AI workflows, giving birth to Agentic workflows.

Agentic workflows leverage LLMs to take actions on the workflow state, making the workflow more autonomous and further reducing hallucinations. A good example of an Agentic workflow is the SafeGuard pattern, which uses LLM reasoning and action capabilities to review both the input provided by the user and the output of the LLM.
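
As a sketch of this idea, the snippet below adds an LLM-based review step before and after the main completion. The `llm` function and the guard prompt are hypothetical placeholders, and the exact SafeGuard implementation will vary:

```python
# Sketch of a guardrail-style Agentic workflow: an extra LLM step reviews
# both the user input and the draft output. `llm` is a hypothetical placeholder.

def llm(prompt: str) -> str:
    """Hypothetical placeholder for your LLM provider's completion call."""
    if prompt.startswith("You are reviewing"):
        return "SAFE"
    return "Refunds are possible within 24 hours of booking, per the airline policy."

GUARD_PROMPT = (
    "You are reviewing a customer-support exchange. Reply SAFE if the text is "
    "on-topic and grounded in the provided policy, otherwise reply UNSAFE with a reason.\n\nText:\n{text}"
)

def guarded_answer(user_input: str) -> str:
    # 1. Review the user input before doing any work.
    if not llm(GUARD_PROMPT.format(text=user_input)).startswith("SAFE"):
        return "Sorry, I can't help with that request."

    # 2. Produce the draft answer (this is where RAG and tools would plug in).
    draft = llm(f"Answer the customer question: {user_input}")

    # 3. Review the draft answer before returning it to the user.
    if not llm(GUARD_PROMPT.format(text=draft)).startswith("SAFE"):
        return "I couldn't produce a reliable answer, escalating to a human agent."
    return draft

print(guarded_answer("Can I get a refund for a flight booked last year?"))
```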

While Agentic workflows leverage LLMs to improve AI workflow relevance and reduce hallucinations, a new pattern emerged that takes the principles of chain-of-thought reasoning and action-taking to another level: AI Agents.

AI Agents: Fully autonomous AI applications

AI workflows and Agentic workflows are both built on static steps with some degree of autonomy. AI Agents aim for full autonomy of the AI application, enabling it to solve complex problems such as developing complete web applications or fixing issues in production.
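
At its core, an AI Agent runs a reason-act-observe loop until it decides the task is done. The sketch below illustrates the shape of such a loop; `llm_next_step` and the tools are hypothetical placeholders for the model's decision and for real integrations:

```python
# Sketch of an autonomous agent loop: the LLM repeatedly decides on the next
# action, observes the result, and stops when it considers the task complete.

def read_file(path: str) -> str:
    return f"<contents of {path}>"  # placeholder for a real filesystem read

def run_tests() -> str:
    return "2 passed, 1 failed: test_login"  # placeholder for a real test runner

TOOLS = {"read_file": read_file, "run_tests": run_tests}

def llm_next_step(goal: str, history: list[str]) -> dict:
    """Hypothetical stand-in for the model choosing the next action."""
    if not history:
        return {"action": "run_tests", "arguments": {}}
    if len(history) == 1:
        return {"action": "read_file", "arguments": {"path": "auth/login.py"}}
    return {"action": "finish", "summary": "Identified the failing test in auth/login.py"}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        step = llm_next_step(goal, history)
        if step["action"] == "finish":
            return step["summary"]
        observation = TOOLS[step["action"]](**step["arguments"])
        history.append(f"{step['action']} -> {observation}")
    return "Stopped: step budget exhausted."

print(run_agent("Fix the failing test in the repository"))
```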

AI Agents are still an exploratory field, with few companies having successfully built and deployed them in production (ex: Devin). Still, this is the most active experimentation space in the LLM ecosystem, with multiple open-source projects (ex: AutoGPT).

How to use this guide

Developing AI applications draws on multiple patterns, ranging from AI workflows with static steps to fully autonomous AI Agents, each fitting specific use cases. The best approach is to start simple and iterate towards complexity.

This guide features a Code Assistant that will progressively evolve from a static AI workflow to an autonomous AI Agent.

Below are the different versions of our Code Assistant, each progressively adding more autonomy and complexity:

v1 Explaining a given code file: The first version starts as an AI workflow that uses a tool to provide a file as context to the LLM (RAG).

v2 Performing complex code analysis: Then, we will add Agentic capabilities to our assistant to enable it to perform more complex analysis.

v3 Autonomously reviewing a pull request: Finally, we will add more autonomy to our assistant, transforming it into a semi-autonomous AI Agent.

Coming soon Pushing our Code Assistant to production: This additional chapter will cover best practices to deploy your AI Agents to production.

Depending on your experience developing AI applications, you can choose to start directly with the second part covering Agentic workflows.

Happy coding!