
LLM agents can autonomously plan multi-step tasks, execute them, and call external tools. This guide surveys the main frameworks and patterns for building them.

What are LLM Agents?

Autonomous systems that:

  • Plan multi-step tasks
  • Use external tools
  • Make decisions
  • Learn from feedback
  • Achieve goals
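The capabilities above boil down to a plan–act–observe loop. Below is a minimal, framework-free sketch: the toy tools, the fixed planning rule, and the numeric goal are all invented for illustration, and a real agent would delegate the planning step to an LLM.

```python
def run_agent(goal_value, max_steps=10):
    """Plan -> act -> observe until the goal is met or steps run out."""
    # Toy tools: the agent can add one to, or double, a running total.
    tools = {"add_one": lambda x: x + 1, "double": lambda x: x * 2}
    state = 0
    trace = []
    for _ in range(max_steps):
        # "Planning": a real agent would ask an LLM; here we use a fixed rule.
        action = "double" if state and state * 2 <= goal_value else "add_one"
        state = tools[action](state)       # act: use a tool
        trace.append((action, state))      # observe: record feedback
        if state >= goal_value:            # goal check
            break
    return state, trace
```

The trace makes the loop inspectable, which is the same reason agent frameworks expose verbose or callback modes.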

LangChain Agents

ReAct Agent

from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.llms import OpenAI

# `calculator` and `search` are assumed to be pre-built utilities
# (e.g. a math chain and a search wrapper) exposing a .run() method.
tools = [
    Tool(
        name="Calculator",
        func=calculator.run,
        description="Useful for math"
    ),
    Tool(
        name="Search",
        func=search.run,
        description="Useful for current events"
    )
]

agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

agent.run("What's 25% of the current Bitcoin price?")

Custom Tools

from langchain.tools import BaseTool

class DatabaseTool(BaseTool):
    name: str = "database_query"
    description: str = "Query the database for information"

    def _run(self, query: str) -> str:
        # `db` is assumed to be an existing database connection/client.
        result = db.execute(query)
        return str(result)

AutoGPT Pattern

# Skeleton of the goal-driven loop; the helper methods are placeholders.
class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.tasks = []
    
    def run(self):
        while not self.is_goal_achieved():
            # Plan next step
            task = self.plan_next_task()
            
            # Execute
            result = self.execute(task)
            
            # Update state
            self.update_state(result)
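The skeleton above can be made concrete with a toy subclass whose goal is simply to gather N items; the planning and execution methods here are trivial stand-ins for LLM calls, and all names are hypothetical.

```python
class CountingAgent:
    """Toy AutoGPT-style loop: gather `goal` items, one per iteration."""

    def __init__(self, goal):
        self.goal = goal          # number of items to gather
        self.results = []

    def is_goal_achieved(self):
        return len(self.results) >= self.goal

    def plan_next_task(self):
        # A real agent would ask an LLM to propose the next task.
        return f"gather item {len(self.results) + 1}"

    def execute(self, task):
        # Stand-in for real tool use.
        return task.upper()

    def update_state(self, result):
        self.results.append(result)

    def run(self):
        while not self.is_goal_achieved():
            task = self.plan_next_task()
            result = self.execute(task)
            self.update_state(result)
        return self.results
```

Running `CountingAgent(3).run()` iterates exactly three times, showing how the termination check drives the loop.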

CrewAI

Multi-agent collaboration:

from crewai import Agent, Task, Crew

researcher = Agent(
    role='Researcher',
    goal='Find latest AI news',
    backstory='Expert researcher',
    tools=[search_tool]
)

writer = Agent(
    role='Writer',
    goal='Write engaging articles',
    backstory='Professional writer'
)

# Newer CrewAI versions also require an expected_output= on each Task.
research_task = Task(
    description='Research the latest developments in LLMs',
    agent=researcher
)

write_task = Task(
    description='Write an engaging article from the research findings',
    agent=writer
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task]
)

result = crew.kickoff()

Tool Integration

Code Execution

from langchain_experimental.tools import PythonREPLTool

python_repl = PythonREPLTool()
# `llm` is an LLM instance defined earlier, e.g. OpenAI(temperature=0).
agent = initialize_agent(
    [python_repl],
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

agent.run("Calculate fibonacci(10)")

Web Browsing

from langchain.tools import BrowserTool

# Note: BrowserTool is illustrative; in practice LangChain's browser
# automation comes from the PlayWright browser toolkit.
browser = BrowserTool()
agent.run("Search for recent AI breakthroughs and summarize")

File System

from langchain.tools.file_management import (
    ReadFileTool,
    WriteFileTool,
)

# Pass root_dir=... to the tools to sandbox file access to one directory.
tools = [ReadFileTool(), WriteFileTool()]

Memory Systems

Conversation Memory

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory
)

Vector Memory

from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import FAISS

# `texts` and `embeddings` are assumed to be defined earlier.
vectorstore = FAISS.from_texts(texts, embeddings)
retriever = vectorstore.as_retriever()

memory = VectorStoreRetrieverMemory(retriever=retriever)

Planning Strategies

Chain of Thought

prompt = """
Let's solve this step by step:
1. First, identify what we know
2. Then, determine what we need to find
3. Finally, execute the solution
"""
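A scaffold like this is typically prepended to the user's question before calling the model. A small helper (the names here are illustrative) makes that composition explicit:

```python
COT_PREFIX = """Let's solve this step by step:
1. First, identify what we know
2. Then, determine what we need to find
3. Finally, execute the solution
"""

def with_cot(question: str) -> str:
    """Prepend the chain-of-thought scaffold to a user question."""
    return COT_PREFIX + "\nQuestion: " + question

prompt = with_cot("A train travels 120 km in 2 hours. What is its speed?")
```

The resulting string is what gets sent as the model prompt; keeping the scaffold as a constant makes it easy to version-control.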

Tree of Thoughts

def explore_thoughts(problem, threshold=0.5):
    # Generate candidate thoughts, keep the promising ones, and recurse
    # on each child individually.
    thoughts = generate_initial_thoughts(problem)

    for thought in thoughts:
        score = evaluate(thought)
        if score > threshold:
            for child in expand(thought):
                explore_thoughts(child, threshold)

Error Handling

import time

def robust_agent_run(agent, query, max_retries=3):
    for attempt in range(max_retries):
        try:
            return agent.run(query)
        except Exception as e:
            if attempt == max_retries - 1:
                return f"Failed after {max_retries} attempts: {e}"
            time.sleep(2 ** attempt)  # back off before retrying

Evaluation

def evaluate_agent(agent, test_cases):
    results = []
    for case in test_cases:
        output = agent.run(case['input'])
        success = check_output(output, case['expected'])
        results.append({
            'input': case['input'],
            'success': success,
            'output': output
        })
    return results
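The per-case results can then be aggregated into a single pass rate; `success_rate` below is a hypothetical helper operating on the same dict shape the evaluator produces.

```python
def success_rate(results):
    """Fraction of test cases whose output matched the expectation."""
    if not results:
        return 0.0
    return sum(r["success"] for r in results) / len(results)

# Usage with hand-made results (the dict shape matches the evaluator above):
results = [
    {"input": "2+2", "success": True, "output": "4"},
    {"input": "capital of France", "success": True, "output": "Paris"},
    {"input": "tomorrow's weather", "success": False, "output": "unknown"},
]
```

Tracking this rate over time (and per tool) is a simple way to catch regressions when prompts or models change.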

Best Practices

  1. Clear role definitions
  2. Specific tool descriptions
  3. Implement guardrails
  4. Monitor token usage
  5. Handle failures gracefully
  6. Test extensively
  7. Version control prompts
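Guardrails (item 3) can be as simple as wrapping a tool function so dangerous inputs are rejected before execution. The blocked keyword list below is an illustrative choice, not a complete policy.

```python
# Keywords to refuse in a database tool; a real policy would be stricter.
BLOCKED = ("drop", "delete", "truncate", "update", "insert")

def guarded(tool_fn):
    """Wrap a tool so mutating queries are refused before they run."""
    def wrapper(query: str) -> str:
        lowered = query.lower()
        if any(word in lowered.split() for word in BLOCKED):
            return "Refused: query contains a mutating statement."
        return tool_fn(query)
    return wrapper

# `safe_query` stands in for the database tool's run function.
safe_query = guarded(lambda q: f"rows for: {q}")
```

The same decorator pattern works for rate limits, output-length caps, or logging, keeping safety checks out of the tool logic itself.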

Challenges

  • Reliability issues
  • Cost of API calls
  • Slow execution
  • Hallucinations
  • Security concerns

Future Directions

  • Better planning algorithms
  • More reliable tool use
  • Multi-modal agents
  • Collaborative agents
  • Self-improvement

Conclusion

LLM agents represent a new paradigm in AI development. Understanding frameworks and patterns is essential for building effective autonomous systems.