A Comprehensive Guide to LangGraph: Code Included


Introduction

We have all heard about LangChain and LangChain agents and how they autonomously automate workflows using large language models (LLMs). If you don’t know about LangChain and LangChain agents yet, don’t worry, we will cover that in this blog too. If you really want to deep dive into LangChain and LangChain agents, I suggest you first take a look at my other blog about LangChain and then read this one.

LangChain agents are good at following a conditional chain flow where the output of one chain depends on another, and AI agents are very good at following a sequential flow, but are they really autonomous? 👀 If we have a conditional flow with many different workflows integrated into one, or we want our agent to wait or to run a specific workflow at a specific time indefinitely, then a normal LangChain agent might not perform well.

This is where LangGraph comes into the picture. LangGraph is a package made by the LangChain team to create more flexible and conditional workflows which can be visualized as a graph. So before learning about LangGraph, let’s take a quick look at graphs.

💡You can get all of the code shown in this blog from here.

What is a Graph?

Imagine you have a bunch of data that can be represented as a network, where each entity has a relationship with another entity, and this relationship can be of multiple types (one-to-one, one-to-many, many-to-many, etc.). A graph has 2 main components: nodes and edges.

Some examples of this type of data are transportation networks or social media networks, where every entity or user has a relation with other entities or users, and this is where graphs make such data easy to visualize. To make the idea concrete, there is a small Python sketch right after the list below.

There are 2 types of graphs:

  • Directed - In a directed graph, edges have a direction indicating the flow or relationship between nodes (e.g., following someone on social media).
  • Undirected - In an undirected graph, edges have no direction and represent symmetric relationships (e.g., connections on LinkedIn).
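
Here is that sketch: a tiny, hypothetical representation of a directed "follows" graph and an undirected "connections" graph using plain Python dictionaries. The names are made up and nothing here is LangGraph-specific yet.


# Directed graph: edges have a direction (who follows whom on social media)
follows = {
    "alice": ["bob"],            # alice -> bob
    "bob": ["alice", "carol"],   # bob -> alice, bob -> carol
    "carol": [],                 # carol follows no one
}

# Undirected graph: edges are symmetric (LinkedIn-style connections),
# so every connection is stored in both directions
connections = {
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob"],
}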

What is LangGraph?

As we discussed above, LangGraph allows us to create more flexible and complex workflows for the agent runtime. The chain of actions in LangChain agents is sequential, which is fine for sequential workflows, but when it comes to complex workflows, or multiple workflows that you have to combine and route through conditionally, it becomes hard to implement them with simple LangChain agents.

You might be thinking that we can make separate tools for different workflows and then let the LLM decide which tool to use via function calling. Well, you are right, but what if there are multiple workflows (for example, a newsletter workflow, an email writing workflow, and a cold email workflow all inside one single workflow)? Then you will have so many tools to consider that the LLM might not be able to decide which tool belongs to which workflow, and this is where we can use LangGraph.

LangGraph makes your workflow easy to visualize as a graph, where each tool is a node and each edge describes the relationship or chain between them (more about this later). You can also create different types of cycles in your agent workflow or create conditional workflows (for example, if we get an email then run the email writer workflow, but if it is a cold email reply then run the cold email workflow where you research the lead).

Here is what a basic graph workflow looks like 👇

Now let’s take a look at each component of langgraph in detail 🚀

Nodes

In LangGraph, a node can be any function or tool your agent uses, and these nodes are connected to other nodes using edges. Every workflow ends with an "END" node in LangGraph which marks the end of the workflow. You also need to define a starting node which will be the starting point of your workflow. Optionally, you can also define an ending node if you know the ending point of your workflow.

Let’s create 2 functions and use them as nodes


def function1(input_1):
  return input_1 + " Function1"

def function2(input_2):
  return input_2 + " Function2"

Let’s create a graph and define our nodes


from langgraph.graph import Graph

# Define a LangGraph graph
workflow = Graph()

workflow.add_node("node_1", function1)
workflow.add_node("node_2", function2)

Now to connect these nodes, we will use edges

Edges

As you might have already guessed, edges are used to connect the nodes which we have created. There are 2 types of edges in LangGraph:

1. Simple Edge: A simple edge between 2 nodes

This is what a simple edge looks like:


workflow.add_edge('node_1', 'node_2')

2. Conditional Edge: This edge allows you to go to one of several nodes based on a condition. It accepts a condition function, and you define the different nodes connected with this edge using an alias for each.

This is what a conditional edge looks like:


from langgraph.graph import END

def where_to_go(state):
  # Your logic here ("something" stands in for whatever condition you want to check)
  if something:
    return "end"
  else:
    return "continue"

# The agent node is connected with 2 nodes: END and weather_tool
workflow.add_conditional_edges('agent', where_to_go, {
    "end": END,
    "continue": "weather_tool"
})

State

State is information which can be passed between nodes across the whole graph. If you want to keep track of specific information during the workflow, you can use state.

There are 2 types of graphs which you can make in LangGraph:

  • Basic Graph: A basic graph will only pass the output of the first node to the next node because it can’t contain state.
  • Stateful Graph: This graph can contain state which will be passed between nodes, and you can access this state at any node.

Here is how you can define state and pass it to a stateful graph:


from typing import TypedDict
from langgraph.graph import StateGraph
class AgentState(TypedDict):
    messages: list[str]

workflow = StateGraph(AgentState)

Creating a Basic Workflow using LangGraph

Now that we know about LangGraph and its components, let’s create a basic workflow using it. We will create a weather assistant which will give us weather information for any location.

First of all let’s install required dependencies


!pip install langgraph langchain openai langchain_openai langchain_community pyowm

We will use the OpenWeather API to get weather information for any location. You can get your own API key from their website for free. Additionally, you will also need an OpenAI API key to use their LLM.

Store your openweather and OpenAI API keys in environment variables


import os

# openai_secret and openweather_secret hold your own API keys
os.environ['OPENAI_API_KEY'] = openai_secret
os.environ["OPENWEATHERMAP_API_KEY"] = openweather_secret

We will use “OpenWeatherMapAPIWrapper()” to make calls to the OpenWeather API, and we will use “ChatOpenAI” to use the GPT-3.5 model.


from langchain_openai import ChatOpenAI
from langchain_community.utilities import OpenWeatherMapAPIWrapper
openai_llm = ChatOpenAI(temperature=0.4)
weather = OpenWeatherMapAPIWrapper()

We will create one node which will take the user input, extract the city name from the user’s query, and pass it to the next node, where we will get the weather information using the OpenWeather wrapper.


# Node to extract city from user input
def agent(input_1):
  res = openai_llm.invoke(f"""
  You are given one question and you have to extract city name from it
  Don't respond anything except the city name and don't reply anything if you can't find city name

  Here is the question:
  {input_1}
  """)
  return res.content
  
# Node to find weather information
def weather_tool(input_2):
  data = weather.run(input_2)
  return data

Now let’s connect these 2 nodes using edges and create a graph


workflow = Graph()

workflow.add_node("agent", agent)
workflow.add_node("weather", weather_tool)

# Connecting 2 nodes
workflow.add_edge('agent', 'weather')

Here is what our workflow will look like 👇

Additionally, you can define the starting and ending points of your workflow. Here we know that our input will be passed to an agent and then we will find the weather info so the starting point will be the agent node and the ending point will be the weather node.


workflow.set_entry_point("agent")
workflow.set_finish_point("weather")

app = workflow.compile()

And finally, let’s run our langgraph!


app.invoke("What is weather in delhi?")

And you will get output like this


In Delhi, the current weather is as follows:
Detailed status: haze
Wind speed: 3.6 m/s, direction: 80°
Humidity: 34%
Temperature: 
  - Current: 37.05°C
  - High: 37.05°C
  - Low: 37.05°C
  - Feels like: 39.1°C
Rain: {}
Heat index: None
Cloud cover: 0%

But the output is not very readable. What if we add a responder node that formats this output properly, so the whole thing works as a weather agent instead of a bare function?

Let’s add a new node called ‘responder’ which will format the weather tool’s output and provide better results. But wait 🤔, we will need the user’s query to format our answer properly, yet this node only receives the weather tool’s output, so how can we get the user’s query which we passed to the agent node? 👀

This is where state comes into the picture. We will create one state key called “messages” which will store the whole conversation happening across the workflow. So let’s create it first!


# We will keep adding our conversation in this list
from typing import TypedDict, Annotated, Sequence
import operator
class AgentState(TypedDict):
    messages: Annotated[Sequence[str], operator.add]

And now let’s create our third node


def responder(state):
  # messages[0] is the user's query, messages[2] is the weather tool's output
  agent = openai_llm.invoke(f"""
  You are given weather information and you have to respond to the user's query based on that information

  Here is the user query:
  ---
  {state["messages"][0]}
  ---

  Here is the information:
  ---
  {state["messages"][2]}
  ---
  """)
  return {"messages": [agent.content]}

Make sure the first 2 nodes are also using state and adding their responses to messages


def agent(state):
  query = state["messages"]
  res = openai_llm.invoke(f"""
  You are given one question and you have to extract the city name from it

  Only reply with the city name if it exists or reply 'no_response' if there is no city name in the question

  Here is the question:
  {query[0]}
  """)
  return {"messages": [res.content]}

def weather_tool(state):
  context = state["messages"]
  # context[1] is the city name extracted by the agent node
  city_name = context[1]
  data = weather.run(city_name)
  return {"messages": [data]}

Finally, let’s create a stateful graph, since we now have state to pass between nodes. We also define the nodes and connect them using edges.


workflow = StateGraph(AgentState)
workflow.add_node('agent',agent)
workflow.add_node('weather',weather_tool)
workflow.add_node("responder",responder)

# Connect the nodes
workflow.add_edge('agent', 'weather')
workflow.add_edge('weather', 'responder')

# Set entry and finish point
workflow.set_entry_point("agent")
workflow.set_finish_point("responder")
app = workflow.compile()

This is what the new workflow will look like 👇

Let’s try it with responder!


inputs = {"messages": ["What is weather in delhi?"]}
response = app.invoke(inputs)
print(response['messages'][-1])

And now the agent will reply in a more readable way


In Delhi, the current weather is hazy with a temperature of 37.05°C. The wind speed is 3.6 m/s coming from the direction of 80°. The humidity is at 34% and there is no rain expected. The cloud cover is at 0% and it currently feels like 39.1°C.

You can also ask specific questions like “Is there any chance of rain in delhi?” or “Is it good weather to go on a long drive in delhi?” and the responder will respond accordingly.

Still, there is one problem 💀: the agent can’t answer questions like “How are you?” because it will try to find a city name in them, and if it can’t find one, the weather tool will throw an error, so we have to handle this case too.

What if we run the weather tool conditionally? 🤔 That means we will only run that tool when the user asks for weather information; otherwise we will not respond with anything. This is where we need to create a conditional edge, so let’s create one!

We will tell our LLM to respond with “no_response” if it can’t find a city name in the user query, and based on that output we will either use the weather tool or end the workflow.

This is what the updated workflow will look like 👇

Let's write code for it


from langgraph.graph import Graph, END

# Defining the condition function
def where_to_go(state):
  # The last message is the agent node's output: either a city name or "no_response"
  ctx = state["messages"][-1]
  if ctx == "no_response":
    return "end"
  else:
    return "continue"

# Rebuild the StateGraph with the same 'agent', 'weather' and 'responder' nodes as before,
# but replace the direct agent -> weather edge with this conditional edge
workflow.add_conditional_edges('agent', where_to_go, {
    "end": END,
    "continue": "weather"
})
# Remaining edges
workflow.add_edge("weather", "responder")

Now if we ask questions like “How are you?”, it will respond with “no_response” instead of throwing an error, and if you ask questions regarding weather information, it will use the weather tool and respond via the responder 🚀!
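
As a quick sanity check, and assuming you rebuild the stateful graph with this conditional edge (same 'agent', 'weather' and 'responder' nodes, entry point 'agent', finish point 'responder') and compile it as app again, you can exercise both paths like this:


# Path 1: no city name in the query, so the workflow ends right after the agent node
print(app.invoke({"messages": ["How are you?"]})["messages"][-1])
# -> "no_response"

# Path 2: a weather question goes through the weather tool and the responder
print(app.invoke({"messages": ["What is weather in delhi?"]})["messages"][-1])
# -> a nicely formatted weather answer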

💡You can get all of the code shown in this blog from here.

Now that we know how to create a basic workflow using LangGraph, let’s get our hands dirty by creating a multi-purpose AI agent 👀!

Mini Project: Let’s Create a Multi-Purpose AI Agent

Let’s create an AI agent that can give us live weather information, create draft replies for our emails, or even chat with us normally like a chatbot. You can add more workflows to this agent, but to keep this blog simple I am only going to add 2 workflows. So let’s get started 🚀!

Workflow

Before creating the project, let’s take a look at the workflow of our agent!

We will first pass the user input to our entry node, where it will be categorized into 3 categories:

  • email_query: If the user wants to generate a reply to a given email
  • weather_query: If the user wants weather information about any location
  • other: If the user wants any other information

Now, based on the category, we will redirect the query to the right node. 🔂

We will use CrewAI to create a crew which can categorize the email and then, based on the category, write a response. We will also create a separate agent for weather, where we will provide the OpenWeather function as a tool so it can automatically format the final weather response. For all other queries, we will just make a simple OpenAI call.

Prerequisites

Here is what you will need to create this project: an OpenAI API key and an OpenWeatherMap API key (the same ones we used earlier).

Add your API keys as environment variables


import os
os.environ['OPENAI_API_KEY'] = openai_secret
os.environ["OPENWEATHERMAP_API_KEY"] = openweather_secret

So now we have everything ready, let’s get back to coding 💻!

Let’s code it

First of all, let’s install required dependencies!


!pip install langgraph langchain openai langchain_openai langchain_community pyowm crewai

Let’s initialize the ChatOpenAI and OpenWeatherMapAPIWrapper objects


from langchain_openai import ChatOpenAI
from langchain_community.utilities import OpenWeatherMapAPIWrapper
openai_llm = ChatOpenAI(temperature=0.4)
weather = OpenWeatherMapAPIWrapper()

Let’s import required packages and modules


# Import required dependencies
from crewai import Crew, Agent, Task
from textwrap import dedent
import os
import json
import requests

Before creating agents or workflows, let’s first define the state which we are going to pass among nodes.

We will use these state variables in our workflow 👇

  • messages: It will store the conversation history to keep track of the conversation happening between agents in the workflow
  • email: If the user wants to generate an email reply, the entry node will extract the email body from the user input and add it to this state
  • query: It will store the user’s query
  • category: It will store the category of the user’s query (email_query, weather_query, or other)

Let’s write the code to define our states:


from typing import TypedDict
class AgentState(TypedDict):
    messages: list[str]
    email: str
    query: str
    category: str

First, let’s create our email crew which will have 2 agents: classifierAgent and emailWriterAgent

classifierAgent will classify the given email into an Important or Casual category, and emailWriterAgent will generate a response based on that category.


class Agents:
	def classifierAgent():
	    return Agent(
	      role='Email Classifier',
	      goal='You will be given an email and you have to classify the given email in one of these 2 categories: 1) Important 2) Casual ',
	      backstory='An email classifier who is expert in classifying every type of email and have classified so many emails so far',
	      verbose = True,
	      allow_delegation=False,
	    )
	def emailWriterAgent():
	  return Agent(
	    role='Email writing expert',
	    goal="You are email writing assistant for Shivam. You will be given an email and a category of that email and your job is to write a reply for that email. If email category is 'Important' then write the reply in professional way and If email category is 'Casual' then write in a casual way",
	    backstory='An email writer with an expertise in email writing for more than 10 years',
	    verbose = True,
	    allow_delegation=False,
	  )

Now let’s create the tasks for these agents


class Tasks:
	def classificationTask(agent,email):
	    return Task(
	        description=dedent(f"""
	        You have given an email and you have to classify this email
	        {email}
	        """),
	        agent = agent,
	        expected_output = "Email category as a string"
	    )
	def writerTask(agent,email):
	  return Task(
	      description=dedent(f"""
	      Create an email response to the given email based on the category provided by 'Email Classifier' Agent
	      {email}
	      """),
	      agent = agent,
	      expected_output = "A very concise response to the email based on the category provided by 'Email Classifier' Agent"
	  )

Finally let’s create our email crew


class EmailCrew:
  def __init__(self, email):
    self.email = email

  def run(self):
    # Agents
    classifierAgent = Agents.classifierAgent()
    writerAgent = Agents.emailWriterAgent()

    # Tasks
    classifierTask = Tasks.classificationTask(agent=classifierAgent, email=self.email)
    writerTask = Tasks.writerTask(agent=writerAgent, email=self.email)

    # Create crew
    crew = Crew(
      agents=[classifierAgent, writerAgent],
      tasks=[classifierTask, writerTask],
      verbose=2, # You can set it to 1 or 2 for different logging levels
    )

    # Run the crew
    result = crew.kickoff()
    return result
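
If you want to try the crew on its own before wiring it into the graph, a quick standalone run could look like this (the sample email is made up just for testing):


# A made-up sample email just for testing the crew in isolation
sample_email = """
Hello,
Thank you for applying to xyz company.
Can you share your previous CTC?
Thanks,
HR
"""

draft_reply = EmailCrew(sample_email).run()
print(draft_reply)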

Finally, let’s create a node where we will run this crew


class Nodes:
  def writerNode(self,state):
    email = state["email"]
    emailCrew = EmailCrew(email)
    crewResult = emailCrew.run()
    messages = state["messages"]
    messages.append(crewResult)
    return {"messages": messages}

Now let’s create our weather agent workflow 🌤️

We will create a weather tool which will use the OpenWeather wrapper to get weather information, and we will assign this tool to our weather agent.


from langchain.tools import tool
class Tools:
  @tool("Tool to get the weather of any location")
  def weather_tool(location):
    """
    Use this tool when you have given a location and you want to find the weather of that location
    """
    data = weather.run(location)
    return data
    
class Agents:
  # ... Other agents
  def weatherAgent():
    return Agent(
        role = 'Weather Expert',
        goal = 'You will be given a location name and you have to find the weather information about that location using the tools provided to you',
        backstory = "A weather expert who is an expert in providing weather information about any location",
        tools = [Tools.weather_tool],
        verbose = True,
        allow_delegation = False,
    )

Let’s create a task for our weather agent


class Tasks:
  # ... Other tasks
  def weatherTask(agent, query):
    return Task(
        description = dedent(f"""
        Get the location from the user query and find the weather information about that location

        Here is the user query:
        {query}
        """),
        agent = agent,
        expected_output = "The weather information asked for by the user"
    )

Finally, create a node where we will run this agent:


class Nodes:
  # ... Other nodes
  def weatherNode(self, state):
    query = state["query"]
    weatherAgent = Agents.weatherAgent()
    weatherTask = Tasks.weatherTask(agent=weatherAgent, query=query)
    result = weatherTask.execute()
    messages = state["messages"]
    messages.append(result)
    return {"messages": messages}

Let’s create the entry and reply nodes as well


class Nodes:
  # ... Other nodes
  def replyNode(self, state):
    query = state["query"]
    agent = openai_llm.invoke(f"""
      {query}
    """)
    messages = state["messages"]
    messages.append(agent.content)
    return {"messages": messages}

  def entryNode(self, state):
    user_input = state["query"]
    agent = openai_llm.invoke(f"""
      User input
      ---
      {user_input}
      ---
      You are given one user input and you have to perform actions on it based on the given instructions

      Categorize the user input into the below categories
      email_query: If the user wants to generate a reply to a given email
      weather_query: If the user wants any weather info about a given location
      other: If it is any other query

      After categorizing, your final RESPONSE must be in json format with these properties:
      category: category of the user input
      email: If category is 'email_query' then extract the email body from the user input with proper line breaks and add it here, else keep it blank
      query: If category is 'weather_query' or 'other' then add the user's query here, else keep it blank
    """)
    response = json.loads(agent.content)
    return {'email': response["email"], 'query': response['query'], 'category': response['category']}

And we have successfully created all the nodes for our workflow!

Now let’s define the condition function which will decide the conditional flow for our conditional edge based on the category


def where_to_go(state):
  cat = state['category']
  print("Category: ",cat)
  if cat == "email_query":
    return "email"
  elif cat == "weather_query":
    return "weather"
  else:
    return "reply"

And finally let’s create the stateful graph and define our nodes in it


from langgraph.graph import Graph, END, StateGraph
workflow = StateGraph(AgentState)
node = Nodes()
workflow.add_node('entryNode',node.entryNode)
workflow.add_node('weatherNode',node.weatherNode)
workflow.add_node("responder",node.replyNode)
workflow.add_node('emailNode',node.writerNode)

Let’s connect these nodes using edges


workflow.add_conditional_edges('entryNode',where_to_go,{
    "email": "emailNode",
    "weather": "weatherNode",
    "reply": "responder"
})
workflow.add_edge("weatherNode",END)
workflow.add_edge("responder",END)
workflow.add_edge("emailNode",END)

workflow.set_entry_point("entryNode")
app = workflow.compile()

And now it’s time to test our agent 👀!


query = """
Can you reply to this email

Hello,
Thank you for applying to xyz company
can you share me your previous CTC
Thanks,
HR
"""
inputs = {"query": query, "messages": [query]}
result = app.invoke(inputs)
print("Agent Response:",result['messages'][-1])

After running the above code, you will see the query gets categorized as “email_query”, and then using EmailCrew it will generate a reply for the extracted email, which looks like this:


Agent Response: Subject: Re: Application to XYZ Company

Dear [HR's Name],

Thank you for considering my application for the position at XYZ Company.

As per your request, my previous CTC was [mention CTC]. I am open to any negotiation based on the job requirements and the value I can bring to your team at XYZ Company.

Thank you once again for the opportunity. I look forward to potentially furthering the application process.

Best Regards,
[Your Name]

Of course, you can make it better with better prompts, but I will leave that to you so you can experiment with it.

You can also try the queries below for different use cases, and the agent will reply differently.


weather_query: 
```
Is there any chance of rain in delhi today?
```

email_query:
```
Hey can you generate a reply to this email

Hey man,
I just saw your portfolio and quite liked it. Can you tell me which languages do you use the most
Thanks
```

other:
```
Hello how are you?
```

And finally, we have created a multi-purpose agent which can give us weather info, categorize and reply to our emails, and even work as a simple chatbot 🚀! You can also add more nodes or crews of your choice to give it more power and make it more useful.

💡You can get all of the code shown in this blog from here.

Use Cases of LangGraph

As we have already seen, LangGraph can be very useful for creating LLM workflows which are harder to build using normal agents, as LangGraph gives you more flexibility over the agent runtime. There are so many possible workflows you can build as a graph, because a graph is easier to visualize than an agent and allows you to easily manage your code and keep each node or workflow separate.

Let’s take a look at some use cases where you can use langgraph.

  • Agents can be more autonomous: Agents can run more autonomously by adding an idle node where the agent waits for some time, or a node that is triggered by a webhook event and then kicks off a workflow. This could be used to study things like traffic flow, market dynamics, or social behavior.
  • Building Conversational Agents (Chatbots): LangGraph's ability to manage state and handle cycles makes it ideal for creating chatbots that can hold conversations that flow back and forth. The graph can track conversation history and use it to inform future responses, making the interaction more natural (see the sketch after this list).
  • Workflow Automation: LangGraph can automate complex workflows that involve multiple steps and decision points. By defining the steps as nodes in the graph and the decision logic as edges, LangGraph can handle complex tasks easily. LangGraph can also make complex RAG systems easy to visualize and implement.
  • Real-time Decision Making: LangGraph's cyclic nature allows for continuous evaluation and decision making. This could be beneficial for applications like fraud detection systems that need to analyze data streams in real-time and make immediate decisions.
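
Here is the sketch mentioned above: a minimal, hypothetical example of a cyclic chat graph. It reuses the annotated AgentState (the one with operator.add on the messages field) and the openai_llm object from earlier in this blog; the node name, the self-loop, and the stop condition (ending once the message list grows past a fixed length) are purely illustrative assumptions, not a recommended design.


from langgraph.graph import StateGraph, END

def chat_node(state):
  # Reply based on the whole conversation history stored in state
  reply = openai_llm.invoke("\n".join(state["messages"]))
  return {"messages": [reply.content]}

def should_continue(state):
  # Hypothetical stop condition: end the cycle once the history gets long
  return "end" if len(state["messages"]) > 6 else "continue"

chat_workflow = StateGraph(AgentState)
chat_workflow.add_node("chat", chat_node)

# The conditional edge loops back to the same node, which creates the cycle
chat_workflow.add_conditional_edges("chat", should_continue, {
    "continue": "chat",
    "end": END
})
chat_workflow.set_entry_point("chat")
chat_app = chat_workflow.compile()

print(chat_app.invoke({"messages": ["Hello, how are you?"]})["messages"][-1])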

Conclusion

As we discussed in this blog, it becomes very easy to automate large and complex workflows with LangGraph, as it allows you to visualize your entire architecture as a graph, focus on individual workflows, and then integrate them together. The decision-making system gives you more flexibility for reasoning and accuracy.

By understanding its core concepts like nodes, state management, and conditional edges, you can leverage LangGraph's capabilities to create innovative projects. Remember to focus on cyclic workflows where LangGraph shines, and ensure your graphs have well-defined paths to avoid dead ends. With practice, LangGraph can become a valuable tool in your LLM development journey.

Want to Know How AI Automation can Help Your Business?

Whether you run a small business or a large enterprise, AI agents are getting better and better at automating all kinds of business workflows. If you have a business workflow that could be automated using AI agents, or an idea that could become the next big thing in the AI automation industry, feel free to book a call with us and we will be more than happy to turn your ideas into reality.

Thanks for reading 😄.
