# LangGraph

**What is LangGraph?**\
LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of LangChain. It allows you to define flows as graphs, where nodes are processing steps (like LLM calls) and edges define the control flow. This is particularly useful for building agents, cyclic workflows, and complex conversational applications.

**Sample implementation using nexos.ai**

This example, written in Python and TypeScript, demonstrates a simple chatbot graph connected to the nexos.ai Gateway.

* **State**: Defines a simple state containing a list of messages.
* **Graph**: Creates a `StateGraph` with a single node (`chatbot`) that calls the LLM.
* **Custom Gateway**: The LLM is configured to point to the custom `NEXOS_BASE_URL`.
* **Execution**: Runs the graph with a user message and prints the response.

Python:

```python
import os
from typing import Annotated
from typing_extensions import TypedDict

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

# Load environment variables
load_dotenv()

# --- Configuration ---
NEXOS_BASE_URL = os.getenv("NEXOS_BASE_URL")
NEXOS_API_KEY = os.getenv("NEXOS_API_KEY")

if not NEXOS_BASE_URL or not NEXOS_API_KEY:
    raise ValueError("Please set NEXOS_BASE_URL and NEXOS_API_KEY in your .env file")

# --- 1. Define the State ---
class State(TypedDict):
    # The 'add_messages' reducer appends new messages to the existing list
    messages: Annotated[list[BaseMessage], add_messages]

# --- 2. Initialize the LLM ---
llm = ChatOpenAI(
    model="gemini-2.5-flash", # or any other OpenAI-compatible model ID available to you
    base_url=NEXOS_BASE_URL,  # e.g. "https://api.nexos.ai/v1"
    api_key=NEXOS_API_KEY,
    temperature=0.7,
)

# --- 3. Define Nodes ---
def chatbot_node(state: State):
    """
    Invokes the LLM with the current history of messages.
    Returns a dictionary with the new message to be added to the state.
    """
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# --- 4. Build the Graph ---
builder = StateGraph(State)

# Add nodes
builder.add_node("chatbot", chatbot_node)

# Add edges (Simple linear flow: Start -> Chatbot -> End)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)

# Compile the graph
graph = builder.compile()

def main():
    print("--- Starting LangGraph Execution ---")

    # Initial input to the graph
    initial_input = {"messages": [HumanMessage(content="Hello! Explain the concept of a 'graph' in one sentence.")]}

    # Stream the execution
    # The stream yields events as the graph progresses
    for event in graph.stream(initial_input):
        for node_name, value in event.items():
            print(f"\n--- Output from node '{node_name}' ---")
            last_message = value["messages"][-1]
            print(last_message.content)
            print("--------------------------------------")

if __name__ == "__main__":
    main()
```

TypeScript:

```typescript
import * as dotenv from "dotenv";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, BaseMessage } from "@langchain/core/messages";
import { StateGraph, START, END, Annotation } from "@langchain/langgraph";

// Load environment variables from .env file
dotenv.config();

// --- Configuration ---
const NEXOS_BASE_URL = process.env.NEXOS_BASE_URL; // e.g. "https://api.nexos.ai/v1"
const NEXOS_API_KEY = process.env.NEXOS_API_KEY;

if (!NEXOS_BASE_URL || !NEXOS_API_KEY) {
  throw new Error("Please set NEXOS_BASE_URL and NEXOS_API_KEY in your .env file");
}

// --- 1. Define the State ---
const StateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
  }),
});

// --- 2. Initialize the LLM ---
const llm = new ChatOpenAI({
  model: "gemini-2.5-flash", // or any other OpenAI-compatible model ID available to you
  apiKey: NEXOS_API_KEY,
  configuration: {
    baseURL: NEXOS_BASE_URL,
  },
  temperature: 0.7,
});

// --- 3. Define Nodes ---
/**
 * Invokes the LLM with the current history of messages.
 * Returns an object with the new message to be added to the state.
 */
async function chatbotNode(state: typeof StateAnnotation.State) {
  const response = await llm.invoke(state.messages);
  return { messages: [response] };
}

// --- 4. Build the Graph ---
const builder = new StateGraph(StateAnnotation)
  .addNode("chatbot", chatbotNode)
  .addEdge(START, "chatbot")
  .addEdge("chatbot", END);

// Compile the graph
const graph = builder.compile();

async function main() {
  console.log("--- Starting LangGraph Execution ---");

  // Initial input to the graph
  const initialInput = {
    messages: [new HumanMessage("Hello! Explain the concept of a 'graph' in one sentence.")],
  };

  try {
    // Stream the execution
    const stream = await graph.stream(initialInput);
    
    for await (const event of stream) {
      for (const [nodeName, value] of Object.entries(event)) {
        console.log(`\n--- Output from node '${nodeName}' ---`);
        const stateValue = value as typeof StateAnnotation.State;
        const lastMessage = stateValue.messages[stateValue.messages.length - 1];
        console.log(lastMessage.content);
        console.log("--------------------------------------");
      }
    }
  } catch (e) {
    console.error(`\nError during graph execution: ${e}`);
  }
}

main();
```

You can use any OpenAI-compatible model. To check which models are available to you, call the [Gateway API | nexos.ai documentation](https://docs.nexos.ai/gateway-api#get-v1-models) models endpoint. You can pass either the `nexos_model_id` or the `id` as the model identifier.
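A quick way to inspect that list from Python using only the standard library; the `list_models` helper below is illustrative and assumes the Gateway follows the OpenAI-style `GET /v1/models` response shape (`{"data": [{"id": ...}]}`):

```python
import json
import urllib.request

def models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the Gateway's /models endpoint."""
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_models(base_url: str, api_key: str) -> list[str]:
    """Return the model IDs available to this API key."""
    with urllib.request.urlopen(models_request(base_url, api_key)) as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload["data"]]

# Example (requires valid credentials):
# print(list_models("https://api.nexos.ai/v1", "<your-api-key>"))
```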


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nexos.ai/gateway-api/integrations/langgraph.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
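From code, the question only needs to be URL-encoded before being appended as the `ask` parameter. A small sketch using the standard library (the `ask_url` helper name is illustrative):

```python
from urllib.parse import urlencode

def ask_url(page_url: str, question: str) -> str:
    """Append a URL-encoded 'ask' query parameter to a documentation page URL."""
    return f"{page_url}?{urlencode({'ask': question})}"

url = ask_url(
    "https://docs.nexos.ai/gateway-api/integrations/langgraph.md",
    "What models are available?",
)
print(url)
# https://docs.nexos.ai/gateway-api/integrations/langgraph.md?ask=What+models+are+available%3F
```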
