# LangChain

**What is LangChain?**\
LangChain is a comprehensive framework for developing applications powered by Large Language Models (LLMs). It provides:

* A unified interface to interact with various model providers
* Tools to manage conversation history
* Primitives for building complex chains and agents

**Sample implementation using nexos.ai**

Below are minimal examples in Python and TypeScript showing how to use LangChain’s `ChatOpenAI` client with the [nexos.ai](http://nexos.ai/) Gateway (OpenAI-compatible) endpoint.

Python:

```python
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Load environment variables from .env file
load_dotenv()

# --- Configuration ---
NEXOS_BASE_URL = os.getenv("NEXOS_BASE_URL")
NEXOS_API_KEY = os.getenv("NEXOS_API_KEY")

if not NEXOS_BASE_URL or not NEXOS_API_KEY:
    raise ValueError("Please set NEXOS_BASE_URL and NEXOS_API_KEY in your .env file")

def main():
    # Initialize the ChatOpenAI client
    llm = ChatOpenAI(
        model="gpt-4.1",          # or any other OpenAI-compatible model ID available to you
        base_url=NEXOS_BASE_URL,  # e.g. "https://api.nexos.ai/v1"
        api_key=NEXOS_API_KEY,
    )

    # Create a simple message sequence
    messages = [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="Hello world!"),
    ]

    try:
        response = llm.invoke(messages)
        print("\n--- Response from AI ---")
        print(response.content)
        print("------------------------")
    except Exception as e:
        print(f"\nError communicating with the API: {e}")

if __name__ == "__main__":
    main()

```

TypeScript:

```typescript
import * as dotenv from "dotenv";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

// Load environment variables from .env file
dotenv.config();

// --- Configuration ---
const NEXOS_BASE_URL = process.env.NEXOS_BASE_URL;
const NEXOS_API_KEY = process.env.NEXOS_API_KEY;

if (!NEXOS_BASE_URL || !NEXOS_API_KEY) {
  throw new Error("Please set NEXOS_BASE_URL and NEXOS_API_KEY in your .env file");
}

async function main() {
  // Initialize the ChatOpenAI client
  const llm = new ChatOpenAI({
    model: "gemini-2.5-flash", // or any other OpenAI-compatible model ID available to you
    apiKey: NEXOS_API_KEY,
    configuration: {
      baseURL: NEXOS_BASE_URL,
    },
    temperature: 0.7,
  });

  // Create a simple message sequence
  const messages = [
    new SystemMessage("You are a helpful assistant."),
    new HumanMessage("Hello world!"),
  ];

  try {
    const response = await llm.invoke(messages);
    console.log("\n--- Response from AI ---");
    console.log(response.content);
    console.log("------------------------");
  } catch (e) {
    console.error(`\nError communicating with the API: ${e}`);
  }
}

main();
```

You can use any OpenAI-compatible model. To check which models are available to you, call the models endpoint described in the [Gateway API | nexos.ai documentation](https://docs.nexos.ai/gateway-api#get-v1-models). You can use either `nexos_model_id` or `id` as the model identifier.
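As a minimal sketch of that lookup, assuming the Gateway exposes an OpenAI-style `GET {NEXOS_BASE_URL}/models` endpoint that returns a JSON body shaped like `{"data": [{"id": ...}, ...]}` (the helper names `extract_model_ids` and `list_models` below are illustrative, not part of any SDK):

```python
import json
import os
import urllib.request


def extract_model_ids(payload: dict) -> list[str]:
    """Pull the model identifiers out of a /v1/models-style response body."""
    return [model["id"] for model in payload.get("data", [])]


def list_models(base_url: str, api_key: str) -> list[str]:
    # Assumed endpoint shape: GET {base_url}/models with a Bearer token.
    request = urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(request) as response:
        return extract_model_ids(json.load(response))


if __name__ == "__main__" and "NEXOS_API_KEY" in os.environ:
    ids = list_models(os.environ["NEXOS_BASE_URL"], os.environ["NEXOS_API_KEY"])
    print("\n".join(ids))
```

Any `id` printed by this script should be usable as the `model` argument in the `ChatOpenAI` examples above.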

---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nexos.ai/gateway-api/integrations/langchain.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
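The request above can be sketched as a small helper that URL-encodes the question into the `ask` query parameter (`build_ask_url` is a hypothetical helper name, and the sample question is illustrative):

```python
from urllib.parse import quote

DOC_URL = "https://docs.nexos.ai/gateway-api/integrations/langchain.md"


def build_ask_url(question: str) -> str:
    """Build the documentation-query URL for a natural-language question."""
    # quote() percent-encodes spaces and other reserved characters.
    return f"{DOC_URL}?ask={quote(question)}"


# A GET on the resulting URL returns an answer with relevant excerpts.
print(build_ask_url("How do I set a custom timeout?"))
```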
