# Langfuse

**What is Langfuse?**\
Langfuse is an open-source LLM observability and analytics platform. It provides comprehensive tracing, monitoring, and evaluation capabilities for LLM applications. Langfuse captures detailed traces of your LLM interactions including inputs, outputs, tool usage, latencies, and costs, enabling you to debug, analyze, and improve your AI applications.

**Sample implementation using nexos.ai**\
This example demonstrates how to integrate Langfuse tracing with LangChain when using nexos.ai gateway.

* **Langfuse Setup**: Initializes the Langfuse CallbackHandler for automatic tracing.
* **LangChain Integration**: Passes the callback handler to LangChain invocations.
* **Custom Gateway**: Uses the custom `NEXOS_BASE_URL` for LLM calls.
* **Observability**: All LLM interactions are automatically logged to Langfuse for analysis.

```python
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langfuse import get_client
from langfuse.langchain import CallbackHandler

# Load environment variables
load_dotenv()

# --- Configuration ---
NEXOS_BASE_URL = os.getenv("NEXOS_BASE_URL")
NEXOS_API_KEY = os.getenv("NEXOS_API_KEY")

LANGFUSE_SECRET_KEY = os.getenv("LANGFUSE_SECRET_KEY")
LANGFUSE_PUBLIC_KEY = os.getenv("LANGFUSE_PUBLIC_KEY")
LANGFUSE_HOST = os.getenv("LANGFUSE_HOST", "https://cloud.langfuse.com")

if not NEXOS_BASE_URL or not NEXOS_API_KEY:
    raise ValueError("Please set NEXOS_BASE_URL and NEXOS_API_KEY in your .env file")

if not LANGFUSE_SECRET_KEY or not LANGFUSE_PUBLIC_KEY:
    raise ValueError("Please set LANGFUSE_SECRET_KEY and LANGFUSE_PUBLIC_KEY in your .env file")

# --- Initialize Langfuse ---
# The Langfuse client is initialized automatically from environment variables
langfuse = get_client()

# Create the Langfuse callback handler for LangChain tracing
langfuse_handler = CallbackHandler()

def main():
    print("--- LangChain with Langfuse Tracing ---")

    # Initialize the ChatOpenAI client with nexos.ai gateway
    llm = ChatOpenAI(
        model="gemini-2.5-flash", # or any other OpenAI-compatible model ID available to you
        base_url=NEXOS_BASE_URL, # e.g. "https://api.nexos.ai/v1"
        api_key=NEXOS_API_KEY,
        temperature=0.7,
    )

    # Create a simple message sequence
    messages = [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="What are the benefits of observability in LLM applications?"),
    ]

    try:
        # Invoke the model with Langfuse tracing
        # The callback handler automatically captures all interactions
        response = llm.invoke(
            messages,
            config={
                "callbacks": [langfuse_handler],
                "metadata": {
                    "langfuse_user_id": "demo-user",
                    "langfuse_session_id": "demo-session",
                    "langfuse_tags": ["demo", "observability"]
                }
            }
        )
        
        print("--- Response from AI ---")
        print(response.content)
        print("------------------------")
        
        print("\n✓ Trace logged to Langfuse")
        print(f"  View at: {LANGFUSE_HOST}")
        
    except Exception as e:
        print(f"\nError communicating with the API: {e}")
    
    finally:
        # Flush events to ensure they are sent to Langfuse
        langfuse.flush()

if __name__ == "__main__":
    main()
```

You can use any OpenAI-compatible model. To check which models are available to you, call the [Gateway API | nexos.ai documentation](https://docs.nexos.ai/gateway-api#get-v1-models). You can use either `nexos_model_id` or `id` as the model identifier.
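As a rough sketch of that lookup, the snippet below builds a GET request against a `/models` endpoint under the gateway base URL, reusing the `NEXOS_BASE_URL` and `NEXOS_API_KEY` variables from the example above. The exact response shape (and whether each entry exposes both `id` and `nexos_model_id`) is an assumption here; consult the linked Gateway API reference for the authoritative schema.

```python
import json
import os
import urllib.request


def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the gateway's model-list endpoint.

    Assumes the endpoint lives at <base_url>/models and uses
    Bearer authentication, mirroring the OpenAI-compatible API.
    """
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )


if __name__ == "__main__":
    req = build_models_request(
        os.environ["NEXOS_BASE_URL"], os.environ["NEXOS_API_KEY"]
    )
    with urllib.request.urlopen(req) as resp:
        for model in json.load(resp).get("data", []):
            # Either field may be passed as `model` in ChatOpenAI
            print(model.get("id"), model.get("nexos_model_id"))
```

Whichever identifier you pick can then be dropped into the `model=` argument of `ChatOpenAI` in the example above.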


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nexos.ai/gateway-api/integrations/langfuse.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
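The request above can be issued programmatically. One minimal sketch, using only the Python standard library, percent-encodes the question and appends it as the `ask` query parameter (the helper name `build_ask_url` is illustrative, not part of any nexos.ai SDK):

```python
from urllib.parse import quote


def build_ask_url(page_url: str, question: str) -> str:
    """Append a percent-encoded `ask` query parameter to a docs page URL."""
    return f"{page_url}?ask={quote(question)}"


url = build_ask_url(
    "https://docs.nexos.ai/gateway-api/integrations/langfuse.md",
    "How do I set a custom Langfuse host?",
)
# Perform a GET on `url` with any HTTP client to receive the answer.
```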
