# CrewAI

#### **What is crewAI?**

CrewAI is an open-source Python framework for orchestrating autonomous, role-playing AI agents that collaborate to complete complex tasks. It is designed to facilitate the development and management of multi-agent AI systems and is used in a variety of applications, from content creation to financial analysis.

**Key features:**

* Role-Based Architecture: Agents are assigned specific roles and backstories, defining their capabilities and responsibilities within the "crew".
* Agent Orchestration: The framework manages how agents interact, communicate, and work together seamlessly to achieve a common goal.
* Tools and Integrations: Agents can be equipped with tools, such as web search engines, to interact with the outside world and gather information. It integrates with multiple Large Language Model (LLM) providers, giving users flexibility in model choice.
* Flows: A feature that allows developers to create structured, event-driven workflows, manage state, and control execution flow for multi-step processes.
* Scalability and Flexibility: The framework is built to be fast, lean, and production-ready, suitable for developing both basic and complex applications.


Connecting **crewAI** with nexos.ai**’s models and assistants** gives teams instant, secure workflow intelligence. Every interaction uses authenticated **API‑key access**, ensuring sensitive company data remains safe within nexos.ai while providing full observability and control over API usage.


To connect your nexos.ai **API** with the **crewAI UI**, go through the following steps:

1. **Generate your personal API key**

Log in to your nexos.ai account and head over to the User profile section by clicking on the profile bubble in the top right corner of the UI.

<figure><img src="/files/H3ca3oHkU6XjfPrDJkrO" alt="How to generate a personal API key"><figcaption>How to generate a personal API key</figcaption></figure>

2. **Create LLM Connection**

Navigate to LLM Connections and create a connection that routes through nexos.ai. Add custom models, identified by UUID or name.

* Fill in the **LLM Connection Name**.
* Select **custom-openai-compatible** as the provider.
* Set the `OPENAI_BASE_URL` to `https://api.nexos.ai/v1`.
* Fill in the `OPENAI_API_KEY` with your **team API key** or the personal API key generated in the previous step.
* Create custom models to match nexos.ai model names or **UUIDs**.
* Submit by clicking the **Add Connection** button.

<figure><img src="/files/6anhZSCbLu6haGmESc6R" alt=""><figcaption></figcaption></figure>

3. **Configure and execute workflow**\
   With the configuration in place, create a fresh project and use nexos.ai as a proxy to the LLMs.

<figure><img src="/files/E8j0FCCteY7poaW3XdvV" alt=""><figcaption></figcaption></figure>

Try out your workflow to see if the nodes have been configured correctly.

4. **CrewAI also offers a CLI**

This section guides you through integrating CrewAI with nexos.ai using its command-line interface.

* Install dependencies.
* Define environment variables:\
  `NEXOS_API_KEY=team-api-key`\
  `NEXOS_BASE_URL=https://api.nexos.ai/v1`
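Before wiring these variables into CrewAI, it can help to sanity-check them at startup. The helper below is a minimal sketch; the function name `validate_nexos_config` is illustrative and not part of CrewAI or nexos.ai:

```python
import os
from urllib.parse import urlparse

def validate_nexos_config(env=os.environ):
    """Check that the nexos.ai gateway settings look usable before running a crew."""
    base_url = env.get("NEXOS_BASE_URL", "")
    api_key = env.get("NEXOS_API_KEY", "")
    parsed = urlparse(base_url)
    if parsed.scheme != "https" or not parsed.netloc:
        raise ValueError("NEXOS_BASE_URL must be a full https URL")
    if not api_key:
        raise ValueError("NEXOS_API_KEY is not set")
    # Strip a trailing slash so later path joins stay predictable
    return base_url.rstrip("/"), api_key
```

Calling this once before constructing the `LLM` object turns a misconfigured `.env` file into an immediate, descriptive error instead of a failed API call mid-run.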

**Sample implementation using nexos.ai**

```python
import os

from crewai import Agent, Task, Crew, LLM
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
# --- Configuration ---
NEXOS_BASE_URL = os.getenv("NEXOS_BASE_URL")
NEXOS_API_KEY = os.getenv("NEXOS_API_KEY")
if not NEXOS_BASE_URL or not NEXOS_API_KEY:
    raise ValueError("Please set NEXOS_BASE_URL and NEXOS_API_KEY in your .env file")


# Configure the LLM via CrewAI's LLM class, pointing at the nexos.ai gateway
nexos_llm = LLM(
    model="GPT 5 mini",  # or any other OpenAI-compatible model / assistant ID available at nexos.ai
    base_url=NEXOS_BASE_URL,  # e.g. "https://api.nexos.ai/v1"
    api_key=NEXOS_API_KEY
)

# Define an agent that uses the Nexos-configured LLM
researcher = Agent(
    role='Researcher',
    goal='Gather information about the most popular programming languages in 2023.',
    backstory='You are an AI researcher specializing in technology trends.',
    llm=nexos_llm,  # Pass the configured LLM
    allow_delegation=False  # keep delegation off for a simple example
)

# Define a task:  Ask the researcher to gather information about programming languages.
task = Task(
    description='Research and summarize the five most popular programming languages in 2023, including their key features and use cases. Provide a concise paragraph for each language.',
    expected_output='A summary of the five most popular programming languages in 2023, including their key features and use cases.',
    agent=researcher
)

# Create a crew with the researcher agent and the task.
research_crew = Crew(agents=[researcher], tasks=[task], verbose=True)

# Kick off the crew to execute the task.
result = research_crew.kickoff()

# Print the results.
print(result)
print("CrewAI Agent configured to use Nexos LLM and completed a task.")

```

You can use any OpenAI-compatible model. To check which models are available to you, call the [Gateway API | nexos.ai documentation](https://docs.nexos.ai/gateway-api#get-v1-models). You can use either `nexos_model_id` or `id` as the model name.
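As a sketch of that call using only the standard library, the snippet below builds the authenticated `GET /models` request; the helper names `build_models_request` and `list_model_ids` are our own, and the response shape is assumed to follow the usual OpenAI-compatible `{"data": [...]}` envelope:

```python
import json
import urllib.request

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    # GET /models on the nexos.ai gateway, authenticated with the API key
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_model_ids(base_url: str, api_key: str) -> list[str]:
    with urllib.request.urlopen(build_models_request(base_url, api_key)) as resp:
        payload = json.load(resp)
    # OpenAI-compatible listings wrap the models in a "data" array
    return [model["id"] for model in payload.get("data", [])]
```

The `id` values returned here are the strings you can pass as `model` when constructing the CrewAI `LLM` object.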


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nexos.ai/gateway-api/integrations/crewai.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
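For example, a minimal way to build such a request URL with correct percent-encoding (the helper name `build_ask_url` is illustrative):

```python
from urllib.parse import urlencode

PAGE_URL = "https://docs.nexos.ai/gateway-api/integrations/crewai.md"

def build_ask_url(question: str, page_url: str = PAGE_URL) -> str:
    # Encode the question and attach it as the `ask` query parameter
    return f"{page_url}?{urlencode({'ask': question})}"
```

The resulting URL can then be fetched with any HTTP client via a plain GET request.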
