# Codex CLI

### What is Codex CLI? <a href="#what-is-codex-cli" id="what-is-codex-cli"></a>

Codex CLI is OpenAI's open-source command-line coding agent that runs locally in your terminal. It can read, modify, and execute code in the directory you select, acting as an autonomous development assistant inside your existing workflow. Codex CLI helps you implement features, answer questions about a codebase, fix bugs, and propose pull requests for review.

**Key features:**

* **Local Code Execution**\
  Runs directly on your machine, reading and modifying files in your project directory with full access to your local environment.
* **Interactive Terminal Mode**\
  Supports interactive conversations, allowing you to resume previous sessions and iterate on tasks through natural-language interactions.
* **Multi-Model Support**\
  Works with various AI models and supports custom model providers via configuration file, enabling flexibility in choosing your preferred LLM backend.
* **Code Generation & Refactoring**\
  Instantly generates code snippets, refactors functions, and implements new features based on your prompts.
* **Intelligent Reasoning**\
  Leverages advanced reasoning capabilities to understand complex codebases and provide context-aware solutions.
* **Image Input Support**\
  Can process image inputs for visual context, enabling more comprehensive understanding of design specifications or diagrams.
* **Web Search Integration**\
  Built-in web search capability to find relevant documentation and solutions while coding.
* **Local Code Review**\
  Performs automated code reviews on your local changes before committing.
* **Version Control & Sandboxing**\
  Changes stay under version control, and execution is sandboxed and limited to the selected working directory for safety.

***

### How to Connect Codex CLI with nexos.ai <a href="#how-to-connect-codex-cli-with-nexos.ai" id="how-to-connect-codex-cli-with-nexos.ai"></a>

To connect **nexos.ai API** with **Codex CLI**, follow these steps:

#### 1. Install Codex CLI <a href="#id-1.-install-codex-cli" id="id-1.-install-codex-cli"></a>

**Option A: Using npm (All platforms)**

Install the Codex CLI globally using npm:

```
npm install -g @openai/codex
```

**Option B: Using Homebrew (macOS)**

On macOS, you can also install Codex CLI using Homebrew:

```
brew install codex
```

#### 2. Set Up Your API Key <a href="#id-2.-set-up-your-api-key" id="id-2.-set-up-your-api-key"></a>

Export your API key as an environment variable. Add this to your shell profile (e.g., `~/.bashrc`, `~/.zshrc`, or `~/.bash_profile`):

```
export NEXOS_AI_API_KEY="your-team-api-key-here"
```

Then reload your shell configuration:

```
source ~/.bashrc  # or ~/.zshrc depending on your shell
```
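A missing or empty `NEXOS_AI_API_KEY` is the most common cause of authentication errors later on. As a quick sanity check, you can verify the variable is visible to child processes; the sketch below is a minimal illustration (the `require_api_key` helper is ours, not part of Codex CLI):

```
import os

def require_api_key(env, key="NEXOS_AI_API_KEY"):
    """Return the key's value, or raise with a pointer back to the setup step."""
    value = (env or {}).get(key, "").strip()
    if not value:
        raise RuntimeError(f"{key} is not set; export it in your shell profile and reload it")
    return value

# Simulated environment for illustration; in practice, pass os.environ.
print(require_api_key({"NEXOS_AI_API_KEY": "your-team-api-key-here"}))
```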

#### 3. Configure Codex CLI <a href="#id-3.-configure-codex-cli" id="id-3.-configure-codex-cli"></a>

Create or edit the configuration file at `~/.codex/config.toml`:

```
model = "<your-model-uuid-or-name>"

[model_providers.nexosai]
name = "nexos.ai"
base_url = "https://api.nexos.ai/v1"
env_key = "NEXOS_AI_API_KEY"
# Use "responses" for models supporting the responses endpoint (e.g., OpenAI models)
# Use "chat" for models that do not support the responses endpoint (e.g., some Claude models)
wire_api = "responses"
model_verbosity = "high"
```

**Configuration Options Explained:**

| **Option**        | **Description**                                                                                                                                     |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| `model`           | The model UUID or name from nexos.ai that you want to use as default                                                                                |
| `name`            | Display name for the provider                                                                                                                       |
| `base_url`        | nexos.ai API endpoint: `https://api.nexos.ai/v1`                                                                                                    |
| `env_key`         | The environment variable name containing your API key                                                                                               |
| `wire_api`        | API format - use `"responses"` for models supporting the responses endpoint, or `"chat"` for models that only support the chat completions endpoint |
| `model_verbosity` | Level of model output detail (`"high"`, `"medium"`, or `"low"`)                                                                                     |

> **Note:** If you encounter issues with certain models, try switching `wire_api` from `"responses"` to `"chat"`. Some models (particularly non-OpenAI models) may not support the responses endpoint and require the chat completions API instead.

#### 4. (Optional) Configure Project Trust Levels <a href="#id-4.-optional-configure-project-trust-levels" id="id-4.-optional-configure-project-trust-levels"></a>

For specific project directories, you can set trust levels:

```
model = "<your-model-uuid-or-name>"

[projects."/Users/[user]/code/your-project"]
trust_level = "untrusted"

[model_providers.nexosai]
name = "nexos.ai"
base_url = "https://api.nexos.ai/v1"
env_key = "NEXOS_AI_API_KEY"
# Use "responses" for models supporting the responses endpoint (e.g., OpenAI models)
# Use "chat" for models that do not support the responses endpoint (e.g., some Claude models)
wire_api = "responses"
model_verbosity = "high"

```

**Trust levels:**

* `untrusted` - Codex will ask for confirmation before executing commands
* `trusted` - Allows automatic execution within the project scope

#### 5. Start Using Codex CLI <a href="#id-5.-start-using-codex-cli" id="id-5.-start-using-codex-cli"></a>

Navigate to your project directory and run Codex CLI with the nexos.ai provider:

```
codex --config model_provider="nexosai"
```

Or run with a specific prompt:

```
codex --config model_provider="nexosai" "explain this codebase structure"
```

> **Note:** The `--config model_provider="nexosai"` flag tells Codex CLI to use the [nexos.ai](http://nexos.ai/) provider defined in your configuration file.

***

### Using Different Models <a href="#using-different-models" id="using-different-models"></a>

#### Fetching Available Models <a href="#fetching-available-models" id="fetching-available-models"></a>

You can retrieve the list of models available to your workspace using the API:

```
curl -X GET "https://api.nexos.ai/v1/models" \
  -H "Authorization: Bearer $NEXOS_AI_API_KEY"
```

> **API Reference:** See the full API documentation at [Gateway API | nexos.ai documentation](https://docs.nexos.ai/gateway-api#get-v1-models)
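To pick a value for the `model` setting, you only need each model's `id` field from the response. The sketch below extracts the IDs from a hypothetical payload in the OpenAI-compatible "list" shape; check the API reference above for the exact fields nexos.ai returns:

```
import json

# Hypothetical /v1/models response body (OpenAI-compatible "list" shape).
raw = json.dumps({
    "object": "list",
    "data": [
        {"id": "6ff8398f-9276-4756-a9f2-f66b069f1d32", "object": "model"},
        {"id": "claude-opus-4-1-20250805", "object": "model"},
    ],
})

# Each "id" can be used directly as the `model` value in config.toml.
model_ids = [m["id"] for m in json.loads(raw)["data"]]
print(model_ids)
```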

#### Configuring Models <a href="#configuring-models" id="configuring-models"></a>

You can use **either the model name or UUID** in your configuration. Update the `model` value in your `~/.codex/config.toml`:

```
# Using a model UUID
model = "6ff8398f-9276-4756-a9f2-f66b069f1d32"

# Or the name of any other model available in your nexos.ai workspace:
# model = "claude-opus-4-1-20250805"
```

***

### Troubleshooting <a href="#troubleshooting" id="troubleshooting"></a>

| **Issue**               | **Solution**                                                                                                   |
| ----------------------- | -------------------------------------------------------------------------------------------------------------- |
| Authentication errors   | Verify `NEXOS_AI_API_KEY` is correctly set in your environment                                                 |
| Model not found         | Use the models API endpoint to fetch available models and confirm the model name/UUID exists in your workspace |
| Config not loading      | Ensure the config file is at `~/.codex/config.toml` with correct TOML syntax                                   |
| Provider not recognized | Make sure to use `--config model_provider="nexosai"` when running codex                                        |
| API format errors       | Try switching `wire_api` from `"responses"` to `"chat"` if your model doesn't support the responses endpoint   |

***

### Benefits of Using nexos.ai with Codex CLI <a href="#benefits-of-using-nexos.ai-with-codex-cli" id="benefits-of-using-nexos.ai-with-codex-cli"></a>

By routing Codex CLI through nexos.ai, you gain access to:

* **Multiple LLMs** - Switch between different models (Claude, GPT, and others) without changing configurations
* **Cost Tracking** - Monitor AI spend with usage visibility across teams
* **Intelligent Caching** - Reduce redundant API calls and optimize costs
* **Centralized Logging** - Full observability for every prompt and operation
* **Load Balancing** - Automatic failover to keep your development workflow reliable


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nexos.ai/gateway-api/integrations/codex-cli.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
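Since the question is passed as a query-string parameter, it must be URL-encoded. A minimal sketch of building such a request URL (the `ask_url` helper and the sample question are ours, for illustration):

```
from urllib.parse import quote

def ask_url(page_url, question):
    """Build the documentation query URL for a natural-language question."""
    return f"{page_url}?ask={quote(question)}"

url = ask_url(
    "https://docs.nexos.ai/gateway-api/integrations/codex-cli.md",
    "Which wire_api value should I use for Claude models?",
)
print(url)
```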
