# CLI Reference
The Gambi CLI provides commands for managing hubs, rooms, and participants.
All commands support interactive mode — run without flags in a terminal and you’ll be guided through each option step by step. Flags still work for scripting and automation.
If you’re coming from the old package and binary names, read Migrate from Gambiarra.
## Installation

Linux/macOS:

```sh
curl -fsSL https://raw.githubusercontent.com/arthurbm/gambi/main/scripts/install.sh | bash
```

Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/arthurbm/gambi/main/scripts/install.ps1 | iex
```

Via a package manager:

```sh
npm install -g gambi
# or
bun add -g gambi
```

The published gambi package is a wrapper that installs only the matching platform binary for your machine.
## Uninstallation

Linux/macOS:

```sh
curl -fsSL https://raw.githubusercontent.com/arthurbm/gambi/main/scripts/uninstall.sh | bash
```

Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/arthurbm/gambi/main/scripts/uninstall.ps1 | iex
```

Via a package manager:

```sh
npm uninstall -g gambi
# or
bun remove -g gambi
```

## Interactive Mode
When you run any command without its required flags in a terminal (a TTY), the CLI enters interactive mode and prompts you for each option:
```
┌ gambi join
│
◇ Room code:
│ ABC123
│
◆ LLM Provider:
│ ● Ollama (localhost:11434)
│ ○ LM Studio (localhost:1234)
│ ○ vLLM (localhost:8000)
│ ○ Custom URL
│
◇ Select model:
│ llama3.2
│
└ Joined room ABC123!
```

Interactive mode is disabled when piping input (`echo "x" | gambi create`), so scripts work as before.
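The TTY check that drives this behavior can be reproduced in plain shell; this is a sketch of the detection, not Gambi's actual code:

```sh
# A command reading from a pipe sees a non-TTY stdin, which is how a CLI
# knows to skip prompts. `[ -t 0 ]` performs the same check on fd 0:
echo "x" | sh -c 'if [ -t 0 ]; then echo interactive; else echo piped; fi'
```

Because stdin here comes from a pipe, this prints `piped` even when run from a terminal.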
## Commands

### serve

Start a hub server.
```sh
# Interactive — prompts for port, host, mDNS:
gambi serve

# With flags:
gambi serve [options]
```

Options:
| Option | Description | Default |
|---|---|---|
| --port, -p | Port to listen on | 3000 |
| --host, -h | Host to bind to | 0.0.0.0 |
| --mdns, -m | Enable mDNS auto-discovery | false |
| --quiet, -q | Suppress logo output | false |
Example:

```sh
gambi serve --port 3000 --mdns
```

### create
Create a new room on a hub.
```sh
# Interactive — prompts for name and password:
gambi create

# With flags:
gambi create --name "Room Name" [options]
```

Options:
| Option | Description | Default |
|---|---|---|
| --name, -n | Room name | Required (prompted in interactive mode) |
| --password, -p | Password to protect the room | None |
| --hub, -H | Hub URL | http://localhost:3000 |
Examples:

```sh
# Create a room interactively
gambi create

# Create with flags
gambi create --name "My Room"

# Create on a custom hub
gambi create --name "My Room" --hub http://192.168.1.10:3000

# Create a password-protected room
gambi create --name "My Room" --password secret123
```

### join

Join a room and expose your LLM endpoint.
```sh
# Interactive — select provider, model, set nickname:
gambi join

# With flags:
gambi join --code <room-code> --model <model> [options]
```

Options:
| Option | Description | Default |
|---|---|---|
| --code, -c | Room code to join | Required (prompted in interactive mode) |
| --model, -m | Model to expose | Required (prompted in interactive mode) |
| --endpoint, -e | Local LLM endpoint URL used for probing and inference | http://localhost:11434 |
| --network-endpoint | Network-reachable URL to publish to the hub | Auto-detected when needed |
| --nickname, -n | Display name | Auto-generated |
| --header | Auth header in the format Header=Value | None |
| --header-env | Auth header in the format Header=ENV_VAR | None |
| --password, -p | Room password (if protected) | None |
| --hub, -H | Hub URL | http://localhost:3000 |
| --no-specs | Don’t share machine specs | false |
| --no-network-rewrite | Disable automatic localhost-to-LAN rewrite for remote hubs | false |
The CLI automatically probes your local endpoint to detect available models and protocol capabilities (Responses API vs Chat Completions).
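The probe itself isn't documented here, but all of the listed providers answer the OpenAI-compatible `GET /v1/models` route, so a sketch of what such a probe sees looks like this (the sample response stands in for a live server):

```sh
# Extract model ids from an OpenAI-compatible /v1/models response.
# In a live setup you would fetch this with, e.g.:
#   curl -fsS http://localhost:11434/v1/models
response='{"data":[{"id":"llama3.2"},{"id":"mistral"}]}'
echo "$response" | python3 -c '
import json, sys
for model in json.load(sys.stdin)["data"]:
    print(model["id"])
'
```

These are the model ids the CLI would offer in its interactive picker.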
When the hub is remote and your local endpoint is loopback-only (for example http://localhost:11434), Gambi tries to publish a LAN-reachable URL automatically. In interactive mode, the CLI explains the rewrite and lets you confirm or override it. Use --network-endpoint when you want to publish a specific URL yourself.
In interactive mode, you’ll select your LLM provider from a list (Ollama, LM Studio, vLLM, or custom URL), optionally add auth headers, and then choose from the detected models.
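The `Header=ENV_VAR` form keeps secrets out of shell history and process listings: the CLI reads the value from the named environment variable. A sketch of how such a pair resolves (the parsing below is illustrative, not Gambi's implementation):

```sh
export OPENROUTER_AUTH="Bearer sk-or-example"   # illustrative secret value
pair="Authorization=OPENROUTER_AUTH"            # as passed to --header-env
name="${pair%%=*}"                              # header name: Authorization
var="${pair#*=}"                                # env var name: OPENROUTER_AUTH
printf '%s: %s\n' "$name" "$(printenv "$var")"  # the header actually sent
```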
Examples:
# Join interactively — guided through all optionsgambi join
# Join with Ollamagambi join --code ABC123 --model llama3
# Join with LM Studiogambi join --code ABC123 \ --model mistral \ --endpoint http://localhost:1234
# Join a remote hub and publish an explicit LAN URLgambi join --code ABC123 \ --hub http://192.168.1.10:3000 \ --model llama3 \ --endpoint http://localhost:11434 \ --network-endpoint http://192.168.1.25:11434
# Join with custom nicknamegambi join --code ABC123 \ --model llama3 \ --nickname "alice-4090"
# Join a remote provider securelyexport OPENROUTER_AUTH="Bearer sk-or-..."gambi join --code ABC123 \ --model meta-llama/llama-3.1-8b-instruct:free \ --endpoint https://openrouter.ai/api \ --header-env Authorization=OPENROUTER_AUTH
# Join a password-protected roomgambi join --code ABC123 \ --model llama3 \ --password secret123List available rooms on a hub.
```sh
# Interactive — prompts for hub URL and output format:
gambi list

# With flags:
gambi list [options]
```

Options:
| Option | Description | Default |
|---|---|---|
| --hub, -H | Hub URL | http://localhost:3000 |
| --json, -j | Output as JSON | false |
Example:
```sh
gambi list
# Output:
# Available rooms:
#   ABC123  My Room
#     Participants: 3
#   XYZ789  Test Room
#     Participants: 1
```
```sh
gambi list --json
```
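The exact JSON shape isn't documented on this page; assuming an array of rooms with `code` and `participants` fields (the sample data below stands in for real output), the machine-readable form is easy to post-process:

```sh
# List codes of rooms that currently have participants.
rooms='[{"code":"ABC123","name":"My Room","participants":3},
        {"code":"XYZ789","name":"Test Room","participants":0}]'
echo "$rooms" | python3 -c '
import json, sys
for room in json.load(sys.stdin):
    if room["participants"] > 0:
        print(room["code"])
'
```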
### monitor

Open the TUI to monitor rooms in real time.
```sh
gambi monitor [options]
```

Options:
| Option | Description | Default |
|---|---|---|
| --hub, -H | Hub URL | http://localhost:3000 |
The monitor shows participants, their status (online/offline), and a live activity log of events (joins, requests, errors) via SSE.
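Server-Sent Events are plain text: `data:`-prefixed lines separated by blank lines. A minimal reader sketch (the sample events and their field names are illustrative; the hub's stream URL isn't documented on this page):

```sh
# Strip the SSE framing to get one JSON event per line.
printf 'data: {"type":"join","nickname":"alice"}\n\ndata: {"type":"request"}\n\n' \
  | sed -n 's/^data: //p'
```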
Tip: The standalone CLI currently shows help when you run gambi without arguments. Use gambi monitor to open the TUI.
## Supported Providers

Gambi works with any endpoint that exposes OpenResponses or OpenAI-compatible chat/completions:
| Provider | Default Endpoint | Protocols |
|---|---|---|
| Ollama | http://localhost:11434 | Responses API, Chat Completions |
| LM Studio | http://localhost:1234 | Responses API, Chat Completions |
| LocalAI | http://localhost:8080 | Responses API, Chat Completions |
| vLLM | http://localhost:8000 | Responses API, Chat Completions |
For cloud providers (OpenRouter, Together AI, Groq, etc.), see the Remote Providers guide.