# SDK Reference
The Gambi SDK provides a Vercel AI SDK provider for using shared LLMs through a Gambi hub.
## Installation

```sh
npm install gambi-sdk
# or
bun add gambi-sdk
```
## createGambi(options)

Creates a Gambi provider instance.
| Option | Type | Default | Description |
|---|---|---|---|
| `roomCode` | `string` | — | Room code to connect to. Required. |
| `hubUrl` | `string` | `http://localhost:3000` | Hub URL. |
| `defaultProtocol` | `"openResponses" \| "chatCompletions"` | `"openResponses"` | Protocol used by the top-level routing helpers. |
```ts
import { createGambi } from "gambi-sdk";

const gambi = createGambi({
  roomCode: "ABC123",
  hubUrl: "http://localhost:3000",
});
```
## Protocol Selection

The SDK defaults to `openResponses`. Both protocols are first-class:
```ts
// Default: Responses API
const gambi = createGambi({
  roomCode: "ABC123",
});

// Chat Completions
const gambi = createGambi({
  roomCode: "ABC123",
  defaultProtocol: "chatCompletions",
});
```

You can also select per-call via namespaces:
```ts
gambi.openResponses.any();   // Responses API
gambi.chatCompletions.any(); // Chat Completions
```
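To make the precedence concrete, here is a minimal standalone sketch of how a per-call namespace choice can override a provider-level default. The names `Protocol`, `ProviderConfig`, and `resolveProtocol` are illustrative assumptions, not part of the SDK's public API:

```typescript
// Illustrative sketch (not the SDK's internals): resolving which
// protocol a single call uses.
type Protocol = "openResponses" | "chatCompletions";

interface ProviderConfig {
  defaultProtocol?: Protocol;
}

// An explicit namespace choice wins; otherwise fall back to the
// provider default, which itself defaults to "openResponses".
function resolveProtocol(
  config: ProviderConfig,
  namespaceChoice?: Protocol,
): Protocol {
  return namespaceChoice ?? config.defaultProtocol ?? "openResponses";
}

console.log(resolveProtocol({})); // "openResponses"
console.log(resolveProtocol({ defaultProtocol: "chatCompletions" })); // "chatCompletions"
console.log(resolveProtocol({ defaultProtocol: "chatCompletions" }, "openResponses")); // "openResponses"
```

The key design point: the namespaces pin a protocol per call, so the provider default only matters for the top-level helpers.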
## Model Routing

Three routing methods are available. All return a Vercel AI SDK model instance.
### gambi.any()

Routes to a random online participant.
```ts
import { generateText } from "ai";

const result = await generateText({
  model: gambi.any(),
  prompt: "Hello",
});
```
### gambi.participant(id)

Routes to a specific participant by nickname or ID.
```ts
const result = await generateText({
  model: gambi.participant("alice"),
  prompt: "Hello",
});
```
### gambi.model(name)

Routes to the first online participant running the specified model.
```ts
const result = await generateText({
  model: gambi.model("llama3"),
  prompt: "Hello",
});
```

All routing methods are also available under `gambi.openResponses.*` and `gambi.chatCompletions.*`.
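The three strategies can be pictured as selections over the hub's participant roster. The sketch below is a hedged illustration with assumed names (`Participant`, `pickAny`, `pickParticipant`, `pickByModel`); the SDK's actual routing happens hub-side and may differ:

```typescript
// Illustrative sketch of the three routing strategies over a roster.
interface Participant {
  id: string;
  nickname: string;
  model: string;
  online: boolean;
}

// any(): a random online participant.
function pickAny(participants: Participant[]): Participant | undefined {
  const online = participants.filter((p) => p.online);
  return online[Math.floor(Math.random() * online.length)];
}

// participant(id): match by nickname or ID, online only.
function pickParticipant(
  participants: Participant[],
  id: string,
): Participant | undefined {
  return participants.find(
    (p) => p.online && (p.id === id || p.nickname === id),
  );
}

// model(name): the FIRST online participant running that model.
function pickByModel(
  participants: Participant[],
  name: string,
): Participant | undefined {
  return participants.find((p) => p.online && p.model === name);
}

const roster: Participant[] = [
  { id: "p1", nickname: "alice", model: "llama3", online: true },
  { id: "p2", nickname: "bob", model: "mistral", online: false },
  { id: "p3", nickname: "carol", model: "llama3", online: true },
];

console.log(pickParticipant(roster, "alice")?.id); // "p1"
console.log(pickByModel(roster, "llama3")?.id);    // "p1" (first online match)
```

Note that `model(name)` is deterministic given the roster order, while `any()` is not; if you need load spreading across several participants running the same model, `any()` is the closer fit.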
## Streaming

Use `streamText` from the Vercel AI SDK:
```ts
import { streamText } from "ai";

const stream = await streamText({
  model: gambi.any(),
  prompt: "Write a story",
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```
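The `textStream` property behaves as an async iterable of text chunks, so the `for await` pattern above is all that consumption requires. A self-contained sketch with a mock stream (`mockTextStream` and `collect` are illustrative stand-ins, not SDK APIs):

```typescript
// mockTextStream stands in for a model response; textStream on a real
// streamText result is consumed the same way.
async function* mockTextStream(): AsyncGenerator<string> {
  for (const chunk of ["Once ", "upon ", "a time"]) {
    yield chunk;
  }
}

// Accumulate the chunks into the full response text.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
  }
  return text;
}

collect(mockTextStream()).then((text) => console.log(text)); // "Once upon a time"
```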
## Generation Options

Standard Vercel AI SDK options are supported:
```ts
const result = await generateText({
  model: gambi.any(),
  prompt: "Explain recursion",
  temperature: 0.7,
  maxTokens: 500,
});
```

See the Vercel AI SDK docs for all available options.