SDK Reference

The Gambi SDK provides a Vercel AI SDK provider for using shared LLMs through a Gambi hub.

```sh
npm install gambi-sdk
# or
bun add gambi-sdk
```

`createGambi()` creates a Gambi provider instance.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `roomCode` | `string` | — | Room code to connect to. Required. |
| `hubUrl` | `string` | `http://localhost:3000` | Hub URL |
| `defaultProtocol` | `"openResponses" \| "chatCompletions"` | `"openResponses"` | Protocol used by the top-level routing helpers |
```ts
import { createGambi } from "gambi-sdk";

const gambi = createGambi({
  roomCode: "ABC123",
  hubUrl: "http://localhost:3000",
});
```

The SDK defaults to `openResponses`. Both protocols are first-class:

```ts
// Default: Responses API
const gambi = createGambi({
  roomCode: "ABC123",
});

// Chat Completions
const gambi = createGambi({
  roomCode: "ABC123",
  defaultProtocol: "chatCompletions",
});
```

You can also select per-call via namespaces:

```ts
gambi.openResponses.any(); // Responses API
gambi.chatCompletions.any(); // Chat Completions
```

Three routing methods are available. All return a Vercel AI SDK model instance.

`any()` routes to a random online participant.

```ts
import { generateText } from "ai";

const result = await generateText({
  model: gambi.any(),
  prompt: "Hello",
});
```

`participant()` routes to a specific participant by nickname or ID.

```ts
const result = await generateText({
  model: gambi.participant("alice"),
  prompt: "Hello",
});
```

`model()` routes to the first online participant running the specified model.

```ts
const result = await generateText({
  model: gambi.model("llama3"),
  prompt: "Hello",
});
```

All routing methods are also available under `gambi.openResponses.*` and `gambi.chatCompletions.*`.
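For instance, a namespace-qualified call can pin both the protocol and the routing target in a single expression. A minimal sketch, assuming a room `ABC123` with a participant nicknamed `alice` online:

```ts
import { generateText } from "ai";
import { createGambi } from "gambi-sdk";

const gambi = createGambi({ roomCode: "ABC123" });

// Force Chat Completions for this one call, routed to a specific
// participant; other calls keep the provider's default protocol.
const result = await generateText({
  model: gambi.chatCompletions.participant("alice"),
  prompt: "Hello",
});
```

This is useful when most of your traffic suits the default protocol but a particular participant's backend only speaks the other one.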

Use streamText from the Vercel AI SDK:

```ts
import { streamText } from "ai";

// streamText returns its result synchronously; no await is needed here.
const stream = streamText({
  model: gambi.any(),
  prompt: "Write a story",
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```

Standard Vercel AI SDK options are supported:

```ts
const result = await generateText({
  model: gambi.any(),
  prompt: "Explain recursion",
  temperature: 0.7,
  maxTokens: 500,
});
```

See the Vercel AI SDK docs for all available options.