Hebo Gateway provides a unified API for completions, embeddings, and model discovery. It normalizes provider differences so you can switch models without rewriting your client.

Why Gateway

  • OpenAI-compatible endpoints and normalized model IDs across providers
  • One way to configure reasoning across all models
  • Support for streaming and non-streaming responses
  • Compatible with common AI SDKs (Vercel AI SDK, TanStack AI, Langchain, OpenAI SDK, …)

Choose an SDK

If you haven’t installed an SDK yet, you’ll need to do that first. Most examples in this documentation use the Vercel AI SDK (which we use internally) with TypeScript.
bun add ai @ai-sdk/openai-compatible
We recommend bun as your default JavaScript/TypeScript toolkit, but you can also use npm, pnpm, or yarn.
You can use any other SDK or language (e.g. Python), as long as it can connect to an OpenAI-compatible endpoint. If you run into issues accessing Hebo Gateway with another library, please don’t hesitate to report them on our issue tracker.
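To illustrate that point without any SDK at all, here is a minimal sketch of a raw HTTP call, assuming the standard OpenAI-compatible /chat/completions route (the exact path is an assumption based on the OpenAI API shape). It only fires when HEBO_API_KEY is set:

```typescript
// Minimal sketch: calling the Gateway with plain fetch, no SDK.
// Assumes the standard OpenAI-compatible /chat/completions route.
async function askHebo(prompt: string): Promise<string> {
  const res = await fetch("https://gateway.hebo.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HEBO_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-oss-20b",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  // OpenAI-compatible responses put the reply under choices[0].message.
  return data.choices[0].message.content;
}

// Guard so the example is a no-op without an API key.
if (process.env.HEBO_API_KEY) {
  console.log(await askHebo("Tell me a joke about monkeys"));
}
```

Any HTTP client in any language can make the same request; the SDKs above just add typing, retries, and streaming helpers on top.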

Your First Call

Now you’re ready to make your first call:
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

const hebo = createOpenAICompatible({
  name: "hebo",
  apiKey: process.env.HEBO_API_KEY,
  baseURL: "https://gateway.hebo.ai/v1",
});

const { text } = await generateText({
  model: hebo("openai/gpt-oss-20b"),
  prompt: "Tell me a joke about monkeys",
});

console.log(text);
If you don’t have an API key yet, register for free in the Hebo Console.

What’s Next?