Quickstart

First request in under 2 minutes. If you have used the OpenAI SDK before, you already know how this works.

01

Get an API key

Contact us to receive your API key. Once provisioned, your key grants immediate access to all available models on the Continuum Inference platform.

02

Install the SDK

Continuum uses the standard OpenAI SDK. No proprietary library required. Install for your language:

Python
$ pip install openai
TypeScript / Node.js
$ npm install openai
cURL / HTTP

No installation required. Any HTTP client works.

03

Make your first request

Send a chat completion request. The API is identical to OpenAI — same request format, same response format, same SDK methods.

Python

hello.py
from openai import OpenAI

client = OpenAI(
    api_key="your_continuum_key",
    base_url="https://api.continuum.au/v1",
)

response = client.chat.completions.create(
    model="deepseek-v4-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of Australia?"},
    ],
)

print(response.choices[0].message.content)
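In a real project you would not hard-code the key. A minimal sketch, assuming you export the key under an illustrative CONTINUUM_API_KEY environment variable (the name is not a platform convention):

hello_env.py
import os
from openai import OpenAI

# read the key from the environment instead of embedding it in source;
# CONTINUUM_API_KEY is an illustrative name, use whatever your setup defines
client = OpenAI(
    api_key=os.environ["CONTINUUM_API_KEY"],
    base_url="https://api.continuum.au/v1",
)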

TypeScript

hello.ts
import OpenAI from "openai"

const client = new OpenAI({
  apiKey: "your_continuum_key",
  baseURL: "https://api.continuum.au/v1",
})

const response = await client.chat.completions.create({
  model: "deepseek-v4-flash",
  messages: [
    { role: "user", content: "What is the capital of Australia?" },
  ],
})

console.log(response.choices[0].message.content)

cURL

terminal
curl https://api.continuum.au/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_continuum_key" \
  -d '{
    "model": "deepseek-v4-flash",
    "messages": [
      {"role": "user", "content": "What is the capital of Australia?"}
    ]
  }'
04

Migrate from OpenAI

If you have an existing OpenAI integration, migration is a two-line change to the client constructor plus a new model name. Your prompts, tools, streaming, and response handling stay identical.

migrate_openai.py
# Before (OpenAI)
client = OpenAI(
    api_key="sk-your-openai-key",
    # base_url defaults to api.openai.com
)
model = "gpt-4o"

# After (Continuum)
client = OpenAI(
    api_key="your_continuum_key",            # ← changed
    base_url="https://api.continuum.au/v1",  # ← added
)
model = "deepseek-v4-flash"                  # ← changed

# Everything else stays the same:
# messages, tools, response_format, streaming, temperature...
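
Streaming needs no special handling either. A minimal sketch, assuming deepseek-v4-flash supports the SDK's standard stream=True parameter (which the drop-in compatibility above implies):

streaming.py
from openai import OpenAI

client = OpenAI(
    api_key="your_continuum_key",
    base_url="https://api.continuum.au/v1",
)

# stream=True returns an iterator of chunks instead of one response
stream = client.chat.completions.create(
    model="deepseek-v4-flash",
    messages=[{"role": "user", "content": "What is the capital of Australia?"}],
    stream=True,
)

for chunk in stream:
    # each chunk carries a delta with the newly generated text
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)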
05

Migrate from Anthropic

If you are using the Anthropic Python SDK, switch to the OpenAI SDK with Continuum as the base URL. The message format is the same (system, user, assistant roles). Tool definitions use the OpenAI format.

migrate_anthropic.py
# Before (Anthropic)
from anthropic import Anthropic

client = Anthropic(api_key="sk-ant-...")
response = client.messages.create(
    model="claude-sonnet-4-6", ...
)

# After (Continuum)
from openai import OpenAI

client = OpenAI(
    api_key="your_continuum_key",
    base_url="https://api.continuum.au/v1",
)
response = client.chat.completions.create(
    model="deepseek-v4-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this document..."},
    ],
)

Note: The Anthropic SDK uses client.messages.create() while the OpenAI SDK uses client.chat.completions.create(). The message format (roles and content) is compatible. Tool definitions differ slightly — Anthropic uses input_schema while OpenAI uses parameters. See the tool calling guide for details.
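
To make the note concrete, here is the same illustrative get_weather tool in both formats. The JSON Schema body is identical; only the wrapper around it changes:

tool_formats.py
# Anthropic format: schema sits under "input_schema"
anthropic_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# OpenAI / Continuum format: wrapped in "function", schema sits under "parameters"
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}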

Response format

Responses follow the standard OpenAI chat completion format. If your code already parses OpenAI responses, it works with Continuum without changes.

response.json
{ "id": "chatcmpl-abc123", "object": "chat.completion", "created": 1714387200, "model": "deepseek-v4-flash", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "The capital of Australia is Canberra." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 24, "completion_tokens": 9, "total_tokens": 33 } }

Key fields

choices[0].message.content: The model response text.
choices[0].message.reasoning_content: The reasoning chain (only present when thinking mode is enabled).
choices[0].message.tool_calls: Tool call requests (only present when the model decides to call a tool).
choices[0].finish_reason: "stop" (natural end), "length" (hit max_tokens), "tool_calls" (model wants to call a tool).
usage.total_tokens: Total tokens consumed (prompt + completion). This is what you are billed on.
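
Tying these fields together, a minimal handling sketch (reasoning_content is read defensively with getattr, since it only appears when thinking mode is enabled; enabling thinking mode is out of scope here):

handle_response.py
from openai import OpenAI

client = OpenAI(
    api_key="your_continuum_key",
    base_url="https://api.continuum.au/v1",
)

response = client.chat.completions.create(
    model="deepseek-v4-flash",
    messages=[{"role": "user", "content": "What is the capital of Australia?"}],
)

choice = response.choices[0]

if choice.finish_reason == "tool_calls":
    # the model wants a tool executed; run it and send the result back
    for call in choice.message.tool_calls:
        print("tool requested:", call.function.name, call.function.arguments)
elif choice.finish_reason == "length":
    print("truncated at max_tokens; consider raising the limit")
else:
    # "stop": a complete natural answer
    print(choice.message.content)

# reasoning chain, present only when thinking mode is enabled
reasoning = getattr(choice.message, "reasoning_content", None)
if reasoning:
    print("reasoning:", reasoning)

# usage.total_tokens is what you are billed on
print("billed tokens:", response.usage.total_tokens)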

Need help integrating?

Our team can help you migrate from Anthropic or OpenAI and optimise your deployment for cost and performance.