
Code Examples

Copy-ready examples showing the most common patterns. All examples use the base URL https://ai.gabforge.ai/v1.

Python

Basic chat completion

Send a single user message and receive a complete response. The simplest possible integration.

from openai import OpenAI

client = OpenAI(
    base_url="https://ai.gabforge.ai/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="gabforge-coder",
    messages=[
        {"role": "user", "content": "Hello!"},
    ],
)

print(response.choices[0].message.content)

Python

Streaming chat

Receive tokens as they are generated. Pass stream=True and iterate over the response for a real-time experience.

from openai import OpenAI

client = OpenAI(
    base_url="https://ai.gabforge.ai/v1",
    api_key="YOUR_API_KEY",
)

stream = client.chat.completions.create(
    model="gabforge-coder",
    messages=[
        {"role": "user", "content": "Write a quicksort in Python."},
    ],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content is not None:
        print(delta.content, end="", flush=True)

print()  # newline after stream ends
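If you also need the complete text once the stream finishes, accumulate the deltas while printing. A minimal sketch; the collect_stream helper below is our own, not part of the SDK, and it works with any iterable of chunk-shaped objects, so it is shown here on mock chunks rather than a live stream:

from types import SimpleNamespace

def collect_stream(stream):
    """Print deltas as they arrive and return the full text at the end."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta
        if delta.content is not None:
            print(delta.content, end="", flush=True)
            parts.append(delta.content)
    print()  # newline after stream ends
    return "".join(parts)

# Mock chunks in the same shape the SDK yields, for illustration only;
# in real code, pass the stream returned by chat.completions.create(...)
def _chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

full = collect_stream([_chunk("Hello"), _chunk(", "), _chunk("world"), _chunk(None)])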

JavaScript

Node.js with streaming

Use the openai npm package; the client configuration is identical to the Python examples, with camelCase option names (baseURL, apiKey).

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://ai.gabforge.ai/v1",
  apiKey: "YOUR_API_KEY",
});

const stream = await client.chat.completions.create({
  model: "gabforge-coder",
  messages: [{ role: "user", content: "Write a quicksort in JavaScript." }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content ?? "";
  process.stdout.write(content);
}
console.log();

Install with: npm install openai

Python

System prompt and conversation history

Set the model's persona with a system message, then maintain multi-turn conversation by passing the full message history on each call.

from openai import OpenAI

client = OpenAI(
    base_url="https://ai.gabforge.ai/v1",
    api_key="YOUR_API_KEY",
)

# Maintain conversation history in a list
messages = [
    {
        "role": "system",
        "content": (
            "You are a senior Python engineer. "
            "Give concise, production-quality answers. "
            "Prefer brevity over exhaustive explanation."
        ),
    },
    {"role": "user", "content": "What is a context manager?"},
]

response = client.chat.completions.create(
    model="gabforge-coder",
    messages=messages,
)

assistant_reply = response.choices[0].message.content
print(assistant_reply)

# Append reply and continue the conversation
messages.append({"role": "assistant", "content": assistant_reply})
messages.append({"role": "user", "content": "Show me an example with a database connection."})

follow_up = client.chat.completions.create(
    model="gabforge-coder",
    messages=messages,
)
print(follow_up.choices[0].message.content)
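Because every call sends the full message list, a long session will eventually exceed the model's context window. A common pattern is to keep the system message and only the most recent turns. The trim_history helper below is a hypothetical sketch, not part of the SDK:

def trim_history(messages, max_turns=10):
    """Keep the system message (if any) plus the last max_turns messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

# Example: a long conversation trimmed to the 4 most recent messages
history = [{"role": "system", "content": "You are a senior Python engineer."}]
for i in range(20):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_turns=4)
# trimmed keeps the system prompt plus the four newest messages

Pass trimmed (instead of the full history) as the messages argument on each call. Note that trimming by message count is a rough heuristic; token-based trimming is more precise but requires a tokenizer.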

cURL

cURL — no dependencies

Test the API directly from your terminal or integrate it into shell scripts. No SDK installation required.

# Non-streaming request
curl https://ai.gabforge.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gabforge-coder",
    "messages": [
      {"role": "user", "content": "What is a REST API?"}
    ],
    "temperature": 0.7
  }'

# Streaming request (add --no-buffer to see tokens as they arrive)
curl https://ai.gabforge.ai/v1/chat/completions \
  --no-buffer \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gabforge-coder",
    "messages": [
      {"role": "user", "content": "What is a REST API?"}
    ],
    "stream": true
  }'