
Quick Start

This guide assumes you have Gatewyse running. If not, see Installation first.

Step 1: Start the Server

Using Docker:

docker compose -f docker/docker-compose.yml up -d

Or manually:

pnpm dev

Verify the server is running:

curl http://localhost:3000/health
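If you are scripting startup (for example in CI), it can help to poll the health endpoint until the server responds. A minimal Python sketch, assuming only that `/health` returns HTTP 200 once the server is ready; the retry parameters and the injectable `fetch` argument are illustrative, not part of Gatewyse:

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str, attempts: int = 10, delay: float = 1.0,
                    fetch=urllib.request.urlopen) -> bool:
    """Poll `url` until it answers with HTTP 200 or attempts run out.

    `fetch` is injectable so the retry logic can be exercised
    without a live server.
    """
    for _ in range(attempts):
        try:
            with fetch(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry after a short pause
        time.sleep(delay)
    return False
```

Call it as `wait_for_health("http://localhost:3000/health")` before running anything that depends on the gateway.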

Step 2: Log into the Admin Dashboard

Open the admin dashboard at http://localhost:3001.

Log in with the default super admin credentials:

Field       Value
Email       admin@ai-gateway.local
Password    The value of SUPER_ADMIN_PASSWORD from your .env

The default seed password must meet complexity requirements: at least 12 characters with uppercase, lowercase, digit, and special character.
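The complexity rule above can be sketched as a small check. This mirrors the stated requirement (at least 12 characters with uppercase, lowercase, digit, and special character); the exact character classes Gatewyse accepts may differ:

```python
import re


def meets_complexity(password: str) -> bool:
    """True if the password satisfies the documented seed-password rule:
    >= 12 chars with uppercase, lowercase, digit, and special character."""
    return (
        len(password) >= 12
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )
```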

Step 3: Add Your First Provider

  1. Navigate to Providers in the sidebar.
  2. Click Add Provider.
  3. Select OpenAI as the provider type.
  4. Enter your OpenAI API key.
  5. Enable the capabilities you want (e.g., chat, embeddings).
  6. Save the provider configuration.

The gateway encrypts your API key at rest using AES-256-GCM before storing it in MongoDB.
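The encrypt-at-rest step works roughly like this sketch, which uses the widely available `cryptography` package. The function names, record shape, and master-key handling here are illustrative, not Gatewyse internals; the only fact taken from the docs is the AES-256-GCM choice:

```python
import os

# pip install cryptography
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_provider_key(plaintext: str, master_key: bytes) -> dict:
    """Encrypt a provider API key with AES-256-GCM before persisting it.

    `master_key` must be 32 bytes (AES-256). A fresh 96-bit nonce is
    generated per encryption, as GCM requires.
    """
    nonce = os.urandom(12)
    ciphertext = AESGCM(master_key).encrypt(nonce, plaintext.encode(), None)
    return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}


def decrypt_provider_key(record: dict, master_key: bytes) -> str:
    """Recover the plaintext key; raises if the record was tampered with."""
    return AESGCM(master_key).decrypt(
        bytes.fromhex(record["nonce"]),
        bytes.fromhex(record["ciphertext"]),
        None,
    ).decode()
```

GCM is an authenticated mode, so a modified ciphertext fails decryption instead of silently yielding garbage.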

Step 4: Create a Routing Configuration

  1. Navigate to Routing in the sidebar.
  2. Click Add Route.
  3. Set the capability to chat.
  4. Choose a routing strategy (start with priority for a single provider).
  5. Add your OpenAI provider to the provider list.
  6. Save the routing configuration.
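Conceptually, a priority route tries providers in configured order and uses the first one that is available. A minimal sketch, assuming a hypothetical config shape; the field names and health-check hook are illustrative, not the gateway's actual schema:

```python
route = {
    "capability": "chat",
    "strategy": "priority",
    "providers": [
        {"name": "openai-primary", "priority": 1},
        {"name": "openai-fallback", "priority": 2},
    ],
}


def pick_provider(route: dict, is_healthy) -> dict:
    """Priority strategy: walk providers in ascending priority and
    return the first one the health check accepts."""
    for provider in sorted(route["providers"], key=lambda p: p["priority"]):
        if is_healthy(provider["name"]):
            return provider
    raise RuntimeError(f"no healthy provider for {route['capability']!r}")
```

With a single provider, priority routing simply always selects it; the fallback behavior only matters once you add a second provider.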

Step 5: Create an API Key

  1. Navigate to API Keys in the sidebar.
  2. Click Create API Key.
  3. Give the key a name (e.g., “my-first-key”).
  4. Copy the generated key — it is shown only once.
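The key is shown only once because, in the usual pattern, only a hash of it is persisted. A sketch of that show-once pattern; the `gw_` prefix, record shape, and hash choice are illustrative assumptions, not Gatewyse's real format:

```python
import hashlib
import secrets


def issue_api_key(name: str) -> tuple[str, dict]:
    """Return the plaintext key exactly once, plus the record to persist.

    Only the SHA-256 digest is stored, so the plaintext cannot be
    re-displayed later.
    """
    plaintext = "gw_" + secrets.token_urlsafe(32)
    record = {
        "name": name,
        "key_hash": hashlib.sha256(plaintext.encode()).hexdigest(),
    }
    return plaintext, record


def verify_api_key(presented: str, record: dict) -> bool:
    """Constant-time comparison of the presented key's digest."""
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(digest, record["key_hash"])
```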

Step 6: Make Your First API Call

Use the OpenAI-compatible chat completions endpoint:

curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Hello, world!"}
    ]
  }'

You should receive a standard OpenAI-format response routed through your configured provider.

Step 7: Try Other Endpoints

Gatewyse supports multiple capabilities through the same unified API:

# Embeddings
curl http://localhost:3000/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "text-embedding-3-small", "input": "Hello, world!"}'

# List available models
curl http://localhost:3000/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"

Using OpenAI SDKs

Because Gatewyse exposes an OpenAI-compatible API, you can use the official OpenAI SDKs by changing only the base URL:

Python:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GATEWAY_API_KEY",
    base_url="http://localhost:3000/v1",
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

TypeScript:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_GATEWAY_API_KEY',
  baseURL: 'http://localhost:3000/v1',
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);

Next Steps