# Quick Start
This guide assumes you have Gatewyse running. If not, see Installation first.
## Step 1: Start the Server
Using Docker:

```bash
docker compose -f docker/docker-compose.yml up -d
```

Or manually:

```bash
pnpm dev
```

Verify the server is running:

```bash
curl http://localhost:3000/health
```

## Step 2: Log into the Admin Dashboard
Open the admin dashboard at http://localhost:3001.
Log in with the default super admin credentials:
| Field | Value |
|---|---|
| Email | `admin@ai-gateway.local` |
| Password | The value of `SUPER_ADMIN_PASSWORD` from your `.env` |
The default seed password must meet complexity requirements: at least 12 characters with uppercase, lowercase, digit, and special character.
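As a sanity check before editing your `.env`, the stated rules can be expressed as a small validator. This is an illustrative sketch, not Gatewyse's actual validation code:

```typescript
// Checks the documented complexity rules: at least 12 characters,
// with uppercase, lowercase, digit, and special character.
function meetsComplexity(pw: string): boolean {
  return (
    pw.length >= 12 &&
    /[A-Z]/.test(pw) &&
    /[a-z]/.test(pw) &&
    /[0-9]/.test(pw) &&
    /[^A-Za-z0-9]/.test(pw)
  );
}

console.log(meetsComplexity('Sup3r-Admin-Pass!')); // true
console.log(meetsComplexity('short'));             // false
```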
## Step 3: Add Your First Provider
- Navigate to Providers in the sidebar.
- Click Add Provider.
- Select OpenAI as the provider type.
- Enter your OpenAI API key.
- Enable the capabilities you want (e.g., `chat`, `embeddings`).
- Save the provider configuration.
The gateway encrypts your API key at rest using AES-256-GCM before storing it in MongoDB.
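To illustrate what AES-256-GCM encryption at rest involves, here is a minimal sketch using Node's built-in `crypto` module. It is not Gatewyse's implementation; the key name and sample value are placeholders:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(32); // 256-bit encryption key
const iv = randomBytes(12);  // 96-bit nonce, the recommended size for GCM

// Encrypt a sample provider API key.
const cipher = createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([
  cipher.update('sk-example-provider-key', 'utf8'),
  cipher.final(),
]);
const authTag = cipher.getAuthTag(); // integrity tag, stored alongside the ciphertext

// Decrypt: GCM verifies the auth tag, so tampering is detected at decrypt time.
const decipher = createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(authTag);
const plaintext = Buffer.concat([
  decipher.update(ciphertext),
  decipher.final(),
]).toString('utf8');

console.log(plaintext); // "sk-example-provider-key"
```

Because GCM is authenticated encryption, a stolen MongoDB dump yields only ciphertext, and any modification of the stored value fails the tag check on decryption.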
## Step 4: Create a Routing Configuration
- Navigate to Routing in the sidebar.
- Click Add Route.
- Set the capability to chat.
- Choose a routing strategy (start with priority for a single provider).
- Add your OpenAI provider to the provider list.
- Save the routing configuration.
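Conceptually, the steps above bind a capability to a strategy and an ordered provider list. The JSON below is purely illustrative; the field names and shape are assumptions for explanation, not Gatewyse's actual schema:

```json
{
  "capability": "chat",
  "strategy": "priority",
  "providers": [
    { "provider": "openai", "priority": 1 }
  ]
}
```

With a `priority` strategy and a single provider, every `chat` request simply goes to OpenAI; additional providers would act as fallbacks in priority order.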
## Step 5: Create an API Key
- Navigate to API Keys in the sidebar.
- Click Create API Key.
- Give the key a name (e.g., “my-first-key”).
- Copy the generated key — it is shown only once.
## Step 6: Make Your First API Call
Use the OpenAI-compatible chat completions endpoint:
```bash
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Hello, world!"}
    ]
  }'
```

You should receive a standard OpenAI-format response routed through your configured provider.
## Step 7: Try Other Endpoints
Gatewyse supports multiple capabilities through the same unified API:
```bash
# Embeddings
curl http://localhost:3000/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "text-embedding-3-small", "input": "Hello, world!"}'
```

```bash
# List available models
curl http://localhost:3000/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```

## Using OpenAI SDKs
Since Gatewyse exposes an OpenAI-compatible API, you can use the official OpenAI SDKs (Python or Node.js) by changing the base URL:
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GATEWAY_API_KEY",
    base_url="http://localhost:3000/v1",
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_GATEWAY_API_KEY',
  baseURL: 'http://localhost:3000/v1',
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
```

## Next Steps
- Configuration — customize environment variables and feature flags
- Docker Deployment — production Docker setup
- Environment Variables — full reference of all settings