AI API
OpenAI-compatible AI endpoints for chat and image generation
Overview
The AI API provides OpenAI-compatible endpoints for AI-powered chat completions and image generation. Use the same code you already use with the OpenAI SDK; just change the base URL to point at LikeDo.
Playground
You can try the API endpoints in the AI API Playground.
Authentication
All endpoints require API key authentication. Use your API key in one of two ways:
- Authorization Header (Recommended): `Authorization: Bearer YOUR_API_KEY`
- Query Parameter: `?key=YOUR_API_KEY`
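Both authentication styles can be sketched with Python's standard library. This is a minimal illustration, not a client implementation; `YOUR_API_KEY` is a placeholder, and the chat-completions URL is taken from the examples later on this page:

```python
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder -- substitute your real key
URL = "https://like.do/api/v1/ai/chat/completions"

# Option 1 (recommended): bearer token in the Authorization header
req = urllib.request.Request(URL, headers={"Authorization": f"Bearer {API_KEY}"})

# Option 2: key passed as a query parameter
url_with_key = URL + "?" + urllib.parse.urlencode({"key": API_KEY})
```

The header form is preferred because query strings are more likely to end up in server logs and browser history.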
Rate Limits
Rate limits are configured based on your subscription tier:
- Free Plan: 200 requests per hour
- Pro Plan: 2000 requests per hour
- Lifetime Plan: 2000 requests per hour
Credit System
AI API requests consume credits based on token usage:
- Input tokens: 1 credit per 1000 tokens
- Output tokens: 3 credits per 1000 tokens
- Minimum: 1 credit per request
For image generation:
- Standard images (1024x1024): 10 credits per image
- HD images: 15 credits per image
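The credit rates above can be turned into a small cost estimator. This is a sketch of the stated rates, not the actual billing code: how fractional totals above the 1-credit minimum are rounded is not specified here, so the helper leaves them fractional.

```python
def chat_credits(input_tokens: int, output_tokens: int) -> float:
    """Estimate chat credits: 1 per 1000 input tokens, 3 per 1000 output
    tokens, with a minimum of 1 credit per request."""
    raw = (input_tokens + 3 * output_tokens) / 1000
    return max(1.0, raw)  # 1-credit minimum per request

def image_credits(n: int = 1, quality: str = "standard") -> int:
    """Estimate image credits: 10 per standard image, 15 per HD image."""
    return n * (15 if quality == "hd" else 10)
```

For example, a request with 100 prompt tokens and a 300-token response costs `chat_credits(100, 300)`, i.e. exactly 1 credit.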
Endpoints
Chat Completions
POST /api/v1/ai/chat/completions

Generate AI-powered chat completions using various AI models. Compatible with the OpenAI SDK.
Supported Models
- OpenAI: gpt-4, gpt-4-turbo, gpt-3.5-turbo
- Google Gemini: gemini-pro, gemini-1.5-pro
- DeepSeek: deepseek-chat, deepseek-coder
- OpenRouter: Access to multiple open-source models
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | AI model to use |
| messages | array | Yes | Array of message objects |
| temperature | number | No | Randomness (0.0-2.0, default: 0.7) |
| max_tokens | number | No | Maximum tokens in response (1-32000) |
| stream | boolean | No | Enable streaming response (default: false) |
| provider | string | No | Force a specific provider (openai, gemini, deepseek, openrouter) |
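The constraints in the table can be checked client-side before a request is sent. The helper below is hypothetical (not part of the API); it only mirrors the rules stated above:

```python
VALID_ROLES = {"system", "user", "assistant"}

def validate_chat_payload(payload: dict) -> list:
    """Return a list of problems with a chat-completions payload (empty = OK)."""
    problems = []
    if not payload.get("model"):
        problems.append("model is required")
    messages = payload.get("messages")
    if not isinstance(messages, list) or not messages:
        problems.append("messages must be a non-empty array")
    else:
        for i, m in enumerate(messages):
            if m.get("role") not in VALID_ROLES:
                problems.append(f"messages[{i}].role must be system, user, or assistant")
            if not isinstance(m.get("content"), str):
                problems.append(f"messages[{i}].content must be a string")
    t = payload.get("temperature")
    if t is not None and not (0.0 <= t <= 2.0):
        problems.append("temperature must be between 0.0 and 2.0")
    return problems
```

Validating locally avoids spending a round trip (and a rate-limit slot) on a request the server would reject with a 400.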
Message Object
```ts
{
  role: 'system' | 'user' | 'assistant',
  content: string
}
```

Example Request
```bash
curl -X POST "https://like.do/api/v1/ai/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Explain quantum computing in simple terms."
      }
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'
```

Success Response (200 OK)
```json
{
  "id": "chatcmpl-1234567890",
  "object": "chat.completion",
  "created": 1704369600,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing is a type of computing that uses quantum mechanics..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  },
  "credits_consumed": 1
}
```

Streaming Response
When `stream: true` is set, the response is sent as Server-Sent Events (SSE):

```
data: {"id":"chatcmpl-1234","object":"chat.completion.chunk","created":1704369600,"model":"gpt-4","choices":[{"index":0,"delta":{"content":"Quantum"},"finish_reason":null}]}

data: {"id":"chatcmpl-1234","object":"chat.completion.chunk","created":1704369600,"model":"gpt-4","choices":[{"index":0,"delta":{"content":" computing"},"finish_reason":null}]}

data: [DONE]
```

Image Generation
POST /api/v1/ai/images/generations

Generate images from text descriptions using AI models.
Supported Models
- OpenAI: dall-e-3, dall-e-2
- Replicate: stable-diffusion, flux-pro, flux-schnell
- Stability AI: stable-diffusion-xl
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Image generation model |
| prompt | string | Yes | Text description of desired image |
| size | string | No | Image size (256x256, 512x512, 1024x1024, default: 1024x1024) |
| n | number | No | Number of images to generate (1-4, default: 1) |
| quality | string | No | Image quality: standard or hd (default: standard) |
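As with chat requests, the table's constraints can be checked before sending. This validator is a hypothetical client-side helper, not part of the API:

```python
ALLOWED_SIZES = {"256x256", "512x512", "1024x1024"}
ALLOWED_QUALITY = {"standard", "hd"}

def validate_image_payload(payload: dict) -> list:
    """Return a list of problems with an image-generation payload (empty = OK)."""
    problems = []
    if not payload.get("model"):
        problems.append("model is required")
    if not payload.get("prompt"):
        problems.append("prompt is required")
    if payload.get("size", "1024x1024") not in ALLOWED_SIZES:
        problems.append("size must be 256x256, 512x512, or 1024x1024")
    n = payload.get("n", 1)
    if not (isinstance(n, int) and 1 <= n <= 4):
        problems.append("n must be an integer between 1 and 4")
    if payload.get("quality", "standard") not in ALLOWED_QUALITY:
        problems.append("quality must be standard or hd")
    return problems
```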
Example Request
```bash
curl -X POST "https://like.do/api/v1/ai/images/generations" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-3",
    "prompt": "A serene Japanese garden with cherry blossoms, koi pond, and traditional tea house at sunset",
    "size": "1024x1024",
    "quality": "standard",
    "n": 1
  }'
```

Success Response (200 OK)
```json
{
  "created": 1704369600,
  "data": [
    {
      "url": "https://storage.like.do/ai-images/abc123.png",
      "revised_prompt": "A tranquil Japanese garden featuring blooming cherry blossom trees..."
    }
  ],
  "credits_consumed": 10
}
```

OpenAI SDK Compatibility
The AI API is fully compatible with the OpenAI SDK. Just change the base URL:
JavaScript (OpenAI SDK)
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.LIKEDO_API_KEY,
  baseURL: 'https://like.do/api/v1/ai',
});

// Chat completion
const completion = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});

console.log(completion.choices[0].message.content);

// Image generation
const image = await client.images.generate({
  model: 'dall-e-3',
  prompt: 'A beautiful sunset over mountains',
  size: '1024x1024',
});

console.log(image.data[0].url);
```

Python (OpenAI SDK)
```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv('LIKEDO_API_KEY'),
    base_url='https://like.do/api/v1/ai',
)

# Chat completion
completion = client.chat.completions.create(
    model='gpt-4',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'Hello!'},
    ],
)
print(completion.choices[0].message.content)

# Image generation
image = client.images.generate(
    model='dall-e-3',
    prompt='A beautiful sunset over mountains',
    size='1024x1024',
)
print(image.data[0].url)
```

Error Responses
All error responses follow a consistent format:
```json
{
  "error": {
    "message": "Error message describing what went wrong",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
```

Common Error Codes
| Status Code | Description |
|---|---|
| 400 | Bad Request - Invalid parameters or missing required fields |
| 401 | Unauthorized - Missing or invalid API key |
| 402 | Payment Required - Insufficient credits |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error - Something went wrong on our end |
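A client can use the envelope and status codes above to decide how to react. The helpers below are hypothetical; treating 429 and 500 as transient (worth retrying) while 400/401/402 require fixing the request or account is an assumption consistent with the table:

```python
import json

def parse_error(body: str) -> tuple:
    """Extract (code, message) from the error envelope shown above."""
    err = json.loads(body).get("error", {})
    return err.get("code", "unknown"), err.get("message", "")

def should_retry(status: int) -> bool:
    """429 (rate limit) and 500 (server error) may succeed on retry with
    backoff; 400/401/402 will keep failing until the request or account
    is corrected."""
    return status in {429, 500}
```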
Model Selection Guide
Chat Models
For General Tasks (gpt-4):
- Best overall quality
- Great for complex reasoning
- Higher cost per token
For Fast Responses (gpt-3.5-turbo):
- Fast and cost-effective
- Good for simple tasks
- Lower quality than GPT-4
For Coding (deepseek-coder):
- Optimized for code generation
- Supports multiple programming languages
- Cost-effective
For Long Context (gemini-1.5-pro):
- Supports very long contexts (up to 1M tokens)
- Good for document analysis
- Competitive pricing
Image Models
For High Quality (dall-e-3):
- Best image quality
- Prompt rewriting for better results
- Higher cost
For Speed (flux-schnell):
- Very fast generation
- Good quality
- Lower cost
For Open Source (stable-diffusion-xl):
- Open source alternative
- Good customization options
- Lower cost
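The chat-model guide above can be collapsed into a simple routing table. The task labels here are illustrative application-side names, not API parameters:

```python
def pick_chat_model(task: str) -> str:
    """Map a coarse task label to the model suggested in the guide above.

    Unknown labels fall back to the fast, cost-effective option.
    """
    return {
        "general": "gpt-4",           # best overall quality
        "fast": "gpt-3.5-turbo",      # quick, cheap, simple tasks
        "coding": "deepseek-coder",   # optimized for code generation
        "long_context": "gemini-1.5-pro",  # up to 1M-token contexts
    }.get(task, "gpt-3.5-turbo")
```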
Best Practices
- Use System Messages: Set clear instructions in the system message for better results
- Manage Token Limits: Monitor max_tokens to control costs and response length
- Handle Streaming: Use streaming for a better user experience with long responses
- Retry Logic: Implement exponential backoff for rate limit errors
- Cache Responses: Cache common requests to save credits and improve speed
- Monitor Credits: Track your credit usage to avoid unexpected costs
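The "Retry Logic" practice can be made concrete as a deterministic backoff schedule. Real clients usually add random jitter to avoid synchronized retries; it is omitted here to keep the sketch reproducible:

```python
def backoff_delays(retries: int, base: float = 1.0, cap: float = 30.0) -> list:
    """Exponential backoff schedule for 429 responses: base * 2**attempt,
    capped so a long retry run never sleeps unreasonably long."""
    return [min(cap, base * 2 ** attempt) for attempt in range(retries)]
```

A client would sleep for each delay in turn between attempts, giving up once the list is exhausted.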
Credit Consumption Examples
Chat Completions
Request:
- Prompt: 100 tokens
- Response: 300 tokens

Credits consumed:
- Input: 100 / 1000 = 0.1 credits
- Output: 300 / 1000 * 3 = 0.9 credits
- Total: 0.1 + 0.9 = 1 credit (the 1-credit-per-request minimum)

Image Generation
Request:
- Model: dall-e-3
- Size: 1024x1024
- Quality: standard
- Images: 1

Credits consumed: 10 credits

Streaming Best Practices
When using `stream: true`, follow these practices:

```javascript
const stream = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
```

Provider Override
You can force a specific AI provider using the `provider` parameter:

```javascript
const completion = await client.chat.completions.create({
  model: 'gpt-4',
  provider: 'openai', // Force the OpenAI provider
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

Available providers:
- `openai` - OpenAI models
- `gemini` - Google Gemini models
- `deepseek` - DeepSeek models
- `openrouter` - OpenRouter models
Support
Need help with the AI API?
- Interactive Playground: Try it now
- API Overview: View all APIs
- Model Pricing: Check our pricing page for credit costs
- Contact Support: Reach out to our team for technical assistance