API v2 Usage Example
Overview
API v2 provides a simplified chat completions endpoint. It shares its core logic with the v1 endpoint, which keeps behavior consistent and the code easy to maintain.
Endpoint
POST /api/v2/chat/completions
Description
This is a simplified version of the chat completions API that only requires essential parameters. All other configuration parameters are automatically fetched from the backend bot configuration API.
Code Architecture (Structure After Refactoring)
1. Extracted Shared Functions
- process_messages(): processes the message list, including splitting on [ANSWER] and adding the language instruction
- create_agent_and_generate_response(): shared logic for creating the agent and generating the response
- create_project_directory(): shared logic for creating the project directory
- extract_api_key_from_auth(): extracts the API key from the Authorization header
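A rough sketch of what two of these helpers might look like; the signatures, the [ANSWER] handling, and the wording of the language instruction are assumptions for illustration, not the actual implementation:
def extract_api_key_from_auth(authorization: str | None) -> str | None:
    # Pull the bearer token out of an "Authorization: Bearer <key>" header.
    if not authorization or not authorization.startswith("Bearer "):
        return None
    return authorization[len("Bearer "):].strip()

def process_messages(messages: list[dict], language: str = "ja") -> list[dict]:
    processed = []
    for msg in messages:
        content = msg.get("content", "")
        if msg.get("role") == "assistant" and "[ANSWER]" in content:
            # Keep only the text after the last [ANSWER] marker (assumption).
            content = content.split("[ANSWER]")[-1].strip()
        processed.append({"role": msg.get("role"), "content": content})
    if processed and processed[-1]["role"] == "user":
        # Append a language instruction to the final user message (assumption).
        processed[-1]["content"] += f"\n\nPlease answer in {language}."
    return processed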
2. Different Authentication Schemes
- v1 endpoint: the API key in the Authorization header is used directly as the model API key
  Authorization: Bearer your-model-api-key
- v2 endpoint: requires a valid MD5 hash token for authentication
  # Generate the authentication token
  token=$(echo -n "master:your-bot-id" | md5sum | cut -d' ' -f1)
  Authorization: Bearer ${token}
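On the server side, the v2 check amounts to recomputing the same MD5 hash and comparing it with the presented token. A minimal sketch (the function name and signature are illustrative, not the actual code):
import hashlib

def verify_v2_token(token: str, master_key: str, bot_id: str) -> bool:
    # The expected token is md5("<master_key>:<bot_id>"), matching the shell
    # command above (echo -n "master:your-bot-id" | md5sum).
    expected = hashlib.md5(f"{master_key}:{bot_id}".encode()).hexdigest()
    return token == expected
A request with a missing Authorization header is rejected with 401, and one whose token fails this comparison with 403 (see Authentication Errors below).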
3. Endpoint Design
- /api/v1/chat/completions: handles ChatRequest and uses all parameters directly from the request
- /api/v2/chat/completions: handles ChatRequestV2 and fetches the remaining configuration from the backend
4. Design Benefits
- ✅ Maximizes code reuse and reduces duplicated logic
- ✅ Keeps the distinct authentication schemes to cover different needs
- ✅ Clear separation of functions, easy to maintain and test
- ✅ Unified error handling and response format
- ✅ Asynchronous HTTP requests for better concurrency
- ✅ Uses aiohttp instead of requests to avoid blocking
Request Format
Required Parameters
- bot_id: string - The target robot ID
- messages: array of message objects - Conversation messages
Optional Parameters
- stream: boolean - Whether to stream responses (default: false)
- tool_response: boolean - Whether to include tool responses (default: false)
- language: string - Response language (default: "ja")
Message Object Format
{
"role": "user" | "assistant" | "system",
"content": "string"
}
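Put together, the v2 request body maps onto a structure like the following; this is a minimal sketch using dataclasses, and the actual ChatRequestV2 model may be defined differently (e.g. with Pydantic):
from dataclasses import dataclass

@dataclass
class Message:
    role: str      # "user", "assistant", or "system"
    content: str

@dataclass
class ChatRequestV2:
    bot_id: str                    # required - target robot ID
    messages: list[Message]        # required - conversation messages
    stream: bool = False           # optional
    tool_response: bool = False    # optional
    language: str = "ja"           # optional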
Example Request
The ${token} value in the examples below is the v2 authentication token described in the Authentication section.
Basic Request
curl -X POST "http://localhost:8001/api/v2/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-api-key" \
-d '{
"bot_id": "1624be71-5432-40bf-9758-f4aecffd4e9c",
"messages": [
{
"role": "user",
"content": "Hello, how are you?"
}
],
"language": "en",
"stream": false
}'
Streaming Request
curl -X POST "http://localhost:8001/api/v2/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-api-key" \
-d '{
"bot_id": "1624be71-5432-40bf-9758-f4aecffd4e9c",
"messages": [
{
"role": "user",
"content": "Tell me about yourself"
}
],
"language": "ja",
"stream": true
}'
Backend Configuration
The endpoint automatically fetches the following configuration from {BACKEND_HOST}/v1/agent_bot_config/{bot_id}:
- model: Model name (e.g., "qwen/qwen3-next-80b-a3b-instruct")
- model_server: Model server URL
- dataset_ids: Array of dataset IDs for the knowledge base
- system_prompt: System prompt for the agent
- mcp_settings: MCP configuration settings
- api_key: API key for model server access
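Internally, this lookup can be sketched with aiohttp along the following lines; the function name, session handling, and error handling are illustrative assumptions:
import aiohttp

async def fetch_bot_config(backend_host: str, bot_id: str) -> dict:
    # Non-blocking GET against the backend bot configuration API.
    url = f"{backend_host}/v1/agent_bot_config/{bot_id}"
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            resp.raise_for_status()
            return await resp.json()   # contains model, model_server, dataset_ids, ...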
Authentication
v2 API Authentication (Required)
The v2 endpoint requires a specific authentication token format:
Token Generation:
# Method 1: Using environment variables (recommended)
export MASTERKEY="your-master-key"
export BOT_ID="1624be71-5432-40bf-9758-f4aecffd4e9c"
token=$(echo -n "${MASTERKEY}:${BOT_ID}" | md5sum | cut -d' ' -f1)
# Method 2: Direct calculation
token=$(echo -n "master:1624be71-5432-40bf-9758-f4aecffd4e9c" | md5sum | cut -d' ' -f1)
Usage:
curl -X POST "http://localhost:8001/api/v2/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${token}" \
-d '{
"bot_id": "1624be71-5432-40bf-9758-f4aecffd4e9c",
"messages": [
{
"role": "user",
"content": "Hello"
}
]
}'
Authentication Errors:
- 401 Unauthorized: Missing Authorization header
- 403 Forbidden: Invalid authentication token
Response Format
Returns the same response format as /api/v1/chat/completions:
Non-Streaming Response
{
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Response content here"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 20,
"total_tokens": 30
}
}
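From Python, the reply text and token counts can be read straight out of this structure, for example:
import json

response_text = '{"choices":[{"index":0,"message":{"role":"assistant","content":"Response content here"},"finish_reason":"stop"}],"usage":{"prompt_tokens":10,"completion_tokens":20,"total_tokens":30}}'
data = json.loads(response_text)
reply = data["choices"][0]["message"]["content"]   # "Response content here"
total_tokens = data["usage"]["total_tokens"]       # 30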
Streaming Response
Returns Server-Sent Events (SSE) format compatible with OpenAI's streaming API.
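A minimal client-side sketch of consuming the stream, assuming the usual OpenAI-style "data: ..." / "data: [DONE]" line framing and delta-based chunks (the exact chunk schema is not shown in this document and may differ):
import json
import requests

token = "<md5 token from the Authentication section>"
url = "http://localhost:8001/api/v2/chat/completions"
body = {
    "bot_id": "1624be71-5432-40bf-9758-f4aecffd4e9c",
    "messages": [{"role": "user", "content": "Tell me about yourself"}],
    "stream": True,
}

with requests.post(url, json=body, stream=True,
                   headers={"Authorization": f"Bearer {token}"}) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # Assumed OpenAI-compatible chunk shape with an incremental delta.
        delta = chunk["choices"][0].get("delta", {}).get("content") or ""
        print(delta, end="", flush=True)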