AI Service

Product Factory provides a unified AI service through the worker-ai service. It acts as a gateway between your application and multiple AI providers, handling model routing, usage tracking, and credit deduction.

Architecture

Frontend (React Island)
→ Gateway (/ai/*)
→ worker-ai
→ Cloudflare AI Gateway / Direct Provider API
→ OpenAI / Anthropic / Google / etc.

All AI requests go through the gateway. The frontend never connects directly to AI providers.
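The gateway's routing rule can be sketched as a single path check: any `/ai/*` path is forwarded to worker-ai, and everything else is handled locally. The function and return values below are illustrative, not the actual gateway code.

```typescript
// Illustrative sketch of the gateway's routing rule for AI traffic.
// "worker-ai" and "local" are labels for this example only.
type Upstream = "worker-ai" | "local";

function routeFor(pathname: string): Upstream {
  // The frontend never calls a provider directly; /ai/* always
  // goes through the gateway to worker-ai.
  return pathname.startsWith("/ai/") ? "worker-ai" : "local";
}
```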

Providers and Models

AI providers and models are managed through the admin console. Each provider has:

| Field | Description |
| --- | --- |
| `name` | Provider identifier (e.g., "openai", "anthropic") |
| `api_key` | Encrypted API key |
| `base_url` | Provider API endpoint |
| `enabled` | Whether the provider is active |

Models are registered per provider:

| Field | Description |
| --- | --- |
| `model_id` | Model identifier (e.g., "gpt-4", "claude-3-opus") |
| `provider_id` | Parent provider |
| `credit_cost` | Credits per 1K tokens |
| `enabled` | Whether the model is available to users |
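The two record shapes above can be expressed as TypeScript interfaces. Field names follow the tables; the concrete types and the example values are assumptions for illustration.

```typescript
// Illustrative shapes for the admin-console records described above.
interface Provider {
  name: string;       // e.g. "openai", "anthropic"
  api_key: string;    // stored encrypted
  base_url: string;   // provider API endpoint
  enabled: boolean;   // whether the provider is active
}

interface Model {
  model_id: string;    // e.g. "gpt-4", "claude-3-opus"
  provider_id: string; // parent provider
  credit_cost: number; // credits per 1K tokens
  enabled: boolean;    // whether users may select it
}

// Example record (values are made up for illustration).
const exampleModel: Model = {
  model_id: "gpt-4",
  provider_id: "openai",
  credit_cost: 10,
  enabled: true,
};
```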

Chat Completions

The primary AI endpoint is /ai/chat, which accepts messages and returns completions:

```sh
curl -X POST http://localhost:8787/ai/chat \
  -H "Content-Type: application/json" \
  -H "Cookie: better-auth.session_token=..." \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```

The service handles:

  1. Session verification (via gateway)
  2. Credit preflight check
  3. Model routing to the correct provider
  4. Response streaming (when supported)
  5. Usage tracking and credit deduction
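The steps above can be sketched as a single handler that takes its collaborators (balance lookup, model routing, provider call, usage recording) as dependencies. Every name here is illustrative; the real service also supports streaming, which this sketch omits.

```typescript
// A sketch of the /ai/chat pipeline, assuming session verification has
// already happened at the gateway. All names are illustrative.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

interface ChatDeps {
  getBalance: () => Promise<number>;
  resolveProvider: (model: string) => string | undefined;
  callProvider: (
    provider: string,
    req: ChatRequest,
  ) => Promise<{ prompt_tokens: number; completion_tokens: number; text: string }>;
  recordUsage: (u: {
    model: string;
    prompt_tokens: number;
    completion_tokens: number;
  }) => Promise<void>;
}

async function handleChat(req: ChatRequest, deps: ChatDeps): Promise<string> {
  // 2. Credit preflight: reject before spending provider quota.
  if ((await deps.getBalance()) <= 0) throw new Error("insufficient credits");

  // 3. Model routing to the correct provider.
  const provider = deps.resolveProvider(req.model);
  if (!provider) throw new Error(`unknown model: ${req.model}`);

  // 4. Call the provider (streaming omitted in this sketch).
  const result = await deps.callProvider(provider, req);

  // 5. Usage tracking and credit deduction.
  await deps.recordUsage({
    model: req.model,
    prompt_tokens: result.prompt_tokens,
    completion_tokens: result.completion_tokens,
  });

  return result.text;
}
```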

Usage Tracking

Every AI request is logged with:

| Field | Description |
| --- | --- |
| `user_id` | Requesting user |
| `model_id` | Model used |
| `prompt_tokens` | Input token count |
| `completion_tokens` | Output token count |
| `credit_cost` | Credits deducted |
| `duration_ms` | Request duration |
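One way the logged `credit_cost` could be derived from the token counts is a per-1K rate applied to prompt plus completion tokens, rounded up. This is an assumption for illustration; the actual billing rule may weight input and output tokens differently.

```typescript
// Hypothetical credit calculation: creditsPer1K is the model's
// credit_cost ("credits per 1K tokens") applied to total tokens.
function creditsFor(
  promptTokens: number,
  completionTokens: number,
  creditsPer1K: number,
): number {
  const totalTokens = promptTokens + completionTokens;
  // Round up so fractional usage still deducts at least one credit.
  return Math.ceil((totalTokens / 1000) * creditsPer1K);
}

// e.g. 420 prompt + 180 completion tokens at 10 credits/1K → 6 credits
```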

API Endpoints

See the AI API Reference for the complete endpoint documentation.