Pisces Gateway
API Docs Hub

One endpoint, compatible with OpenAI / Claude / Gemini

The gateway unifies authentication, billing, model routing, request tracing, and streaming-protocol conversion. You can keep your existing client SDK: change only the Base URL and API Key to get multi-model access.

OpenAI 兼容

Supports `POST /v1/chat/completions` with tool calling and streaming responses, compatible with the vast majority of OpenAI SDKs.
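As a sketch, an OpenAI-style request body with one tool definition; the `get_weather` function and its schema are illustrative placeholders, not part of the gateway:

```python
import json

# Illustrative /v1/chat/completions body with a tool definition;
# the tool name and parameter schema are placeholders.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": True,  # set True for SSE streaming
}

body = json.dumps(payload)
```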

Claude 兼容

Supports `POST /v1/messages`, automatically converting message bodies, tool definitions, and the SSE streaming structure.
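A minimal sketch of an Anthropic-style request for `/v1/messages`; the `anthropic-version` header value follows the upstream convention and is an assumption here, not something the gateway documents:

```python
# Anthropic-style body for POST /v1/messages; the gateway converts it
# to the upstream provider's format automatically.
payload = {
    "model": "[M]claude-sonnet-4",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}
headers = {
    "x-api-key": "pisces-xxxx",          # gateway key (see Quick Start)
    "anthropic-version": "2023-06-01",   # assumed; upstream SDK default
    "content-type": "application/json",
}
```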

Gemini 兼容

Supports `generateContent` / `streamGenerateContent`, with both `x-goog-api-key` header and `?key=` query-parameter authentication.
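As a sketch, building a `generateContent` call with the `?key=` style; the `/v1beta/models/...` path and the model id follow the upstream Gemini convention and are assumptions here:

```python
from urllib.parse import urlencode

BASE = "https://api.pisces.ink"   # gateway base URL, as in the Python snippets
MODEL = "gemini-2.0-flash"        # placeholder model id

# Query-parameter auth; the x-goog-api-key header is the alternative style.
url = f"{BASE}/v1beta/models/{MODEL}:generateContent?" + urlencode({"key": "pisces-xxxx"})
body = {"contents": [{"role": "user", "parts": [{"text": "Hello"}]}]}
```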

Quick Start

The default scheme is `Authorization: Bearer pisces-xxxx`. Anthropic/Gemini-style clients can use `x-api-key` or `x-goog-api-key` instead.
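The three schemes can be summarized in a small helper (a sketch of our own, not part of any SDK):

```python
def auth_headers(protocol: str, key: str) -> dict:
    """Build auth headers for the three supported client styles.

    Header names follow the conventions listed above; the protocol
    labels are illustrative.
    """
    if protocol == "openai":
        return {"Authorization": f"Bearer {key}"}
    if protocol == "anthropic":
        return {"x-api-key": key}
    if protocol == "gemini":
        return {"x-goog-api-key": key}
    raise ValueError(f"unknown protocol: {protocol}")
```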

curl -X POST 'https://api.pisces.ink/v1/chat/completions' \
  -H 'Authorization: Bearer pisces-xxxx' \
  -H 'Content-Type: application/json' \
  -d '{"model":"[M]claude-sonnet-4","messages":[{"role":"user","content":"Hello"}]}'

Unified Output & Error Contract

The platform now unifies output and error structures across all three protocols. Every response carries a `pisces` extension field (request tracing, channel, timestamp, etc.) without breaking official-SDK compatibility.

{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1730000000,
  "model": "[M]claude-sonnet-4",
  "choices": [{"index":0,"message":{"role":"assistant","content":"hello"},"finish_reason":"stop"}],
  "usage": {"prompt_tokens": 10, "completion_tokens": 20, "total_tokens": 30},
  "pisces": {"gateway":"Pisces","protocol":"openai","request_id":"req_xxx","channel":"default-poe","timestamp":1730000000}
}
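For example, a client can read the tracing metadata back out of the `pisces` field, falling back gracefully when talking to a backend that does not set it:

```python
def trace_info(resp: dict) -> dict:
    """Pull the gateway's tracing metadata from the `pisces` extension
    field; each value is None when the field is absent."""
    p = resp.get("pisces", {})
    return {k: p.get(k) for k in ("request_id", "channel", "protocol")}

resp = {
    "id": "chatcmpl-xxx",
    "pisces": {"gateway": "Pisces", "protocol": "openai",
               "request_id": "req_xxx", "channel": "default-poe",
               "timestamp": 1730000000},
}
print(trace_info(resp)["request_id"])   # req_xxx
```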

Endpoint Matrix

The table below is generated from the server's live API description (`/api/public/spec`).

Protocol Method Path Streaming Description

Commercial Integration Guide

Key conventions for production integration: authentication, rate limiting and retries, idempotency, duplicate-charge protection, request tracing, and standard error codes.

Authentication

Supports the common OpenAI / Anthropic / Gemini authentication schemes.


Rate Limit & Retry

Retrying 429/5xx responses with exponential backoff is recommended.

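A minimal sketch of exponential backoff with full jitter; the retry count and caps are illustrative defaults, and `send_request` in the usage comment is a hypothetical helper:

```python
import random

def backoff_delays(retries: int = 5, base: float = 0.5, cap: float = 30.0):
    """Yield the sleep interval (seconds) before each retry of a request
    that returned 429 or 5xx: exponential growth with full jitter."""
    for attempt in range(retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

# Usage sketch:
#   for delay in backoff_delays():
#       resp = send_request()          # hypothetical request function
#       if resp.status_code not in (429, 500, 502, 503, 504):
#           break
#       time.sleep(delay)
```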

Idempotency

Passing an `Idempotency-Key` on paid write operations is recommended.


Tracing

Requests are traced end to end via `X-Request-Id`, the `pisces` field, and the response headers.

HTTP Error Type Code

cURL Examples

Examples for each protocol, ready to copy and run.


Python SDK Snippets

The example code below is built into the docs so you can debug against the gateway directly.

import base64
import openai

# Pisces OpenAI Demo
api_key = "pisces-xxxx"
base_url = "https://api.pisces.ink"

client = openai.OpenAI(
    api_key=api_key,
    base_url=base_url+"/v1/",
)


# Text 2 Text
model = "gpt-4o-mini"

# Non-Streaming Example
response = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)


# Streaming Example
stream = client.chat.completions.create(
    model=model,
    messages=[
        {"role": "user", "content": "write a short poem."}
    ],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")



# Text 2 Image
# Image Generation Example
image_model = "playground-v3"
result = client.images.generate(
    model=image_model,
    prompt="A cute kitten",
    n=1,  # The number of images to generate
)
image_url = result.data[0].url
print(f"Image URL: {image_url}")


# Image Edit Example
image_model = "GPT-Image-1"
with open('doro.png', 'rb') as image_file:  # The image to edit
    result = client.images.edit(
        model=image_model,
        prompt="The uploaded Doro dog is holding a sign that reads 'I'm Doro'",
        image=[image_file],
        n=1,  # The number of images to generate
    )
image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)

# Save the decoded image to a file
with open("doro_new.png", "wb") as f:
    f.write(image_bytes)