LLM API Public Pricing

Model

Model Detail

DeepSeek V3.2 (thinking)

DeepSeek · Text · Chat completion

Lightweight model for cost-sensitive chat, basic generation, and batch text tasks.

Updated 2026-03-21

Input

0.28 USD / 1M tokens

Output

0.42 USD / 1M tokens

Context window

128,000 tokens

Max output

Not provided

Capabilities

Chat

Knowledge cutoff

Not provided

Reasoning model

Yes

Primary unit

1M tokens

Official and future third-party prices are shown in a shared unit.

Evidence rows

1

Each price row records its source site, observation time, and a supporting excerpt.

Quick verify

Verify on Artificial Analysis

Start from this summary, then cross-check official sources or leaderboards.

Use cases

Good for

Low-cost chat · Lightweight text generation · Batch summarization and rewriting

Not ideal for

Complex reasoning workflows · Heavyweight agent orchestration · Demanding multimodal tasks

Artificial Analysis snapshot

Fields below come from the AA model page for quick cross-checking.

AA Intelligence

41.71

AA Input price

3 USD / 1M tokens

AA Output price

4.5 USD / 1M tokens

Input modalities

Text

Output modalities

Text

Pricing details

Structured official and third-party prices with extensible dimensions.

Official pricing

Input: 0.28 USD / 1M tokens
Output: 0.42 USD / 1M tokens
Cache: Not provided
Batch: Pending
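Given the official per-1M-token rates above, the cost of a single call is a simple linear sum over input and output tokens. A minimal sketch (the function name and example token counts are illustrative, not part of any DeepSeek SDK):

```python
# Official DeepSeek V3.2 (thinking) rates from the table above, in USD per 1M tokens.
INPUT_RATE = 0.28
OUTPUT_RATE = 0.42

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one chat completion call at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE + (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: a 12,000-token prompt producing a 3,000-token reply.
cost = request_cost(12_000, 3_000)
print(f"{cost:.6f} USD")  # 0.004620 USD
```

Note this uses the cache-miss input rate only; the vendor table cited under "Sources and evidence" also lists a separate cache-hit input tier not modeled here.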

Third-party provider pricing

No third-party provider pricing yet
Reserved dimensions: Input / Output / Cache / Batch
Unified unit: 1M tokens

Sources and evidence

Each price line includes a source chain for trust and verification.

  • Field: Input / Output

    Scenario: Chat completion

    Captured value: 0.28 USD / 1M tokens

    Source domain: api-docs.deepseek.com

    Observed at: 2026-03-21

    Excerpt: 1M INPUT TOKENS (CACHE MISS) — deepseek-reasoner (per 1M tokens, vendor table) --- 1M INPUT TOKENS (CACHE HIT) — deepseek-reasoner (per 1M tokens, vendor table) --- 1M OUTPUT TOKENS — deepseek-reasoner (per 1M tokens, vendor table)