GPT Token Calculator

Count tokens instantly for GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro, Claude Sonnet 4.6, DeepSeek V3.2. See context window usage, estimated cost, and a visual token breakdown.


How It Works

This tool uses the open-source gpt-tokenizer library to run OpenAI's Byte Pair Encoding (BPE) algorithm entirely in your browser. When you type or paste text, the tokenizer splits it into subword units — the same way GPT models process text during inference. GPT-5.4 uses the o200k_base encoding, giving an exact token count. Claude, Gemini, and DeepSeek use proprietary tokenizers that are not available as client-side libraries; this tool uses cl100k_base as a close approximation for those models. All processing is local — zero network requests are made when counting tokens.

Understanding API Costs

Every LLM API provider charges per token. Here's a quick reference for the models in this tool:

  • GPT-5.4: $0.0025 per 1K input tokens, $0.015 per 1K output tokens. 1,000,000 token context window.
  • Claude Opus 4.6: $0.015 per 1K input tokens, $0.075 per 1K output tokens. 200,000 token context window.
  • Gemini 3.1 Pro: $0.002 per 1K input tokens, $0.012 per 1K output tokens. 1,000,000 token context window.
  • Claude Sonnet 4.6: $0.003 per 1K input tokens, $0.015 per 1K output tokens. 200,000 token context window.
  • DeepSeek V3.2: $0.00028 per 1K input tokens, $0.00042 per 1K output tokens. 130,000 token context window.
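The arithmetic behind these estimates is simple: tokens divided by 1,000, times the per-1K rate, input and output summed. A sketch (the rate table and helper below are illustrative, not the tool's actual source):

```typescript
// Per-1K-token rates in USD, as listed above (subset of models for brevity).
interface Rates {
  inputPer1K: number;
  outputPer1K: number;
}

const RATES: Record<string, Rates> = {
  "GPT-5.4":       { inputPer1K: 0.0025,  outputPer1K: 0.015 },
  "Claude Opus 4.6": { inputPer1K: 0.015,   outputPer1K: 0.075 },
  "DeepSeek V3.2": { inputPer1K: 0.00028, outputPer1K: 0.00042 },
};

// Estimated request cost: (tokens / 1000) * rate, summed over input and output.
function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const r = RATES[model];
  return (inputTokens / 1000) * r.inputPer1K + (outputTokens / 1000) * r.outputPer1K;
}

// Example: 2,000 input + 500 output tokens on GPT-5.4
// = 2 × $0.0025 + 0.5 × $0.015 ≈ $0.0125
console.log(estimateCost("GPT-5.4", 2000, 500));
```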

Context Window & Limits

The context window is the maximum number of tokens a model can process in a single request — including both your input (prompt + history) and the model's output. Exceeding it causes an error. Key limits to know:

  • GPT-5.4: 1,000,000 tokens. Input: $0.0025/1K · Output: $0.015/1K.
  • Claude Opus 4.6: 200,000 tokens. Input: $0.015/1K · Output: $0.075/1K.
  • Gemini 3.1 Pro: 1,000,000 tokens. Input: $0.002/1K · Output: $0.012/1K.
  • Claude Sonnet 4.6: 200,000 tokens. Input: $0.003/1K · Output: $0.015/1K.
  • DeepSeek V3.2: 130,000 tokens. Input: $0.00028/1K · Output: $0.00042/1K.
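Because input and output share one window, a pre-flight check before sending a request looks roughly like this (window sizes as listed above; the helper itself is a sketch, not the tool's code):

```typescript
// Maximum context window per model, in tokens (as listed above).
const CONTEXT_WINDOWS: Record<string, number> = {
  "GPT-5.4": 1_000_000,
  "Claude Opus 4.6": 200_000,
  "Gemini 3.1 Pro": 1_000_000,
  "Claude Sonnet 4.6": 200_000,
  "DeepSeek V3.2": 130_000,
};

// A request fits only if prompt + history + reserved output tokens
// stay within the model's context window.
function fitsInWindow(model: string, promptTokens: number, maxOutputTokens: number): boolean {
  return promptTokens + maxOutputTokens <= CONTEXT_WINDOWS[model];
}

console.log(fitsInWindow("GPT-5.4", 125_000, 8_000));       // true
console.log(fitsInWindow("DeepSeek V3.2", 125_000, 8_000)); // false: 133,000 > 130,000
```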

Frequently Asked Questions

What is a token in AI models?
A token is the basic unit of text that a language model processes. In English, one token is roughly 4 characters, or about 0.75 words. A word may be a single token or split into several — 'tokenization' may become 'token' + 'ization'. Numbers and punctuation often get tokens of their own.
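The rule of thumb above (roughly 4 characters per token in English) gives a quick estimate when no tokenizer is at hand; a sketch:

```typescript
// Rough English-only token estimate: ~4 characters per token.
// Real BPE counts vary with vocabulary, language, code, and whitespace,
// so treat this as a ballpark, not a billable figure.
function roughTokenEstimate(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(roughTokenEstimate("a".repeat(400))); // 100
```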
Why does token count matter?
APIs charge per token. Knowing your token count lets you estimate costs, avoid exceeding context window limits (which causes API errors), and optimise your prompts for better cost efficiency.
Is my text private?
Completely. All tokenization runs in your browser. No text is sent to servers, stored, or logged. Open DevTools Network tab while typing to verify — you'll see zero outbound requests.
How accurate is the count for Claude, Gemini, and DeepSeek?
GPT-5.4 is exact — it uses OpenAI's o200k_base tokenizer directly. Claude, Gemini, and DeepSeek use proprietary tokenizers, so this tool uses cl100k_base as a proxy, typically within 5–10% of the actual count. For exact counts use Anthropic's API token counter, Google AI Studio, or DeepSeek's API.
