LLM Token & Cost Calculator

Paste any prompt to see the estimated token count and what it would cost across major AI providers.

Runs entirely in your browser

Token counts are estimated using the ~4 characters/token rule of thumb (±15% for English text). Prices are approximate and sourced from provider documentation; always verify on the official OpenAI, Anthropic, Google AI, Mistral, and Together AI pricing pages before making cost decisions. Prices shown are per-request (no batching discounts).
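The estimate above can be sketched in a few lines. This is a minimal illustration, not the calculator's actual code: the function names are mine, and the rates used in the example are placeholders, not any provider's real prices.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, round(len(text) / 4))

def request_cost(input_tokens: int, output_tokens: int,
                 input_per_1m: float, output_per_1m: float) -> float:
    """Cost of one request, given per-1M-token input and output rates."""
    return (input_tokens * input_per_1m
            + output_tokens * output_per_1m) / 1_000_000

prompt = "Summarize the following article in three bullet points." * 20
tokens = estimate_tokens(prompt)
# Illustrative rates only ($2.50 in / $10.00 out per 1M tokens);
# check the provider's pricing page for real numbers.
print(tokens, request_cost(tokens, tokens, 2.50, 10.00))
```

Dividing a budget by the per-request cost gives the "queries / budget" figure shown in the results table.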

Model Pricing Reference ($ per 1 million tokens)
Model | Provider | Input $/1M | Output $/1M | Context

Prices last verified mid-2025. Check provider pages for the latest rates.

LLM Token Calculator — FAQ

How are tokens counted?
We use the standard approximation of 1 token per 4 characters for English text. Actual token counts depend on the model's tokenizer and can vary by ±15%.
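Applying that ±15% spread to the 4-characters-per-token heuristic gives a low/estimate/high band rather than a single number. A minimal sketch (the function name is mine, not part of the tool):

```python
def estimate_token_range(text: str) -> tuple[int, int, int]:
    """Return (low, estimate, high) token counts: ~4 chars/token, ±15%."""
    est = max(1, round(len(text) / 4))
    return round(est * 0.85), est, round(est * 1.15)

print(estimate_token_range("x" * 4000))  # → (850, 1000, 1150)
```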
Which models are included?
GPT-4o, GPT-4o mini, GPT-4 Turbo, o1, o1-mini, Claude 3 Opus, Claude 3.5 Sonnet, Claude 3.5 Haiku, Gemini 2.0 Flash, Gemini 1.5 Pro, Gemini 1.5 Flash, Mistral Large, Mistral Small, and Llama models via Together AI.
Are prices accurate?
Prices are approximate and sourced from provider documentation. AI API pricing changes frequently — always verify on the provider's official pricing page.
What are "output tokens"?
Output tokens are the tokens the model generates in response to your input. By default we assume the output is the same length as your input token count, but you can override that estimate.