Average number of tokens consumed by a single execution (including both your prompt and the model's response). You can calculate tokens for a typical execution in the estimator below.
Your input should include your full prompt and custom instructions if they exist.
This is the output returned by the AI. Note that for reasoning models, the number of output tokens can be significantly higher than your visible output because of reasoning tokens.
You can estimate token counts for various providers with our tokenizer tool.
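For a quick back-of-the-envelope estimate without a tokenizer tool, a common rule of thumb is roughly 4 characters per token for English text. The sketch below uses that heuristic; it is an approximation only, and the function names are illustrative, not part of any GPT for Work API. For exact counts, use the tokenizer tool mentioned above.

```python
# Rough token estimate using the common ~4 characters-per-token
# heuristic for English text. This is an approximation; exact counts
# require the provider's own tokenizer.

def estimate_tokens(text: str) -> int:
    """Approximate token count: ~1 token per 4 characters."""
    return max(1, round(len(text) / 4))

def tokens_per_execution(prompt: str, response: str) -> int:
    """Input + output tokens for a single execution."""
    return estimate_tokens(prompt) + estimate_tokens(response)

prompt = "Translate 'hello' to French."
response = "Bonjour"
print(tokens_per_execution(prompt, response))  # prints 9
```

Multiply the result by your expected number of executions to approximate total usage before running a large batch.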
Detailed execution estimator
Example function: =GPT(prompt, [value])
This is the response returned by the AI. Note that it will usually be slightly different from what you see in your spreadsheet as our functions do some post-processing to improve formatting.
Notes
Without an API key: The estimated cost covers everything and is deducted from your balance. Models that support prompt caching will get a 75% discount on cached input tokens, so the estimated cost can be higher than your actual cost.
With an API key: You pay GPT for Work the estimated cost (deducted from your balance) and also pay the AI provider directly for API usage.
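The estimated cost described in these notes can be sketched as tokens multiplied by a per-token price, with cached input tokens billed at 25% (the 75% discount mentioned above). The per-million-token prices in this sketch are hypothetical placeholders, not actual provider or GPT for Work pricing.

```python
# Sketch of the cost estimate described in the notes above.
# Prices are hypothetical placeholders, not real provider pricing.
# Cached input tokens are billed at 25% (the 75% discount above).

INPUT_PRICE_PER_M = 2.50    # hypothetical $ per 1M input tokens
OUTPUT_PRICE_PER_M = 10.00  # hypothetical $ per 1M output tokens

def estimated_cost(input_tokens: int, output_tokens: int,
                   cached_input_tokens: int = 0) -> float:
    """Estimated cost in dollars for a batch of executions."""
    uncached = input_tokens - cached_input_tokens
    return (uncached * INPUT_PRICE_PER_M
            + cached_input_tokens * INPUT_PRICE_PER_M * 0.25
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# 1,000 executions of 500 input + 200 output tokens each:
print(f"${estimated_cost(500_000, 200_000):.2f}")  # prints $3.25
```

This shows why the estimate can exceed your actual cost: the estimator assumes no caching, while real runs with repeated prompts benefit from the cached-input discount.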
For more information about the models, see AI providers and models. Note that token counts can vary from one AI provider to another. For some models (such as Sonar, Mistral, and DeepSeek), the token count is approximate because the OpenAI tokenizer is used to estimate it.
Unlimited BYOK usage subscription
Annual plan with custom pricing. Pay per user, not per token, with unlimited usage. Requires your own API key or endpoint.