Set the cut-off limit for GPT functions in GPT for Sheets
Set a cut-off limit for the responses of the GPT functions in the current spreadsheet. The limit doesn't shape the response itself: if a response exceeds the limit, it is simply truncated. This setting doesn't affect the bulk AI tools.
| Term | Definition |
|---|---|
| Token | Tokens can be thought of as pieces of words. During processing, the language model breaks down both the input (prompt) and the output (result) texts into smaller units called tokens. Tokens generally correspond to ~4 characters of common English text, so 100 tokens correspond to roughly 75 words. See how text is split into tokens. |
| Context window | Total number of tokens that the model can consider at one time, including input (prompt) and output (result). The context window size depends on the model used. |
| Max output | Maximum number of tokens that a given model can generate in the output. Max output is typically much lower than the context window. |
| Cut-off limit | Maximum size of the result in GPT for Sheets, measured in tokens. If the result is larger than this limit, it is truncated. The cut-off limit helps control cost and speed, and is set 200 tokens below Max output. |
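The arithmetic behind these definitions can be sketched in a few lines. This is only an illustration of the approximations stated above (~4 characters per token, cut-off limit 200 tokens below Max output), not the model's real tokenizer; the function names and the 4096-token model are hypothetical.

```python
# Rough token math from the definitions above. These are heuristics,
# not the actual tokenizer used by the language model.

def estimate_tokens(text: str) -> int:
    """Estimate token count from character length (~4 chars per token)."""
    return max(1, round(len(text) / 4))

def default_cutoff(max_output_tokens: int) -> int:
    """The cut-off limit is set 200 tokens below the model's max output."""
    return max_output_tokens - 200

# A 400-character response is roughly 100 tokens (about 75 words).
print(estimate_tokens("x" * 400))  # 100
# A hypothetical model with a 4096-token max output gets a 3896-token cut-off.
print(default_cutoff(4096))        # 3896
```

Real tokenizers split on subword units, so actual counts vary with the text; the 4-characters-per-token rule is only a planning estimate.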
1. In the sidebar, select GPT functions, and click Model settings.
2. Set Cut-off limit as follows:
   - If you expect short responses, lower the limit to get faster responses.
   - If you expect long responses, increase the limit to make sure they are not truncated.
   - If a response is truncated, increase the limit.
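The truncation behavior described above can be sketched as follows. This is a minimal illustration, assuming the result is already split into tokens; `apply_cutoff` is a hypothetical name, not part of GPT for Sheets.

```python
def apply_cutoff(result_tokens: list[str], cutoff_limit: int) -> list[str]:
    """Trim a tokenized result at the cut-off limit.

    The limit does not shape the response; it only truncates
    whatever comes back once it exceeds the limit.
    """
    return result_tokens[:cutoff_limit]

tokens = ["Once", "upon", "a", "time", "there", "was"]
print(apply_cutoff(tokens, 3))  # ['Once', 'upon', 'a']
print(apply_cutoff(tokens, 10))  # unchanged: limit above result size
```

This is why a limit that is too low produces visibly cut-off text, while a generous limit leaves short responses untouched.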
GPT for Sheets now applies the cut-off limit to all GPT formula results.
When the cache is enabled, re-executing existing GPT formulas returns their cached results, so they do not automatically pick up the new cut-off limit.
To re-execute existing formulas with a different cut-off limit, you can either:
- Change a parameter in the formulas and press Enter.
- Disable the cache, select the formulas, and regenerate their results.
What's next
Configure other settings to customize how the language model operates.