Output issues (GPT for Docs)

Response is cut or incomplete

This response was truncated. Choose a model with a larger context window.

Problem: The response you get from GPT for Docs is truncated.

Solution

Every model has a maximum token limit: the maximum total number of tokens shared between the input and the response. If your query is long, or you select a large portion of text as input, the model may run out of tokens before it can finish its response, leaving it truncated.

To stay within the token limit, you can:

  • Highlight a smaller portion of text in your document.

  • Reduce the value of Max response tokens.

  • Select a model with a larger context window.
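To see why these steps help, you can sketch the token budget yourself. The snippet below is an illustrative approximation only: it uses the common rule of thumb of roughly 4 characters per English token, and the 4,096-token limit is a hypothetical example, not the actual limit of any GPT for Docs model.

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 characters/token rule of thumb."""
    return max(1, len(text) // 4)

def fits_in_limit(selection: str, max_response_tokens: int,
                  token_limit: int = 4096) -> bool:
    """True if the input plus the reserved response budget stays within
    the model's total token limit (hypothetical 4096-token example)."""
    return estimate_tokens(selection) + max_response_tokens <= token_limit

# A moderate selection with a 1,000-token response budget fits...
selection = "Summarize this paragraph. " * 100
print(fits_in_limit(selection, max_response_tokens=1000))   # True

# ...but a very large selection exceeds the budget, so the response
# would be truncated unless you shrink the input, lower the response
# budget, or pick a model with a higher limit.
print(fits_in_limit("x" * 20000, max_response_tokens=1000))  # False
```

Shrinking the highlighted text reduces the input side of the sum, lowering Max response tokens reduces the reserved output side, and a higher-limit model raises the total budget, which is why each of the three fixes above works.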