Reduce repetition with OpenAI models

Set presence and frequency penalties to reduce the tendency of OpenAI models towards repetition.

| Parameter | Definition |
| --- | --- |
| Presence penalty | Penalizes new tokens based on whether they appear in the text so far. Higher values encourage the model to introduce tokens that have not appeared yet. |
| Frequency penalty | Penalizes tokens based on how often they appear in the text so far. Higher values discourage the model from repeating the same tokens too often. |
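
In OpenAI's own API, these settings are exposed as the presence_penalty and frequency_penalty parameters of the Chat Completions endpoint. If you also call the API directly, a minimal Python sketch looks like this (the model name, prompt, and penalty values are illustrative; the API itself accepts values from -2 to 2, while the sidebar exposes 0 to 2):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Write a short paragraph about coffee."}],
    presence_penalty=1.0,   # penalize tokens that have already appeared at all
    frequency_penalty=1.0,  # penalize tokens in proportion to how often they appeared
)
print(response.choices[0].message.content)
```

Raising either value above 0 makes repeated wording less likely; values close to 2 can make the output noticeably more varied.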
Token

Tokens can be thought of as pieces of words. During processing, the language model breaks down both the input (prompt) and the output (completion) into smaller units called tokens. A token generally corresponds to about 4 characters of common English text, so 100 tokens amount to roughly 75 words. Learn more with our token guide.
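
To see how a model splits text into tokens, you can experiment with OpenAI's tiktoken library; here is a minimal sketch (the cl100k_base encoding is an assumption and differs between models):

```python
import tiktoken

# cl100k_base is the encoding used by many recent OpenAI chat models
enc = tiktoken.get_encoding("cl100k_base")

text = "Tokens can be thought of as pieces of words."
tokens = enc.encode(text)

print(len(text), "characters ->", len(tokens), "tokens")
# Rule of thumb: ~4 characters of English per token, so 100 tokens is about 75 words.
```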

Prerequisites
You have opened a Google document and selected Extensions > GPT for Sheets and Docs > Launch.
  1. In the GPT for Docs sidebar, click Model settings.

  2. Set Presence penalty and Frequency penalty to a value between 0 and 2.

    Both penalties are set to 0 by default.

You've set the Presence penalty and Frequency penalty. GPT for Docs now uses these values when generating all responses.

What's next

Select other settings to customize how the language model operates.