
Reduce repetition with OpenAI models

Set Presence and Frequency penalties to reduce the tendency of OpenAI models towards repetition.

Term | Definition
Presence penalty | Penalizes tokens that have already appeared in the text so far. Higher values encourage the model to use new tokens rather than ones it has already used.
Frequency penalty | Penalizes tokens based on how frequently they appear in the text so far. Higher values discourage the model from repeating the same tokens: the more often a token appears, the more heavily it is penalized.
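For reference, these two settings correspond to the presence_penalty and frequency_penalty parameters of the OpenAI API. The sketch below is a minimal Python example, independent of GPT for Docs, showing how the same values would be passed in a direct API call; the model name and prompt are illustrative placeholders.

```python
# Minimal sketch: passing presence and frequency penalties to the OpenAI API.
# GPT for Docs sets these values for you through the Model settings sidebar;
# this only illustrates what the settings map to. Model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Write a short product description."}],
    presence_penalty=0.6,   # higher values push the model toward tokens it has not used yet
    frequency_penalty=0.8,  # higher values penalize tokens in proportion to how often they appear
)
print(response.choices[0].message.content)
```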
Token

Tokens can be thought of as pieces of words. During processing, the language model breaks both the input (prompt) and the output (completion) text into smaller units called tokens. A token generally corresponds to ~4 characters of common English text, so 100 tokens amount to roughly 75 words. Learn more in our token guide.
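As an illustration, the sketch below uses the tiktoken library (an assumption; it is not part of GPT for Docs) to count how many tokens a short English sentence produces.

```python
# Minimal sketch: counting tokens with the tiktoken library (not part of GPT for Docs).
import tiktoken

# cl100k_base is the encoding used by recent OpenAI chat models.
encoding = tiktoken.get_encoding("cl100k_base")

text = "Tokens can be thought of as pieces of words."
tokens = encoding.encode(text)

print(len(tokens))              # number of tokens, typically around len(text) / 4
print(encoding.decode(tokens))  # decoding the tokens returns the original text
```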

Prerequisites
You have opened a Google document and selected Extensions > GPT for Sheets and Docs > Launch.
  1. In the GPT for Docs sidebar, click Model settings.

  2. Set Presence penalty and Frequency penalty to values between 0 and 2.

    The default value for both penalties is 0.

You can now submit a prompt in the current document with the new penalties. Once a prompt is submitted, the Presence penalty and Frequency penalty values are saved along with the other Model settings values and are used for all prompts executed from Google documents.

What's next

Select other settings to customize how the language model operates.