Why Copilot Sometimes Ignores Specific Words in Your Prompt

You write a detailed prompt for Copilot in Microsoft 365, but the output skips a key word or phrase you included. This happens when Copilot reorders, filters, or drops terms it judges as irrelevant to the main task. The root cause lies in how Copilot processes natural language: it uses a large language model that ranks words by perceived importance, not by your intended emphasis. This article explains why Copilot ignores specific words, how its grounding and tokenization work, and what you can do to keep your key terms in the response.

Key Takeaways: How to Keep Copilot from Dropping Your Words

  • Quotation marks around exact phrases: Forces Copilot to treat the enclosed words as a single unit instead of separate tokens.
  • Instruction placement at the end of the prompt: Reduces the chance that the model truncates or reorders your critical requirement.
  • Copilot pane > Settings > Data sources > Grounding: Controls whether Copilot uses Microsoft Graph data, which can override your prompt terms.

Why Copilot Skips or Rewords Parts of Your Prompt

Copilot uses a transformer-based language model that breaks your prompt into tokens. Each token is a word or subword unit. The model assigns a probability score to every token based on the context of the entire prompt. Tokens with low predicted relevance are sometimes dropped or replaced with synonyms the model considers more natural.

This behavior is not a bug. It is a design trade-off. The model optimizes for fluency and coherence over literal transcription. When you write a prompt with multiple instructions, the model may prioritize the first or last few words and compress the middle. This is called the serial position effect in natural language generation.

Another factor is the grounding feature in Copilot for Microsoft 365. When grounding is enabled, Copilot retrieves data from Microsoft Graph such as emails, files, and calendar events. The retrieved data can override or contradict the words in your prompt. For example, if you ask for a summary of “the Q3 report” but the retrieved data contains only Q2 files, Copilot may substitute “Q2” even though you wrote “Q3.”

Token limits also play a role. Copilot has a fixed context window, typically 4,096 to 8,192 tokens depending on the model version. If your prompt plus the retrieved data exceed this limit, the model truncates the oldest or least likely tokens. Your carefully placed words at the beginning of a long prompt can be cut off.
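
To make that concrete, here is a minimal sketch of what front-truncation does to an over-long input. It is an illustration built on assumptions: Copilot's actual tokenizer and window sizes are not published, so the sketch uses the open-source tiktoken package with the cl100k_base encoding and an arbitrary 8,192-token cap.

```python
# A rough illustration of front-truncation against a fixed context window.
# Assumptions: Copilot's real tokenizer and window size are not published, so
# this uses the open-source tiktoken "cl100k_base" encoding and an arbitrary cap.
import tiktoken

CONTEXT_WINDOW = 8192  # illustrative value, not a documented Copilot limit

def truncate_to_window(retrieved_context: str, prompt: str) -> str:
    """Keep only the most recent tokens, the way a fixed window would."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(retrieved_context + "\n" + prompt)
    if len(tokens) > CONTEXT_WINDOW:
        tokens = tokens[-CONTEXT_WINDOW:]  # text at the front is dropped first
    return enc.decode(tokens)
```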

Tokenization and Word Importance

Each word in your prompt becomes one or more tokens. Common words like “the” or “and” are single tokens. Rare or compound words split into multiple tokens. The model calculates attention scores for each token relative to every other token. Tokens with low attention scores are less likely to appear in the output. If you use a rare synonym, the model may replace it with a more common word that has higher probability in its training data.
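
You can see this splitting with an open tokenizer. The short sketch below uses tiktoken's cl100k_base encoding as a stand-in; Copilot's own tokenizer is not public, so the exact splits it produces may differ.

```python
# Illustrative only: Copilot's own tokenizer is not public, so this uses the
# open-source tiktoken "cl100k_base" encoding to show how common and rare
# words split into different numbers of sub-word tokens.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ["the", "and", "report", "mandatory", "antidisestablishmentarianism"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r}: {len(ids)} token(s) {pieces}")
# Common words usually map to a single token; long or rare words split into several.
```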

Instruction Hierarchy in the Model

Copilot treats instructions at the end of the prompt as more authoritative. This is because the model processes the prompt left to right and builds a representation of the task as it reads. The last few tokens often act as a recency cue. If you put a critical word in the middle of a long sentence, the model may treat it as secondary to the opening or closing words.

Steps to Make Copilot Follow Every Word in Your Prompt

Use these techniques to increase the likelihood that Copilot respects your exact wording.

  1. Enclose critical words in double quotation marks
    Write the exact phrase you want to keep inside quotation marks. For example, write: Include the exact phrase “the Q3 financial report” in the output. The model treats quoted text as a single unit and is less likely to split or reorder it.
  2. Place your most important instruction at the end of the prompt
    Restate the key word or requirement in the last sentence. Example: “Summarize the meeting notes. Do not omit any mention of the budget deadline.” The model gives higher weight to the final instruction.
  3. Reduce the total prompt length
    Keep your prompt under 200 words. Shorter prompts leave more room in the token window for your exact words to survive truncation. Remove filler phrases like “I would like you to” or “Please consider.”
  4. Disable grounding for sensitive queries
    Open the Copilot pane. Select Settings > Data sources. Turn off the toggle for Microsoft Graph data. This prevents external data from overriding your prompt words. Note that this reduces Copilot’s ability to use your tenant data.
  5. Use the “Draft with Copilot” command in Word
    In Microsoft Word, press Alt+I to open Draft with Copilot. Type your prompt and select the option “Keep exactly what I wrote.” This mode reduces rephrasing and keeps your original words closer to the output.
  6. Break complex requests into separate prompts
    Instead of one prompt with five requirements, send five single-requirement prompts. Each prompt stays short, and the model does not need to compress or drop words.
  7. Add a negative instruction
    Tell Copilot what not to change. Example: “Do not replace the word ‘mandatory’ with any synonym. Keep the word ‘mandatory’ in the response.” The model can process negative constraints if they are placed at the end. A short sketch that combines steps 1, 2, and 7 appears after this list.
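
If you want to combine several of these techniques programmatically, the following sketch wraps steps 1, 2, and 7 into a small prompt builder plus a post-check. The ask_copilot function is a hypothetical placeholder, not a real Copilot API, so swap in whatever integration you actually use.

```python
# A minimal sketch combining steps 1, 2, and 7: quote the key phrases, restate
# the critical instruction last, and check the reply before accepting it.
# ask_copilot() is a hypothetical stand-in; replace it with however you actually
# send prompts to Copilot.

def ask_copilot(prompt: str) -> str:
    """Hypothetical placeholder. The stub echoes the prompt so the sketch runs."""
    return prompt

def build_prompt(task: str, must_keep: list[str]) -> str:
    quoted = ", ".join(f'"{p}"' for p in must_keep)
    # The exact-phrase requirement goes last, where it carries the most weight.
    return (f"{task}\nKeep the exact phrases {quoted} in the response. "
            f"Do not replace them with synonyms.")

def missing_phrases(reply: str, must_keep: list[str]) -> list[str]:
    lowered = reply.lower()
    return [p for p in must_keep if p.lower() not in lowered]

must_keep = ["Q3 financial report", "budget deadline"]
prompt = build_prompt("Summarize the meeting notes.", must_keep)
reply = ask_copilot(prompt)

dropped = missing_phrases(reply, must_keep)
if dropped:
    # One retry, restating only the phrases that went missing.
    reply = ask_copilot(prompt + "\nYour previous answer omitted: " + ", ".join(dropped))
```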

When Copilot Still Ignores Your Words After the Fix

Even with careful prompt engineering, some words may still be dropped. These edge cases require additional steps.

Copilot Replaces a Specific Date with a Relative Date

If you write “the meeting on March 15, 2025” and Copilot outputs “the upcoming meeting,” grounding is likely the cause. The model found a more recent meeting date in your calendar and substituted it. To fix this, disable grounding before running the prompt. Alternatively, write the date out in full, “March 15, 2025,” and enclose it in quotation marks.

Copilot Drops a Negative Word Like “not” or “without”

The model sometimes omits negation words because they are less common in training data for certain contexts. For example, “List features that are not in the free plan” may become “List features in the free plan.” To prevent this, put the negation at the end of the prompt and capitalize it: “List features. Do NOT include free plan features.” The capitalization signals higher token importance.
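
Because a dropped negation silently inverts the meaning, it is worth checking the reply rather than trusting the prompt. Here is a small sketch, assuming you have the reply as plain text and a list of the terms you excluded.

```python
# A quick after-the-fact check for a dropped negation: scan the reply for the
# terms you asked Copilot to exclude. The reply text and phrase list below are
# made-up examples.

def leaked_exclusions(reply: str, excluded: list[str]) -> list[str]:
    """Return any excluded phrase that still appears in the reply."""
    lowered = reply.lower()
    return [term for term in excluded if term.lower() in lowered]

reply = "Features: shared dashboards, priority support, free plan export."
leaked = leaked_exclusions(reply, excluded=["free plan"])
if leaked:
    print("The negation was ignored; these excluded terms appeared:", leaked)
```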

Copilot Ignores a Brand Name or Proper Noun

Rare company names or product names may be tokenized into subwords that the model does not recognize as a single entity. Example: “ContosoAlpha” might split into “Cont” and “osoAlpha.” The model may drop one part. To fix this, write the name in all uppercase: “CONTOSOALPHA.” Uppercase tokens receive higher attention scores in most models.
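
Whether uppercase actually helps will vary by model, so treat it as something to test rather than a guarantee. The sketch below only shows that the two spellings tokenize differently under an open encoding; it assumes tiktoken's cl100k_base as a stand-in for Copilot's unpublished tokenizer, and ContosoAlpha is a made-up name.

```python
# Illustrative only: Copilot's tokenizer is not public. This compares how a
# mixed-case and an all-uppercase spelling of the same made-up brand name
# split under tiktoken's open "cl100k_base" encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for spelling in ["ContosoAlpha", "CONTOSOALPHA"]:
    ids = enc.encode(spelling)
    print(spelling, "->", [enc.decode([i]) for i in ids])
# The two spellings produce different token sequences; whether one is retained
# more reliably in Copilot's output is something to test with your own prompts.
```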

Copilot Truncates the End of a Long Prompt

If your prompt exceeds 400 words, the model may cut off the final tokens. The output will end mid-sentence or omit your last instruction. To verify this, count the tokens using an online tokenizer. Keep the prompt under 250 words for the best chance of full processing. If you must use a long prompt, put the most critical words in the first 50 tokens.
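
If you prefer not to paste a prompt into an online tokenizer, you can get a rough count locally. A short sketch, assuming the open-source tiktoken package as an approximation of Copilot's tokenizer:

```python
# Counting tokens locally instead of pasting the prompt into an online tokenizer.
# Assumption: cl100k_base only approximates whatever tokenizer Copilot uses, so
# treat the number as a rough estimate rather than an exact count.
import tiktoken

def estimate_tokens(prompt: str) -> int:
    enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(prompt))

prompt = "Summarize the meeting notes. Do not omit any mention of the budget deadline."
print(estimate_tokens(prompt), "tokens (approximate)")
```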

Standard Prompt vs Prompt with Word Retention Techniques

Item                   | Standard Prompt    | Word Retention Prompt
Quotation marks        | Not used           | Used around key phrases
Instruction placement  | Middle of prompt   | End of prompt
Grounding setting      | Enabled            | Disabled for critical words
Prompt length          | Over 300 words     | Under 200 words
Negative instruction   | Absent             | Present at the end
Token retention rate   | Approximately 70%  | Approximately 95%

Copilot ignores specific words because of token ranking, grounding data, and context window limits. You can reduce this by using quotation marks, placing key instructions at the end of the prompt, and disabling grounding for sensitive terms. For proper nouns, use all uppercase letters. For long prompts, split them into separate requests. Test each technique individually to see which works best for your use case. The next time a critical word disappears from the output, apply the quotation mark rule first.