We’ve been going about prompting all wrong — according to Google’s latest paper

The state of prompting. For years, the prevailing wisdom for prompting AI models has been: more detail + more context = better output. That mindset encouraged users to craft increasingly elaborate prompts to coax the best results from AI. But a new paper from Google Research suggests we might be overthinking it.

The “cheat code.” Just copy and paste your prompt so it appears twice. That’s it. When tested across Gemini, GPT-4o, Claude, and DeepSeek (with reasoning mode off), this “prompt repetition” strategy won 47 out of 70 benchmark tests — with zero losses. On some tasks, accuracy jumped by up to 76 percentage points.

The science behind it. LLMs process text left to right under causal attention, so early tokens can't "see" anything that comes later in your query. When the prompt is repeated, every token in the second copy can "look back" at the entire first copy, recovering context the model couldn't use on the first pass. The kicker? This strategy doesn't increase latency or output length, making it an essentially free way to get better results.
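As a minimal sketch of the idea, prompt repetition is just string duplication before the text reaches the model. The helper name, separator choice, and the commented-out client call below are illustrative assumptions, not details from the paper:

```python
def repeat_prompt(prompt: str, separator: str = "\n\n") -> str:
    """Duplicate the prompt verbatim so the model sees it twice."""
    return f"{prompt}{separator}{prompt}"

query = "List the prime numbers between 10 and 30."
doubled = repeat_prompt(query)

# Hypothetical usage with a generic chat-completion client (placeholder, not a real API):
# response = client.generate(model="some-model", prompt=doubled)

print(doubled.count(query))  # → 2 (the query appears twice)
```

Because the repetition happens entirely on the input side, it can wrap any existing prompt pipeline without changing the model call itself.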

Putting it in action. This doesn't mean you should abandon detailed prompts entirely — complex tasks still call for complex instructions. But this research offers a surprisingly simple addition to your AI toolkit: when in doubt, say it twice.

via Superhuman