The Question Every AEO Team Asks
You published the case study. The benchmark is live. The comparison page is up. So when does ChatGPT start citing it?
The honest answer: not on a fixed schedule, and not all at once. New proof reaches AI answers on a range that runs from days to months, and the range depends on which model you mean and which surface the proof lives on. Treating "AI-answer latency" as one number is the first mistake teams make.
Why the Lag Varies
Two mechanisms decide how fast a model can reflect new content.
Retrieval vs. trained-in knowledge
When a model answers by retrieving live web results — Perplexity does this by default, and ChatGPT and Gemini do it when they browse — newly published proof can surface within days, as soon as it is crawled and indexed. When a model answers from knowledge baked into its training, that content was frozen at a cutoff months in the past, and publishing this week changes nothing until the next training cycle.
Most real answers blend both. That is why the same published proof can move a Perplexity answer quickly and leave a non-browsing ChatGPT answer unchanged for far longer.
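To make the two mechanisms concrete, here is a toy sketch in Python of when newly published proof *can* appear at all. The function, the dates, and the binary `browses` flag are invented for illustration; real systems blend both paths per query rather than switching cleanly between them.

```python
from datetime import date

def can_surface(published, today, browses, training_cutoff):
    """Toy model of the two update paths described above (illustrative only).

    A browsing/retrieval surface can reflect a page once it is live and
    indexed; a non-browsing model only knows what predates its cutoff.
    """
    if browses:
        return published <= today           # bounded by crawl/index lag: days
    return published <= training_cutoff     # frozen until the next training cycle

# Proof published this week: visible to a retrieval surface, invisible
# to a model whose training data was frozen months ago.
published = date(2024, 6, 3)
print(can_surface(published, today=date(2024, 6, 10), browses=True,  training_cutoff=date(2023, 10, 1)))  # True
print(can_surface(published, today=date(2024, 6, 10), browses=False, training_cutoff=date(2023, 10, 1)))  # False
```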
The surface the proof lives on
Proof on your own domain has to be discovered and trusted before it counts. Proof that gets picked up by a source the model already leans on — an analyst, a review platform, a credible third party — can surface faster, because the model already weights that surface. Where you publish changes the latency as much as when.
Daily-Noisy, Weekly-Meaningful
Even once an answer starts to move, it moves unevenly. Run the same prompt on three consecutive days and you will see your brand named, then not, then named again, without publishing anything new. AI-answer position is noisy day to day.
The signal is in the week, not the day. A position that holds across a week of reads is a real shift; a single good read is not. This is why TrendsCoded reads daily but plans weekly — the daily reads catch the movement, the weekly cadence filters the noise.
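One way to operationalize "read daily, judge weekly" is to log each daily read as a boolean (was the brand named?) and collapse the flags into a weekly citation rate. A minimal sketch, assuming a flat list of (date, model, named) records; the record shape and names are illustrative, not an actual TrendsCoded schema:

```python
from collections import defaultdict
from datetime import date, timedelta

# One record per prompt, per model, per day: did the answer name the brand?
reads = [
    (date(2024, 6, 3), "perplexity", True),
    (date(2024, 6, 4), "perplexity", False),
    (date(2024, 6, 5), "perplexity", True),
    # ... more daily reads
]

def weekly_rate(reads, model):
    """Collapse noisy daily reads into a weekly citation rate for one model."""
    buckets = defaultdict(list)
    for day, m, named in reads:
        if m != model:
            continue
        week_start = day - timedelta(days=day.weekday())  # Monday of that week
        buckets[week_start].append(named)
    # A week of reads is the smallest unit worth judging.
    return {week: sum(flags) / len(flags) for week, flags in sorted(buckets.items())}

print(weekly_rate(reads, "perplexity"))  # -> {datetime.date(2024, 6, 3): 0.666...}
```

A rate that holds or climbs across consecutive weekly buckets is what the prose above calls a real shift; one good day barely moves its bucket.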
One Piece Rarely Flips an Answer
The expectation worth correcting: a single published asset will rarely, by itself, change how a model answers. Models weigh a body of evidence. One case study adds one data point to that body — it nudges, it does not flip.
What changes answers is accumulation. Each published piece of proof joins the corpus the model lifts from, and the next read starts from a slightly stronger base. Position moves as the proof compounds — which is why the gain shows up in the trend over six or eight weeks, not in the days after any one publish.
How to Read It Without Fooling Yourself
- Don't judge a publish on day two. Give it a week of reads before you decide it worked or it didn't.
- Separate the models. A win on Perplexity and silence on Claude is normal, not a contradiction — they update on different mechanisms.
- Watch the trend line, not the data point. Six weeks of position movement is the result; any single read is weather (see the sketch after this list).
- Expect retrieval surfaces to move first. If nothing has moved anywhere after several weeks, the issue is usually the proof itself — not the wait.
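To "watch the trend line" concretely, fit a simple least-squares slope to the weekly citation rates and judge its sign and size, never any single week. A hedged sketch with invented numbers; the six-week window mirrors the horizon named above:

```python
def trend_slope(weekly_rates):
    """Ordinary least-squares slope of weekly citation rates vs. week index.

    A sustained positive slope across six-plus weeks is the signal;
    any single week's rate is weather.
    """
    n = len(weekly_rates)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(weekly_rates) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, weekly_rates))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Six weeks of (illustrative) citation rates for one prompt on one model.
rates = [0.10, 0.15, 0.10, 0.25, 0.30, 0.35]
print(f"{trend_slope(rates):+.3f} per week")  # +0.053: rising about five points a week
```

Note the noise inside the series: week three dips back to the week-one rate, yet the slope over the full window is clearly positive. Judged a week at a time, that dip looks like a loss; judged as a trend, it is weather.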
The Takeaway
AI answers change on a range, not a date — days on retrieval-driven models, longer where knowledge is trained in, and never on a single piece of content alone. Publish proof, give it a week before you read the result, judge it on the trend, and let the gains compound. That patience is built into the weekly loop on purpose.