TrendsCoded reads ChatGPT, Claude, Gemini, and Perplexity as four distinct surfaces — same prompt set, four separate answer streams — because each model retrieves and weights brand signals differently. The Signal Desk shows you what each engine names, which rivals it co-mentions, and where you sit by model.
AI assistants don't agree on rankings. ChatGPT tends to commit to a single recommendation when sources align; Claude hedges and names more rivals per answer; Gemini favors entity-rich, structured content; Perplexity's answers are citation-heavy and depend on live links. Treating them as one averaged "AI search" number discards the signal that matters most: where your buyer is researching, and which engine is shaping your category.
TrendsCoded reads each engine daily on the same buyer prompts you've configured. The Signal Desk surfaces per-engine deltas — if Claude starts naming a rival it didn't last week, you see that move on its own row, not buried in an average. Position Score breaks down by engine so you can answer "are we losing on Perplexity specifically?" without re-cutting the data.
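The per-engine delta idea is essentially a week-over-week set difference computed separately for each engine row, never across an average. A minimal sketch of that comparison (the engine names, brand names, and `mention_deltas` helper here are invented for illustration, not TrendsCoded's actual API or data model):

```python
def mention_deltas(last_week, this_week):
    """For each engine, report rivals newly co-mentioned and rivals dropped
    between two weekly snapshots of brand mentions."""
    deltas = {}
    for engine in this_week:
        prev = set(last_week.get(engine, []))
        curr = set(this_week[engine])
        deltas[engine] = {
            "new": sorted(curr - prev),      # rivals the engine just started naming
            "dropped": sorted(prev - curr),  # rivals it stopped naming
        }
    return deltas

# Illustrative snapshots: Claude starts naming a rival, Perplexity drops one.
last = {"claude": ["AcmeCRM"], "perplexity": ["AcmeCRM", "PipeFox"]}
this = {"claude": ["AcmeCRM", "PipeFox"], "perplexity": ["AcmeCRM"]}
print(mention_deltas(last, this))
```

Because the diff runs per engine, a rival appearing on Claude and vanishing from Perplexity in the same week shows up as two distinct moves rather than cancelling out in a blended score.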
The weekly AEO Strategic Plan flags which engine the gap is showing up on and the proof structure that engine rewards: long-form benchmarks for Claude, structured FAQ for ChatGPT, citation-friendly pages for Perplexity. Same gap, a different fix to ship, because each model weighs different evidence.
$500 · 7 days · Friday delivery · founder-configured · fixed price · no subscription. Capped at the first 15 pilots.