Most teams optimizing for AI answers are still counting mentions. The Trends Desk doesn't. It tracks the brand-configured trends moving your AI-answer position each week — and qualifies the evidence inside each trend across four pillars until what's left are the signals you can ship against.
Trends are the unit of attention. Signals are the qualified evidence. Three moves a week is the output. Everything else is noise.
Key terms in one place
- Trend: A movement in how your category gets explained inside an AI answer this week, such as a new buying criterion, a rival capability, an analyst reframe, or a shift in alternatives.
- Signal: Qualified evidence inside a trend, tagged to the trend it proves, scored for strength, and mapped to the engine it appeared in and the position it impacts.
- Pillar: A lens for pulling evidence about a trend. The Trends Desk reads four of them; each pillar answers a different question and returns different evidence.
- The four pillars: Direct AEO Strategies · Primary Brand Amplification · Rival Competitors · Analyst Stats and Thought Leaders.
- The three moves: Close a gap, defend a strength, or amplify a signal. For each top trend, the Desk produces exactly one.
1. Brand-configured trends
The trends the Desk surfaces are not a generic feed. They are configured around your brand — your category boundaries, your named rivals, your buyer types, your positioning. A trend is anything moving how your category gets explained inside an AI answer this week:
- A new buying criterion appearing in answers ("must support SOC 2," "must integrate with X").
- A rival launching a competing capability the model is starting to cite.
- An analyst report becoming the load-bearing citation for the category.
- A category re-frame from a credible thought leader.
- A shift in which alternatives the model lists when someone asks for one.
Trends are the unit of attention because they are the smallest thing your team can act on. A single mention is too small. A quarterly market shift is too big. A weekly trend is the operator-grade unit.
2. The four pillars of evidence
For any trend the Desk surfaces, evidence is pulled across four pillars. Each pillar reads a different surface of the market and returns a different kind of evidence. The qualified evidence inside each pillar is what we call a signal.
| Pillar | What it reads | What the pillar asks |
|---|---|---|
| 1. Direct AEO Strategies | Your team's proactive AEO work — per-engine content, structured data, comparison pages, response artifacts. | What did we ship this week, and is it producing position lift inside the trends our buyers see? |
| 2. Primary Brand Amplification | Your brand's organic signal in the open — launches, founder posts, PR, customer wins, category framings. | What proof of ours can the model already see, and is it strong enough to anchor our position? |
| 3. Rival Competitors | What named rivals are publishing, claiming, and earning inside the answer — new capabilities, new descriptors, new comparison surfaces. | How are rivals positioning on this trend, and what buying language are they taking? |
| 4. Analyst Stats and Thought Leaders | External authority voices and numbers the model reads — Gartner, Forrester, IDC, plus practitioner and operator voices. | Which stats are anchoring this trend, and which voices are reframing it? |
1. Direct AEO Strategies
The proactive AEO work your team is shipping to move position inside AI answers. Per-engine content drops, structured data, citation-grade case studies, comparison pages, response artifacts. This pillar reads what you are actively doing to compound your position week over week — and whether it is landing.
2. Primary Brand Amplification
What the model is reading from your brand in the open, outside of explicit AEO work. Launches, founder posts, PR, customer wins, category framings. The model reads the surface of the web; this pillar reads what it picks up from your brand on that surface.
3. Rival Competitors
The model puts you next to a fixed set of brands. That set is your competitive surface. When a rival moves on a trend — a new capability, a new claim, a new descriptor like "the open-source one" or "the enterprise default" — they take buying language inside the answer that you cannot easily reclaim.
4. Analyst Stats and Thought Leaders
External authority voices the model treats as load-bearing — analysts (Gartner, Forrester, IDC, public benchmarks) and the founders, operators, and writers whose framings shape how the category is explained. When a stat becomes a citation ("85% of enterprises adopting X by 2027"), it shifts the bar. When a thought leader reframes the problem, the model reframes with them.
3. One trend, four pillars: a worked example
To make the model concrete, here is a single week's read for a hypothetical Series B security platform. Trend: AI-native security stacks are emerging as a distinct category, with runtime visibility as the gating capability.
| Pillar | Evidence this week |
|---|---|
| Direct AEO Strategies | Team shipped a comparison page (your brand vs. the dominant rival, runtime-detection lens) with structured data; first ChatGPT answer pickup recorded by Thursday. |
| Primary Brand Amplification | A practitioner posted a Hacker News thread citing your case study on replacing a legacy SIEM. Your founder posted a thread reframing "audit gaps" as a runtime problem, drawing 200+ comments. |
| Rival Competitors | Wiz launched a new AI-audit feature; cited across ChatGPT and Perplexity within 48 hours. Lacework slipping out of two comparison listicles. CrowdStrike published an enterprise case study with analyst quotes. |
| Analyst Stats and Thought Leaders | Gartner: "federated identity adoption up 52% YoY across regulated enterprises." Bruce Schneier reframed the category in his newsletter — runtime, not config-time. Both flowing into AI answers as load-bearing citations. |
Qualified signals: Four signals roll up into one trend: AI-native security stacks are emerging as a category, with runtime visibility as the gating capability. The model already prefers you on the runtime descriptor.
The move: Defend a strength. Ship a customer story that ties your runtime detection to Schneier's reframe, before Wiz earns the runtime language. One trend, one move, anchored to four pillars of evidence. That is the operating output.
4. Signals are the qualified evidence
A blog post on its own is not a signal — it is noise. A signal is evidence that has been pulled from a pillar, attached to a specific trend, and qualified against the impact it has on your position. The Trends Desk does this qualification weekly so your team is not sifting through raw mentions to figure out what matters.
Every signal arrives tagged: which trend it belongs to, which pillar it came from, which engines it has appeared in, whether it raises or threatens your position, and what the recommended response is.
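To make the tagging concrete, here is a minimal sketch of what one qualified signal could look like as a structured record. It is an illustration under assumptions: the field names, the engine list, the 1-to-5 strength scale, and the example values are invented for this post, not the Desk's actual schema.

```typescript
// Minimal sketch of a qualified signal as a record.
// Field names, engine list, strength scale, and values are illustrative
// assumptions, not the Trends Desk's actual schema.

type Pillar =
  | "Direct AEO Strategies"
  | "Primary Brand Amplification"
  | "Rival Competitors"
  | "Analyst Stats and Thought Leaders";

type Move = "Close a gap" | "Defend a strength" | "Amplify a signal";

interface QualifiedSignal {
  trend: string;                          // the brand-configured trend this evidence proves
  pillar: Pillar;                         // which of the four pillars it was pulled from
  engines: string[];                      // the AI engines the evidence has appeared in
  strength: 1 | 2 | 3 | 4 | 5;            // assumed 1-to-5 strength score
  positionEffect: "raises" | "threatens"; // does it raise or threaten your position?
  recommendedMove: Move;                  // the response attached to the signal
}

// Hypothetical example: the Wiz launch from the worked example above.
const wizLaunch: QualifiedSignal = {
  trend: "AI-native security stacks, with runtime visibility as the gating capability",
  pillar: "Rival Competitors",
  engines: ["ChatGPT", "Perplexity"],
  strength: 4,
  positionEffect: "threatens",
  recommendedMove: "Defend a strength",
};
```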
5. What the Desk produces weekly
For each top trend, the Trends Desk hands your team one of three moves to ship:
| Move | When the Desk recommends it | What you ship |
|---|---|---|
| Close a gap | A rival is opening a position on you inside the answer. | The proof or the framing that closes it — a comparison page, a case study, a response artifact. |
| Defend a strength | The model already prefers you on this trend, and a rival is approaching. | Reinforcing proof that keeps the position locked — fresh evidence, founder framing, customer wins. |
| Amplify a signal | The model is already picking up something of yours, but the signal is under-fed. | More of the same signal, in the right shape, where the model can find it next week. |
Three moves a week, anchored to evidence inside specific brand-configured trends, qualified across the four pillars. Nothing else compounds.
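Read as decision logic, the table above can be sketched roughly as follows. The position states and the mapping are a simplified assumption made for illustration, not the Desk's actual rules.

```typescript
// Rough sketch of the move selection described in the table above.
// The position states and the mapping are simplifying assumptions.

type PositionState =
  | "rival opening a position on us"  // a rival is gaining ground inside the answer
  | "we lead, rival approaching"      // the model prefers us, but the lead is contested
  | "picked up but under-fed";        // the model sees our signal, but not enough of it

type Move = "Close a gap" | "Defend a strength" | "Amplify a signal";

function recommendMove(state: PositionState): Move {
  switch (state) {
    case "rival opening a position on us":
      return "Close a gap";         // ship the proof or framing that closes it
    case "we lead, rival approaching":
      return "Defend a strength";   // reinforce with fresh evidence
    case "picked up but under-fed":
      return "Amplify a signal";    // feed more of the same signal where the model can find it
  }
}
```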
6. Trends Desk vs. mention dashboards
The Trends Desk is not a mention dashboard with new vocabulary. It produces a different operating output because it answers a different question.
| | Mention dashboards | Trends Desk |
|---|---|---|
| Question answered | Are we mentioned? | What's moving our position, and is the gap closing? |
| Unit of attention | The mention | The brand-configured trend |
| How evidence is read | Volume across raw mentions | Qualified across four pillars (Direct AEO, Primary Brand, Rivals, Analyst Stats and Thought Leaders) |
| Cadence | Continuous noise | Weekly read |
| Output | A metric | One move per top trend, three moves a week |
| Action it produces | None directly | Close a gap · Defend a strength · Amplify a signal |
If you only need a leaderboard, a mention dashboard is fine. If you need a weekly operating surface that hands your team specific moves to ship, you need a Trends Desk.
7. Who runs the Trends Desk
The Trends Desk is built for Series B+ marketing teams selling to enterprise buyers who research in AI. The configuration is per-brand — your category, your rivals, your buyer types, your positioning — and the weekly read is shaped to the decisions your team actually owns: what to ship next, what to defend, what to deprecate.
Three roles, one Desk, three operating decisions:
- Product Marketing: reads the position and the framings; owns "are we still the brand the model anchors on?"
- GTM / Demand: reads the response decisions; owns "what does the field hear, and how do we brief it this week?"
- Content & SEO: reads the shipped artifacts; owns "did the proof land in answers, and what gets shipped next?"
Same Trends Desk, same weekly trend set, three operating decisions — so the team is shipping against the same picture instead of three private versions of it.
See it on your category
We are running founder-led pilots with the first 15 marketing teams to install the Trends Desk into their weekly cadence. See your category before we talk, or book a pilot conversation.
