Discussion about this post

Hans van Gent:

Point 4 is the one people will underestimate the most.

The “AI SEO basics” are easy; the hard part is building a repeatable way to observe what these systems reward and then closing the gaps.

One thing that helped me make this more actionable is treating it like a monitoring loop, not a one-off experiment.

Run a fixed set of real customer questions weekly, capture (1) what got cited, (2) what sub-questions the model expanded into, and (3) what answer format it preferred.

That gives you a backlog that is way more concrete than “optimize for AI”.
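To make the loop concrete, here's a minimal sketch of what that weekly log could look like in Python. Everything here is an assumption for illustration: `ask_model` is a hypothetical stand-in for however you actually query ChatGPT/Gemini and parse the answer, and the field names are just one way to capture the three signals above.

```python
from dataclasses import dataclass
from datetime import date

# One row of the weekly monitoring log.
@dataclass
class Observation:
    week: str
    question: str
    cited_sources: list[str]  # (1) what got cited
    fan_out: list[str]        # (2) sub-questions the model expanded into
    answer_format: str        # (3) format it preferred: list, table, prose...

def ask_model(question: str) -> Observation:
    # Hypothetical placeholder: in practice, call the model here
    # and parse citations, fan-out queries, and format from its answer.
    return Observation(
        week=date.today().isoformat(),
        question=question,
        cited_sources=["example.com"],
        fan_out=["pricing?", "alternatives?"],
        answer_format="bulleted list",
    )

def weekly_run(questions: list[str]) -> list[Observation]:
    """Run the fixed question set and collect one observation per question."""
    return [ask_model(q) for q in questions]

def gaps(log: list[Observation], our_domain: str) -> list[str]:
    """Backlog items: questions where our domain was never cited."""
    return [o.question for o in log if our_domain not in o.cited_sources]

log = weekly_run(["best crm for smb?", "how to migrate off spreadsheets?"])
print(gaps(log, "ourdomain.com"))
```

The point of the structure is that the output of `gaps` *is* the backlog: a list of real customer questions you're currently invisible for, which is far easier to act on than a generic "optimize for AI" goal.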

I actually shipped a feature in my SEO browser extension, Sprout SEO, a couple of weeks ago that surfaces these query fan-outs and cited sources for ChatGPT/Gemini outputs.

Mainly because doing this manually was such a pain, and you don’t necessarily need to start paying big bucks for tools if you can surface this information yourself.

It’s been useful to spot “oh, we’re missing a whole subtopic cluster” or “we keep losing citations to the same 3 domains”.
