How it works
Enter your brand name and up to 20 of the prompts your buyers actually ask an AI assistant. The checker simulates citation probability across ChatGPT, Claude, Perplexity, and Gemini using a deterministic scoring model based on entity strength, prompt shape, and the cited-sources graph we track for GEO packages. It's a first pass (the full paid scan runs live prompts), but it shows you where the gaps cluster.
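To make the idea concrete, here is a minimal sketch of what a deterministic scoring model of this shape could look like: a per-model weighted blend of the three signals named above. The weights, function names, and model keys are illustrative assumptions, not the checker's actual implementation.

```python
# Hypothetical per-model weights over the three signals (assumed values,
# loosely reflecting the per-model citation tendencies described above).
MODEL_WEIGHTS = {
    "chatgpt":    {"entity": 0.5, "prompt_shape": 0.2, "source_graph": 0.3},
    "claude":     {"entity": 0.4, "prompt_shape": 0.2, "source_graph": 0.4},
    "perplexity": {"entity": 0.3, "prompt_shape": 0.3, "source_graph": 0.4},
    "gemini":     {"entity": 0.5, "prompt_shape": 0.3, "source_graph": 0.2},
}

def citation_probability(entity_strength: float,
                         prompt_shape_fit: float,
                         source_graph_overlap: float,
                         model: str) -> float:
    """Deterministic weighted blend of three 0..1 signals for one model."""
    w = MODEL_WEIGHTS[model]
    score = (w["entity"] * entity_strength
             + w["prompt_shape"] * prompt_shape_fit
             + w["source_graph"] * source_graph_overlap)
    return round(score, 2)
```

Because the blend is a fixed weighted sum, the same brand and prompt always score the same, which is what makes a free first-pass estimate repeatable without running live prompts.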
Why prompt-level visibility matters
- Aggregate share-of-voice hides the gaps: a brand can hit 40% overall visibility while being totally absent from the five prompts that drive 80% of revenue.
- Different models cite different sources: Claude leans academic, Perplexity leans recent news, ChatGPT leans Wikipedia. You need per-model tracking.
- Primary and passing mentions differ by 10x: being named as the first recommendation is worth roughly ten times a passing mention in a list of seven.
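The weighting idea in the last bullet can be sketched as a per-prompt visibility score, so a primary recommendation counts 10x a passing mention and aggregate numbers can't hide empty prompts. The weights and function names are illustrative assumptions.

```python
# Illustrative mention weights (assumed): primary = 10x passing, per the
# rule above; an absent prompt contributes zero.
MENTION_WEIGHTS = {"primary": 10.0, "passing": 1.0, "absent": 0.0}

def weighted_visibility(mentions_by_prompt: dict[str, str]) -> float:
    """Average weighted mention score across prompts, normalized to 0..1."""
    if not mentions_by_prompt:
        return 0.0
    total = sum(MENTION_WEIGHTS[m] for m in mentions_by_prompt.values())
    return total / (10.0 * len(mentions_by_prompt))
```

Scoring prompt by prompt like this surfaces exactly the failure mode in the first bullet: a brand with strong mentions on half its prompts and none on the revenue-driving half still shows a mediocre score rather than a reassuring aggregate.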
Close the gaps
For every prompt where you're absent, run the Citation Probability Calculator on the page you'd want cited, then use the Entity Schema Builder and FAQ schema to lock in the entity + Q/A pairing. Full playbook: how to track AI search visibility.