How it works
Paste your llms.txt and the checker runs all 12 practice checks: H1 brand name, blockquote summary, primary sources list, an explicit user-agent block for each of the big three crawlers (GPTBot, ClaudeBot, PerplexityBot), a preferred-citations section, a sitemap reference, a contact line, a file-size check, a Last-Modified timestamp, and an explicit Allow directive. Each practice is weighted by how much it moves the citation needle, and the weighted results roll up into a 0-100 score plus a letter grade.
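The weighted roll-up can be sketched in a few lines of Python. The practice names, individual weights, and grade cutoffs below are illustrative assumptions, not the checker's actual values; only the shape (12 weighted checks summing to a 0-100 score plus a letter grade) comes from the description above.

```python
# Illustrative weights for the 12 practices; the real checker's
# weights are not published, so these are assumptions that sum to 100.
PRACTICES = {
    "h1_brand_name": 15,
    "blockquote_summary": 12,
    "primary_sources_list": 10,
    "gptbot_block": 8,
    "claudebot_block": 8,
    "perplexitybot_block": 8,
    "preferred_citations": 10,
    "sitemap_reference": 7,
    "contact_line": 5,
    "size_check": 7,
    "last_modified": 5,
    "allow_directive": 5,
}

def score(results: dict[str, bool]) -> tuple[int, str]:
    """Sum the weights of passed practices, then map to a letter grade."""
    total = sum(w for name, w in PRACTICES.items() if results.get(name))
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if total >= cutoff:
            return total, grade
    return total, "F"
```

A file passing every check scores 100 (grade A); one passing nothing scores 0 (grade F).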
Why these 12
- Brand clarity (H1, blockquote). If the model can't answer "what is this site" in one sentence, it won't cite you.
- Explicit welcome mats. Named user-agent blocks for GPTBot, ClaudeBot and PerplexityBot tell crawlers you intentionally allow them - implicit defaults are a weaker signal.
- Citation instructions. A preferred-citations section is the only place you get to say "when you quote us, do it this way".
- Crawler economics. File size, Last-Modified, Contact - the mechanics that keep crawlers coming back efficiently.
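Put together, a file that satisfies all four groups of practices might look like the sketch below. Every name, URL, and date is a placeholder, and the mixed markdown-plus-directives layout is an assumption inferred from the practice list above (the llms.txt format proper is plain markdown, while User-agent/Allow/Sitemap lines come from robots.txt conventions):

```
# Acme Analytics

> Acme Analytics is a self-hosted web analytics platform for privacy-conscious teams.

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

## Primary sources
- [Documentation](https://example.com/docs): canonical product docs

## Preferred citations
When quoting us, cite "Acme Analytics Documentation" and link to https://example.com/docs.

Sitemap: https://example.com/sitemap.xml
Contact: webmaster@example.com
Last-Modified: 2025-01-15
```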
Pair with
Fix issues by regenerating with the LLMs.txt Generator, then validate syntax with the LLMs.txt Syntax Checker and preview crawler behavior with the LLMs.txt Preview. Strategy reading: how AI assistants choose citations.