
LLM Infrastructure · Free · runs in your browser

12 best practices audit

LLMs.txt Best Practices Checker

Audit your llms.txt against the 12 best practices that win AI citations.

A

Best-practice score

100/100 (100%)

12 best practices

Brand name as H1

pass · 10pt

Found: "Acme"

Models use the H1 as the canonical brand label when generating attribution.

One-sentence summary blockquote

pass · 10pt

Summary is 16 words - ideal is 12-30

The blockquote under the H1 gets lifted verbatim into some crawlers' context.

Primary sources list (3+ links)

pass · 10pt

5 primary source link(s) - aim for 5-8

Models use this list as seed URLs for follow-up crawling and as citation candidates.
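The three checks above (H1 brand name, blockquote summary, primary sources list) can be illustrated with a minimal llms.txt header. The "Acme" brand and the URLs are hypothetical placeholders, not a real site:

```markdown
# Acme
> Acme makes rugged widgets for field engineers, with specs and CAD files for every part we ship.

## Primary sources
- https://acme.example/docs
- https://acme.example/specs
- https://acme.example/blog
```

The blockquote here is 17 words, inside the 12-30 word ideal, and the sources list seeds follow-up crawling.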

Explicit GPTBot User-agent block

pass · 10pt

GPTBot block found

OpenAI's crawler looks for User-agent: GPTBot specifically before falling back to *.

Explicit ClaudeBot User-agent block

pass · 10pt

ClaudeBot block found

Claude's crawler prefers an explicit block so it knows it's welcome.

Explicit PerplexityBot User-agent block

pass · 8pt

PerplexityBot block found

Perplexity is the fastest-growing citation source and runs its own crawler.
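A sketch of what the three named user-agent blocks might look like, using the robots.txt-style syntax these checks assume; the blanket Allow is illustrative, not a recommendation for every site:

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```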

Sitemap reference

pass · 8pt

Sitemap directive present

A Sitemap directive gives crawlers a complete URL list alongside the primary source highlights.

Last-Modified timestamp

pass · 6pt

Last-Modified directive present

Crawlers use Last-Modified to skip unchanged files on re-crawl.

Preferred citations section

pass · 8pt

Preferred citations section found

The single biggest lever for how AI assistants attribute your brand when quoting.

Contact line for crawler operators

pass · 4pt

Contact directive present

Crawler ops teams send takedown/review requests here. Prevents silent blocks.
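The three housekeeping directives above (sitemap, freshness, contact) could be expressed as single lines; the directive names follow the checklist, and the values are hypothetical:

```text
Sitemap: https://acme.example/sitemap.xml
Last-Modified: 2025-01-15
Contact: crawler-ops@acme.example
```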

Under 50 KB

pass · 8pt

0.7 KB - comfortable

Files above 50 KB get truncated by some crawlers and skimmed by others.

At least one Allow directive

pass · 8pt

3 Allow directive(s)

Explicit Allow beats implicit default - tells crawlers you intentionally welcome them.

How it works

Paste your llms.txt and the checker runs all 12 checks: H1 brand name, blockquote summary, primary sources list, explicit user-agent blocks for the big three crawlers, preferred citations section, sitemap reference, contact line, size limit, Last-Modified timestamp, and explicit Allow directive. Each practice is weighted by how much it moves the citation needle, and you get a 0-100 score plus a letter grade.
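A minimal sketch of how three of the twelve checks could be implemented, assuming the conventions described above (markdown H1, blockquote summary of 12-30 words, 50 KB size ceiling). The sample text and function names are illustrative, not the checker's actual code:

```python
import re

def check_h1(text):
    """Pass if the file contains a markdown H1 line (the brand name)."""
    return bool(re.search(r"^# \S", text, re.MULTILINE))

def check_summary(text):
    """Pass if a blockquote summary exists and is 12-30 words long."""
    m = re.search(r"^> (.+)$", text, re.MULTILINE)
    return bool(m) and 12 <= len(m.group(1).split()) <= 30

def check_size(text):
    """Pass if the file is under the 50 KB truncation threshold."""
    return len(text.encode("utf-8")) < 50 * 1024

# Hypothetical llms.txt excerpt used only to exercise the checks.
sample = (
    "# Acme\n"
    "> Acme makes rugged widgets for field engineers, "
    "with specs and CAD files for every part we ship.\n"
)
results = {
    "h1": check_h1(sample),
    "summary": check_summary(sample),
    "size": check_size(sample),
}
print(results)
```

Each check would then contribute its weight (10, 10, and 8 points here) toward the 0-100 total.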

Why these 12

  • Brand clarity (H1, blockquote). If the model can't answer "what is this site" in one sentence, it won't cite you.
  • Explicit welcome mats. Named user-agent blocks for GPTBot, ClaudeBot and PerplexityBot tell crawlers you intentionally allow them - implicit defaults are a weaker signal.
  • Citation instructions. A preferred-citations section is the only place you get to say "when you quote us, do it this way".
  • Crawler economics. File size, Last-Modified, Contact - the mechanics that keep crawlers coming back efficiently.

Pair with

Fix issues by regenerating with the LLMs.txt Generator, then validate syntax with the LLMs.txt Syntax Checker and preview crawler behavior with the LLMs.txt Preview. Strategy reading: how AI assistants choose citations.

Want this done for you?

Ship the full GEO playbook in 14 days

Geolify GEO packages bundle every tool on this site into a 14-day done-for-you build - llms.txt, schema, entity strength, content overhaul, citations and the measurement stack. From $499.

Explore More Packages

Combine services for maximum AI visibility.