How it works
Because browsers block cross-origin requests (CORS), this tester can't fetch your llms.txt directly, so it works in two steps. First, paste your llms.txt content into the box below; the tool parses every directive, checks structure, counts user-agent blocks, and flags anything missing. Second, copy the generated curl command and run it from your terminal to confirm the file is actually reachable with the correct headers. The curl line impersonates GPTBot, so you see exactly what OpenAI's crawler sees.
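The generated command looks roughly like this (a sketch: the domain is a placeholder and the exact GPTBot user-agent string varies, so substitute whatever the tool generates for you):

```shell
# Fetch llms.txt the way OpenAI's crawler would.
# example.com is a placeholder domain; -A sets the user-agent,
# -D - prints the response headers, -o /dev/null discards the body.
curl -sS -A "GPTBot" -D - -o /dev/null https://example.com/llms.txt
```

A 200 response means the file is reachable; a 403 here that you don't see in a normal browser usually points to a bot-blocking rule in front of your site.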
What the tester checks
- Correct location. llms.txt must sit at the domain root, not in a subdirectory or on a CDN subpath.
- Valid structure. H1 brand name, blockquote summary, primary-source links, and user-agent blocks: these are the pieces models actually consume.
- Directive parity. Counts Allow/Disallow rules and names every user-agent mentioned so you can spot gaps (e.g. a missing ClaudeBot entry or no Perplexity block).
- Crawler-safe size. Files above 100 KB get truncated by some crawlers, so the tester flags bloat early.
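The structural checks above can be approximated locally in a few lines of shell. This is a minimal sketch, not the tool's actual implementation: the sample llms.txt content is hypothetical, and the 100 KB threshold mirrors the limit described above.

```shell
# Write a minimal sample llms.txt (hypothetical content for illustration)
cat > llms.txt <<'EOF'
# Example Brand

> One-sentence summary of what this site offers.

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /docs/
Disallow: /private/
EOF

# H1 brand name on the first line
head -n1 llms.txt | grep -q '^# ' && echo "H1: ok"
# Blockquote summary present
grep -q '^> ' llms.txt && echo "summary: ok"
# Count user-agent blocks and Allow/Disallow directives
echo "user-agents: $(grep -ci '^User-agent:' llms.txt)"
echo "allow/disallow: $(grep -Eci '^(Allow|Disallow):' llms.txt)"
# Flag files some crawlers might truncate (above 100 KB)
[ "$(wc -c < llms.txt)" -le 102400 ] && echo "size: ok" || echo "size: over 100 KB"
```

Running it against the sample prints the H1, summary, and size checks as ok, two user-agent blocks, and three Allow/Disallow directives.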
Pair with
After testing, validate syntax with the LLMs.txt Validator, check hosting with the AI Bot Access Checker, and cross-reference robots.txt against llms.txt to make sure they don't contradict each other. Strategy reading: what is llms.txt.