How the checker works
Paste both files and we check every major AI crawler against each. The result is a 3-state matrix (allowed / blocked / not-mentioned) for every bot in every file. Conflicts come in two flavours:
- Errors are self-contradictions that kill visibility: robots.txt blocks the bot but llms.txt welcomes it. The crawler never even reads your llms.txt.
- Warnings are weaker signals: robots.txt allows the bot but llms.txt blocks it, or llms.txt welcomes a bot that robots.txt doesn't explicitly mention. Fix these for consistency.
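The two rules above can be sketched as a small classifier. This is a minimal illustration, not the checker's actual code; the `State` enum and `classify` function are hypothetical names for the three matrix states and the error/warning logic described in the bullets.

```python
from enum import Enum

class State(Enum):
    ALLOWED = "allowed"
    BLOCKED = "blocked"
    NOT_MENTIONED = "not-mentioned"

def classify(robots: State, llms: State):
    """Map one bot's (robots.txt state, llms.txt state) pair to a verdict."""
    # Error: robots.txt blocks the bot, so it can never read the llms.txt
    # that welcomes it -- a self-contradiction.
    if robots is State.BLOCKED and llms is State.ALLOWED:
        return "error"
    # Warning: crawl allowed but context blocked, or llms.txt welcomes a
    # bot robots.txt never mentions.
    if robots is State.ALLOWED and llms is State.BLOCKED:
        return "warning"
    if robots is State.NOT_MENTIONED and llms is State.ALLOWED:
        return "warning"
    return None  # consistent -- no conflict

print(classify(State.BLOCKED, State.ALLOWED))  # error
```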
Why both files exist
robots.txt is the crawl-level gate (can the bot fetch the URL at all). llms.txt is the context-level signal (once the bot has the content, here's the canonical summary, primary entry pages, and citation preferences). You need both: robots.txt without llms.txt means the assistant indexes you but misses your preferred framing, while llms.txt without a matching robots.txt means you're sending prose to a bot that can't actually read the page. Our llms.txt primer explains the full workflow.
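The crawl-level gate is easy to verify yourself with Python's standard-library robots.txt parser. A quick sketch, using a made-up robots.txt snippet and `example.com` as a placeholder domain: if a bot's user-agent is disallowed, it cannot fetch llms.txt in the first place, whatever that file says.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: one AI crawler blocked, everything else allowed.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The blocked bot never reaches llms.txt -- the crawl-level gate wins.
print(rp.can_fetch("GPTBot", "https://example.com/llms.txt"))         # False
print(rp.can_fetch("PerplexityBot", "https://example.com/llms.txt"))  # True
```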
Fix pattern
Generate a fresh pair with our LLMs.txt Generator (which picks a bot policy) and the matching robots.txt audit. Keep the bot lists identical between the two files and this checker will go green.