Crawl access

If AI crawlers can’t fetch your pages, you effectively don’t exist to them. SEODiff flags crawl blocking in the report (robots.txt, WAF, blocklists, unusual status codes) and gives a reason when possible.

Common causes

Crawl access issues are usually policy/config problems, not content problems.

Robots + bot policy

Robots.txt disallow rules, or bot-specific policies that block GPTBot or ClaudeBot. Fix by allowing the relevant user-agents where appropriate.
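As a sketch, a robots.txt that explicitly allows the AI crawlers named above might look like this (the user-agent tokens are the ones these bots advertise; verify against each vendor's documentation, and adjust the disallowed paths to your own site):

```
# Hypothetical robots.txt sketch: allow named AI crawlers
# while keeping private paths off-limits for everyone.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Disallow: /admin/
```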

WAF / CDN behavior

Challenge pages, geo blocks, rate limits, or unusual 403/429 patterns. Fix by allowlisting the relevant bots, caching responses, or serving a stable HTML response.
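One way to exempt known AI crawlers from rate limiting is a user-agent match in your edge config. A minimal nginx sketch, assuming user-agent matching is acceptable for your threat model (the UA strings and rate values are illustrative):

```
# Hypothetical nginx sketch: skip the per-IP rate limit for known
# AI-crawler user-agents. UA strings and rate are assumptions.
map $http_user_agent $ai_crawler {
    default        0;
    ~*GPTBot       1;
    ~*ClaudeBot    1;
}
map $ai_crawler $limit_key {
    0  $binary_remote_addr;   # normal clients: rate-limited per IP
    1  "";                    # empty key exempts AI crawlers from the zone
}
limit_req_zone $limit_key zone=perip:10m rate=10r/s;
```

Note that user-agent strings are spoofable; pair this with IP-range verification if your CDN supports it.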

What to do next

Start by opening the canonical report and checking the blocked flag and reason. Then verify with a direct fetch using the bot's user-agent if needed.
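The direct-fetch check can be sketched in Python. The user-agent string and the status-code interpretation below are illustrative assumptions, not SEODiff's exact logic:

```python
# Sketch: fetch a page with an AI-crawler user-agent and classify the
# response the way a crawl-access check might. The UA string and the
# classification rules here are assumptions.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def classify_status(status: int) -> str:
    """Map an HTTP status to a rough crawl-access verdict."""
    if status in (401, 403):
        return "blocked: likely WAF/bot policy"
    if status == 429:
        return "blocked: rate limited"
    if 500 <= status < 600:
        return "error: server-side failure"
    if 200 <= status < 300:
        return "ok: crawlable"
    return f"check manually: status {status}"

def fetch_as_bot(url: str, user_agent: str = "GPTBot") -> str:
    """Fetch `url` with a bot user-agent and return a verdict string."""
    req = Request(url, headers={"User-Agent": user_agent})
    try:
        with urlopen(req, timeout=10) as resp:
            return classify_status(resp.status)
    except HTTPError as e:
        return classify_status(e.code)
```

A 403 or 429 here, when a normal browser user-agent succeeds, is a strong sign of bot-specific blocking.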

Confirm the issue

Use /radar/domains/DOMAIN?format=json for the machine-readable blocked reason and for automation gates.
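A CI gate over the JSON report might look like the sketch below. The host and the field names (`blocked`, `blocked_reason`) are assumptions about the payload shape; check the actual response before relying on them:

```python
# Sketch of an automation gate over the machine-readable report.
# Field names "blocked" and "blocked_reason" are assumed, not confirmed.
import json
from urllib.request import urlopen

def gate(report: dict) -> int:
    """Return a shell-style exit code: 0 = pass, 1 = crawl access blocked."""
    if report.get("blocked"):
        print(f"crawl blocked: {report.get('blocked_reason', 'unknown')}")
        return 1
    return 0

def check_domain(domain: str) -> int:
    """Fetch the JSON report for `domain` and gate on it. Host is assumed."""
    url = f"https://example.com/radar/domains/{domain}?format=json"
    with urlopen(url, timeout=10) as resp:
        return gate(json.load(resp))
```

Wiring `check_domain` into a deploy pipeline lets you fail the build when a release accidentally re-blocks crawlers.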

Fix + monitor

After you fix access, use monitoring to prevent regressions (accidental re-blocking after security changes).
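Regression monitoring can be sketched as a diff between consecutive report snapshots; again, the `blocked` field name is an assumption about the report shape:

```python
# Sketch: flag a regression when a previously crawlable domain becomes
# blocked between two report snapshots. The "blocked" field is assumed.
def crawl_regressions(previous: dict, current: dict) -> list[str]:
    """Return domains that flipped from crawlable to blocked."""
    regressed = []
    for domain, report in current.items():
        was_blocked = previous.get(domain, {}).get("blocked", False)
        if report.get("blocked") and not was_blocked:
            regressed.append(domain)
    return sorted(regressed)
```

Running this after every security or CDN change catches accidental re-blocking before it costs you visibility.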

Related tools

Explore more AI-readiness metrics for your site.

Token Bloat Checker

Measure how much of your HTML is useful content vs. boilerplate markup that wastes LLM tokens.

Schema Coverage

Audit structured data presence and discover which schema.org types to add for richer AI citations.

ACRI Leaderboard

See how the world's top sites rank on AI-Crawler visibility in the live ACRI leaderboard.

SEO Radar

Continuous monitoring of 100k+ domains. Compare performance, track trends, benchmark against competitors.