Crawl access

If AI crawlers can’t fetch your pages, you effectively don’t exist to them. SEODiff flags crawl blocking in the report (robots.txt, WAF, blocklists, unusual status codes) and gives a reason when possible.

Common causes

Crawl access issues are usually policy/config problems, not content problems.

Robots + bot policy

A robots.txt Disallow, or bot-specific policies that block GPTBot or ClaudeBot. Fix by allowing the relevant user-agents where appropriate.
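As a quick local check, Python's standard-library robots parser can evaluate a robots.txt body against a bot user-agent. A minimal sketch; the sample rules below are illustrative, not your site's:

```python
from urllib import robotparser

# Illustrative robots.txt: blocks GPTBot, allows everyone else.
SAMPLE_ROBOTS = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def is_allowed(robots_txt: str, user_agent: str, path: str = "/") -> bool:
    """True if `user_agent` may fetch `path` under these robots.txt rules."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, path)

print(is_allowed(SAMPLE_ROBOTS, "GPTBot"))     # blocked by the Disallow rule
print(is_allowed(SAMPLE_ROBOTS, "ClaudeBot"))  # falls through to the * rule
```

Run this against your live robots.txt body to confirm which bot user-agents are actually blocked before changing policy.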

WAF / CDN behavior

Challenge pages, geo blocks, rate limits, or unusual 403/429 patterns. Fix by allowlisting the bot, caching, or serving a stable HTML response.
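One way to triage these patterns is a small status-and-header heuristic. This is an illustrative assumption, not SEODiff's actual classifier:

```python
def classify_block(status: int, headers: dict[str, str]) -> str:
    """Rough triage of a bot fetch result; a heuristic sketch only."""
    h = {k.lower(): v.lower() for k, v in headers.items()}
    if 200 <= status < 300:
        return "ok"
    if status == 429 or "retry-after" in h:
        return "rate-limited"   # hitting a rate limit
    if status == 403 and "cloudflare" in h.get("server", ""):
        return "waf-challenge"  # likely a CDN/WAF challenge page
    if status == 403:
        return "forbidden"      # geo block, blocklist, or policy 403
    return "other"

print(classify_block(429, {}))
print(classify_block(403, {"Server": "cloudflare"}))
```

A consistent 403 only for bot user-agents, with normal 200s for browsers, usually points at WAF or bot policy rather than content problems.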

What to do next

Start by opening the canonical report and checking the blocked flag and reason. Then, if needed, verify with a direct fetch using the bot's user-agent.
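The direct-fetch verification can be sketched with the standard library. This makes a real network request; "GPTBot" here is just the User-Agent string being presented for testing:

```python
import urllib.error
import urllib.request

def fetch_as(url: str, user_agent: str) -> tuple[int, bytes]:
    """Fetch `url` presenting `user_agent`; returns (status, first bytes of body)."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status, resp.read(512)
    except urllib.error.HTTPError as err:
        # 403/429 responses still carry a status and body worth inspecting.
        return err.code, err.read(512)

# Compare the two fetches: a 200 for a browser UA but a 403 for the bot UA
# points at WAF or bot policy rather than a content problem.
# status, body = fetch_as("https://example.com/", "GPTBot")
# status, body = fetch_as("https://example.com/", "Mozilla/5.0")
```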

Confirm the issue

Use /radar/domains/DOMAIN?format=json for the machine-readable blocked reason and for automation gates.
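For automation, the JSON report can gate a deploy or trigger an alert. A minimal sketch, assuming the payload exposes a `blocked` flag and a `blocked_reason` string; those field names are assumptions based on the description above, so check them against a real response:

```python
import json

def crawl_gate(report_json: str) -> tuple[bool, str]:
    """Return (ok, reason). Field names `blocked`/`blocked_reason` are assumed."""
    report = json.loads(report_json)
    if report.get("blocked"):
        return False, report.get("blocked_reason", "unknown")
    return True, ""

# In CI, fetch base_url + "/radar/domains/example.com?format=json" (base_url
# being your SEODiff host) and fail the pipeline when the gate trips:
# ok, why = crawl_gate(body)
# assert ok, f"crawl blocked: {why}"
```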

Fix + monitor

After you fix access, use monitoring to prevent regressions (accidental re-blocking after security changes).