Robots + bot policy
Robots.txt Disallow rules, or bot-specific policies that block GPTBot / ClaudeBot. Fix by allowing the relevant user-agents where appropriate.
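A quick way to confirm what your current directives actually permit is to run them through Python's built-in robots.txt parser. This is a minimal sketch; the robots.txt content and the URL are purely illustrative, not taken from any real site.

```python
# Minimal sketch: check whether AI crawler user-agents are allowed by a robots.txt.
# The robots.txt content and the URL below are illustrative placeholders.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("GPTBot", "ClaudeBot"):
    allowed = parser.can_fetch(agent, "https://example.com/docs/page")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```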
Crawl access issues are usually policy/config problems, not content problems.
Challenge pages, geo blocks, rate limits, or unusual 403/429 patterns. Fix by allowlisting, caching, or serving a stable HTML response.
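One way to spot these edge-layer blocks is to scan your access logs for AI crawler user-agents that keep receiving 403 or 429. The sketch below assumes a combined-format access log at ./access.log; adjust the path and pattern for your server.

```python
# Rough sketch: count 403/429 responses served to AI crawler user-agents.
# Assumes a combined-format access log at ./access.log.
import re
from collections import Counter

BOT_AGENTS = ("GPTBot", "ClaudeBot")
# combined log format: ... "REQUEST" STATUS SIZE "REFERER" "USER-AGENT"
LINE_RE = re.compile(r'"\S+ \S+ \S+" (?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"')

counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if not match:
            continue
        status, ua = match.group("status"), match.group("ua")
        for bot in BOT_AGENTS:
            if bot in ua and status in ("403", "429"):
                counts[(bot, status)] += 1

for (bot, status), n in counts.most_common():
    print(f"{bot} received {status} x{n}")
```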
Start by opening the canonical report and checking the blocked flag and reason. Then, if needed, verify with a direct fetch using the bot's user-agent.
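To double-check from the outside, fetch the page yourself while sending a crawler-style user-agent and compare the response to a normal browser fetch. A standard-library sketch follows; the user-agent strings are simplified placeholders, so substitute the crawlers' official strings for a faithful test.

```python
# Sketch: fetch a URL with AI-crawler-style User-Agent headers and report the status.
# The user-agent strings are simplified placeholders, not the exact headers the
# real crawlers send.
import urllib.request
import urllib.error

URL = "https://example.com/"  # replace with the page you want to verify
AGENTS = {
    "GPTBot": "GPTBot/1.0",
    "ClaudeBot": "ClaudeBot/1.0",
    "browser": "Mozilla/5.0",
}

for name, ua in AGENTS.items():
    req = urllib.request.Request(URL, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = resp.read(2048).decode("utf-8", errors="replace")
            print(f"{name}: {resp.status}, looks like HTML: {'<html' in body.lower()}")
    except urllib.error.HTTPError as err:
        print(f"{name}: blocked with {err.code}")
```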
Use /radar/domains/DOMAIN?format=json for the machine-readable blocked reason and for automation gates.
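In a CI or deploy pipeline you can gate directly on that JSON. The sketch below assumes the report is served from a host of your choosing and exposes fields along the lines of blocked and blocked_reason; inspect the real payload and adjust the keys before relying on them.

```python
# Sketch of an automation gate on the machine-readable report.
# BASE_URL and the `blocked` / `blocked_reason` field names are assumptions;
# check the actual JSON response shape and adjust accordingly.
import json
import sys
import urllib.request

BASE_URL = "https://example-radar-host"  # placeholder host
DOMAIN = "example.com"

with urllib.request.urlopen(f"{BASE_URL}/radar/domains/{DOMAIN}?format=json", timeout=10) as resp:
    report = json.load(resp)

if report.get("blocked"):
    print(f"{DOMAIN} is blocked: {report.get('blocked_reason', 'unknown reason')}")
    sys.exit(1)  # fail the pipeline so the regression is caught before release
print(f"{DOMAIN} looks crawlable")
```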
After you fix access, use monitoring to prevent regressions (accidental re-blocking after security changes).
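For ongoing monitoring, one lightweight option is a scheduled job that compares the current report against the last known state and alerts only when the blocked flag flips. This sketch reuses the same assumed host and field names as the gate example above.

```python
# Sketch of a scheduled regression check: alert only when `blocked` flips to true.
# BASE_URL and the field names are the same assumptions as in the gate example.
import json
import pathlib
import urllib.request

BASE_URL = "https://example-radar-host"  # placeholder host
DOMAIN = "example.com"
STATE_FILE = pathlib.Path("last_state.json")

with urllib.request.urlopen(f"{BASE_URL}/radar/domains/{DOMAIN}?format=json", timeout=10) as resp:
    current = json.load(resp)

previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
if current.get("blocked") and not previous.get("blocked"):
    # Hook your alerting here (email, chat webhook, pager, ...).
    print(f"REGRESSION: {DOMAIN} just became blocked: {current.get('blocked_reason')}")

STATE_FILE.write_text(json.dumps(current))
```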
Explore more AI-readiness metrics for your site.
Measure how much of your HTML is useful content vs. boilerplate markup that wastes LLM tokens.
Audit structured data presence and discover which schema.org types to add for richer AI citations.
See how the world's top sites rank on AI crawler visibility in the live ACRI leaderboard.
Continuous monitoring of 100k+ domains. Compare performance, track trends, benchmark against competitors.