Interpretation
If crawl access is blocked, other improvements (content, schema, extractability) won’t matter until bots can fetch the HTML.
Common causes
- robots.txt disallow rules for bot user-agents.
- CDN/WAF challenge pages or geo/rate limits (403/429 patterns).
- Bot management rules intended for scrapers that also catch legitimate crawlers.
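To see how a robots.txt disallow plays out per user-agent, the stdlib parser can evaluate rules directly. A minimal sketch, assuming a hypothetical robots.txt that blocks one AI crawler while allowing everything else (the agent names and URL are illustrative):

```python
from urllib import robotparser

# Hypothetical robots.txt: block GPTBot entirely, allow all other agents.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A disallowed agent cannot fetch any path; others fall through to the
# wildcard group and are allowed.
gptbot_allowed = parser.can_fetch("GPTBot", "https://example.com/page")
googlebot_allowed = parser.can_fetch("Googlebot", "https://example.com/page")
print(gptbot_allowed, googlebot_allowed)  # False True
```

Running this against your real robots.txt (via `parser.set_url(...)` and `parser.read()`) is a quick way to confirm which bots your current rules actually exclude.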
Fixes
- Decide which bots you want to allow, then update robots/WAF policy intentionally.
- Serve a stable HTML response for crawlers (avoid challenge interstitials).
- Monitor to prevent accidental re-blocking after security changes.
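The monitoring step can be sketched as a small classifier over fetch results. This is a hypothetical heuristic, not an official signature of any CDN: the status codes follow the 403/429 patterns above, and the interstitial marker strings are assumptions you would tune for your own stack.

```python
def looks_blocked(status: int, body: str) -> bool:
    """Heuristic: does this response look like a crawl block?

    Assumption-laden sketch: status codes and marker strings are
    illustrative, not a definitive CDN/WAF signature.
    """
    # Hard blocks and rate limits surface as these status codes.
    if status in (401, 403, 429, 503):
        return True
    # Challenge interstitials often return 200 with a JS challenge page,
    # so also scan the body for hypothetical challenge markers.
    markers = ("just a moment", "attention required", "cf-challenge")
    lowered = body.lower()
    return any(marker in lowered for marker in markers)

rate_limited = looks_blocked(429, "")
interstitial = looks_blocked(200, "<html>Just a moment...</html>")
clean = looks_blocked(200, "<html><h1>Product page</h1></html>")
print(rate_limited, interstitial, clean)  # True True False
```

Wiring a check like this into a scheduled fetch with each bot user-agent you care about catches accidental re-blocking after a security change.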
Where it appears
In report JSON, look for `crawl_blocked` and `crawl_block_reason` (field names may evolve).
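A minimal sketch of reading those fields from a report payload, assuming the field names above and an example reason value (both may differ in practice, as noted):

```python
import json

# Hypothetical report payload; field names mirror the text above and
# may evolve, and the reason string is illustrative.
payload = '{"crawl_blocked": true, "crawl_block_reason": "robots_txt_disallow"}'
report = json.loads(payload)

if report.get("crawl_blocked"):
    reason = report.get("crawl_block_reason", "unknown")
    print(f"crawl blocked: {reason}")
```

Using `.get()` rather than direct key access keeps the check tolerant of reports that omit the fields.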