SEODiff docs

API-first: the dashboard and automation are clients of the same API. Use SEODiff to catch regressions before deploy and monitor drift after deploy.

Getting started

Create an account, generate an API key, run your first validation scan, and share the report link with your team.

API reference (v1)

Auth, endpoints, request/response shapes, and how pass/fail works in automation.
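To make the pass/fail mechanics concrete, here is a minimal client-side sketch. The report shape (`score`, `issues`, severity labels) and the thresholds are illustrative assumptions, not SEODiff's actual response schema; see the endpoint reference for the real fields.

```python
# Hypothetical sketch: gate a deploy on a scan report.
# Field names and thresholds are assumptions for illustration,
# not SEODiff's documented API shape.

def gate(report: dict, min_score: int = 80, fail_on: str = "Critical") -> bool:
    """Pass when the score meets the floor and no issue at or above
    the chosen severity is present."""
    severities = ["Low", "Medium", "High", "Critical"]
    cutoff = severities.index(fail_on)
    blocking = [i for i in report.get("issues", [])
                if severities.index(i["severity"]) >= cutoff]
    return report.get("score", 0) >= min_score and not blocking

report = {"score": 87, "issues": [{"severity": "Low", "type": "token-bloat"}]}
# gate(report) passes here; adding a Critical issue would fail it.
```

In automation you would typically translate this boolean into an exit code so the CI job fails when the gate does.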

CI/CD (before deploy)

GitHub Actions example that calls the API, blocks regressions, and posts a PR comment.
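The shape of that CI step can be sketched as a small Python script the Actions job runs. The endpoint URL, env var name, and response field here are placeholders, not SEODiff's real API; the point is the exit-code contract: a non-zero exit fails the job and blocks the merge.

```python
"""Sketch of a CI gate script, run as `python ci_gate.py` from a CI step.
Endpoint, env var, and response fields are illustrative assumptions."""
import json
import os
import sys
import urllib.request

API_URL = "https://api.example.com/v1/scans/latest"  # placeholder endpoint

def fetch_scan(url: str, api_key: str) -> dict:
    """Fetch the latest scan result, authenticating with a bearer token."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def exit_code(scan: dict) -> int:
    """Map the scan verdict to a process exit code: 0 = pass, 1 = fail."""
    return 0 if scan.get("status") == "pass" else 1

if __name__ == "__main__":
    scan = fetch_scan(API_URL, os.environ["SEODIFF_API_KEY"])
    sys.exit(exit_code(scan))
```

A PR comment would be a second step that posts the same report body via the GitHub API; the gate itself only needs the exit code.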

Monitoring (after deploy)

Nightly scans, incident history, and template drift timelines powered by the API.
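Template drift detection can be thought of as diffing two nightly snapshots. The snapshot shape below (template name mapped to a content hash) is an assumption for illustration, not the stored format.

```python
# Sketch: diff two nightly snapshots to surface template drift.
# Each snapshot maps template name -> content hash (hypothetical shape).

def drift(prev: dict[str, str], curr: dict[str, str]) -> dict[str, list[str]]:
    """Classify templates as added, removed, or changed between runs."""
    return {
        "added": sorted(curr.keys() - prev.keys()),
        "removed": sorted(prev.keys() - curr.keys()),
        "changed": sorted(k for k in prev.keys() & curr.keys()
                          if prev[k] != curr[k]),
    }

prev = {"home": "a1f3", "blog-post": "9c2e"}
curr = {"home": "b7d0", "product": "44aa"}
# drift(prev, curr) reports home changed, product added, blog-post removed.
```

A timeline view is then just this diff computed run-over-run and plotted per template.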

Concepts

AI visibility, readiness scores, extractability, ghost ratio, token bloat, schema coverage, and more.

Metrics

Bot Access, Rendering, Structure, Schema, Tech Stack, Crawl Cost, Multimodal — how each sub-score is calculated.

Issues

25+ issue types grouped by severity (Critical → Low) with thresholds, score impact, and suppression rules.

Fix-It Guides

Step-by-step remediation guides for blocked bots, thin content, missing schema, rendering failures, and more.

Tools

Deep Audit, Crawler Health, AI Chunking, Entity Schema, Training Data, Answer Format, and Guardian.

Glossary

Short definitions for the terms used across reports, monitoring, and CI/CD.

Feature status (implemented vs planned)

These docs describe a few features that are planned but not yet implemented:

  • Baseline regression gate via API (regression-only CI gating).
  • Monitoring alerts (email / Slack / webhooks).
  • Long-lived public share links for incidents/reports.

Looking for template drift?

In the dashboard, open a project and click Template Drift. Or go directly to /app/timeline; the template picker autocompletes once monitoring runs have collected data.