spprev.sp.gov.br 30 F
🛡️ SEO 14 🤖 GEO 23 ⚡ Perf 80 🏗️ Arch 46

spprev.sp.gov.br — Global SEODiff Score 30/100


With a 44/100 ACRI, spprev.sp.gov.br has a moderate AI visibility profile — functional but not yet optimized for AI crawlers. Client-side rendering pushes the ghost ratio to 75%, creating extractability risk for bots that do not execute scripts. The 11.1× token bloat ratio falls within the normal range, though there is room to trim navigation, footer, and script overhead. The complete absence of JSON-LD schema is a missed opportunity: even basic Organization markup would improve how AI crawlers understand this domain. All major AI bot user-agents (GPTBot, ClaudeBot, CCBot, Google-Extended) are permitted by robots.txt, ensuring broad AI crawler access.

30
F — Global SEODiff Score
Comprehensive search visibility assessment
Critical gaps across the board. Start with Traditional SEO (14).
🎯 Top Fix: Enable server-side rendering (ghost ratio 75%) → +8–15 pts
🔬 Automated SEODiff Assessment · Snapshot: Feb 26, 2026 · 📋 API
Does your site score higher than spprev.sp.gov.br?
Run the same 40-signal audit on your own domain — free, instant results.
Scan Your Site Free →
🧮 Score Transparency — How is this calculated?
🛡️ Traditional SEO (25% weight): 14 × 0.25 = 3.5
🤖 AI Readiness / GEO (40% weight): 23 × 0.40 = 9.2
⚡ Performance (20% weight): 80 × 0.20 = 16.0
🏗️ Architecture & Trust (15% weight): 46 × 0.15 = 6.9
Weighted sum = 3.5 + 9.2 + 16.0 + 6.9
Global SEODiff Score = 30 (F)
📊 ACRI Sub-Scores (AI Readiness Detail)
Bot Access: 100 (avg 92)
Rendering: 35 (avg 93)
Structure: 0 (avg 35)
Schema: 0 (avg 10)
Tech Stack: 50 (avg 64)
🔀 Visibility Delta: Google vs AI
Google (Tranco): Top 22% — Rank #217866
Gap: +35 pts
AI (ACRI): Top 56% — Score 44/100

spprev.sp.gov.br punches above its weight in AI — AI visibility exceeds Google ranking. This is a competitive moat worth protecting. ACRI measures technical crawler readiness. Read the methodology →

Why spprev.sp.gov.br ranks here

Tech stack: Custom / Proprietary
Industry: —
Rendering: CSR
Schema coverage: 0 blocks
Token bloat: 11.1×

Fastest improvements

  • Add basic Organization and WebSite JSON-LD to fix “0 schema blocks” (see Schema Coverage).
  • Reduce token bloat (navigation/footer/code) so agents reach your main content faster (see Token Bloat).
  • Create an llms.txt file so AI crawlers can discover your content structure without heavy crawling. Generate llms.txt →
  • Run a full entropy audit to find which DOM regions waste the most tokens. Run Entropy Audit →
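The llms.txt suggestion above can be sketched as a minimal file served at the site root. The structure follows the llms.txt proposal's markdown conventions (an H1 title, a blockquote summary, H2 sections of links); the summary text and paths below are placeholders, not actual spprev.sp.gov.br URLs:

```txt
# spprev.sp.gov.br

> One-sentence summary of what this site offers (replace with your own).

## Main sections

- [Services](https://spprev.sp.gov.br/servicos): placeholder path
- [Contact](https://spprev.sp.gov.br/contato): placeholder path
```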
🧪

JavaScript Rendering Check

We check what AI crawlers miss when they skip JavaScript execution.

🛡️

Traditional SEO

14/100 · 25% of Global Score · 🔴 Low Confidence

📝 Title Tag

0 chars
Too short

Optimal range: 30–60 characters for SERP display.

📋 Meta Description

0 chars
Missing

Optimal range: 120–160 characters for snippet control.
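Both tags are quick wins in the document `<head>`. A hedged example — the wording is illustrative placeholder copy, not SPPREV's official text:

```html
<head>
  <!-- 30–60 chars: primary topic + brand -->
  <title>Pension Services — SPPREV | São Paulo</title>
  <!-- 120–160 chars: summary that controls the SERP snippet -->
  <meta name="description" content="Replace with a 120–160 character summary of the page's purpose and key services, written for searchers.">
</head>
```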

🔤 Heading Hierarchy

  • ✗ Exactly 1 <h1> tag — found 0
  • ✗ Has <h2> headings — found 0
  • ✓ <h2> not before <h1>

🔍 Indexability

  • ✗ Canonical tag present
  • ✓ No noindex directive
  • ✓ Meta viewport set
  • ✓ HTML lang attribute → en
  • ✗ Hreflang tags
  • ✓ Googlebot allowed by robots.txt
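The two failing checks (canonical and hreflang) are both one-line `<head>` additions. A sketch, assuming pt-BR as the primary locale — adjust to the site's actual language setup, and only add hreflang if multiple locales are actually served:

```html
<link rel="canonical" href="https://spprev.sp.gov.br/">
<!-- hreflang is only needed for multi-locale sites; pt-BR is assumed here -->
<link rel="alternate" hreflang="pt-BR" href="https://spprev.sp.gov.br/">
<link rel="alternate" hreflang="x-default" href="https://spprev.sp.gov.br/">
```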

🌐 Social / OpenGraph

  • ✗ og:title
  • ✗ og:description
  • ✗ og:image
  • ✗ twitter:card
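All four missing social tags can be added together. A hedged example — the image path is a placeholder that must exist on the server before these tags do any good:

```html
<meta property="og:title" content="Mirror your <title> tag here">
<meta property="og:description" content="Mirror your meta description here">
<meta property="og:image" content="https://spprev.sp.gov.br/og-image.png">
<meta name="twitter:card" content="summary_large_image">
```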
📐 How the SEO Pillar score is calculated

SEO Pillar = Title (20 pts) + Meta Desc (20 pts) + Heading Hierarchy (20 pts) + Indexability (20 pts) + Social/OG (20 pts)

Each sub-score is derived from the checks above. Canonical tag, lang attribute, og:image, and a single H1 are the highest-impact items.

🤖

AI Readiness / GEO

23/100 · 40% of Global Score · 🟢 High Confidence

This pillar aggregates citation share, hallucination risk, bot access, schema health, and content extractability. The individual diagnostic sections below contribute to this score.

🚨

Hallucination Risk

Research

Is AI lying about your brand? This panel measures how likely LLMs are to hallucinate facts when extracting information from your page.

👻
Shadow Content Detected: 75% of your page token budget is trapped in non-rendered regions (JavaScript-dependent content invisible to AI crawlers). Combined with 11.1× token bloat, AI models spend most of their context window on noise instead of your real content. This sharply increases hallucination probability: models fill the gap with made-up facts.

🤖 Bot Access Matrix

GPTBot (OpenAI)
Allowed
ClaudeBot (Anthropic)
Allowed
CCBot (Common Crawl)
Allowed
Google-Extended
Allowed
Googlebot
Allowed

👻 Rendering (Ghost Ratio) Docs

Ghost Ratio: 75% (0% = safe, 100% = risk)
Status: Partially JS-Dependent
Rendering Type: CSR
💡High ghost ratio means AI crawlers may miss content. Consider server-side rendering (SSR) or pre-rendering for critical pages.

📊 Structure & Information Density Docs

Structure Grade: 0/100 — Poor
Structured Elements: 0 (0 lists, 0 rows, 0 headers)
Total Words: 26
Raw Density: 0.0%
💡Low structure score (0/100). Very little extractable text detected (26 words). AI crawlers may be blocked from accessing the real page content — check the Ghost Ratio and Bot Access sections.

🏷️ Schema Health Docs

Organization Schema: ❌ Missing
Product / Service Schema: ⚠️ Not Found
Total Schema Blocks: 0 — No JSON-LD detected

Schema Coverage Map

0/7 schema types detected
❌ Organization
❌ Product/Service
❌ Breadcrumb
❌ FAQ
❌ Article
❌ WebSite
💡Organization schema missing. AI models cannot identify your brand entity. Without it, your brand won't appear in Knowledge Panels or be associated with your content.
💡Product / Service schema missing. AI models don't know this is a SaaS product. Add Product or SoftwareApplication schema so AI understands what you offer and can surface pricing/features.
💡BreadcrumbList schema missing. AI cannot understand your site hierarchy or how pages relate to each other.
💡FAQ schema missing. Adding FAQPage schema lets AI models directly extract Q&A pairs for Featured Snippets and chatbot answers.
💡WebSite schema missing. Add WebSite + SearchAction so Google can generate a Sitelinks Search Box for your brand in AI results.
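Organization and WebSite markup are covered in the Remediation Patches section; for the BreadcrumbList gap, a minimal sketch — the section names and paths are placeholders that must match your real page hierarchy:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://spprev.sp.gov.br/" },
    { "@type": "ListItem", "position": 2, "name": "Section", "item": "https://spprev.sp.gov.br/section" }
  ]
}
</script>
```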

📐 AI Efficiency Metrics Docs

Extractability: 18/100 — AI models can barely extract answers from this page
Crawl Cost: Low (10/100) — efficient for AI crawlers to process
Blocklist Risk: None — 0 of 5 AI crawlers blocked

Token Bloat Research

Useful Content: 9% (182 B) · 🗑️ Bloat: 91% (1.8 KB)
Token Bloat Ratio: 11.1× — Normal

Multimodal Readiness

Visual Context: No images detected
Image Alt Coverage: 0 / 0 images have alt text

TDM Rights

TDM-Reservation Header: Not set
X-Robots-Tag (noai): Not set
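If the goal is to reserve text-and-data-mining rights rather than maximize AI access, these are the relevant response headers — the `tdm-*` names follow the W3C TDMRep draft, and the policy URL is a placeholder. Note that setting them reduces AI visibility, which cuts against the rest of this report's recommendations; leave them unset if visibility is the aim:

```http
tdm-reservation: 1
tdm-policy: https://spprev.sp.gov.br/tdm-policy.json
X-Robots-Tag: noai
```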

🔥 Structural Entropy Check Research

Entropy: 0 — Poor
Token Bloat: High
Noise Ratio: 91.1% · SNR: 0.10 · Signal: 45 / Noise: 459 tokens

🔬 AI-Crawler Simulation

See your website the way AI crawlers do. CSS stripped, structure labeled, content chunked.

🤖

AI Answer Preview


See how AI models summarize your site. Left: your actual content. Right: what the LLM extracts and says about you.


🔧 Tech Stack

AI-Readiness Score: 50/100
Server: CloudFront
CDN: cloudfront
HTTP Status: 202
Load Time: 567 ms
Raw HTML Size: 2.0 KB
Visible Text Size: 182 B

Performance & Speed

80/100 · 20% of Global Score · 🟢 High Confidence

⏱️ Time to First Byte

567 ms
Acceptable — room for improvement

Google considers <200 ms "good". AI crawlers may have even shorter timeouts.

📦 Page Weight

DOM nodes: 12
HTML payload: 2 KB
Lean page — fast for bots and users

🗄️ Cache & CDN

  • ✓ Cache-Control header → no-store, max-age=0
  • ✗ CDN cache status
  • ✓ CDN detected → cloudfront
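The current `Cache-Control: no-store, max-age=0` prevents CloudFront from caching anything, which explains the missing CDN cache status. If the page content allows caching, a more CDN-friendly policy might look like this — the TTL values are illustrative, not a recommendation for this specific site:

```http
Cache-Control: public, max-age=300, s-maxage=3600
```

Here `max-age` governs browser caches and `s-maxage` governs shared caches such as the CDN, so the edge can hold the page longer than individual visitors do.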

🔬 Tracker Tax

Tracker scripts: 0
Third-party domains: 0
Token overhead: 0.0%
Minimal tracker load — clean signal for bots
📐 How the Performance Pillar score is calculated

Perf Pillar = TTFB (35 pts) + Page Weight (25 pts) + Cache/CDN (20 pts) + Tracker Tax (20 pts)

TTFB <200 ms = full marks. DOM >3000 or payload >300 KB incurs heavy penalties. Tracker scripts beyond 5 reduce score.

🏗️

Architecture & Trust

46/100 · 15% of Global Score · 🔴 Low Confidence

🗺️ Sitemap & Robots

  • ✗ Sitemap declared in robots.txt
  • ✓ Googlebot allowed
  • ✓ GPTBot allowed
  • ✓ ClaudeBot allowed
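The only failing check here is the missing Sitemap directive. A minimal robots.txt that keeps the current open bot policy and declares a sitemap — the sitemap path is assumed, not verified against the live site:

```txt
User-agent: *
Allow: /

Sitemap: https://spprev.sp.gov.br/sitemap.xml
```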

🔗 Linking

Internal links: 0
External links: 0
No internal links detected — crawlers may miss important pages

🔒 Security & Trust

  • ✗ HSTS header (Strict-Transport-Security)
  • ✗ Content-Security-Policy header
  • ✗ HTTP status 200 OK (got 202)
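Both missing headers can be added at the origin or the CDN. Shown below as raw response headers; the CSP is a deliberately strict starting point that will almost certainly need script and style sources added for the site's actual assets:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src 'self'; img-src 'self' data:
```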

♿ Accessibility Signals

  • ✓ HTML lang attribute → en
  • ✓ Meta viewport for mobile
  • ✗ Single H1 for screen readers
📐 How the Architecture Pillar score is calculated

Arch Pillar = Sitemap & Robots (30 pts) + Linking (25 pts) + Security (25 pts) + Accessibility (20 pts)

Having a valid sitemap, allowing AI bots, HSTS, and a good internal link count are the highest-impact items.

🏅 AI-Verified Trust Badge

Your site scores 20/100. Reach 80+ to unlock the green "AI-Verified" badge. Fix the issues below to improve your score.

AI-Verified badge for spprev.sp.gov.br
Pending Audit — score below 80 threshold
<a href="https://seodiff.io/radar/domains/spprev.sp.gov.br" rel="noopener"><img src="https://seodiff.io/api/v1/badge?domain=spprev.sp.gov.br" alt="AI-Verified by SEODiff" width="280" height="52"></a>

💡 Paste in your site footer, GitHub README, or email signature. Badge updates automatically as your score changes.

🔗 Similar Sites

Domains with a similar tech stack, industry, and AI readiness profile to spprev.sp.gov.br. Compare side-by-side.

Domain ACRI AI Score Tech Stack Token Bloat Schema
spprev.sp.gov.br (this site) 20 44 Custom / Proprietary 11.1× 0
optikseis.com 20 44 Custom / Proprietary 11.1× 0 Compare →
setlist.fm 20 44 Custom / Proprietary 11.1× 0 Compare →
juniqe.com 20 44 Custom / Proprietary 11.1× 0 Compare →
findatopdoc.com 20 44 Custom / Proprietary 11.1× 0 Compare →
booking.cn 20 44 Custom / Proprietary 11.1× 0 Compare →
Compare All 5 Similar Sites →
🩹

Remediation Patches


Auto-generated code fixes tailored to spprev.sp.gov.br. Copy and paste these into your codebase to improve AI visibility. These patches are designed to increase extraction accuracy →

Add Organization JSON-LD
High Impact ⏱ 5 min
AI models cannot identify your brand entity without Organization schema. This is the #1 fix for AI visibility.
html
<!-- "name" ("Gov") and the logo URL are auto-detected placeholders — replace with the official organization name and a logo file that actually exists -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Gov",
  "url": "https://spprev.sp.gov.br",
  "logo": "https://spprev.sp.gov.br/logo.png",
  "sameAs": []
}
</script>
Add WebSite + SearchAction JSON-LD
High Impact ⏱ 5 min
Enables the Sitelinks Search Box in Google and allows AI to understand your site structure.
html
<!-- "name" is an auto-detected placeholder; confirm the site actually serves /search?q={search_term_string} before adding SearchAction -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Gov",
  "url": "https://spprev.sp.gov.br",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://spprev.sp.gov.br/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
Fix Client-Side Rendering
High Impact ⏱ 2–4 hrs
75% of your content is invisible to AI crawlers. Wrap critical content in server-rendered HTML using your framework.
html
<!-- Ensure main content is in the initial HTML response -->
<main id="content">
  <!-- Server-rendered content goes here -->
  <h1>Gov</h1>
  <p>Your key content should render without JavaScript.</p>
  <!-- Move <script> tags to the bottom of <body> -->
</main>

<!-- In your framework config, enable SSR/prerendering: -->
<!-- your framework -->
Reduce Token Bloat
Medium Impact ⏱ 1–2 hrs
Only 9% of your HTML is useful content. AI crawlers waste context window tokens on bloat.
html
<!-- Move inline CSS to external stylesheets -->
<link rel="stylesheet" href="/css/main.css">

<!-- Move inline scripts to external files with defer -->
<script src="/js/app.js" defer></script>

<!-- Remove duplicate navigation blocks -->
<!-- Keep only ONE <nav> in the <header> -->

<!-- Ensure <main> wraps your primary content -->
<main>
  <!-- Your content here — this is what AI sees first -->
</main>
📈

Projected Impact


If you apply the patches above, here's the estimated improvement for spprev.sp.gov.br:

Current Score
44
Projected Score
69
Improvement
+25 pts
Add Organization schema +6 pts
Add WebSite schema +4 pts
Fix client-side rendering +10 pts
Reduce token bloat +5 pts

*Estimates based on SEODiff's scoring model. Actual results depend on implementation quality.

📋 Data Export

Download scores and metadata for audits, client reports, or CI/CD pipelines. Exports contain computed metrics only (no copyrighted content).

All data is generated automatically and updated with each crawl.

Is this your company?

Monitor your AI visibility score weekly and get alerted when changes happen.

Start Free →

🧭 Self-Diffing (Private Layer)

For owned domains, combine this world snapshot with private drift + regression history.
Template Drift: Track in My Site
Drift → Traffic Impact: In development (coming soon)
Regression Incidents: Track in My Site
Internal Linking: Deep Audit graph
Semantic Structure: GEO view in Deep Audit
Content Quality: Thin/duplicate tracking

🕒 History

Score over time: Available in My Site history
Drift events: Template timeline + incidents
Drift → Revenue Attribution: Coming soon
Schema/rendering/extractability changes: Tracked per scan in project history