🦜 SEODiff as a LangChain Tool

Add SEODiff evaluation as a custom tool in your LangChain agent pipeline.

LangChain’s tool system lets you give any LLM agent the ability to call external APIs. By wrapping SEODiff’s evaluation endpoint as a LangChain tool, your agent can autonomously test and fix SEO issues as part of a larger content pipeline.

System Prompt / Configuration

Copy this description into your LangChain agent's tool configuration:

# SEODiff Tool Description for LangChain
# Use this as the tool description in your LangChain agent

Tool Name: seodiff_evaluate
Description: Evaluates web pages for SEO quality. Checks for H1 tags,
meta descriptions, JSON-LD schema, placeholder leaks, content depth,
and AI-readiness. Returns pass/fail verdict with per-page diagnostics.
Use this after generating or editing any web page to ensure SEO quality.

Input: JSON with "urls" (list of URLs) and "assertions" (list of rules)
Output: Evaluation result with pass_rate, ACRI score, and failing pages
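The input and output shapes above can be sketched as plain Python dicts. This is a hedged example: the request fields ("urls", "assertions", "wait") come from the spec above, but the response field names ("pass_rate", "acri_score", "failing_pages") are assumptions based on the description and may differ from the live API.

```python
import json

# Example request body for the evaluate endpoint (shape from the spec above).
request_body = {
    "urls": ["https://example.com/austin", "https://example.com/denver"],
    "assertions": [
        {"rule": "has_h1"},
        {"rule": "min_word_count", "value": 300},
    ],
    "wait": True,
}

# Hypothetical response shape -- field names are illustrative, not confirmed.
example_response = {
    "pass_rate": 0.5,
    "acri_score": 71,
    "failing_pages": [
        {"url": "https://example.com/denver", "failed_rules": ["min_word_count"]}
    ],
}

print(json.dumps(request_body, indent=2))
```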

Setup

from langchain.tools import StructuredTool
import requests, os

def seodiff_evaluate(urls: list[str], assertions: list[dict] | None = None) -> dict:
    """Evaluate web pages for SEO quality using SEODiff."""
    if assertions is None:
        assertions = [
            {"rule": "has_h1"},
            {"rule": "has_schema"},
            {"rule": "no_placeholders"},
            {"rule": "min_word_count", "value": 300},
        ]

    response = requests.post(
        "https://seodiff.io/api/v1/agent/evaluate",
        headers={
            "Authorization": f"Bearer {os.environ['SEODIFF_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={"urls": urls, "assertions": assertions, "wait": True},
        timeout=120,  # "wait": True blocks until the evaluation finishes
    )
    response.raise_for_status()  # surface HTTP errors to the agent
    return response.json()

seodiff_tool = StructuredTool.from_function(
    func=seodiff_evaluate,
    name="seodiff_evaluate",
    description="Evaluate web pages for SEO quality (H1, schema, placeholders, word count, ACRI score).",
)

# Add to your agent:
# agent = create_react_agent(llm, tools=[seodiff_tool, ...])

Example Interaction

# LangChain agent in a content pipeline:
agent.run("Generate landing pages for 10 cities and ensure SEO quality")

Agent:
1. Generates 10 city landing pages
2. Calls seodiff_evaluate tool with all 10 URLs
3. Result: 8/10 pass, 2 failed no_placeholders
4. Fixes the 2 pages
5. Re-evaluates: 10/10 pass, avg ACRI: 71
6. "Generated 10 city pages. All pass SEO validation (avg ACRI: 71)."
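The evaluate-fix-re-evaluate loop the agent performs can be sketched directly in plain Python. This is a minimal sketch with a stubbed evaluator standing in for the live API; `evaluate_stub`, `fix_page`, and the `PAGES` store are all illustrative, not part of SEODiff.

```python
def evaluate_stub(urls: list[str]) -> dict:
    """Stand-in for seodiff_evaluate: fails any page with a leaked placeholder."""
    failing = [u for u in urls if "{{" in PAGES[u]]
    return {"pass_rate": 1 - len(failing) / len(urls), "failing_pages": failing}

def fix_page(url: str) -> None:
    """Illustrative fix: fill the leaked {{city}} placeholder from the URL slug."""
    PAGES[url] = PAGES[url].replace("{{city}}", url.rsplit("/", 1)[-1].title())

PAGES = {
    "https://example.com/austin": "<h1>Austin</h1> Things to do in Austin...",
    "https://example.com/denver": "<h1>{{city}}</h1> Things to do in Denver...",
}

result = evaluate_stub(list(PAGES))
for url in result["failing_pages"]:   # fix only the pages that failed
    fix_page(url)
result = evaluate_stub(list(PAGES))   # re-evaluate after the fixes
print(result["pass_rate"])            # 1.0
```

In a real pipeline, steps 4 and 5 are the same loop with `seodiff_evaluate` in place of the stub and the agent's page-editing tool in place of `fix_page`.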

Assertion Rules to Use

The assertion rules most useful in AI agent workflows:

has_h1

Ensure every page has exactly one H1 heading tag.

has_schema

Ensure every page has valid JSON-LD schema markup for rich results.

no_placeholders

Find template variables like {{city}} or [TBD] that leaked into production HTML.

max_token_bloat

Detect when boilerplate overwhelms useful content for LLM crawlers.

max_js_ghost_ratio

Flag pages where content is rendered client-side and invisible to crawlers.

min_word_count

Prevent thin content by requiring a minimum number of words per page.
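A stricter assertion set combining all six rules might look like the following. The rule names come from this page; the "value" parameters for max_token_bloat and max_js_ghost_ratio are assumed to follow the min_word_count pattern, and the thresholds are illustrative, so check the API docs for the exact parameter names.

```python
# Combined assertion set -- thresholds and "value" parameter names for the
# ratio rules are assumptions, not confirmed API parameters.
strict_assertions = [
    {"rule": "has_h1"},
    {"rule": "has_schema"},
    {"rule": "no_placeholders"},
    {"rule": "max_token_bloat", "value": 0.6},
    {"rule": "max_js_ghost_ratio", "value": 0.4},
    {"rule": "min_word_count", "value": 500},
]

# Pass it to the tool in place of the defaults:
# seodiff_evaluate(urls, assertions=strict_assertions)
```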
