radar lite — ai-ready website audit
RADAR Lite — 33-check website audit for AI visibility. Scores Foundation, Discoverability, Comprehension, Citability.
RADAR — Readiness, Audit, Discovery & Agent Rating
A 7-layer website audit system for the Claude Code ecosystem. It measures conversion rate optimization (CRO) and generative engine optimization (GEO) performance together, making it the only CLI-native tool that does both.
Available as Lite (L0-L3, 33 checks, free) and PRO (L0-L6, 72 checks, $39 one-time).
Quick Start
```shell
# Install Lite
myclaude install @myclaude/radar-lite

# Run audit in Claude Code
"audit https://yoursite.com"

# With options
"audit https://yoursite.com --profile saas --depth 20"

# Install PRO
myclaude install @myclaude/radar-pro

# Run full audit
"audit https://yoursite.com --profile ecommerce"

# Export JSON data
"audit https://yoursite.com --export json"
```
What is RADAR?
Most GEO tools monitor AI engine visibility after the fact. RADAR audits the structural reasons why a site is or isn't visible, citable, and conversion-ready — and tells you what to change.
The model is layered, not flat. Each layer is a prerequisite for the one above it. If Foundation fails (TTFB > 5s, client-only SPA, SSL errors), Discoverability improvements won't help. If crawlers can't find you, Comprehension improvements won't help. Fixing in layer order is not arbitrary — it reflects how AI crawlers and engines actually work.
RADAR produces a scored, prioritized, actionable report in Markdown. PRO adds JSON export, code snippets, and per-page analysis.
What RADAR is not:
- Not a monitoring service — one-shot audit, not continuous tracking
- Not an SEO tool — GEO operates on a different axis than traditional SEO
- Not an auto-fix tool — audits and prioritizes, implementation is yours
- Not dependent on external APIs — runs standalone, no Ahrefs or Moz required
7-Layer Model
| Layer | Name | Checks (Lite) | Checks (PRO) | Weight (Default) | What It Measures |
|---|---|---|---|---|---|
| L0 | Foundation | 9 | 9 | 15% | CWV, SSL, TTFB, JS rendering, security headers |
| L1 | Discoverability | 7 | 7 | 12% | Crawler access, sitemap quality, indexation, link density |
| L2 | Comprehension | 6 | 6 | 15% | JSON-LD, heading hierarchy, semantic HTML, OG tags |
| L3 | Citability | 11 | 11 | 20% | FAQ, statistics, freshness, E-E-A-T, list density |
| L4 | Conversion Readiness | — | 26 | 18% | LIFT model + Baymard UX: CTAs, forms, trust, accessibility |
| L5 | Authority | — | 5 | 10% | Social proof, reviews, press mentions, external links |
| L6 | Agent Readiness | — | 8 | 10% | llms.txt, agent-card, OpenAPI, MCP, markdown endpoints |
Gating rules: an L0 score below 30 or an L1 score below 20 caps the composite grade at D. The report states when gating is active and which threshold triggered it.
Total checks: 33 (Lite) / 72 (PRO)
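The gating rule above can be sketched as follows. Function and variable names are illustrative only; this is not RADAR's actual implementation.

```python
def apply_gating(composite_grade: str, layer_scores: dict) -> str:
    """Cap the composite grade at D when a foundational layer fails.

    Gating thresholds from the RADAR model: L0 < 30 or L1 < 20.
    Grades from best to worst: A, B, C, D, F.
    """
    grades = ["A", "B", "C", "D", "F"]
    if layer_scores.get("L0", 0) < 30 or layer_scores.get("L1", 0) < 20:
        # Grades better than D are capped; D and F pass through unchanged.
        if grades.index(composite_grade) < grades.index("D"):
            return "D"
    return composite_grade

print(apply_gating("A", {"L0": 25, "L1": 80}))  # gated: L0 below 30 -> "D"
print(apply_gating("B", {"L0": 70, "L1": 60}))  # not gated -> "B"
```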
Lite vs PRO
| Feature | Lite | PRO |
|---|---|---|
| Pages analyzed | Homepage + 5 discovered | Up to 50 |
| Layers covered | L0-L3 | L0-L6 |
| Check count | 33 | 72 |
| Per-layer scores | Yes | Yes |
| Composite grade (A-F) | Yes | Yes |
| Top 10 recommendations | Yes | Yes |
| All gaps ranked (ICE/PXL) | — | Yes |
| Code snippets in recommendations | — | Yes |
| JSON data export | — | Yes |
| Executive summary | — | Yes |
| Per-page breakdown | — | Yes |
| Site-type profiles | 1 (Default) | 8 profiles |
| Squad architecture (parallel agents) | 1 agent | 4 agents |
| L4: Conversion Readiness (26 checks) | — | Yes |
| L5: Authority (5 checks) | — | Yes |
| L6: Agent Readiness (8 checks) | — | Yes |
| Execution time | ~30 seconds | 2-3 minutes |
| Price | Free | $39 one-time |
Scoring
Per-layer: Each check scores Pass (2), Partial (1), or Fail (0). Weighted sum normalized to 0-100.
Composite: Weighted sum of layer scores using site-type profile weights.
```
RADAR Score = (L0 × w0) + (L1 × w1) + (L2 × w2) + (L3 × w3)
            + (L4 × w4) + (L5 × w5) + (L6 × w6)

weights sum to 100 and vary by site-type profile
```
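As a concrete illustration, the composite formula can be computed like this, using the default-profile weights from the 7-Layer Model table. This is a sketch of the published formula, not RADAR's source code.

```python
# Default-profile weights (percent); each layer score is on a 0-100 scale.
DEFAULT_WEIGHTS = {"L0": 15, "L1": 12, "L2": 15, "L3": 20,
                   "L4": 18, "L5": 10, "L6": 10}

def composite_score(layer_scores: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted sum of per-layer scores, normalized by the weight total."""
    total_weight = sum(weights.values())  # 100 for all shipped profiles
    return sum(layer_scores[layer] * w for layer, w in weights.items()) / total_weight

scores = {"L0": 90, "L1": 80, "L2": 70, "L3": 60, "L4": 50, "L5": 40, "L6": 30}
print(round(composite_score(scores), 1))  # 61.6, which falls in the D band
```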
Grade scale:
| Score | Grade | Label |
|---|---|---|
| 90-100 | A | Agent-Ready |
| 80-89 | B | Well Optimized |
| 65-79 | C | Needs Work |
| 50-64 | D | Significant Gaps |
| 0-49 | F | Critical Issues |
N/A handling: Checks inapplicable to your site size (e.g., breadcrumb navigation on a 3-page site) are excluded from the denominator rather than scored as failures.
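A minimal sketch of per-layer scoring with N/A exclusion, using the Pass/Partial/Fail point values above. The all-N/A fallback is an assumption; RADAR's actual handling of that edge case is not documented here.

```python
PASS, PARTIAL, FAIL, NA = 2, 1, 0, None

def layer_score(check_results: list) -> float:
    """Normalize check results to 0-100, excluding N/A checks from the denominator."""
    applicable = [r for r in check_results if r is not NA]
    if not applicable:
        return 0.0  # assumption: a layer with only N/A checks scores 0
    return 100 * sum(applicable) / (2 * len(applicable))

# 4 passes, 1 partial, 1 fail, 2 N/A -> 9 points out of a possible 12
print(layer_score([PASS, PASS, PASS, PASS, PARTIAL, FAIL, NA, NA]))  # 75.0
```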
Site-Type Profiles (PRO)
Different sites have different priorities. Each profile redistributes layer weights (every profile's weights still sum to 100); an individual layer's weight can shift by as much as 15 percentage points from the default.
| Profile | L0 | L1 | L2 | L3 | L4 | L5 | L6 | Rationale |
|---|---|---|---|---|---|---|---|---|
| Default | 15 | 12 | 15 | 20 | 18 | 10 | 10 | Balanced baseline |
| Ecommerce | 15 | 12 | 15 | 15 | 25 | 10 | 8 | Baymard: 70.19% cart abandonment |
| Content | 15 | 12 | 15 | 25 | 8 | 12 | 13 | Citation IS the product |
| SaaS | 18 | 12 | 15 | 18 | 20 | 8 | 9 | Performance + trial signup |
| Agency | 12 | 10 | 12 | 15 | 22 | 14 | 15 | Lead gen + credibility |
| Academic | 12 | 12 | 18 | 25 | 5 | 15 | 13 | Citation primary |
| Portfolio | 18 | 10 | 12 | 12 | 25 | 15 | 8 | Contact/hire CTA is the goal |
| Documentation | 15 | 15 | 18 | 15 | 5 | 7 | 25 | Docs are the primary AI agent use case |
Research Basis
RADAR's check weights and scoring thresholds are derived from published research, not heuristics.
| Source | Contribution | Confidence |
|---|---|---|
| Princeton GEO (KDD 2024) | 9 GEO optimization techniques; citation impact data; content freshness stats (+33% from statistics, 2.5x from tables, 71% vs 18% citation rate for fresh vs stale content) | High |
| AutoGEO / CMU (ICLR 2026) | RL-based optimization across 3 AI engines; universal content rules with 78-84% cross-engine consistency; ablation: rule compliance is the #1 training signal (18.5% performance impact); 4 new L3 checks | High |
| Baymard Institute | 771 ecommerce UX guidelines; 10 L4 checks; 70.19% average cart abandonment; 35.26% conversion lift from checkout UX redesign | High |
| Geoptie competitive analysis | 6-dimension scoring model; weight calibration reference | Medium |
| Kairo weight validation | Evidence-backed redistribution from original to validated weights; 6 biases identified with mitigations | High |
Known limitations of the research basis:
- AutoGEO tested on GPT-4, Gemini 1.5, Claude 3 — English-only corpora
- CLS measurement requires real browser rendering; RADAR's HTTP-only approximation is directional
- Content freshness relies on visible dates and schema, not actual server modification timestamps
- L6 checks audit standards (llms.txt, agent-card, MCP) that are nascent as of 2026 — low adoption is normal
Every RADAR report includes a Scoring Transparency section that states the research basis used, site-type profile applied, size classification, and which bias mitigations were active.
FAQ
Does RADAR query AI engines to test visibility? No. RADAR cannot programmatically query ChatGPT, Gemini, or Perplexity to check if they cite your site. The audit is proxy-based: it checks the structural, content, and technical factors that research shows correlate with AI citation and discoverability. This is the same approach used by every GEO tool — none have direct API access to AI engine citation indices.
Will RADAR penalize my React/Vue/Angular site? The JS Rendering Detection check (F6) uses spectrum scoring rather than binary pass/fail. Full SSR and SSG configurations score the same as traditional server-rendered sites. Client-only SPAs score lower because AI crawlers do not execute JavaScript — this is not a bias, it is the technical reality of how AI engines crawl the web. Prerender and hybrid configurations score in between.
My site is 3 pages. Will it score poorly just because it's small? No. RADAR classifies sites as micro (1-5 pages), small (6-20), medium (21-100), or large (100+). Checks that are inapplicable to your size — internal link density, sitemap cross-referencing, search functionality — are marked N/A and excluded from scoring rather than counted as failures.
How does RADAR compare to running Lighthouse? Lighthouse covers L0 Foundation (performance, CWV, accessibility). RADAR covers L0 plus three additional layers in Lite, and seven total in PRO. If you need a deep CWV analysis, Lighthouse gives you more granularity on L0. If you want to understand why AI engines aren't citing your content, RADAR covers the layers Lighthouse doesn't touch.
The L6 Agent Readiness score is 8/100. Is my site broken? No. L6 audits standards that are still being adopted across the industry in 2026. llms.txt has no formal RFC yet. agent-card.json is an emerging A2A protocol. MCP integration is relevant only to sites that expose tools. A low L6 score reflects where the industry is, not a deficiency in your site. RADAR frames L6 findings as an opportunity roadmap for 2026-2027 readiness, not as current-state failures.
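For reference, a minimal llms.txt along the lines of the current community proposal (an H1 title, a blockquote summary, then link sections) might look like the following. The site and URLs are invented for illustration.

```markdown
# Example Co

> Example Co builds widgets. This file points AI agents at our most useful pages.

## Docs

- [Getting started](https://example.com/docs/start.md): install and first run
- [API reference](https://example.com/docs/api.md): full endpoint list

## Optional

- [Blog](https://example.com/blog): release notes and announcements
```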
Can I run RADAR in CI/CD? Lite is single-command and can be scripted. PRO's JSON export is designed for pipeline integration. Formal CI/CD integration is on the v2 roadmap.
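A sketch of a threshold gate one could run in a pipeline against PRO's JSON export. The field names (`composite_score`, `grade`) are assumptions, since the export schema is not documented here; verify them against an actual export before relying on this.

```python
import json

def check_threshold(report_json: str, min_score: float = 65.0) -> bool:
    """Return True when the audit's composite score meets the CI threshold."""
    report = json.loads(report_json)
    score = report["composite_score"]  # assumed field name
    print(f"RADAR score: {score} (grade {report.get('grade', '?')})")
    return score >= min_score

# In CI: run the audit with --export json, save the output, then fail the
# build (exit nonzero) whenever check_threshold(...) returns False.
```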
Does RADAR work on non-English sites? Structural checks (HTML analysis, schema validation, heading hierarchy, JS rendering) are language-agnostic. Content-density checks (word count thresholds, entity density) use language-adjustable parameters via the --locale flag. AutoGEO's research basis is English-only, so content scoring results on non-English sites should be treated with lower confidence.
License
MIT
Built for the Claude Code ecosystem. Distributed via myClaude Marketplace.