
## Questions this page should answer
- Is AI-readiness improving or stalling over time?
- Which technical area is holding us back right now?
- What should we fix first this sprint?
## Before you analyze
- Keep the same date range you use in AI Search pages.
- Compare the newest run with at least one prior run.
- Open the newest run detail before creating tickets.
## What this page gives you
- Five top scores: Performance, Accessibility, Best practices, SEO, Content
- Audit history so you can compare runs over time
- A full report preview with crawlability, schema, content structure, NLP, and Lighthouse-based diagnostics
## How to read the list view
- Performance: speed and technical execution quality
- Accessibility: structural clarity for users and machines
- Best practices: implementation hygiene and safety checks
- SEO: search-engine technical health
- Content: clarity and structure of on-page content for model understanding

- Low Content with high SEO usually means classic SEO is fine, but model parsing quality is weak.
- Low Performance can reduce crawl reliability and increase processing friction.
- Flat scores for months usually mean no active technical improvement cycle.
## How these scores are calculated (simple)
Category score (Performance, Accessibility, Best practices, SEO, Content)
Category scores are weighted pass rates of checks in that category, shown on a 0-100 scale.
Higher score means fewer critical and major failures in that category.
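The weighted pass-rate idea can be sketched as follows. The severity names and weights below are illustrative assumptions; the product's actual formula and weighting may differ:

```python
# Sketch of a weighted pass-rate category score on a 0-100 scale.
# Severity weights are illustrative assumptions, not the product's real values.
SEVERITY_WEIGHT = {"critical": 3.0, "major": 2.0, "minor": 1.0}

def category_score(checks):
    """checks: list of (passed: bool, severity: str) tuples for one category."""
    total = sum(SEVERITY_WEIGHT[sev] for _, sev in checks)
    if total == 0:
        return 100.0  # no checks in this category: nothing failed
    earned = sum(SEVERITY_WEIGHT[sev] for passed, sev in checks if passed)
    return round(100.0 * earned / total, 1)

# One failed critical check drags the score down more than a failed minor one.
checks = [(True, "minor"), (True, "major"), (False, "critical"), (True, "minor")]
print(category_score(checks))  # 57.1
```

This is why two categories with the same number of failing checks can show very different scores: the severity of what fails matters more than the count.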
## How to use audit history
Use Audit history as your change log:
- Open the latest completed row.
- Compare against the previous row.
- Validate which fixes moved scores and which had no effect.
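The compare-and-validate step above amounts to diffing two runs. A minimal sketch (the category names and score values are made up for illustration):

```python
# Sketch: diff two audit runs to see which category scores actually moved.
# Score values here are placeholders, not real audit output.
def score_deltas(previous, latest):
    """Return {category: delta} for categories present in both runs."""
    return {cat: latest[cat] - previous[cat] for cat in latest if cat in previous}

previous = {"Performance": 62, "Accessibility": 88, "SEO": 91, "Content": 54}
latest   = {"Performance": 71, "Accessibility": 88, "SEO": 90, "Content": 60}

# Sort by magnitude so the biggest movers surface first.
for cat, delta in sorted(score_deltas(previous, latest).items(),
                         key=lambda kv: -abs(kv[1])):
    status = "moved" if delta else "no effect"
    print(f"{cat}: {delta:+d} ({status})")
```

A zero delta after a shipped fix is a signal in itself: the fix either targeted the wrong checks or has not been re-crawled yet.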
## LLM audit result preview: read it in this order
Start from the top and move block-by-block. This keeps the analysis focused and prevents random fixing.

### 1) LLM performance and crawlability

- LLM performance cards: Performance, Accessibility, Best practices, SEO
- LLM crawlability status checks: llms.txt status, robots.txt status, Sitemap status
- LLM Bots in robots.txt allow/deny table
- If robots.txt or sitemap is not valid, fix that first.
- If important bots are blocked, adjust rules before content improvements.
- If score cards are weak and crawlability is healthy, move to deeper content and diagnostics sections.
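The allow/deny check can be reproduced locally with Python's standard-library robots.txt parser. The bot names below are common AI crawler user agents chosen for illustration; treat the report's own "LLM Bots in robots.txt" table as the authoritative list:

```python
# Sketch: check whether key AI bots are allowed by a robots.txt file.
# Bot names are illustrative; the audit report's bot table is authoritative.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())  # parse in-memory instead of fetching

for bot in AI_BOTS:
    allowed = rp.can_fetch(bot, "https://example.com/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

In this example GPTBot is fully blocked while the other bots fall through to the `*` group, which only excludes `/private/` — exactly the kind of accidental deny the allow/deny table is there to surface.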
### 2) Entity trust signals and schema validation

- Entity trust signals score
- Schema validation checks, including: HowTo, Article, FAQPage, Organization, BreadcrumbList
- A recommendation list for missing or weak schema areas

- Add missing high-impact schema first (Organization, Article, BreadcrumbList, FAQPage where relevant).
- Keep schema aligned with actual page content.
- Use recommendation bullets as implementation tickets for dev/content teams.
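As a starting point for those tickets, an Organization block can be generated as JSON-LD. All field values below are placeholders, and the helper function is hypothetical; keep the real values aligned with what the page actually says:

```python
# Sketch: generate an Organization JSON-LD block for the page <head>.
# All values are placeholders -- keep them aligned with actual page content.
import json

def organization_schema(name, url, logo_url, same_as):
    """Hypothetical helper returning a schema.org Organization dict."""
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "logo": logo_url,
        "sameAs": same_as,
    }

schema = organization_schema(
    name="Example Co",
    url="https://example.com",
    logo_url="https://example.com/logo.png",
    same_as=["https://www.linkedin.com/company/example"],
)
print('<script type="application/ld+json">')
print(json.dumps(schema, indent=2))
print("</script>")
```

The same pattern extends to Article, BreadcrumbList, and FAQPage; validating the output against schema.org before shipping keeps the trust-signal score honest.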
### 3) Content structure and semantic coverage

- Content score, Readability, Entity keyword coverage, and Readability grade level
- Content recommendations
- NLP analysis terms and topic distribution
- Start of the Initial HTML preview
- Low readability with good keyword coverage means your content has topics, but clarity is weak.
- Weak entity coverage means key topics/entities are not explicit enough.
- NLP clusters show what your page is actually about from a machine perspective.
- Simplify language in key sections.
- Improve heading hierarchy (H2/H3) and paragraph structure.
- Ensure important entities and terms appear naturally in primary sections.
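To get an intuition for what the readability grade level measures, here is a rough Flesch-Kincaid sketch. The syllable counter is a crude vowel-group heuristic, so the numbers are directional only, and this is almost certainly not the exact formula the audit uses:

```python
# Sketch: approximate a readability grade level with Flesch-Kincaid.
# The syllable counter is a rough vowel-group heuristic; treat results
# as directional, not as the audit's exact computation.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    # Flesch-Kincaid grade level formula.
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

simple = "We check your site. We fix the top issues. Scores go up."
dense = ("Organizations systematically prioritize remediation of "
         "performance-related deficiencies before scaling content production.")
print(round(fk_grade(simple), 1))
print(round(fk_grade(dense), 1))
```

Short sentences with common words score low (readable); long single-sentence paragraphs packed with polysyllabic words score high, which is exactly the pattern "low readability with good keyword coverage" describes.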
### 4) HTML preview, performance tabs, and diagnostics

- Initial HTML preview for source-level inspection
- Score panel with tab views: All, FCP, LCP, TBT, CLS
- Opportunities with estimated savings
- Diagnostics with implementation details
- Start with the highest estimated savings items.
- Fix render-blocking CSS/JS issues first.
- Track if performance changes improve both UX and crawl efficiency.
- FCP (First Contentful Paint): time until first visible content appears.
- LCP (Largest Contentful Paint): time until the main content block appears.
- TBT (Total Blocking Time): how long scripts block user interaction.
- CLS (Cumulative Layout Shift): visual instability while the page loads.
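The four metrics above can be triaged against the "good" thresholds Google has published for Lighthouse/Web Vitals (FCP ≤ 1.8 s, LCP ≤ 2.5 s, TBT ≤ 200 ms, CLS ≤ 0.1). These cutoffs are accurate at time of writing but do change; verify against the current Lighthouse documentation:

```python
# Sketch: classify lab metrics against commonly cited "good" thresholds
# (FCP <= 1.8s, LCP <= 2.5s, TBT <= 200ms, CLS <= 0.1). Verify against
# current Lighthouse documentation -- these cutoffs are revised over time.
GOOD_THRESHOLDS = {"FCP": 1.8, "LCP": 2.5, "TBT": 0.2, "CLS": 0.1}

def triage(metrics):
    """metrics: {name: value}; times in seconds, CLS is unitless."""
    return {m: ("good" if v <= GOOD_THRESHOLDS[m] else "needs work")
            for m, v in metrics.items()}

# Example run: fast first paint, but a heavy main block and long script work.
run = {"FCP": 1.2, "LCP": 3.4, "TBT": 0.45, "CLS": 0.05}
for metric, verdict in triage(run).items():
    print(f"{metric}: {verdict}")
```

A run like this points straight at the LCP and TBT tabs: the main content block and script execution are the debt, not paint start or layout stability.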
### 5) Accessibility and best-practices findings

- Accessibility checks (for example Contrast, Names and labels)
- Best practices checks: Trust and Safety, General
- Resolve contrast and labeling issues that block readability and machine interpretation.
- Fix recurring best-practice warnings to reduce technical fragility.
- Re-run audits after changes to confirm warning reduction.
### 6) SEO, content, and crawling/indexing checks

- SEO score and related checks
- Content checks (title, description, and other core signals)
- Crawling and indexing checks (status/response-level validations)
- Confirm core SEO and metadata checks pass on important pages.
- Fix crawl/index warnings before scaling content output.
- Use this section to verify technical readiness after implementation.
## If you see this, do this next
| What you see in the report | What it usually means | What to do next |
|---|---|---|
| robots.txt or Sitemap is not valid | AI crawlers cannot access or trust your crawl map | Fix crawl directives and sitemap first |
| Entity trust score is low | Important trust schema is missing | Add Organization, Article, and BreadcrumbList schema first |
| Readability is low | Content is hard to parse and summarize | Simplify language and improve section structure |
| LCP/TBT problems in performance tabs | Rendering path is heavy | Remove blocking CSS/JS and optimize loading order |
| Accessibility warnings stay high | Structural quality debt remains | Fix labels, contrast, and semantic structure before scale |
| SEO/content checks fail | Core metadata/indexability issues remain | Resolve title/description/index checks before publishing more pages |
## Quick weekly checklist
- Review list-view score direction.
- Open latest audit details and verify crawlability first.
- Prioritize entity/schema and content-structure gaps.
- Fix top Lighthouse opportunities and accessibility warnings.
- Validate SEO/content/crawling checks before marking complete.
## What to fix first
| Pattern in LLM audit | What it usually means | Recommended action |
|---|---|---|
| Crawlability status fails (robots/sitemap) | AI engines cannot access content correctly | Fix access rules and sitemap integrity first |
| Entity trust score is low | Structured trust signals are missing | Add/fix Organization, Article, FAQ, Breadcrumb schema |
| Readability low, entity coverage weak | Content is hard to parse semantically | Simplify language and strengthen entity-rich sections |
| Performance opportunities remain high | Technical performance debt affects render quality | Implement highest-savings fixes first |
| SEO/content checks pass but visibility is weak | Off-page and prompt-level coverage gap | Pair fixes with AI Search prompt and citation work |
## Team routine
- Weekly: run list review + one detailed audit deep dive.
- Bi-weekly: validate whether shipped fixes changed detail checks.
- Monthly: report recurring technical blockers and closure rate.
## Keep in mind
- One score alone is not enough; read section-level diagnostics.
- Not every warning has equal impact. Prioritize crawlability and trust signals first.
- High classic SEO quality does not automatically mean strong AI discoverability.

