Is your site configured to be cited by AI engines?
We audit the seven structural signals AI search engines actually look for — llms.txt, Organization / LocalBusiness / FAQ / WebSite schema, SpeakableSpecification, meta description quality, and which AI crawlers your robots.txt allows. Drop a URL and you'll see exactly what ChatGPT, Claude, Perplexity, and Gemini can read.

AI engines now answer questions without sending traffic to Google.
When someone asks ChatGPT “best wedding venue near Nashville” or Perplexity “who does small business websites in Tennessee,” the AI answers directly. It cites a handful of businesses. If you aren't one of them, you lose the customer before Google even sees the query.
Being “cited by AI” is the new top-of-funnel. The signals that get you there:
- llms.txt — a structured business summary at /llms.txt. The audit above tells you whether one exists.
- Schema graph — Organization, LocalBusiness, FAQPage, WebSite JSON-LD identifying who you are and what you do.
- AI crawler permissions — your robots.txt must let GPTBot, ClaudeBot, PerplexityBot, and Google-Extended through. Many sites accidentally block them.
- Citation-worthy content — concrete facts, FAQs, comparison data. Vague marketing copy doesn't get cited.
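Of the signals above, llms.txt is the least familiar. There is no formal standard yet; the emerging convention (proposed at llmstxt.org) is a short markdown summary at the site root. A hypothetical sketch, with every business detail invented for illustration:

```markdown
# Acme Web Studio

> Acme Web Studio builds small-business websites in Nashville, TN.
> Flat-rate pricing, local SEO included, typical turnaround two weeks.

## Services
- [Pricing](https://example.com/pricing): package tiers and what each includes
- [FAQ](https://example.com/faq): turnaround, site ownership, support terms
```

The structure matters more than the length: a title, a one-paragraph summary, and sections of annotated links give an AI crawler concrete facts to cite.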
What the audit checks, item by item:
- ↪ /llms.txt — is there a clean, AI-readable summary at the canonical location?
- ↪ Organization / LocalBusiness JSON-LD — does your schema identify the business entity?
- ↪ FAQPage JSON-LD — preferred by answer engines (Perplexity, Bing Copilot)
- ↪ WebSite JSON-LD — canonical-entity signal for Google + AI
- ↪ SpeakableSpecification — voice/AI engines know which copy to read aloud
- ↪ Meta description — present and within the commonly cited 150–160 character limit before Google truncates it
- ↪ robots.txt AI crawler permissions — GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, and 3 more
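The schema checks in the middle of that list all look for JSON-LD in the page head. A minimal sketch of a LocalBusiness entity, assuming a hypothetical business (every name, URL, and address here is a placeholder):

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Acme Web Studio",
  "url": "https://example.com",
  "description": "Small-business website design in Nashville, TN.",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Nashville",
    "addressRegion": "TN"
  }
}
```

A fuller graph would add Organization, WebSite, and FAQPage nodes linked by `@id`, but even this fragment tells an AI crawler who the entity is and where it operates.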
The audit is deterministic — same inputs always produce the same score. No third-party API calls, no rate limits, no spend.
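The robots.txt check above can be sketched deterministically with Python's standard library alone, which is why it needs no third-party API calls. The crawler names are real bot user-agents; the URL and sample policy are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Subset of the AI crawler user-agents the audit looks for
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def audit_robots(robots_txt: str, site_url: str = "https://example.com/") -> dict:
    """Return {crawler_name: allowed} for the given robots.txt content."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, site_url) for bot in AI_CRAWLERS}

# Example: a robots.txt that singles out GPTBot but allows everyone else
sample = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
result = audit_robots(sample)
# GPTBot is blocked by its named group; the others fall through to the wildcard
```

Because the parse is pure string processing, the same robots.txt always yields the same verdict, matching the no-rate-limit, no-spend claim.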