How to Check If AI Search Engines Mention Your Brand
A step-by-step guide to auditing your brand's visibility across ChatGPT, Perplexity, and Gemini. Learn what to search, what to look for, and how to interpret the results.
You might already be invisible
Here's something most businesses haven't done: open ChatGPT, type a question their customers would ask, and see if their brand shows up in the answer.
Try it right now. Go to ChatGPT, Perplexity, or Gemini and ask something like "best [your category] for [your audience]." If your brand isn't in the response, that's the experience your potential customers are having every day.
The good news is that checking your AI visibility is straightforward. The bad news is that doing it thoroughly takes more effort than most people expect.
Step 1: Build your query list
Start with the questions your customers actually ask before they buy. These fall into a few patterns:
Category searches are the broadest. "Best CRM for small businesses." "Top project management tools." "Affordable accounting software." These are the queries where AI engines recommend a shortlist of brands, and yours needs to be on it.
Comparison searches pit you directly against competitors. "Notion vs Asana." "HubSpot vs Salesforce for startups." AI engines handle these by summarizing each product's strengths and weaknesses.
Problem searches describe a pain point without naming a category. "How do I track employee time across multiple projects." "My team keeps missing deadlines." If the AI engine suggests a tool in its response, that's an answer engine optimization (AEO) opportunity.
Reputation searches are about you specifically. "Is [your brand] any good." "Reviews of [your product]." What the AI says here is your de facto brand reputation for everyone who asks.
Write down 5-10 queries across these categories. These are your audit keywords.
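The query patterns above can be sketched as simple templates. This is an illustrative snippet, not a required tool; "Acme CRM", "BigRival", and the audiences are placeholder values you'd swap for your own.

```python
# Build an audit query list from the four patterns above.
# All names below are placeholders for your own brand and market.
brand = "Acme CRM"        # placeholder: your brand
category = "CRM"          # placeholder: your product category
competitor = "BigRival"   # placeholder: a competitor you're compared to
audiences = ["small businesses", "startups"]

queries = []
for audience in audiences:
    queries.append(f"best {category} for {audience}")   # category searches
queries.append(f"{brand} vs {competitor}")              # comparison search
queries.append("how do I keep customer follow-ups from slipping")  # problem search
queries.append(f"is {brand} any good")                  # reputation search

for q in queries:
    print(q)
```

Five to ten of these, spread across all four patterns, is enough for a first audit.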
Step 2: Search each engine separately
AI engines don't share an index. ChatGPT, Perplexity, and Gemini each pull from different sources and generate different answers. A brand can be prominently recommended by Perplexity and completely absent from ChatGPT.
For each query on your list, run it through all three engines and note:
- Mentioned? Does the engine name your brand anywhere in the response?
- Position? Are you the first brand mentioned, or buried at the end of a list?
- Cited? Does it link to your website as a source? (Perplexity does this most often; ChatGPT rarely does.)
- Framing? Is the mention positive, neutral, or hedged? There's a big difference between "a popular choice" and "some users report issues with..."
Don't rush this. Actually read the full response each engine gives. The context around your mention matters as much as the mention itself.
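If you want to keep your notes structured rather than free-form, one possible shape is a small record per query-engine pair. This is a sketch under the assumptions above, with made-up example values:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditResult:
    """One engine's response to one audit query."""
    query: str
    engine: str             # "chatgpt", "perplexity", or "gemini"
    mentioned: bool         # does the response name your brand?
    position: Optional[int] # 1 = first brand named; None if absent
    cited: bool             # does it link to your site as a source?
    framing: str            # "positive", "neutral", or "hedged"

# Example entry (illustrative values, not real engine output):
r = AuditResult(
    query="best CRM for startups",
    engine="perplexity",
    mentioned=True,
    position=2,
    cited=False,
    framing="neutral",
)
print(r)
```

A spreadsheet with the same six columns works just as well; the point is recording all six dimensions, not the tooling.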
Step 3: Run the same query more than once
This is where manual checking gets tedious, but it matters. AI answers aren't deterministic. Ask the same question twice and you might get a different set of brand recommendations.
Run each query at least 3 times per engine. You're looking for consistency:
- Always mentioned: Your brand shows up every time. Strong position.
- Sometimes mentioned: You appear in some responses but not others. Inconsistent visibility.
- Never mentioned: You don't show up at all. This is where the biggest opportunity (or problem) lives.
A brand that shows up in 2 out of 3 tries has meaningfully different visibility than one that shows up 3 out of 3 times. Single-query checks miss this entirely.
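The three consistency buckets reduce to a simple tally over your trial notes. A minimal sketch, using made-up trial outcomes:

```python
def consistency(trials):
    """Classify visibility from repeated trials of one query on one engine.

    trials: list of bools, True if the brand was mentioned in that trial.
    """
    hits = sum(trials)
    if hits == len(trials):
        return "always mentioned"
    if hits == 0:
        return "never mentioned"
    return "sometimes mentioned"

print(consistency([True, True, True]))    # -> always mentioned
print(consistency([True, False, True]))   # -> sometimes mentioned
print(consistency([False, False, False])) # -> never mentioned
```

The middle bucket is the one single-query checks miss: a brand mentioned in two of three trials looks identical to an "always mentioned" brand if you only ask once.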
Step 4: Record your competitors too
While you're running these queries, note which competitors get mentioned. This gives you two things:
Share of voice. If an AI engine mentions 4 brands in response to "best CRM for startups" and you're one of them, you have roughly 25% share of voice for that query. If your competitor shows up in every response and you show up in half, the gap is clear.
Competitor patterns. You'll start to notice which competitors dominate which engines. One competitor might own the ChatGPT recommendations while another does better on Perplexity. These patterns tell you where the gaps are.
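Both numbers fall out of the same notes. Here's a sketch of a per-query mention rate computed across repeated trials; the response data is illustrative, not real engine output, and the brand names are just examples:

```python
# Brands named in three trials of "best CRM for startups" (made-up data).
responses = [
    ["HubSpot", "Acme CRM", "BigRival", "Zoho"],  # trial 1
    ["HubSpot", "BigRival", "Pipedrive"],         # trial 2
    ["Acme CRM", "HubSpot", "BigRival"],          # trial 3
]

def mention_rate(brand, responses):
    """Fraction of responses that name the brand at all."""
    return sum(brand in r for r in responses) / len(responses)

for brand in ["Acme CRM", "HubSpot", "BigRival"]:
    print(f"{brand}: {mention_rate(brand, responses):.0%}")
```

Run the same function for each competitor and the gaps become concrete: a competitor at 100% versus your 67% on a key query is a clearer signal than a single mention ever is.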
Step 5: Check what they say about you directly
Beyond category and comparison queries, search for your brand by name. Ask each engine:
- "What is [your brand]?"
- "Tell me about [your product]"
- "Is [your brand] good for [use case]?"
The answers reveal how AI engines understand your brand. Common issues:
- Outdated information. The AI describes a product version from two years ago.
- Wrong category. It places you in the wrong product category or describes what you do inaccurately.
- Missing features. It knows about your product but omits your key differentiator.
- Competitor framing. "Similar to [competitor] but..." when you'd rather be described on your own terms.
These are all fixable, but you have to know about them first.
What to do with the results
After your audit, you'll have a clear picture of where you stand. The most common scenarios:
Strong visibility, one engine. You're well-represented on Perplexity but absent from ChatGPT, or vice versa. This usually means one engine has indexed your content well while others haven't. Focus your optimization on the weaker engines.
Mentioned but not cited. AI engines know about your brand but don't link to your site. This means they're pulling information from third-party sources (reviews, articles, forums) rather than your own content. Improve your on-site content so engines have a direct source to cite.
Absent from category queries. You don't show up when people ask about your category. This is the most common finding and the most urgent to fix. It usually means you're not represented in the comparison articles, review roundups, and authoritative sources that AI engines rely on.
Negative or hedged framing. You're mentioned, but the AI qualifies it with warnings or criticism. Check what sources it might be drawing from. Outdated negative reviews or a single critical article can shape the AI's framing of your brand.
Why manual audits are a starting point, not a strategy
Running through these steps once gives you a useful baseline. But there are real limitations to doing this by hand:
It doesn't scale. Checking 10 queries across 3 engines with 3 trials each is 90 individual searches. For a thorough audit, you'd want more queries and more trials.
It's a snapshot. AI answers change as models update and new content gets indexed. A one-time check tells you where you are today, not whether your visibility is improving or declining.
It's hard to be systematic. When you're manually copying and pasting queries, it's easy to miss patterns or forget to check a specific engine.
Get a structured baseline with QuickAEO
A QuickAEO report automates the process described above. You pick your keywords, and QuickAEO runs them across ChatGPT, Perplexity, and Gemini with multiple independent trials per engine.
You get back your mention rate, citation tracking, engine-by-engine breakdown, and the full AI responses so you can see exactly what each engine says about you in context.
It costs $5 per keyword, takes a few minutes to set up, and requires no account. If you've done the manual audit above and want a more complete picture, it's the fastest way to get one.
Whether you check manually or use a tool, the important thing is to check. Most brands have never looked at how AI engines talk about them. That's the gap. Knowing where you stand is the first step to improving your position.