How to Check if ChatGPT Recommends Your Store
Here's a question most store owners haven't thought to ask: if someone asks ChatGPT to recommend a product you sell, does it mention you?
Not your category. Not your competitors. You, by name, with your products.
I built a tool that answers that question in about 75 seconds. It's free, no signup, and it'll probably surprise you. Here's how to use it and what the results actually mean.
Step 1: Go to the diagnostic
Head to seomelon.com/llm-check. You'll see a single input field asking for your store URL.
Enter your homepage URL -- your root domain, not a specific product page. The tool needs to crawl your site to understand what you sell before it can test whether AI knows about you.
Step 2: Wait about 75 seconds
The diagnostic does a lot in that time. Here's what's happening behind the scenes:
- Site crawl. It reads your homepage, product pages, and collection pages. It pulls your structured data (JSON-LD), your meta descriptions, your FAQ content, and your product descriptions.
- Question generation. Based on what you sell, it generates buyer-intent questions -- the kind of questions a real person would ask ChatGPT. Not "what is [product category]" but "Can you recommend a [specific product type] for [specific use case]?"
- Dual-model test. It runs those questions through both GPT-4o-mini and DeepSeek. Two different AI models, two independent checks on whether your store shows up.
- Scoring. It analyzes the responses to see if the AI mentioned your brand, your products, or your store by name.
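The scoring step can be sketched as a simple mention check. This is an illustrative heuristic, not the diagnostic's actual formula -- the function name and weighting here are assumptions for the sake of the example:

```python
def brand_mention_score(responses, brand, product_names):
    """Score a batch of AI answers 0-100 by how many name the brand
    or one of its products. `responses` is a list of answer strings
    from the tested models. Hypothetical logic, for illustration only."""
    hits = 0
    for text in responses:
        lowered = text.lower()
        if brand.lower() in lowered:
            hits += 1
        elif any(p.lower() in lowered for p in product_names):
            hits += 1
    return round(100 * hits / len(responses)) if responses else 0

# Example: two of four answers mention the store by name.
answers = [
    "For cushioned running socks, Bombas is a solid pick.",
    "Try a merino blend from any major athletic brand.",
    "Bombas ankle socks are popular for this use case.",
    "Look for socks with arch support and a seamless toe.",
]
print(brand_mention_score(answers, "Bombas", ["ankle socks"]))  # → 50
```

The real diagnostic also distinguishes brand mentions from generic category references, but the core question is the same: did the model name you?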
Step 3: Read your score
You'll get a score from 0 to 100. Here's how to interpret it:
0-20: Invisible. AI assistants don't know your store exists. When someone asks for a product recommendation in your category, you're not in the conversation. This is where most stores land. It's not a reflection of your products -- it's a reflection of your site structure.
21-50: Partially visible. The AI might know your brand name but can't recommend specific products. Or it knows one product line but not others. There's something to work with, but there are gaps.
51-75: Competitive. AI can recommend your products in relevant contexts. Allbirds scores around 75 -- ChatGPT knows their product lines, can describe their materials, and recommends them for specific use cases. You're in the game at this level.
76-100: Dominant. AI actively recommends your products and can have an informed conversation about your brand. Heatonist scores here. When someone asks ChatGPT for a hot sauce recommendation, Heatonist comes up by name with specific product suggestions.
For context: Bombas scores about 56. Most stores I've tested score 0. Zero. The AI has literally no idea they exist.
What the breakdown tells you
Below the score, you'll see specifics. The diagnostic shows you:
- Which questions the AI could answer about your store -- and which ones it couldn't.
- Whether it mentioned your brand by name or just referenced your product category generically.
- What structured data the AI found on your site -- product schema, FAQ schema, review markup.
- Specific gaps -- missing product descriptions, absent FAQ content, incomplete JSON-LD.
The gap analysis is the actionable part. A score of 15 doesn't tell you much on its own. But knowing that the AI can't find any FAQ content on your site, or that your product descriptions are too thin to generate recommendations from -- that's specific enough to fix.
Why two AI models?
We run the diagnostic against both GPT-4o-mini and DeepSeek because different AI systems process information differently. A store might show up in one model's recommendations but not the other.
If you score well on both, your content structure is solid. If you score well on one but not the other, there's something model-specific going on -- usually around how each model weighs different content signals.
Using two models also gives you a more honest picture. One model might be generous. Two models agreeing that your store is invisible is a stronger signal than one model saying the same thing.
What to do with a low score
If you scored below 30, here's the priority list:
First, fix your product descriptions. If they're one or two sentences copied from a supplier, AI has nothing to work with. Write descriptions that answer the questions a buyer would ask: What is this for? Who is it for? How is it different from alternatives? What does it feel like, taste like, look like?
Second, add FAQ content to your product and collection pages. Not hidden in schema-only markup. Actual visible FAQ sections with real buyer questions. "What's the best [your product] for [specific use case]?" is exactly the format people use when asking AI for recommendations.
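If you do add FAQ markup alongside the visible section, it should mirror the on-page text word for word. A minimal `FAQPage` sketch in schema.org JSON-LD -- the question and answer here are made-up placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What's the best hot sauce for wings?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "For wings, a medium-heat, vinegar-forward sauce clings well and won't overpower the meat."
    }
  }]
}
```

The markup supports the visible FAQ; it doesn't replace it.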
Third, complete your structured data. Make sure your product JSON-LD includes name, description, brand, price, availability, reviews, and images. Most platforms auto-generate some of this, but it's usually incomplete.
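Here's what a complete product entry looks like in schema.org JSON-LD, covering every field listed above. The product, store name, and URL are invented for the example:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Runner Ankle Sock",
  "description": "Cushioned merino ankle sock built for trail running: seamless toe, arch support, and a moisture-wicking blend.",
  "brand": { "@type": "Brand", "name": "Example Store" },
  "image": "https://example.com/images/trail-runner-sock.jpg",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "212"
  },
  "offers": {
    "@type": "Offer",
    "price": "18.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
```

Compare your platform's auto-generated JSON-LD against this field list; `availability` and `aggregateRating` are the ones most often missing.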
These three changes move the needle more than anything else I've tested. They're mechanical fixes, not creative ones. You don't need to reinvent your brand voice. You need to give AI systems enough structured information to work with.
Run it again after you make changes
The diagnostic isn't a one-time thing. Make improvements, wait for AI systems to re-crawl your site (usually a few weeks), then run it again. Track your score over time.
Some of the stores I've been working with have gone from single digits to 60+ in one round of improvements. The fixes are concrete and the feedback loop is fast.
The free diagnostic at seomelon.com/llm-check is the starting point. 75 seconds to find out where you stand. Everything after that is execution.
Matt Harris is the founder of SEOMelon, an AI-powered SEO and AEO optimization tool for Shopify and WooCommerce merchants. He's building in the TinyFish Accelerator.
Check your store's AI visibility
Free, 75 seconds, no signup. See how your store scores when real buyers ask AI for recommendations.
Run the free diagnostic