OK so something has been on my mind the past couple of months. We know there’s a lot of thought leadership suggesting that LLMs like ChatGPT love structured content. Articles that use bullet points, numbered lists, and simple heading structures tend to get cited by LLMs more often. However, I haven’t seen a lot of data around it, so I wanted to gather some of our own.
The one thing I’ve been REALLY curious about is the use of tables. It’s extremely common that I’ll see something get cited on ChatGPT and, when looking at the page, find a nice structured table somewhere in the content. For example, here’s one from a Salesforce article that shows up for the prompt “What is the best lead capture software?”

As a trend, this has come up over and over again in my research. It really seems like having tables on the page makes a big difference, but I didn’t have any data to support that. So I wanted to figure out a way to actually analyze whether tables are more relevant for ChatGPT results.
Methodology
So basically what I wanted to analyze is how common tables are in standard Google results compared to ChatGPT citations. That would give us a benchmark for how often tables appear on a regular page across the Web that we could then compare against ChatGPT. I also wanted to use similar industries to ensure the datasets were as close to each other as humanly possible. Since Nectiv specializes in SEO/GEO for SaaS and technology companies, I thought that would be a good industry to use.
For Google results, I was able to extract the top queries for Capterra, the well-known marketplace for SaaS. I then used Ahrefs Keyword Explorer to extract all the pages associated with those queries. In total, I was able to get around 25K pages from Google search. Huge shout out to Patrick Stox for helping me out here!

Next, I needed to get the results from ChatGPT citations. Jason, my co-founder at Nectiv, has built us our own AI Tracker. To better understand AI search visibility, we track Capterra’s query set as prompts. It gives us a good industry view of how AI search visibility is trending at any given time for B2B/SaaS prompts.
As part of the tool, we also have a “Top Cited Pages” section. This extracts the top cited pages from each AI model at scale. We were able to export this data to give us insight into the pages that ChatGPT is referencing the most.

Next, we needed to identify what type of content was in both datasets. Fortunately, Screaming Frog makes this really easy to do. We created a custom extraction that pulled down any table it found in the content. We were then able to crawl both datasets to see whether Screaming Frog found a <table> element or not.
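If you want to replicate this check without Screaming Frog, here’s a minimal sketch of the same idea using only the Python standard library. The function name and HTML samples are my own; this is a simplified stand-in for what the custom extraction does, not Screaming Frog’s actual implementation.

```python
from html.parser import HTMLParser


class TableDetector(HTMLParser):
    """Scans an HTML document and flags whether any <table> element appears."""

    def __init__(self):
        super().__init__()
        self.has_table = False

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag names, so a plain comparison is enough.
        if tag == "table":
            self.has_table = True


def page_has_table(html: str) -> bool:
    """Return True if the HTML source contains at least one <table> element."""
    detector = TableDetector()
    detector.feed(html)
    return detector.has_table


# Example: one page with a comparison table, one without.
with_table = "<html><body><table><tr><td>Tool</td></tr></table></body></html>"
without_table = "<html><body><p>Just prose, no tables.</p></body></html>"

print(page_has_table(with_table))     # True
print(page_has_table(without_table))  # False
```

In practice you’d feed this the raw HTML of each crawled URL and count the share of pages where it returns True, which is essentially the percentage we compared across the two datasets.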

The Results
We then analyzed and compared the Google search and ChatGPT citation datasets. Looking at the Google search data, table elements were pretty rare. In fact, of the 25K URLs we crawled, only 13% had a table element included in the content.
Conversely, the ChatGPT data told a very different story. It wasn’t a majority, but 30% of the ChatGPT citations included a table in the content.

Comparing the percentages side by side was extremely interesting. Based on this data, a ChatGPT citation is 2.3x more likely to include a table element than a page in Google search.
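For transparency, the 2.3x figure is just the ratio of the two percentages. A quick sketch of the arithmetic (variable names are my own):

```python
# Share of pages in each dataset that contained a <table> element.
google_table_rate = 0.13   # 13% of ~25K Google search pages
chatgpt_table_rate = 0.30  # 30% of ChatGPT-cited pages

# Relative likelihood that a ChatGPT citation contains a table
# compared to a regular Google search result.
lift = chatgpt_table_rate / google_table_rate

print(f"{lift:.1f}x")  # 2.3x
```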

Examples Of Effective Tables
So looking at the data, we can see several examples of sites using tables effectively. For instance, the site SoftwareConnect is one of the most cited domains in the ChatGPT dataset with 250 recorded citations over the past 90 days.

SoftwareConnect.com uses tables at the top of all of their roundup articles that segment each product recommendation by name, features and starting price.

When looking at the ChatGPT output, it’s using examples from these pages:

Here’s another, even more spammy example, but one that’s quite telling. The site Cotocus.com is one of the top cited domains with 443 citations in the past 90 days. It’s literally beating out authoritative sources such as Forbes, SoftwareAdvice and EVEN CAPTERRA in the software space.

If you go to one of their URLs, they are the most basic HTML pages you can find. It’s basically a text document with headings, bulleted lists and, you guessed it…TABLES.

In fact, I’m guessing that this content is literally just copied and pasted directly from ChatGPT’s own output. So literally AI-written content influencing AI results. But that doesn’t stop them from being a source. In fact, Ahrefs has them getting cited for 300+ ChatGPT prompts, which is quite a few considering the minimal amount of organic traffic they get.

Some other examples of sites with strong structured tables include:
1. ClickUp
2. Nextiva
3. Peerspot
4. TechRadar
5. Zapier
Takeaways For SEOs
So this leads to some interesting insights. ChatGPT definitely seems much more likely to reward content that’s included in some type of table format. If LLM crawlers are a lot more basic than Google’s search crawlers, they might need extra support determining content structure and might be more likely to “chunk” something from an existing structured format. So reformatting your content into table structures, or adding them to your existing pages, is certainly a worthwhile test for most companies. This is doubly true if you believe LLMs might already be having issues extracting your content.
Of course, correlation isn’t causation here. Tables are pretty dense from a contextual standpoint, so that could very well be what LLMs are gravitating towards. They also appear more often in comparison content, so it could be a natural bias toward a format that’s more likely to be cited.
However, I think table structures are certainly something for marketers to keep exploring as a way to improve visibility in ChatGPT and other LLMs.
