AI vs Human Content in Legal SEO: What 1.89 Million Words Revealed

  • Study analyzed 2,435 law firm rankings across 24 U.S. cities.
  • AI content showed no significant link to higher Google rankings.
  • Readability and editorial quality mattered more than AI authorship.

"Content is king" has been the rallying cry of digital marketing for the better part of two decades. It's in every agency pitch deck, every conference keynote, every SEO blog post that needs a punchy opening line. At this point the phrase has been repeated so many times it's almost lost its meaning. But the underlying idea, that the quality of what you publish determines your visibility online, is still sound.

What's changed is the question underneath it. In 2026, nobody is debating whether content matters. The debate is whether it matters who, or what, writes it. We went and measured that.

We tested the AI content question with actual data

For the past two years, the loudest conversation in legal marketing has been whether AI-generated content helps or hurts your Google rankings. Agencies have planted their flags on both sides. Some say AI content is a shortcut to the top. Others say Google will penalize you for it. Neither camp has been showing their math.

So we did the math ourselves. Through our research platform, we pulled the top 5 organic Google results for 28 legal keywords across 24 major U.S. cities. We ran every ranking page through Winston AI to score what percentage of the content appeared to be machine-generated. We also captured word count and readability scores for each URL. The final dataset covered 2,435 law firm ranking appearances, 1,618 unique URLs, and 1,021 unique domains across 8 practice areas including personal injury, criminal defense, family law, estate planning, and employment law.
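If you're curious what that capture step looks like in practice, here's a minimal Python sketch. The AI-detection call is a placeholder stub, not Winston AI's actual API, and the readability metric shown is Flesch Reading Ease via the textstat package, used purely as an illustration:

```python
# Per-URL feature capture, sketched in Python. The detection call is a
# placeholder stub, not Winston AI's real API; readability is illustrated
# with Flesch Reading Ease from the textstat package.
import textstat  # pip install textstat

def detect_ai_percentage(page_text: str) -> float:
    """Stub for the AI-detection step; wire in your detector of choice."""
    raise NotImplementedError

def extract_features(url: str, page_text: str) -> dict:
    """Record the three signals captured for every ranking URL."""
    return {
        "url": url,
        "word_count": len(page_text.split()),
        "readability": textstat.flesch_reading_ease(page_text),
        "ai_pct": detect_ai_percentage(page_text),
    }
```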

All told, we processed 1,889,828 words. Of those, 615,934 (32.6%) came back flagged as AI-generated.

If you want the full breakdown, you can read the complete study, How Is AI Content Ranking in Google?

The headline finding: Google doesn't care who wrote it

The Spearman correlation between AI content percentage and organic ranking position came back at r = 0.065, with a p-value of 0.138. For anyone who hasn't taken a stats class in a while, a p-value needs to fall below 0.05 before a relationship counts as statistically significant. Ours wasn't close.
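If you want to replicate that check yourself, the test itself is two lines of SciPy. The file and column names below are illustrative, not our actual dataset schema:

```python
import pandas as pd
from scipy.stats import spearmanr

# One row per ranking appearance; "ai_pct" is 0-100, "position" is 1-5.
# File and column names are hypothetical.
df = pd.read_csv("rankings.csv")

r, p = spearmanr(df["ai_pct"], df["position"])
print(f"Spearman r = {r:.3f}, p = {p:.3f}")  # the study found r = 0.065, p = 0.138
if p >= 0.05:
    print("No statistically significant link between AI share and position")
```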

In plain language: there is no statistically significant connection between how much AI content is on a law firm's page and where that page ranks in Google. The algorithm does not reward AI content. It does not penalize it either. It just doesn't factor into the equation.

That's the finding. Not dramatic. Not scary. Just a flat nothing.

But the distribution tells a more interesting story

AI content shows up on nearly every ranking law firm page at this point, but the way it distributes is worth paying attention to. Just over half of ranking pages (54.7%) have 5% or less AI-detected content. Most of the pages at the top of Google are still predominantly human-written.

On the other end, 18.2% of Position 1 results for personal injury keywords came from pages with 70% or more AI content. That sounds like a contradiction until you look at which firms those pages belong to. They're a handful of large firms with enormous domain authority and deep backlink profiles behind them. The AI content isn't doing the ranking work. The domain strength is. Those firms could probably rank a page of Lorem Ipsum text for a few weeks before Google caught on.

Geography matters too. Columbus, Ohio, came in as the biggest outlier, with a mean AI content score of 59% and a median of 76%, most likely because national firms dominate those rankings. On the low end, San Antonio (16.9%), Jacksonville (18.3%), and Houston (19.3%) had the least AI content, reflecting markets where regional firms still rely on traditionally written pages.
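The city breakdown is a one-line aggregation over the same kind of table. Again, file and column names are hypothetical, not our production pipeline:

```python
import pandas as pd

# Hypothetical schema; columns include "city" and "ai_pct" per ranking page.
df = pd.read_csv("rankings.csv")

by_city = (
    df.groupby("city")["ai_pct"]
      .agg(["mean", "median"])
      .sort_values("mean", ascending=False)
)
print(by_city)  # Columbus topped our data at mean 59%, median 76%
```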

The real problem with AI content isn't that Google penalizes it

Here's where it gets practical. We found a correlation of r = -0.233 (p < 0.0001) between AI content percentage and readability score. The effect size is modest, but the statistical signal is unambiguous: the more AI content on a page, the harder that page is to read.

And readability does connect to performance. Word count came back with a correlation of r = -0.089 (p = 0.042) to ranking position, which is modest but still a stronger statistical tie to rankings than AI percentage itself. Pages that are long and poorly edited tend to perform worse.
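To see how those three relationships stack up side by side, you can run the same rank correlation over each pair of columns. As before, the names are illustrative, and Spearman is shown throughout for consistency (the study doesn't specify the method behind every figure):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical columns as above, plus "word_count" and "readability".
df = pd.read_csv("rankings.csv")

pairs = {
    "AI % vs. position":       ("ai_pct", "position"),
    "Word count vs. position": ("word_count", "position"),
    "AI % vs. readability":    ("ai_pct", "readability"),
}
for label, (a, b) in pairs.items():
    r, p = spearmanr(df[a], df[b])
    print(f"{label}: r = {r:+.3f}, p = {p:.4f}")
```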

So the problem isn't that Google is detecting and punishing AI content. The problem is that AI tools tend to produce text that's bloated, repetitive, and harder to read than what a competent human writer produces, and those readability issues have their own consequences. Firms that are generating AI content at volume without putting it through a serious editorial pass may be undermining their own performance through a quality problem they've never bothered to measure.

What "Content is King" actually means now

The original idea behind "content is king" wasn't wrong. Content was going to be the primary commodity of the internet, and it was. But in 2026, the question is no longer whether content matters. Obviously it does. The question is what kind of content matters, and the answer has less to do with who wrote it than with how well it was written.

Google's E-E-A-T framework (experience, expertise, authoritativeness, trustworthiness) has been the quality benchmark for a few years now. AI answer engines like ChatGPT and Perplexity enforce the same criteria when they decide which sources to cite in their responses. Neither system is asking "Did a human write this?" They're asking "Does this read like it was written by someone who actually knows the subject? Is it specific? Is it clear? Can I verify the claims?"

A human who writes garbage content that's thin, generic, and full of filler will get outranked by an AI-assisted page that's been carefully edited, fact-checked, and enriched with real expertise. And an AI page that reads like it was generated in bulk and never touched by a human will get outranked by a hand-written page that actually answers the question.

The tool doesn't determine the quality. The editorial process does.

What this means for how you spend your money

If you're running a business and someone tells you that AI content is a shortcut to the top of Google, the data says they're wrong. If someone tells you AI content will get you penalized, the data says they're also wrong. If someone tells you the answer is to just keep doing things the way you were doing them in 2019, they're probably the most wrong of the three.

AI tools are part of the content production process now. That's not going away. What gets you from Position 5 to Position 1 in competitive markets has not changed: domain authority, content that actually says something, pages people can read without a headache, strong E-E-A-T signals, and solid local SEO. AI content does not appear anywhere on that list. But neither does "human content." What appears on the list is quality, and quality is an editorial decision, not a production method.

Content is still king. The king just doesn't care about your ghostwriter's species.

This article was contributed by Jason Bland. He is Co-Founder and CEO of Custom Legal Marketing, a law firm marketing agency and the creator of the CLM Sequoia AI marketing platform.
