How to Spot AI Writing, According to Wikipedia

Distilling the wisdom of WikiProject AI Cleanup into a breezy listicle of the strongest indicators that what you're reading was ghostwritten by an LLM


Deep in the recesses of Wikipedia is an advice page called "Signs of AI writing" which exhaustively catalogues every telltale indication that Wikipedia editors have observed across thousands of AI-generated submissions. And I do mean exhaustive: at nearly 15,000 words, it’s also way too long to recommend as light reading.

But it certainly makes for a useful field guide. Though created specifically for Wikipedia editors, most of it applies anywhere you encounter text on the internet: in marketing copy, emails from colleagues, even—I hope you're sitting down—LinkedIn posts. Once you know what to look for, you start seeing it everywhere.

So, I went through and pulled out the patterns I think are most recognizable in a general business setting. "No fluff", as a wise LLM once told me. Here are seven that are pretty much dead giveaways, not only that an AI wrote this, but that the writer didn't bother to review their work:

  1. Negative parallelisms: "It’s not X, it’s Y." As in: "It's not a product launch. It's a paradigm shift." AI loves this rhetorical construction, and uses it constantly. Human writers use it too, because it can be helpful, but not to the extent AI thinks it is.

  2. Rule of threes: "Innovative, transformative, and groundbreaking." AI defaults to triplets when listing anything: adjectives, benefits, takeaways. One of these is no biggie. But when every list has exactly three items and every description stacks three adjectives, you know what’s going on.

  3. Em dash overkill: Deployed for punchy emphasis where a comma would do—like this. This is probably the most infamous tell, and woe to anyone who likes inserting subordinate clauses. Meanwhile, for some reason AI skips en dashes entirely, often using hyphens for ranges like dates (1990–2000) or scores (3–2) where an en dash belongs.

  4. Formatting overkill: Excessive bolding of key terms, as if everything is a damn textbook. Every bulleted list looks like: "Term: Definition of that term". Numbered lists where paragraphs would work. Emojis in headers. 🚀 (kill me now). It's not that any one of these is wrong, but AI reaches for formatting when it should just write a sentence.

  5. AI vocabulary: RIP words like: delve, intricate, tapestry, pivotal, underscore, landscape, foster, testament, enhance, crucial. Sorry if you ever found them useful; these words are now cursed objects.

  6. False ranges: "From intimate gatherings to global movements." "From technical expertise to creative vision." The structure implies a spectrum, but there's no actual spectrum. They're just two loosely related things dressed up to sound comprehensive.

  7. Compulsive summaries: "Overall," "In conclusion". That is, the tendency to restate what was just said, even when the passage is too short to require it. Human writers sometimes do this in long documents, but AI does it reflexively.
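Several of these tells are mechanical enough to scan for automatically. Here's a minimal, hypothetical sketch of what that might look like: the thresholds and regexes below are my own rough approximations of a few patterns above, not anything taken from Wikipedia's advice page, and a hit suggests AI involvement rather than proving it.

```python
import re

# Cursed-word list from pattern 5 (the article's examples, verbatim).
AI_WORDS = {"delve", "intricate", "tapestry", "pivotal", "underscore",
            "landscape", "foster", "testament", "enhance", "crucial"}

def ai_writing_flags(text: str) -> list[str]:
    """Return heuristic flags for a few of the tells. Purely illustrative."""
    flags = []
    lower = text.lower()

    # Pattern 1: negative parallelism ("It's not X. It's Y.")
    if re.search(r"\b(it'?s|this is) not [^.,;]+[.,;] (it'?s|this is)\b", lower):
        flags.append("negative parallelism")

    # Pattern 5: any of the cursed vocabulary words
    hits = set(re.findall(r"[a-z]+", lower)) & AI_WORDS
    if hits:
        flags.append(f"AI vocabulary: {sorted(hits)}")

    # Pattern 3: em dash overkill -- more than ~1 per 100 words (made-up cutoff)
    n_words = max(len(lower.split()), 1)
    if text.count("\u2014") / n_words > 0.01:
        flags.append("em dash overkill")

    # Pattern 6: the "from X to Y" construction; a human still has to judge
    # whether it's a real spectrum or a false range
    if re.search(r"\bfrom [a-z]+ [a-z]+ to [a-z]+ [a-z]+\b", lower):
        flags.append("possible false range")

    return flags

sample = "It's not a product launch. It's a paradigm shift\u2014a pivotal moment."
print(ai_writing_flags(sample))
# → ['negative parallelism', "AI vocabulary: ['pivotal']", 'em dash overkill']
```

No script like this can replace actually reading the thing, of course; it's the accumulation of tells, not any single one, that gives the game away.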

The advice page has many more patterns, including some more applicable to Wikipedia itself. But in my experience, these seven show up all the time in professional contexts. And once you recognize them, you can't unsee them.

And before someone asks: did you use AI to help write this? Of course I did. I used it to look for patterns and similarities, to help me focus on what I wanted to write. And then I wrote it.
