Disclaimer: this is NOT a doom-and-gloom, "AI will kill us all!" rant.
We're gonna talk about AI Slop today.
Yes, you read it right. "AI Slop." I use the term advisedly, because it defines a particular type of "Large Language Model" (LLM/AI) generated writing. How do I know this?
Well, in part because "AI Slop" was Merriam-Webster's 2025 Word of the Year. In the PBS article linked in the previous sentence, Webster's defines AI Slop as: "digital content of low quality that is produced usually in quantity by means of artificial intelligence."
From the article:
'It's such an illustrative word,' said Greg Barlow, Merriam-Webster's president, in an exclusive interview with The Associated Press ahead of Monday's announcement. 'It's part of a transformative technology, AI, and it's something that people have found fascinating, annoying and a little bit ridiculous.'
But before we dig into this topic, let me just say that I find AI to be a welcome and powerful tool for all manner of things: especially assistance with organizational problems.
[Image: "It's pretty nifty at generating historical images, too."]
AI can organize, systematize, collate, quantify, inventory, lay out, define, compare and contrast a wide variety of data at the snap of your fingers.
But it can’t think for itself (no matter how well it may have been “trained” to fake cognition), and it’s not been able to bridge the so-called “uncanny valley.”
And obviously I think using any variety of AI to attempt to replace human cognition in the workplace is both shortsighted and bound to fail long-term. (#layoffs)
[Image: "This one may seem familiar to those who have read any of my recent posts here at SS."]
I am also in no hurry to read the “fiction” it will generate upon command.
If you have spent time reading Kindle Unlimited content, or on writing sites such as Medium or Substack, you have without doubt encountered AI Slop that someone is attempting to pass off as human-written. And if you, like me, don't want to be bothered with investing any time in "faked human" fiction, there are "tells" you can keep an eye out for.
And who better to help flag these sorts of giveaway electronic tics than one of the "good" AIs?
I'm talking about Claude.
(Full disclosure: Anthropic, Claude's parent company, is currently putting the finishing touches on a $1.5 billion copyright infringement settlement with authors whose works were used to help build Anthropic's LLM. A legal action to which I, and damned near every other author I know, am a party. And boy, do I have big plans for MY coming settlement! All kidding aside, while I don't find Anthropic's attempt to address the copyright concerns raised by the manner in which LLMs have tended to be built perfect, I also don't see the likes of OpenAI, Palantir, NVIDIA, Microsoft, Alphabet, Amazon, or Meta stepping up to reimburse the authors who produced the content they used without permission to build their insanely profitable LLMs, either.)
I guess you could say I went to the well. I mentioned one of the most common "tells" I've seen signifying AI writing (the so-called "not because X, but because Y" rhetorical structure) and asked Claude about other common tells.
I find what I got back very much worth sharing:
Yes, I'm familiar with this critique. It's become a recognizable pattern that readers and critics have started flagging as a kind of AI stylistic fingerprint. The core observation is that these constructions share a few telltale qualities:
The corrective reframe — "not because X, but because Y" — artificially dramatizes interiority by presenting a false binary and then resolving it. Human writers tend to trust readers to infer motivation; this construction over-explains while trying to look like depth.
The fragmented "twist" sentence — "She smiled at him: not with kindness. With liberation." — uses punctuation to manufacture a pause that signals profundity. The colon-then-fragment structure tries to replicate the rhythm of a revelation, but it often feels mechanical because it's applied indiscriminately, regardless of whether the moment actually earns that weight.
What both share is a kind of performed literary quality — they're doing the gestures of literary fiction (subverting expectation, revealing inner contradiction) without the organic necessity that makes those moves land in genuinely strong prose. Human writers use these constructions too, but AI tends to reach for them constantly, because they pattern-match to "good writing" in training data without understanding when they're appropriate.
There's also a broader critique underneath this: AI fiction tends to be emotionally explanatory rather than emotionally evocative. It tells you what the feeling means rather than making you feel it. These sentence structures are almost a microcosm of that tendency.
And then Claude went on to elaborate further:
A few other tells that tend to cluster with those patterns, in my observation:
Emotional labeling dressed as showing. "A wave of grief washed over her" or "he felt a complicated mixture of anger and relief." It gestures at complexity while doing the opposite of demonstrating it.
The wisdom-summary sentence. A paragraph of action or dialogue, then a standalone sentence that packages its meaning: "Some wounds, she understood now, never fully closed." These appear constantly and almost always at the end of sections or chapters.
Over-varied dialogue tags. Characters don't just say things — they murmur, breathe, exhale, rasp, offer. Human writers vary tags too, but AI does it with a kind of aggressive consistency that draws attention to itself.
Sensory cataloguing. Descriptions that hit smell, sound, sight, and texture in rapid succession in a way that reads like a checklist rather than a scene.
The reflective gerund opener. "Watching him leave, she realized..." or "Turning away from the window, he wondered..." These appear with suspicious frequency as scene transitions.
Oh yeah, these all seem really familiar, come to think of it!
And of course, there's the most obvious tell of all:
[Image: text pulled from a novel published and available on Amazon. Highlights mine.]

[Image: "Is it any wonder we're beginning to see 'badges' such as this one on fiction offered for sale?"]
So hey, use AI for what it's good for: organization, systematization, categorization, copy-editing.
And, of course: helping you, as the reader/customer, avoid wasting time/resources on AI Slop attempting to masquerade as "human-written" fiction.
Lastly, see below for what ChatGPT came up with in response to my prompt requesting a "comprehensive visual image embodying AI Slop."
See you in two weeks!
[Image: "Don't say I didn't warn you!"]



