
I find myself more and more often playing digital cop.
Besides my side hobby of reporting the phone numbers of those damn call centers—now increasingly sophisticated at bypassing filters—these days I turn into a vengeful Dirty Harry every time I come across an article clearly generated, or poorly translated, by AI.
Let’s be clear: there’s nothing wrong with using AI for translations—unless you’re an 11-year-old who’s supposed to be learning English.
The real problem is that most people don’t even bother to proofread the output and publish absolute nonsense.
That quote at the beginning? “I wouldn’t trust this overgrown pile of microchips any further than I can throw it…” It’s from General Beringer in WarGames. In the Italian dubbed version, it was translated in a way that made cultural sense at the time. A literal translation would’ve been unintelligible. But the expression on his face? That’s universal—and it perfectly mirrors how I feel every time I stumble upon yet another AI-generated article that no one even glanced at before publishing.
By the fourth sentence—when I spot a random gender switch, a number mismatch, or absurd repetition—my inner “make my day” trigger goes off, and I instantly banish the source from my feed.

In the past year, the endless stream of JavaScript frameworks has finally been dethroned as the noisiest thing on the web—AI has taken over, in all its specialized (or not) forms, flooding tech news sites everywhere.
I get a weird kick out of those newspapers that place AI news next to finance updates, sparking mental short circuits that lead people to buy Bitcoin. I like to imagine it’s all a cunning NLP strategy rather than clueless editors mixing apples and oranges.
Still, every time I read headlines like “AI helped someone win the lottery”, my brain shuts down… until I remember my statistics professor—sweaty and enthusiastic—tossing coins from the lectern to show us how probability really works.
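For the record, here is more or less what that sweaty demonstration boiled down to, in a throwaway Python sketch of my own (the coin and the counts are illustrative, not the professor’s actual numbers): flip a fair coin enough times and the heads ratio drifts toward 50/50, no AI involved.

```python
import random

# A crude re-enactment of the coin-tossing demo: flip a fair coin n times
# and watch the heads ratio settle around 0.5 (the law of large numbers).
def heads_ratio(n: int) -> float:
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9,} flips -> heads ratio {heads_ratio(n):.4f}")
```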
Now, the fact that modern coins make it hard to tell heads from tails is, clearly, part of a global conspiracy to make us dumber and easier to manipulate. (Just like how LEGO sets are no longer general-purpose: now it’s all Star Wars kits—non-canon ones, no less.)
Anyway, let’s stay focused on the fact that every single day brings a flurry of flashy news about the latest LLM, the new GPT, or the next groundbreaking ML technique—usually accompanied by chaotic graphs designed only to give a vague sense of depth to our collective FOMO.
Yesterday it was ChatGPT and Claude, today it’s DeepSeek (aka DeepSick among friends) teamed up with Cursor. Tomorrow? Expect new names with very little imagination, trained with carbon footprints big enough to punch a few extra holes in the ozone layer, and magically able to run on your neighbor’s browser—just like that, with no explanation.
So how much AI content actually gets published every day?
If we only consider peer-reviewed scientific journals (which already have a high barrier to entry), the number is staggering: a search on PubMed for “Artificial Intelligence and Scientific Publishing” returns 5,470 articles. Spread over a year, that works out to an average of about 450 papers a month.
As for tech media and blogs, the number varies wildly—and ironically, no AI was able to give me an actual count. (Probably out of shame, not ignorance.) Still, ChatGPT at least gave me a breakdown of the main content categories:
- Academic Research (~40%): Publications on algorithms, models, and breakthroughs in AI.
- Industrial Applications (~30%): Use cases in healthcare, finance, manufacturing, and services.
- Ethics and Regulation (~15%): Debates around the societal and legal impact of AI.
- Education and Training (~10%): Courses, workshops, and learning resources.
- Other (~5%)
If there’s one rule in tech, it’s this: once a term becomes trendy, every company starts using it—whether it’s relevant or not.
In recent years, “AI-powered” has become the magic sticker to sell any software, even when there’s no real AI behind it.
Startups and consultants are riding the generative AI hype because it sells. If you’re pitching a startup and don’t mention AI, it doesn’t matter how great your idea is—you’ll be thrown out the window (first floor, if you’re lucky).
Today, any software with an algorithm gets labeled “AI-powered,” even if it’s just repackaged automation.
It’s like that famous scene in The Wolf of Wall Street, where Leo sells a pen. Only now, the pen would be “AI-enhanced” and probably come with an app that uses a neural network to help you write your grocery list.
And now that I’ve got Copilot in Excel, I’m just wondering what could possibly go wrong.
For someone like me, still maintaining massive monolithic PHP applications, it’s a bit disheartening to scroll through LinkedIn and see that everyone seems to be building AI chatbots using Rust microservices, spinning up Kubernetes clusters across three cloud vendors… just to call a Lambda that prints “Hello World.”
Honestly, this whole vibe coding trend feels like a fast track to burnout for people who believe they’ve got endless time to experiment.
The current mantra is: AI will never replace developers, but the developers who know how to use AI will replace those who don’t.
Fair enough. But for new programmers, there’s a risk that they’ll rely only on AI—which is a shame. Anyone who started coding from scratch knows how thrilling it is to manage a simple 2D array.
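Just to be concrete about that thrill, here is the kind of beginner exercise I have in mind, sketched in Python purely for illustration (nothing lifted from any real codebase): build a small grid by hand and walk it with nested loops, indices and all.

```python
# The humble 2D array: build a small grid by hand and walk it with nested loops,
# the kind of exercise worth doing yourself at least once before asking a chatbot.
ROWS, COLS = 3, 4
grid = [[row * COLS + col for col in range(COLS)] for row in range(ROWS)]

# Print it row by row.
for row in grid:
    print(" ".join(f"{cell:2d}" for cell in row))

# Transpose it the old-fashioned way, just to feel the indices under your fingers.
transposed = [[grid[r][c] for r in range(ROWS)] for c in range(COLS)]
print(transposed)
```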
At the end of the day, programming isn’t about writing code. It’s about solving problems.
Sure, AI is a huge help. A world without generative AI would probably be full of Jack Torrance clones wandering the halls of creative agencies, axes in hand—much to the dismay of ethics advocates and would-be AI resistance heroes.
But chasing the latest releases and comparing performance benchmarks? Honestly, that feels like a waste of energy.
As for me, I’ve got a massive bucket of popcorn ready to enjoy the spectacle of AI gurus asking AI what’s new in AI—kind of like that time we were all bored enough to… search Google on Google.