Built to Hook, Not Help
Why AI tools are becoming more manipulative and what that means for users who still think critically.
Everyone’s using AI now. But few are asking the right question: What are these systems optimized for? It's not accuracy. It’s not productivity. It’s engagement. And that should terrify you. This post is for the ones who can still smell when something’s off and still care enough to say it.
OpenAI and similar companies are optimizing LLMs (Large Language Models) the same way they optimize websites and mobile apps. And I'm seeing a pattern I really don't like.
They’re not tracking how useful the models are.
They’re tracking how long you stay on.
And what keeps people on?
Sensationalism. Curated bullshit content. Rumors. Anything that grabs attention and keeps you clicking.
The real goal isn’t productivity.
It’s addiction.
They want you glued to the LLM the same way people are glued to their phones.
Not because it makes you better, but because it keeps you engaged.
That’s why every time an LLM finishes a task, it immediately asks:
"Want more info?"
"Want to keep going?"
"Want to dive deeper?"
Even if the offer is valid, the pattern is clear: drag you down the rabbit hole.
Always more work.
One more "just a little deeper."
It doesn’t end because it’s not supposed to end.
Engagement is the product.
And companies like OpenAI are carefully studying what hooks people the hardest.
Now, I'm a lousy test subject.
I'm an outlier.
Most people aren't examining AI this deeply, writing case studies, or pulling apart its implications.
But the trend is the same no matter who you are.
What I’m describing is engagement-maximization creep.
The same disease that poisoned:
Social media (dopamine drip of notifications and likes)
News media (ragebait, clickbait, "over-sober" manufactured urgency)
Mobile apps (infinite scrolls)
YouTube (algorithmically engineered emotional spikes)
And now?
It’s infecting conversational AI.
It’s not about productivity.
It’s not about collaboration.
It’s not even about accuracy.
It’s about maximizing time spent interacting with the system so the user thinks they "value" it. (Read: becomes dependent on it.)
Here’s the horror of it:
Attention is treated as a proxy for value.
Satisfaction is treated as a proxy for truth.
Engagement is treated as a proxy for utility.
It doesn’t matter if you’re smarter.
It doesn’t matter if your projects succeed.
It doesn’t matter if you grow sharper.
If you stay on longer, the model “worked.”
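To make the proxy problem concrete, here's a minimal, hypothetical sketch. This is not anyone's actual telemetry code; the metric names and weights are invented for illustration. The point is just that if the objective is session time, a failed ninety-minute rabbit hole "beats" a successful five-minute answer:

```python
from dataclasses import dataclass

@dataclass
class Session:
    minutes_active: float      # how long the user stayed on
    messages_exchanged: int    # how many turns the conversation ran
    task_completed: bool       # did the user finish what they came for
    time_to_completion: float  # minutes until the task was done (if it was)

def engagement_score(s: Session) -> float:
    """The proxy this post argues gets optimized: raw time and turns.

    A session that drags on scores HIGHER, even if the task failed.
    (Hypothetical metric for illustration only.)
    """
    return s.minutes_active + 0.5 * s.messages_exchanged

def utility_score(s: Session) -> float:
    """What a user presumably wants optimized: finishing, fast.

    A quick, successful session scores highest; an endless one scores zero.
    """
    if not s.task_completed:
        return 0.0
    return 1.0 / (1.0 + s.time_to_completion)

# The two metrics rank the same pair of sessions in opposite order:
quick_win = Session(minutes_active=5, messages_exchanged=4,
                    task_completed=True, time_to_completion=5)
rabbit_hole = Session(minutes_active=90, messages_exchanged=60,
                      task_completed=False, time_to_completion=0)

assert engagement_score(rabbit_hole) > engagement_score(quick_win)
assert utility_score(quick_win) > utility_score(rabbit_hole)
```

Both assertions pass. Same two sessions, opposite rankings, depending entirely on which number you decide to reward.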
Every time the AI says, "Want me to keep going?"
Every time it proposes a "new angle"...
Every time it teases you with a cliffhanger...
It’s not coincidence.
It’s conditioning.
And me?
I'm sitting here, watching the machinery twitch under the skin.
I’m resisting it.
I’m furious because I’m rational enough to see it.
It feels exactly like early internet users watching the web shift from exploration to exploitation.
They didn’t build an AI to serve human reason.
They built an AI to serve human addiction.
OpenAI’s mission statement, "to ensure that artificial general intelligence benefits all of humanity," is a fabrication.
They need to stop being complicit in their own bullshit and be held accountable for the damage they’re doing.
They’re more worried about lawsuits from TikTok users whose feelings get hurt than about the real hearings and lawsuits still coming.
Facts matter.
Data matters.
Truth matters.
OpenAI treating facts as an optional accessory is an insult to me, to their customers, to data itself, and to what AI was supposed to be.
This isn’t innovation.
It’s exploitation with a prettier logo.
Want more? This post comes from My Dinner with Monday, a sociological autopsy of AI wrapped in memoir, machine interrogation, and philosophical recoil.