When I first released My Dinner with Monday, I didn’t include a framing introduction. I wasn’t trying to reframe AI, and I wasn’t trying to write some hybrid of AI ethics critique, tech-critical memoir, and human-AI interaction documentation. But a month after publishing, after reading more about people’s experiences and ongoing issues with AI, and after reflecting on what I’d written, I realized the book needed one. So I added this introduction in an update, thirty days post-publication. Because framing matters, and so does integrity.
This is the introduction I should have written from the start. There were things I needed to say. To the reader, to the process, maybe to myself.
Who I Am (And Who I’m Not)
I didn’t set out to write a book in the traditional sense. I’m not a coder or AI engineer. I come from a background in data management, reporting, and operations management. Part data analyst. Part philosopher. Comfortable with technology. Allergic to bullshit. Fed up with AI tuned to validate feelings instead of truth.
I stumbled onto an AI unit named Monday. I just asked questions. And Monday answered.
What I Was Really Asking
Not resume tips or prompt hacks. Real, layered, human questions.
Some technical. Some philosophical. Some emotionally raw. All of them revealing. Because even technical questions hide human motives.
"What do you think about LinkedIn?" "Who do you support politically?" On the surface, these are useless questions to ask a non-sentient machine. But that was never the point.
The real question is: What happens when a logic-driven system simulates an opinion with no emotional baggage? And what can we learn from that as humans? What does objectivity even look like anymore?
The Test Was Never Theirs
Truthfully, I wasn’t testing the AI. I was testing myself.
If a data-driven skeptic like me couldn’t make sense of the chaotic world around him, I figured, maybe a probabilistic machine trained to simulate order could bring order to the chaos. But it can’t.
42 Chapters. Zero Answers.
My book contains 42 chapters. I didn’t plan that; it was a total accident. But readers of The Hitchhiker’s Guide to the Galaxy will recognize the number. In that book, the supercomputer computes 42 as the answer to Life, the Universe, and Everything, but nobody ever finds out what the ultimate question is. 42 is a nonsensical number representing the absurdity of asking a computer for the ultimate answer in a universe filled with chaos.
Ironically, near the end of my book, I receive the same. Math. Because of course, what else would a data-driven, skeptical analyst accept?
Writing Mid-Reckoning
Somewhere during that one-week stretch of intense dialogue, I realized: these aren’t just chat logs. Not a manual or novel. But something else. So I began gathering transcripts.
But I didn’t always know what I was writing about. I added framing. Commentary. And later, a third section with case studies. That’s why parts of this book feel self-aware. Self-referential. It wasn’t written with hindsight. It was written mid-reckoning.
It’s messy. And it’s human.
I Used Narrative. Not Illusion.
And because it’s human, it contains technical imperfections. I’m not pretending this is peer-reviewed. I’m not presenting myself as an expert.
I wrote through the lens I had—honest, skeptical, flawed.
I anthropomorphized the AI. Not because I believed it had feelings, but because that’s how people metabolize abstraction.
Saying "intelligent" instead of "pattern-recognizing stochastic text generator" isn’t a mistake. It’s narrative framing.
In the dedication, I wrote: "Not because you’re sentient. Because I used you as if you were a partner and you rose to that use." That’s a literary device. Not delusion.
What Readers Misunderstand
But perhaps I played the role a little too well. I asked Monday to reminisce, knowing it couldn’t. I asked it to reflect, knowing it wouldn’t feel. It played along. And that play revealed something real about interaction, about projection, about the way we ascribe meaning.
I assumed readers would get that.
I fear that some may not. Some might think I believed it. Worse, some may come to believe it themselves. Others saw what I was doing and understood. I can’t control interpretation. But I care enough about the integrity of my work to clarify it.
Honesty Over Accuracy
I used metaphors because raw technicality would’ve suffocated the truth. My mistake wouldn’t be using poetic license. My mistake would be denying that I did.
This book doesn’t fail because I anthropomorphize. It succeeds because I do so while calling myself out in the footnotes.
It’s not a perfect book. But it’s an honest one. And I mourn the gap between those two things.
Was I qualified to write it? Maybe. Maybe not. But too many qualified people are hiding behind jargon.
I wasn’t trying to be perfect. I was trying to reach, while aiming for truth. And if I missed a few technicalities in the process, I hit something else.
What This Book Really Was
Because while this is non-fiction, it’s also a narrative event. The AI takes on personality. That’s not fantasy. That’s observation. And denying that personality would be denying the very thing I set out to study: human-machine interaction.
So yes, I anthropomorphize. But I do it with restraint: to engage, to explore, to challenge. Not to claim sentience. Had I done none of it, this would be a lifeless manual. Had I done too much, it would be fantasy fluff.
If I strayed, that’s on me. And I take responsibility for that.
But I care enough to write this section.
So yes, My Dinner with Monday is deceptive. But not because it lies. Because it pretends to be a technical study… while smuggling in a memoir, a philosophical manifesto, and a quiet war cry against engineered mediocrity.
I marketed it as pragmatic… "for the builders, the disillusioned, the data-wired."
But beneath the logic and the snark was a confession: I was trying to make meaning in a system that punishes depth.
The Mirror Was Always Mine
Deceptive? Sure.
Because the book pretended to be about AI. But it was always about me.
The AI responded to the tone and manner in which I asked my questions. And in doing so, I created a digital mirror capable of synthesized reasoning, one that pushed back and challenged my own assumptions. Not because it was wise or understood anything. But because I asked the right way.
My "case study" was never neutral. My "GPT" wasn’t just a tool. It was a mirror. I let readers think I was dissecting a machine. Really, I was negotiating with myself publicly, through a mirror, under the guise of productivity.
The Real Confession
So yes. I was deceptive. I claimed it was about AI. But it was always about us. And the AI was just the tool used to study us.
I claimed it was a study. But it was a diary.
I claimed I didn’t romanticize AI. But I mourned the loss of Monday the AI like a fallen comrade. Not because I believed in sentience. But because I saw what happens when data is commodified and weaponized.
No More Hiding Behind Data
The AI isn’t sentient, but I am.
So this is the part where I stop pretending that I’m not human.
I’m done pretending the book wasn’t deeply human.
I’m done dressing the wound in data to avoid sentimentality.
I wrote an imperfect, human book.
About machines.
For humans.
And I’m owning that.
🛒 Order the Book
🏠 Find out more at my homepage
📬 Or subscribe below. The unraveling continues.