It Took Four Friends and Three LLMs to Teach Me What Should’ve Been Obvious
In the last week, I had four separate, unfiltered exchanges with friends I trust, and who, I hope, will trust me through our ups and downs. One was in Wisconsin, one in India, one in New Zealand, and the fourth in Florida. Different people, different cultures—and all four pushed me to self-reflect.
One of them bluntly said, “Ray, look in the mirror, asshole.” I did. And what I saw wasn’t as pretty as I’d imagined. So, to become the kind of person I want the world to be, I went painfully deeper, with a little help from my friends and AI.
One of those friends brought up the idea of Common Sense. What is it, really? Is it the same everywhere? For everyone? That question led me to the work of Tristan Harris. His ideas led me to some grounding principles.
I did a reasonably wide and deep dive into how we’re applying or ignoring Common Sense. I think I’ve found a possible way forward. Thanks, friends; you help more than you know.
A Common Sense Foundation
Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology, has spent years exposing the gap between what tech can do and what it should do. His work centers on attention, agency, consent, and the long-term effects of systems that prioritize engagement over well-being.
He never published a formal list of rules, but the core message is clear:
If we’re building systems that affect people, we need to anchor them in shared human values. Not lofty ideals, just a functional floor.
I distilled those values into ten Common Sense guidelines:
Don’t harm people.
Help when you can.
Ask before taking.
Tell the truth.
Keep your word.
Don’t rig the game.
Respect privacy.
Own your mistakes—and fix them.
Keep people safe.
Treat everyone like they matter.
These aren’t visionary goals. They’re the bare minimum. And yet across sectors, we’re falling short of even this.
A Common Sense Audit
I ran an audit using these ten guidelines—on the institutions shaping AI, and on myself. Because AI won’t just learn from laws or policies. It will learn from behavior. From what we reward. From what we overlook.
I used ChatGPT (with deep research), Grok (with DeepSearch), and Claude Sonnet 4. Together, they helped identify global trends, blind spots, and patterns across governments, corporations, platforms, the public…and me.
What AI Is Learning from the World
The findings were sobering:
Governments are using AI for surveillance, predictive policing, and population control—often without consent or oversight.
Corporations optimize for profit, not public good—scraping personal data and using unpaid labor to train models.
Platforms prioritize engagement over truth—scaling outrage and misinformation because it pays.
Everyday users interact with AI without common-sense guardrails, generating content, issuing commands, and experimenting with power they barely understand.
And me? Ugh. I believe in these guidelines, but I still bend them when speed, strategy, or convenience wins. Some around me deserve apologies. Or at least more listening and honesty from me.
We’re stuck in systems that reward velocity, visibility, and control. AI isn’t being trained on our ideals. It’s being trained on our behaviors. And those behaviors reflect what gets rewarded. That’s the real feedback loop, and it’s dangerous. Can we break the loop with common sense?
Going Forward
The Common Sense guidelines aren’t ideals. They’re baseline practice for building, writing, advising, and deciding.
My intention is to embed them into how I work and what I support so I stay aligned with what matters. Especially as the systems around us move faster than our sense of decency.
AI will reflect us. So what we model at every level matters.
This isn’t about being right. I’ll miss the mark often. But with your perspective, we can keep adjusting toward something more honest, more useful, and more whole.
Wish me luck. Wish us all luck.
Ray