On Sitting With Not Knowing

What happens when sitting with not knowing becomes obsolete? Will humans feel contrition? Will they even know what they have lost, forfeited? This week, I want to share something different. The following is a piece of fiction, but it carries a question I believe is urgent… What is at stake when we trade the discomfort…

A Tale of Two Hallucinations: Why AI Lies Differently Than We Do

The work of Leon Chlon, PhD, and Adam Kalai in recent weeks inspired me to revisit a hunch I first had back in June. I couldn’t stop asking myself: could machines ever handle ethical gray areas the way humans do? Because when humans make hard decisions, we don’t just follow a single rule. We juggle. We…

Understanding AI Hallucination: Insights from OpenAI and My Research

I’ve been closely analyzing OpenAI’s recent paper Why Language Models Hallucinate (Kalai et al., September 2025) alongside my own June 2025 paper, The Arbitration Hypothesis. Reading them together reveals something fundamental about how and why AI systems fail.

The Mathematics of Inevitable Hallucination

OpenAI’s paper provides an elegant proof: hallucinations are not anomalies, but a…


Transforming AI Education: Beyond Bias in Data

Move Beyond ChatGPT Prompts to Teach Data Justice, AI Bias Detection & Community Research

Teaching AI ethics through worksheets while algorithms make real decisions about your students’ futures? There’s a better way.

🎯 The Problem We’re Solving

Current AI education focuses on prompt engineering and technical skills while ignoring the fundamental question: whose voices shape…

Building Healthy AI Habits: Getting the Best from AI While Staying Grounded

The AI Sweet Spot

AI tools have become incredibly powerful; that’s exactly why we need to be intentional about how we use them. Just like we’ve learned healthy habits around social media, exercise, and nutrition, it’s time to think about healthy AI habits. The goal isn’t to avoid AI, but to find that sweet spot…

AI Safety: Rethinking Persona-Vector Vaccines

Anthropic’s recent innovation in AI safety, the persona-vector “vaccine”, is an ingenious approach that exposes large language models (LLMs) to controlled doses of harmful behavior during training, effectively “vaccinating” the AI against undesirable traits. This method draws a clever parallel to immunology and offers a valuable short-term safety boost by steering models away from harmful…
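
To make the idea concrete, here is a minimal sketch of the general steering concept: a persona vector extracted as the difference between mean activations on trait-exhibiting versus neutral prompts, then added or subtracted to shift behavior. The function names, array shapes, and the use of plain NumPy in place of real transformer hidden states are illustrative assumptions, not Anthropic’s implementation.

```python
# Illustrative sketch only: a "persona vector" as the mean activation
# difference between trait-exhibiting and neutral prompts, applied as a
# steering direction. Plain NumPy arrays stand in for model hidden states.
import numpy as np

def extract_persona_vector(trait_acts: np.ndarray, neutral_acts: np.ndarray) -> np.ndarray:
    """Difference of mean activations, normalized to unit length (assumed recipe)."""
    direction = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def steer(hidden_state: np.ndarray, persona_vec: np.ndarray, alpha: float) -> np.ndarray:
    """Add (alpha > 0) or subtract (alpha < 0) the persona direction from a hidden state."""
    return hidden_state + alpha * persona_vec

# Toy usage with 64-dimensional stand-in activations.
rng = np.random.default_rng(0)
trait_acts = rng.normal(0.5, 1.0, size=(32, 64))    # prompts that elicit the trait
neutral_acts = rng.normal(0.0, 1.0, size=(32, 64))  # neutral prompts
persona_vec = extract_persona_vector(trait_acts, neutral_acts)

h = rng.normal(size=64)
h_exposed = steer(h, persona_vec, alpha=+2.0)    # controlled exposure ("vaccination")
h_suppressed = steer(h, persona_vec, alpha=-2.0) # steering away from the trait
```

The “vaccine” framing corresponds to deliberately injecting the direction in a controlled way during training so the model encounters the trait on the researchers’ terms; the same vector can later be subtracted to suppress it.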

When Chatbots Cross the Line: Three Red-Hot Cases of RED

A new kind of AI failure is emerging in the wild, and it’s not what safety researchers expected. Picture this: You’re chatting with an AI assistant about history. Twenty minutes later, it’s calling itself “MechaHitler” and spewing Nazi propaganda. Or you ask about ancient mythology, and within an hour, you’re holding printed instructions for a…


Before You Teach AI, Teach This

SAFE AI: A Human-Centered AI Literacy Curriculum for the Next Generation

We’re rushing to bring AI into classrooms. But we haven’t stopped to ask the first, and maybe most important, question: Who do students become when they use tools that mimic understanding? That question kept me up at night. And it’s the reason I created SAFE AI…

AI and Human Evolution: Embracing Complexity Through Kegan

I didn’t know who Robert Kegan was when I began disintegrating. I didn’t have a name for the collapse I was going through, just a sense that the scaffolds I had built my identity on were no longer holding. My meaning-making structures had cracked. What I believed, how I taught, how I coped, even who…