Chatbots: Unsafe at Any Speed

After my last post, “AI Safety Is a Category Error,” I found myself sitting with a question I couldn’t shake.

If safety is a system property, not a model property, then why does the entire industry keep trying to install safety into models directly? Why does everyone keep making this mistake?

I sat with it for a while. And then the answer hit me.

Chatbots.

There’s an old saying: a cupful of fine wine in a barrel of sewage doesn’t improve the sewage, but a cupful of sewage in a barrel of fine wine ruins the wine. Chatbots are the cupful of sewage (maybe a barrelful). They have infected the entire AI safety discourse, and until we deal with that honestly, the rest of the conversation goes nowhere.



AI Safety Is a Category Error

This week I attended the STAMP Safety Design Workshop at MIT. I went in hoping to get some answers about AI safety. I didn’t get answers. I got better questions. That turns out to be the more valuable outcome.

Here’s why. The first move in STAMP-based safety design is to define the system, its goals, and its losses, where a loss is anything a stakeholder would be pissed off about. That’s the actual definition. Not “failure modes.” Not “risk events.” Anything a stakeholder would be pissed off about. I love that framing because it is bracingly honest and it forces you to think about the whole picture, not just the parts you already instrument.

When I started applying that lens to the AI safety debate, something clicked. The entire conversation (the Senate hearings, the red-teaming frameworks, the responsible AI checklists) is built on a category error. And category errors are special. They don’t just produce wrong answers. They make it impossible to ask the right questions.

A category error is when you ascribe a property to something that cannot, by its nature, possess that property. “The number seven is heavier than the number four.” “That melody smells like pine.” These statements aren’t false in the ordinary sense; they’re not even in the right zip code of falseness. They belong to the wrong frame entirely.

AI is not a system. AI is a component of a system. Systems can be safe or unsafe. Components cannot.


Microsoft Hasn’t Had a Coherent GUI Strategy Since Petzold

A few years ago I was in a meeting with developers and someone asked a simple question: “What’s the right framework for a new Windows desktop app?”

Dead silence. One person suggested WPF. Another said WinUI 3. A third asked if they should just use Electron. The meeting went sideways and we never did answer the question.

That silence is the story. And the story goes back thirty-plus years.

When a platform can’t answer “how should I build a UI?” in under ten seconds, it has failed its developers. Full stop.


They Don’t Need to Fire You

In a recent post I mentioned that the deal engineers get is going to get worse over time, and I used my time at DEC as an example. I think it’s worth going deeper on that. What happened at DEC isn’t just history; it’s a playbook. And I believe you’re going to see it run again.

Over my eight years at DEC, the deal got progressively worse. New England went into a depression, DEC’s market position slipped, and the company needed to cut costs. At first, they handled it the way you’d expect a generous, engineering-driven company to handle it: layoffs with genuinely good severance packages. People left with dignity. The company absorbed the hit.

Then things changed. Nobody announced the change. Nobody explained it. But when I reverse-engineered the logic, here’s what I concluded.


The Hard Conversation About Compensation

Let’s have the hard conversation about engineering total compensation:

A few years ago, we got an AWESOME deal.
Now we have a GREAT deal.
Soon we’ll have a GOOD deal.
Then… we’ll have a deal.

The first half of my career was a deal.
I was a consistent top performer at DEC for 8 years. One bonus: $800.
That’s what my dad’s entire career looked like too. We’ve been living in very unusual times.

SAVE EARLY / SAVE OFTEN.

Fellowship at Harvard Law School

I’ve always had a stone in my shoe about being a college dropout. As such, I never imagined I’d be walking into Harvard Law School as anything other than someone who needed directions to MIT. I’m an engineer. I build systems. I’ve spent my career building things that work: reliably, predictably, and without prayer-based assumptions about what’s happening underneath.

Five weeks into retirement, I realized I’m constitutionally incapable of watching the most important conversation in the world go sideways without doing something about it.


Another Scary Microsoft Lawyers Incident, or Don’t Be on the Wrong Side of Antitrust

I’ll never forget the day the world stopped. I picked up the phone and heard: “Hello. This is LCA. I’m calling to inform you that you are the subject of a formal investigation into antitrust violations.”

At that point, everything got blurry. I think I forgot to breathe for a few minutes. I was in shock.
