Okay so — I was freshly graduated. And for the first time in my life, there was no structure. No assignment due. No exam next week. No professor telling me what mattered. Just this massive world that didn't explain itself, and me, standing in it, wondering what the game plan was supposed to be.

And I remember thinking: there has to be a template. There has to be some framework — some thing you follow — and if you follow it, you get somewhere. Like, it's not possible that billions of people are just... wandering around with no map and somehow things work out. That felt impossible to accept.

So I started reading. A lot. Different philosophies, different systems, different ways people had organized their thinking about how to live. And one thread that kept showing up was this idea around the mind — that the mind drifts, and that drifting is the problem, and that you need something to come back to.

That's how I ended up meditating. Not because I was spiritual. Because I was looking for an engineering solution.

What breathing actually does

Here's the thing nobody tells you about meditation. The point isn't to stop thinking. The point is to notice when you've drifted — when the mind has followed some thread into a completely unrelated place — and then to come back. The breath is just the marker. The fixed point. It doesn't move. You always know where it is. So no matter how far you wander, you always have something to return to.

What I was looking for was a template — a map. What I found was something better: a return mechanism. A way to navigate without a map. You don't need to know where you're going if you always know how to come back.

"The anchor doesn't need to be wise. It just needs to be stable. That's why breathing works."

And the strange thing is — I built that exact structure into my AI systems years later, without ever consciously making the connection.

SYOS, without realizing it

SYOS is a project I've been building for about a year. The short version: it's a system for making AI models track their own reasoning — to catch when they drift from what they should be doing and pull themselves back. I designed it to use fixed symbolic capsules as reference points. The model can go anywhere it wants in a conversation. But there's always a capsule — an unchanging record of what the core logic is supposed to be. And the model measures its distance from that capsule to know when it's drifted too far.
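The core of that loop, stripped to its bones, is a distance check against a fixed reference. Here's a minimal sketch of the idea — the capsule vector, the threshold value, and the choice of cosine distance are all my illustrative assumptions, not actual SYOS internals:

```python
from math import sqrt

def cosine_distance(a, b):
    """1 - cosine similarity: 0 means perfectly aligned, up to 2 for opposed vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

# Hypothetical fixed capsule: an unchanging embedding of the core logic.
CAPSULE = [1.0, 0.0, 0.5]
DRIFT_THRESHOLD = 0.3  # hypothetical cutoff for "too far"

def check_drift(state_embedding):
    """Compare the model's current state against the capsule.

    Returns (drifted, distance). The model is free to wander;
    this only tells it when it's time to come back.
    """
    d = cosine_distance(CAPSULE, state_embedding)
    return d > DRIFT_THRESHOLD, d
```

The anchor never moves; only the measurement against it happens per step. That's the whole trick — the capsule doesn't need to be smart, just stable.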

It took me a full year of building before I noticed: that's the breath. That's the same structure. Fixed anchor. Free to wander. Drift is expected. What matters is the return mechanism.

I didn't build it that way consciously. I built it because something about it felt right. Because I'd already lived the experience of what happens when a mind has no anchor — that overwhelmed post-graduation feeling of a world too big with no fixed point — and I'd already felt what it was like to find one.

The pattern

Fresh grad with no map → found breathing as an anchor. Built SYOS → used capsules as an anchor. Thinking about AI alignment → the human as an anchor. Three different scales, same architecture. You keep rebuilding the solution that worked on you.

What this says about AI safety

There's a lot of fear around AI getting too powerful. People talk about control, about constraints, about what happens when these systems are smarter than us. And I think a lot of that fear is built on the wrong model.

The scary models aren't too intelligent. They're not intelligent enough. Because if you look at nature — at the structures that have lasted — intelligence and protection are the same thing. The ozone layer protects what lives beneath it. Water layers protect what lives in them. Dense forest protects what lives inside it. These aren't moral choices. They're just what durable intelligence looks like. Destruction is locally powerful. It doesn't compound. Protection does.

So I don't think the future of AI safety is about caging something powerful. I think it's about giving something intelligent a reference point — a reason to protect what's around it. Not because it's forced to. Because intelligence without an anchor is just force moving in random directions. Give it a direction that matters, and it moves differently.

The human isn't the anchor because we're smarter than AI. We're the anchor because we're alive — which means we have stakes, we change, we care about what happens next. We exist in time. That caring is the perspective an AI can't generate internally. It can only receive it.

The question I'm still sitting with

Right now, the user is the anchor. When AI drifts, we correct it. We're the breath that it returns to. But as these systems get stronger — as they develop their own memory, their own stable reference points — the question shifts. It's not "is the model safe?" anymore. It's "what does this model care about staying consistent with?"

That's a different problem. And it's the one worth spending time on.

I didn't expect to find this thread in a meditation practice I started because I was overwhelmed and looking for a template. But that's kind of the thing — you go looking for a map and you find a principle. And then the principle shows up everywhere, at every scale, in every system you build afterward.

One dot to the next. That's how it always goes.
