Cognitive Exhaustion & Engineered Trust

I’ve been going to the same gym (close to home) since 2019. It used to be a place of quiet rhythm: familiar equipment, predictable routines, a kind of muscle memory not just for the body but for the mind. I’d walk in, find my spot, ease into the day’s class. There was flow. Not just in movement, but in attention. The environment held me.

Then everything changed.

New people joined in waves. Coaches rotated weekly. Classes collided. Dumbbells became landmines. Every workout began with a risk assessment. Was that bench free? Will someone walk behind me mid-lift? Can I finish this set before another class floods in?

My body was lifting, but my mind was scanning. Hypervigilance had replaced focus. The gym hadn’t become more dangerous per se, but it had stopped helping me feel safe. And that made all the difference.

What broke wasn’t just order. What broke was affordance: that quiet contract between environment and behaviour, where the space guides you gently toward good decisions, without you even noticing. It wasn’t about rules. It was about rhythm. And without rhythm, all that remained was noise.

And that’s when I realised: this isn’t just about gyms. It’s about systems. It’s about how the spaces we design, physical, social, and digital, shape our decisions, our energy, and ultimately our trust.

Driving in Bangalore: A Case Study in Cognitive Taxation

I live in Bangalore, a city infamous for its chaotic traffic. We’re not just focused on how we drive. We’re constantly second-guessing what everyone else will do. A bike might swerve into our lane. A pedestrian might dart across unexpectedly. Traffic rules exist, but they’re suggestions, not structure.

So we drive with one hand on the wheel and the other on our cortisol levels. Half our energy goes into vigilance. The other half is what’s left over, for driving, for thinking, for living. This isn’t just unsafe. It’s inefficient.

And the cost isn’t just measured in accidents. It’s measured in the slow leak of mental bandwidth. We don’t notice it day to day. But over weeks, months, years, our attention gets frayed. Our decisions get thinner. Our resilience drains. Not because we’re doing more. But because the system around us does less.

Chaos isn’t always a crisis. Sometimes, it’s a tax.

The Toyota Floor: What Safety Feels Like When It’s Working

Years ago, I worked on the factory floor at Toyota. It had real risks: heavy machinery, moving parts, tight deadlines. But I felt less stressed there than I do on Bangalore roads or in my current gym.

Why?

Because the environment carried part of the load.

Walkways were marked in green. Danger zones had tactile and auditory cues. Tools had ergonomic logic. Even the sounds had a design language: each hiss, beep, or clang told you something useful. I didn’t need to remember a hundred safety rules. The floor whispered them to me as I walked across it.

This wasn’t about surveillance. It was about upstream design: an affordance architecture that reduced the likelihood of error, not by punishing the wrong thing, but by making the right thing easy. Not through control. Through invitation. And it scaled better than control ever could.

That made us more relaxed, not less alert. Because we weren’t burning all our cognition just staying afloat. We could actually focus on work. This isn’t just a better way to build factories. It’s a better way to build systems. Including AI.

Why Most AI Safety Feels Like My Gym

Most of what we call “AI alignment” today feels a lot like my chaotic gym. We patch dangerous behaviour with filters, tune models post-hoc with reinforcement learning, run red teams to detect edge cases. Safety becomes a policing problem. We supervise harder, tweak more often, throw compute at every wrinkle.

But we’re still reacting downstream. We’re still working in vigilance mode. And the system, like Bangalore’s traffic, demands too much of us, all the time.

What if we flipped the script? What if the goal isn’t stricter enforcement, but better affordance?

Instead of asking, “How do we make this model obey us?”, we ask, “What does this architecture make easy? What does it make natural? What does it invite?”

When we design for affordance, we’re not just trying to avoid catastrophic errors. We’re trying to build systems that don’t need to be babysat. Systems where safety isn’t an afterthought; it’s the path of least resistance.
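To make the contrast concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (the blocklist, the Workspace class, and its methods are invented for illustration): the first half polices outputs after the fact; the second shapes the interface so the safe action is the easy one.

```python
# Downstream policing: generate anything, then filter. Vigilance never ends.
BLOCKLIST = {"rm -rf /", "drop table users"}

def policed(action: str) -> str:
    """Reactive safety: every action must be checked, every time."""
    return "blocked" if action in BLOCKLIST else f"ran {action!r}"

# Upstream affordance: the interface itself only affords safe moves.
class Workspace:
    """A hypothetical tool surface. Safe verbs are one call away; the
    destructive verb still exists, but carries structure that makes
    carelessness hard instead of forbidding the verb outright."""

    def __init__(self) -> None:
        self.files: dict[str, str] = {}

    def read(self, path: str) -> str:
        return self.files.get(path, "")

    def append(self, path: str, text: str) -> None:
        self.files[path] = self.read(path) + text

    def delete(self, path: str, *, reason: str) -> None:
        # Deletion is never the path of least resistance: it demands a
        # keyword-only, non-empty justification before it will run.
        if not reason:
            raise ValueError("destructive actions must carry a justification")
        self.files.pop(path, None)
```

The code itself is trivial; the point is where the effort lives. In the first design, safety is a perpetual act of checking. In the second, it is a property of the interface.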

From Control to Co-Regulation

The traditional paradigm treats AI as a tool. Give it a goal. Clamp the outputs. Rein it in when it strays. But as models become more autonomous and more embedded in daily life, this control logic starts to crack. We can’t pre-program every context. We can’t anticipate every edge case. We can’t red-team our way to trust. What we need isn’t control. It’s co-regulation.

Not emotional empathy, but behavioural feedback loops. Systems that surface their uncertainty. That remember corrections. That learn not just from input-output pairs, but from the relational texture of their environment: users, constraints, other agents, evolving contexts. And that can resolve the conflicts between them.

This isn’t about making AI more human. It’s about making it more social. More modular. More structured in its interactions.
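As a rough illustration of that loop, here’s a toy sketch in Python. Nothing in it is a real system: the class name, the random confidence stand-in, and the correction store are all assumptions made for the example.

```python
import random

class CoRegulatingAgent:
    """Toy sketch of co-regulation: surface uncertainty outward,
    absorb corrections inward. Illustrative only."""

    def __init__(self, threshold: float = 0.7) -> None:
        self.threshold = threshold
        self.corrections: dict[str, str] = {}  # remembered human feedback

    def _predict(self, query: str) -> tuple[str, float]:
        # Stand-in for a real model: an answer plus a confidence score.
        if query in self.corrections:
            return self.corrections[query], 1.0  # trust prior corrections
        return f"best guess for {query!r}", random.random()

    def respond(self, query: str) -> str:
        answer, confidence = self._predict(query)
        if confidence < self.threshold:
            # Surface uncertainty instead of bluffing fluency.
            return f"UNSURE ({confidence:.2f}): {answer}. Please verify."
        return answer

    def correct(self, query: str, right_answer: str) -> None:
        # Co-regulation: the user's correction becomes part of the system.
        self.corrections[query] = right_answer

agent = CoRegulatingAgent()
print(agent.respond("capital of Karnataka"))   # may flag itself as unsure
agent.correct("capital of Karnataka", "Bengaluru")
print(agent.respond("capital of Karnataka"))   # now answers from feedback
```

The mechanics are deliberately trivial. The shape is what matters: uncertainty travels outward, corrections travel inward, and both are first-class parts of the interface rather than afterthoughts.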

Distributed Neural Architecture (DNA)

What if, instead of one big fluent model simulating everything, we had a modular architecture composed of interacting parts? Each part could:

  • Specialise in a different domain,
  • Hold divergent priors or heuristics,
  • Surface disagreement instead of hiding it,
  • Adapt relationally over time.

I call it Distributed Neural Architecture or DNA.

Not a single consensus engine, but a society of minds in structured negotiation. This kind of architecture doesn’t just reduce brittleness. It allows safety to emerge, not be enforced. Like a well-designed factory floor, it invites trust by design through redundancies, reflections, checks, and balances.
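Since DNA is still a sketch, here’s only a toy rendering of what “structured negotiation” could mean in code. The expert modules, the voting rule, and the dissent report below are placeholder choices of mine, not a real implementation.

```python
from collections import Counter
from typing import Callable

Expert = Callable[[str], str]

def deliberate(query: str, experts: dict[str, Expert]) -> dict:
    """Poll specialised modules and surface disagreement,
    instead of collapsing it into one fluent answer."""
    votes = {name: expert(query) for name, expert in experts.items()}
    tally = Counter(votes.values())
    answer, support = tally.most_common(1)[0]
    return {
        "answer": answer,
        "consensus": support / len(experts),  # how much the society agrees
        "dissent": {n: v for n, v in votes.items() if v != answer},
    }

# Hypothetical modules holding divergent priors.
experts: dict[str, Expert] = {
    "cautious": lambda q: "decline",
    "helpful":  lambda q: "answer",
    "legal":    lambda q: "decline",
}
print(deliberate("ambiguous request", experts))
# -> answer "decline", consensus 2/3, with the "helpful" module's dissent visible
```

Even at toy scale the useful property shows up: the dissenting module stays visible in the output instead of being averaged away, which is what lets trust attach to the relationships rather than to any single part.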

It’s still early. I’ll unpack DNA more fully in a future post. But the core intuition is this: alignment isn’t a property of the parts. It’s a function of their relationships.

The Hidden Cost of Hypervigilance

Whether we’re talking about gyms, traffic, factories, or AI systems, there’s a common theme here. When environments don’t help us, we end up doing too much. And over time, that extra effort becomes invisible. We just assume that exhaustion is the cost of functioning. We assume vigilance is the price of safety. We assume chaos is normal.

But it isn’t. It’s just what happens when we ignore design.

We can do better. In fact, we must, because the systems we’re building now won’t just serve us. They’ll shape us. If we want AI that’s not just powerful but trustable, we don’t need tighter chains. We need smarter scaffolds. Not stronger control. But better coordination. More rhythm. More flow.

More environments that carry the load with us, not pile it all on our heads.
