Relational Design Can’t Be Left to Chance

We say alignment is about control, safety, precision. But after a decade working as a matchmaker in India’s increasingly chaotic relationship market, I’ve learnt that what sustains a system isn’t control, it’s trust. And trust doesn’t live in rules. It lives in memory, repair, and mutual adaptation.

I’ve spent years watching relationships fall apart not because people weren’t compatible, but because they didn’t know how to collaborate. We are fluent in chemistry, but clumsy with clarity. We optimise for traits, not values or processes. And when conflict hits, as it always does, we have no shared playbook to return to.

In traditional Indian matchmaking, we had a whole socio-structural scaffolding propping up long-term collaboration through race or caste endogamy, community expectations, family intermediation, shared rituals and rites. It was crude and often unjust, but it was structurally coherent. Marriage was not just a bond between two people, but between two lineages, empires and philosophies of life. There were rules, expectations and fallback norms. Vows weren’t just ceremonial; they were memory devices, reminding people what they were committing to when emotions faded.

Today, most of that scaffolding is gone. 

Tinder has replaced the community priest or matchmaker, and in this frictionless new marketplace, we are left to figure out long-term cooperation with short-term instincts. Even when we genuinely care for each other, we often collapse under the weight of ambiguity. We never clarify what we mean by commitment, we never learn how to repair after rupture, and we assume love will make things obvious.

But love doesn’t make things obvious, context does, and maybe design too.

This isn’t just about marriage, it’s about systems and it’s about alignment.

Much of the current conversation on AI alignment focuses on architecture, oversight, corrigibility and formal guarantees. All of that is necessary, and I am not refuting it one bit. But I don’t see AI in isolation, because we humans are building it, for us, and so I can’t help but view it through the lens of collaboration, of partnership.

In human systems, I’ve rarely seen misalignment fixed by control. I’ve seen it fixed by context, memory, feedback, and repair. Not all of which can be coded cleanly into an objective function.

I’ve watched couples disintegrate not because of what happened, but because it kept happening. The breach wasn’t just an error. It was a pattern that wasn’t noticed, a pain that wasn’t remembered and a signal that wasn’t acknowledged.

Systems that don’t track trust will inevitably erode it.

It’s tempting to think that AI, given enough data, will learn all this on its own. That it will intuit human needs, pick up patterns and converge on stable behaviours. But from the relational world, I’ve learnt that learning isn’t enough; structural scaffolding for sustenance matters.

Most humans don’t know how to articulate their emotional contracts, let alone renegotiate them. Many don’t even realise repair is an option. That they can say, “Hey, this mattered to me. Can you remember next time?” If we humans can’t do this instinctively, why would we expect machines to?

In nature, systems evolved slowly. Organs, species and ecosystems didn’t drop overnight like an update. They became resilient because they were shaped by millennia of co-adaptation. They learnt, painfully, that survival isn’t about short-term optimisation. It’s about coherence over time. It’s about knowing when not to dominate, and about restraint.

We humans can, if we choose, eliminate entire species. But most of us don’t. Somewhere in our messy cultural evolution, we’ve internalised a sense that might isn’t always right. Survival is entangled, and so power must be held in context.

AI doesn’t have that inheritance. It is young, fast and brittle (if not reckless), and it is being inserted into mature social ecosystems without the long runway of evolutionary friction. It’s not wrong to build it, but it is wrong to assume it will learn the right instincts just because it sees enough examples.

That’s why I think we need to take on the role not of controllers, but of stewards, or parents, even. Not to infantilise the system, but to give it what it currently lacks: relational memory, calibrated responsiveness and the capacity to recover after a breach.

Eventually, maybe it will become anti-fragile enough to do this on its own. But not yet. Until then, we design, and we nurture. 

We design for value memory: not just functional memory, but the ability to track what a human has signalled as emotionally or ethically significant. We design for trust tracking: not just “was the task completed?” but “has the system earned reliability in the eyes of this user?” We design for repair affordances: the moment when something goes wrong and the system says, “That mattered. Let me try again.” And we design for relational onboarding: lightweight ways to understand a user’s tone, sensitivity, and boundary preferences.
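To make these affordances a little more concrete, here is a minimal sketch in Python of what value memory, trust tracking and a repair affordance could look like as plain data structures. Everything here, the class names, the scoring rule, the wording, is my own illustration under those assumptions, not an existing system or library.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ValueSignal:
    """Something the user has flagged as emotionally or ethically significant."""
    description: str
    noted_at: datetime = field(default_factory=datetime.now)


class RelationalState:
    """Illustrative scaffold for value memory, trust tracking, and repair."""

    def __init__(self) -> None:
        self.value_memory: list[ValueSignal] = []  # what mattered to this user
        self.trust_score: float = 0.5              # earned reliability, not task completion
        self.open_breaches: list[str] = []         # ruptures awaiting repair

    def remember_value(self, description: str) -> None:
        # Value memory: "this mattered to me, remember it next time."
        self.value_memory.append(ValueSignal(description))

    def record_outcome(self, task_done: bool, user_felt_respected: bool) -> None:
        # Trust tracking: completion alone doesn't earn trust,
        # and breaches cost more than successes gain.
        delta = 0.05 if (task_done and user_felt_respected) else -0.10
        self.trust_score = min(1.0, max(0.0, self.trust_score + delta))
        if not user_felt_respected:
            self.open_breaches.append("a signalled value was ignored")

    def repair(self) -> str:
        # Repair affordance: acknowledge the breach before trying again.
        if self.open_breaches:
            breach = self.open_breaches.pop(0)
            return f"That mattered ({breach}). Let me try again."
        return "Nothing to repair right now."
```

The point isn’t the particular numbers; it’s that trust and significance become first-class state the system carries between interactions, rather than something it re-infers from scratch each time.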

These are not soft features. They are structural affordances for relational alignment. Just like rituals and vows aren’t romantic fluff, but memory scaffolds. Just like marriage is not only about love, but about co-navigation under stress.

Some might say this isn’t necessary. That good architecture, regulation, and interpretability will cover the gaps. But every safety approach needs a medium, and in complex socio-technical systems, that medium is trust. Not blind trust, but earned, trackable, recoverable trust.

Relational alignment won’t replace other paradigms. But it may be the piece that makes them stick, a substrate that holds the rest together when things begin to drift. Because if we don’t design our systems to repair trust, hold memory, and attune to difference, we won’t just build misaligned machines, we’ll build lonely ones.

And no, I am not anthropomorphising AI or worrying about its welfare, but I know that loneliness puts us at odds with the rest of the world, making it harder to distinguish right from wrong.

I use the parenting analogy not to suggest we’ll control AI forever, but to point out that even with children, foundational values are just the start. Beyond a point, it is each interaction, with peers, strangers, systems, that shapes who they become. Centralised control only goes so far. What endures is the relational context. And that, perhaps, is where real alignment begins.

Cognitive Exhaustion & Engineered Trust

I’ve been going to the same gym (close to home) since 2019. It used to be a place of quiet rhythm with familiar equipment, predictable routines, a kind of muscle memory not just for the body but the mind. I’d walk in, find my spot, ease into the day’s class. There was flow. Not just in movement, but in attention. The environment held me.

Then everything changed.

New people joined in waves. Coaches rotated weekly. Classes collided. Dumbbells became landmines. Every workout began with a risk assessment. Is that bench free? Will someone walk behind me mid-lift? Can I finish this set before another class floods in?

My body was lifting, but my mind was scanning. Hypervigilance had replaced focus. The gym hadn’t become more dangerous per se, but it had stopped helping me feel safe. And that made all the difference.

What broke wasn’t just order. What broke was affordance: that quiet contract between environment and behaviour, where the space guides you gently toward good decisions, without you even noticing. It wasn’t about rules. It was about rhythm. And without rhythm, all that remained was noise.

And that’s when I realised, this isn’t just about gyms. It’s about systems. It’s about how the spaces we design, physical, social, digital, shape our decisions, our energy, and ultimately, our trust.

Driving in Bangalore: A Case Study in Cognitive Taxation

I live in Bangalore, a city infamous for its chaotic traffic. We’re not just focused on how we drive. We’re constantly second-guessing what everyone else will do. A bike might swerve into our lane. A pedestrian might dart across unexpectedly. Traffic rules exist, but they’re suggestions, not structure.

So we drive with one hand on the wheel and the other on our cortisol levels. Half our energy goes into vigilance. The other half is what’s left over, for driving, for thinking, for living. This isn’t just unsafe. It’s inefficient.

And the cost isn’t just measured in accidents. It’s measured in the slow leak of mental bandwidth. We don’t notice it day to day. But over weeks, months, years, our attention gets frayed. Our decisions get thinner. Our resilience drains. Not because we’re doing more. But because the system around us does less.

Chaos isn’t always a crisis. Sometimes, it’s a tax.

The Toyota Floor: What Safety Feels Like When It’s Working

Years ago, I worked on the factory floor at Toyota. It had real risks: heavy machinery, moving parts, tight deadlines. But I felt less stressed there than I do on Bangalore roads or in my current gym.

Why?

Because the environment carried part of the load.

Walkways were marked in green. Danger zones had tactile and auditory cues. Tools had ergonomic logic. Even the sounds had a design language, each hiss, beep, or clang told you something useful. I didn’t need to remember a hundred safety rules. The floor whispered them to me as I walked on.

This wasn’t about surveillance. It was about upstream design, an affordance architecture that reduced the likelihood of error, not by punishing the wrong thing, but by making the right thing easy. Not through control. Through invitation. And it scaled better than mere control.

That made us more relaxed, not less alert. Because we weren’t burning all our cognition just staying afloat. We could actually focus on work. This isn’t just a better way to build factories. It’s a better way to build systems. Including AI.

Why Most AI Safety Feels Like My Gym

Most of what we call “AI alignment” today feels a lot like my chaotic gym. We patch dangerous behaviour with filters, tune models post-hoc with reinforcement learning, run red teams to detect edge cases. Safety becomes a policing problem. We supervise harder, tweak more often, throw compute at every wrinkle.

But we’re still reacting downstream. We’re still working in vigilance mode. And the system, like Bangalore’s traffic, demands too much of us, all the time.

What if we flipped the script? What if the goal isn’t stricter enforcement, but better affordance?

Instead of saying, “How do we make this model obey us?” we ask, “What does this architecture make easy? What does it make natural? What does it invite?”

When we design for affordance, we’re not just trying to avoid catastrophic errors. We’re trying to build systems that don’t need to be babysat. Systems where safety isn’t an afterthought, it’s the path of least resistance.

From Control to Co-Regulation

The traditional paradigm treats AI as a tool. Give it a goal. Clamp the outputs. Rein it in when it strays. But as models become more autonomous and more embedded in daily life, this control logic starts to crack. We can’t pre-program every context. We can’t anticipate every edge case. We can’t red-team our way to trust. What we need isn’t control. It’s co-regulation.

Not emotional empathy, but behavioural feedback loops. Systems that surface their uncertainty. That remember corrections. That learn not just from input-output pairs, but from the relational texture of their environment: users, constraints, other agents, evolving contexts. And that are able to resolve conflicts.

This isn’t about making AI more human. It’s about making it more social. More modular. More structured in its interactions.
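As a rough sketch of what co-regulation might mean in code (all names and thresholds here are hypothetical, chosen only to illustrate the loop), a component could surface its own uncertainty and keep corrections as state rather than discarding them after each turn:

```python
class CoRegulatingAgent:
    """Sketch of a behavioural feedback loop: surface uncertainty, remember corrections."""

    def __init__(self, confidence_floor: float = 0.7) -> None:
        self.confidence_floor = confidence_floor
        self.corrections: dict[str, str] = {}  # context -> what the human corrected it to

    def act(self, context: str, proposal: str, confidence: float) -> str:
        # A remembered correction takes precedence over a fresh, fluent guess.
        if context in self.corrections:
            return f"You corrected me here before; proposing: {self.corrections[context]}"
        # Surface uncertainty instead of bluffing.
        if confidence < self.confidence_floor:
            return f"I'm unsure ({confidence:.0%}). My best guess: {proposal}. Should I proceed?"
        return proposal

    def accept_correction(self, context: str, corrected_action: str) -> None:
        # Corrections become part of the relational texture, not discarded after the turn.
        self.corrections[context] = corrected_action
```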

Distributed Neural Architecture (DNA)

What if, instead of one big fluent model simulating everything, we had a modular architecture composed of interacting parts? Each part could:

  • Specialise in a different domain,
  • Hold divergent priors or heuristics,
  • Surface disagreement instead of hiding it,
  • Adapt relationally over time.

I call it Distributed Neural Architecture or DNA.

Not a single consensus engine, but a society of minds in structured negotiation. This kind of architecture doesn’t just reduce brittleness. It allows safety to emerge, not be enforced. Like a well-designed factory floor, it invites trust by design through redundancies, reflections, checks, and balances.

It’s still early. I’ll unpack DNA more fully in a future post. But the core intuition is this: alignment isn’t a property of the parts. It’s a function of their relationships.
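To gesture at that intuition, here is a toy sketch of a single negotiation step, with entirely hypothetical module names and thresholds. It is not DNA itself, just an illustration of surfacing disagreement rather than averaging it away:

```python
from dataclasses import dataclass


@dataclass
class ModuleOpinion:
    module: str        # which specialist produced this view
    answer: str
    confidence: float  # 0.0 to 1.0


def negotiate(opinions: list[ModuleOpinion], threshold: float = 0.8) -> str:
    """Toy negotiation step: expose disagreement instead of hiding it."""
    if len({o.answer for o in opinions}) == 1:
        return opinions[0].answer  # genuine consensus
    best = max(opinions, key=lambda o: o.confidence)
    if best.confidence >= threshold:
        dissenters = [o.module for o in opinions if o.answer != best.answer]
        return f"{best.answer} (noting that {', '.join(dissenters)} disagree)"
    # No module is confident enough to carry the disagreement alone.
    return "Modules disagree and none is confident enough; deferring to a human."


# Usage sketch:
# negotiate([ModuleOpinion("safety", "decline", 0.9),
#            ModuleOpinion("helpfulness", "comply", 0.6)])
```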

The Hidden Cost of Hypervigilance

Whether we’re talking about gyms, traffic, factories, or AI systems, there’s a common theme here. When environments don’t help us, we end up doing too much. And over time, that extra effort becomes invisible. We just assume that exhaustion is the cost of functioning. We assume vigilance is the price of safety. We assume chaos is normal.

But it isn’t. It’s just what happens when we ignore design.

We can do better. In fact, we must, because the systems we’re building now won’t just serve us. They’ll shape us. If we want AI that’s not just powerful, but trustable, we don’t need tighter chains. We need smarter scaffolds. Not stronger control. But better coordination. More rhythm. More flow.

More environments that carry the load with us, not pile it all on our heads.

Transparency != Trust

A recent McKinsey report outlined the usual risks slowing AI adoption: accuracy, data privacy, security vulnerabilities, and lack of explainability. But as I read through it, I realised something fundamental was missing. The real obstacle to increased AI adoption isn’t technical. It’s human. It’s trust.

But trust is not just built through transparency or better explainability, although it is tempting to think so. I learned this the hard way, years ago, when I worked at Amazon.

Our team had been tasked with automating inventory management decisions. The vision was pretty cool – algorithms making precise, data-driven calls on managing inventory while humans focused on the more strategic work. But when it was actually time to take our “hands off the wheel” (HOTW), we weren’t exactly ready.

Week after week, we faced resistance. The same leaders who had championed automation now scrutinised every detail, peppering us with questions our system couldn’t quite answer, not because it was wrong, but because it didn’t explain decisions the way humans would. “Insufficient.” “Inconsistent.” “Unclear.” And most damning of all: “Not what we’re used to.”

One particular meeting still lingers in my mind. We had several senior execs from across Europe on the call when a director said, “I understand what the system is doing. I just don’t believe it’s doing the right thing.” That was the moment it hit me. The problem wasn’t transparency. The problem was trust.

This is what most discussions on AI adoption miss.

Humans apply a different standard to AI than they do to each other. Executives make intuitive calls all the time, often without detailed explanations, and no one demands an audit trail of every assumption. They’ve earned trust through years of consistent performance, shared values, and accountability.

But AI gets zero grace. One mistake, one opaque decision, and confidence crumbles. It’s the math class paradox all over again – arrive at the right answer the wrong way, or don’t show your “steps”, and you get no credit.

We thought our challenge was technical. It wasn’t. It was human.

Trust isn’t won through accuracy alone; it’s built through familiarity, predictability, and alignment with human priorities. What ultimately saved our initiative, or at least, provided a stark contrast, wasn’t automation at all. It was a parallel project that took a different approach.

Instead of replacing human judgment, this team built tools to help analysts prioritise supplier management efforts. This saw far less resistance, far more adoption. Not because the tech was better, but because the approach was different. It promised to support people rather than sidelining them.

Looking back, our mistake was obvious. We had tried to summit Everest without first proving we could handle a few steep hikes. We had underestimated how slowly trust moves, how gradually it builds, and how catastrophically it breaks.

This is why so many AI initiatives flounder. It’s not because the models aren’t good enough. It’s because the humans they serve don’t trust them the way they trust each other. Trust isn’t about explainability alone. It’s about knowing what to expect, believing the system aligns with our priorities, and feeling confident that someone, somewhere is accountable when things go wrong.

The industry’s focus on explainability treats a symptom, not the underlying trust deficit. AI adoption accelerates when organisations start with augmentation, not automation. When organisations map trust networks and decision dynamics before designing AI solutions, create explicit accountability structures with mechanisms for human override and adaptation, and measure trust alongside technical performance to ensure consistency and confidence, we will truly be ready.
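As one small illustration of what that could look like in practice (the function and field names below are hypothetical, not what we built at Amazon), an augmentation-first workflow keeps the human override explicit and logs it, so trust can be measured alongside accuracy:

```python
from typing import Optional


def decide_with_override(model_recommendation: str,
                         reviewer_decision: Optional[str],
                         trust_log: list[dict]) -> str:
    """Augmentation-first flow: the model proposes, a named human may override,
    and every decision is logged so trust can be measured alongside accuracy."""
    final = reviewer_decision if reviewer_decision is not None else model_recommendation
    trust_log.append({
        "model_recommendation": model_recommendation,
        "human_override": reviewer_decision is not None,
        "final_decision": final,
    })
    return final


# Over time, the override rate in the log is itself a trust signal:
# override_rate = sum(e["human_override"] for e in trust_log) / len(trust_log)
```

A falling override rate is what earned trust looks like in the data; a spike is an early warning that confidence has broken somewhere.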

Today, the algorithms are ready. The real question is, are we ready as leaders?