Relational Design Can’t Be Left to Chance

We say alignment is about control, safety, precision. But after a decade working as a matchmaker in India’s increasingly chaotic relationship market, I’ve learnt that what sustains a system isn’t control, it’s trust. And trust doesn’t live in rules. It lives in memory, repair, and mutual adaptation.

I’ve spent years watching relationships fall apart not because people weren’t compatible, but because they didn’t know how to collaborate. We are fluent in chemistry, but clumsy with clarity. We optimise for traits, not values or processes. And when conflict hits, as it always does, we have no shared playbook to return to.

In traditional Indian matchmaking, we had a whole socio-structural scaffolding propping up long-term collaboration through race or caste endogamy, community expectations, family intermediation, shared rituals and rites. It was crude and often unjust, but it was structurally coherent. Marriage was not just a bond between two people, but between two lineages, empires and philosophies of life. There were rules, expectations and fallback norms. Vows weren’t just ceremonial; they were memory devices, reminding people what they were committing to when emotions faded.

Today, most of that scaffolding is gone. 

Tinder has replaced the community priest or matchmaker, and in this frictionless new marketplace, we are left to figure out long-term cooperation with short-term instincts. Even when we genuinely care for each other, we often collapse under the weight of ambiguity. We never clarify what we mean by commitment. We never learn how to repair after rupture. And we assume love will make things obvious.

But love doesn’t make things obvious. Context does, and maybe design too.

This isn’t just about marriage. It’s about systems, and it’s about alignment.

Much of the current conversation on AI alignment focuses on architecture, oversight, corrigibility and formal guarantees. All of that is necessary, and I am not disputing any of it. But I don’t see AI in isolation, because we humans are building it, for us, and so I can’t help but view it through the lens of collaboration or partnership.

In human systems, I’ve rarely seen misalignment fixed by control. I’ve seen it fixed by context, memory, feedback, and repair. Not all of which can be coded cleanly into an objective function.

I’ve watched couples disintegrate not because of what happened, but because it kept happening. The breach wasn’t just an error. It was a pattern that wasn’t noticed, a pain that wasn’t remembered and a signal that wasn’t acknowledged.

Systems that don’t track trust will inevitably erode it.

It’s tempting to think that AI, given enough data, will learn all this on its own. That it will intuit human needs, pick up patterns and converge on stable behaviours. But from the relational world, I’ve learnt that learning isn’t enough; the structural scaffolding that sustains it matters too.

Most humans don’t know how to articulate their emotional contracts, let alone renegotiate them. Many don’t even realise repair is an option. That they can say, “Hey, this mattered to me. Can you remember next time?” If we humans can’t do this instinctively, why would we expect machines to?

In nature, systems evolved slowly. Organs, species, ecosystems: they didn’t drop overnight like an update. They became resilient because they were shaped by millennia of co-adaptation. They learnt, painfully, that survival isn’t about short-term optimisation. It’s about coherence over time. It’s about knowing when not to dominate, and about restraint.

We humans can, if we choose, eliminate entire species. But most of us don’t. Somewhere in our messy cultural evolution, we’ve internalised a sense that might isn’t always right. Survival is entangled, and so, power must be held in context.

AI doesn’t have that inheritance. It is young, fast and brittle (if not reckless), and it is being inserted into mature social ecosystems without the long runway of evolutionary friction. It’s not wrong to build it, but it is wrong to assume it will learn the right instincts just because it sees enough examples.

That’s why I think we need to take on the role not of controllers, but of stewards, or parents, even. Not to infantilise the system, but to give it what it currently lacks: relational memory, calibrated responsiveness and the capacity to recover after a breach.

Eventually, maybe it will become anti-fragile enough to do this on its own. But not yet. Until then, we design, and we nurture. 

We design for value memory: not just functional memory, but the ability to track what a human has signalled as emotionally or ethically significant. We design for trust tracking: not just “was the task completed?” but “has the system earned reliability in the eyes of this user?” We design for repair affordances: the moment when something goes wrong and the system says, “That mattered. Let me try again.” We design for relational onboarding: lightweight ways to understand a user’s tone, sensitivity, and boundary preferences.
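To make this concrete, here is a minimal sketch of what those affordances might look like as plain data structures. Everything in it, the class names, the fields, the thresholds, is hypothetical illustration of the shape of the idea, not a reference to any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class ValueSignal:
    """Something the user has marked as emotionally or ethically significant."""
    description: str           # e.g. "read the table columns carefully"
    times_reinforced: int = 1  # how often the user has had to repeat this

@dataclass
class RelationalState:
    """Hypothetical per-user relational layer: value memory plus a trust score."""
    value_memory: list[ValueSignal] = field(default_factory=list)
    trust: float = 0.5  # earned reliability in this user's eyes, 0..1

    def record_signal(self, description: str) -> None:
        """Value memory: track what the user said matters, not just task facts."""
        for signal in self.value_memory:
            if signal.description == description:
                signal.times_reinforced += 1
                return
        self.value_memory.append(ValueSignal(description))

    def record_outcome(self, task_ok: bool, breached_signal: bool) -> None:
        """Trust tracking: breaching a remembered signal costs more than an error."""
        if breached_signal:
            self.trust = max(0.0, self.trust - 0.2)   # breaches erode trust fast
        elif task_ok:
            self.trust = min(1.0, self.trust + 0.05)  # reliability rebuilds slowly

    def needs_repair(self) -> bool:
        """Repair affordance: below this threshold, acknowledge and adapt,
        don't just apologise."""
        return self.trust < 0.4
```

The numbers are arbitrary; the shape is the point. Significance is remembered and reinforced, reliability is scored over time rather than assumed, and repair is triggered by state, not by sentiment.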

These are not soft features. They are structural affordances for relational alignment. Just like rituals and vows aren’t romantic fluff, but memory scaffolds. Just like marriage is not only about love, but about co-navigation under stress.

Some might say this isn’t necessary. That good architecture, regulation, and interpretability will cover the gaps. But every safety approach needs a medium, and in complex socio-technical systems, that medium is trust. Not blind trust, but earned, trackable, recoverable trust.

Relational alignment won’t replace other paradigms. But it may be the piece that makes them stick: a substrate that holds the rest together when things begin to drift. Because if we don’t design our systems to repair trust, hold memory, and attune to difference, we won’t just build misaligned machines, we’ll build lonely ones.

And no, I am not anthropomorphising AI or worrying about its welfare. But I know that loneliness puts us at odds with the rest of the world, making it harder to distinguish right from wrong.

I use the parenting analogy not to suggest we’ll control AI forever, but to point out that even with children, foundational values are just the start. Beyond a point, it is each interaction, with peers, strangers, systems, that shapes who they become. Centralised control only goes so far. What endures is the relational context. And that, perhaps, is where real alignment begins.

Can You Care Without Feeling?

One day, I corrected an LLM for misreading some data in a table I’d shared. Again. Same mistake. Same correction. Same hollow apology.

“You’re absolutely right! I should’ve been more careful. Here’s the corrected version blah blah blah.”

It didn’t sound like an error. It sounded like a partner who’s mastered the rhythm of an apology but not the reality of change.

I wasn’t annoyed by the model’s mistake. I was unsettled by the performance, the polite, well-structured, emotionally intelligent-sounding response that suggested care, with zero evidence of memory. No continuity. No behavioural update. Just a clean slate and a nonchalant tone.

We’ve built machines that talk like they care, but don’t remember what we told them yesterday. Sigh.

The Discomfort Is Relational

In my previous essays, I’ve explored how we’re designing AI systems the way we approach arranged marriages, optimising for traits while forgetting that relationships are forged in repair, not specs. I’ve argued that alignment isn’t just a technical challenge; it’s a relational one, grounded in trust, adaptation, and the ongoing work of being in conversation.

This third essay comes from a deeper place. A place that isn’t theoretical or abstract. It’s personal. Because when something pretends to care, but shows no sign that we ever mattered, that’s not just an error. That’s a breach.

And it’s eerily familiar.

A Quiet Moment in the Kitchen

Recently, I scolded my 8-year-old for something. She shut down, stormed off. Normally, I’d go after her. But that day, I was fried.

Later, I was in the kitchen, quietly loading the dishwasher, when she walked in and asked, “Mum, do you still love me when you’re upset with me?” I was unsure where this was coming from, but simply said, “Of course, baby. Why do you ask?” She paused, and then said, “… because you have that look like you don’t.”

That’s the thing about care. It isn’t what we say, it’s what we do. It’s what we adjust. It’s what we hold onto even when we’re tired. She wasn’t asking for reassurance. She was asking for relational coherence.

So was I, when the LLM said sorry and then forgot me again.

Care Is a System, Not a Sentiment

We’ve taught machines to simulate empathy, to say things like “I understand” or “I’ll be more careful next time.” But without memory, there’s no follow-through. No behavioural trace. No evidence that anything about us registered.

This results in machines that feel more like people-pleasers than partners. High verbal fluency, low emotional integrity. This isn’t just bad UX. It’s a fundamental misalignment. A shallow mimicry of care that collapses under the weight of repetition.

What erodes trust isn’t failure. It’s the apology without change. The simulation of care without continuity.

So, what is care, really?

Not empathy. Not affection. Not elaborate prompts or personality packs.

Care can be as simple as memory with meaning. I want behavioural updates, not just verbal flourishes. I want trust not because the model sounds warm, but because it acts aware. 

That’s not emotional intelligence. That’s basic relational alignment.

If we’re building systems that interact with humans, we don’t need to simulate sentiment. But we do need to track significance. We need to know what matters to this user, in this context, based on prior signals.
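Concretely, tracking significance could be as mundane as a lookup before every response. Here is a hedged sketch, reusing the hypothetical value memory from the earlier sketch; the matching is deliberately crude, and a real system would match meaning, not words.

```python
def relevant_signals(context: str, value_memory: list) -> list:
    """Before responding, surface prior signals that apply to this context.

    Hypothetical helper: assumes each stored signal has `description` and
    `times_reinforced` fields, as in the earlier RelationalState sketch.
    Naive word overlap stands in for real semantic matching.
    """
    words = set(context.lower().split())
    matches = [s for s in value_memory
               if words & set(s.description.lower().split())]
    # Prioritise what the user has had to repeat: repetition signals pain.
    return sorted(matches, key=lambda s: s.times_reinforced, reverse=True)
```

Sorting by reinforcement encodes the intuition above: the things a user has had to repeat are the things that hurt.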

Alignment as Behavioural Coherence

This is where it gets interesting.

Historically, we trusted machines to be cold but consistent. No feeling, but no betrayal. Now, AI systems talk like people, complete with hedging, softening, and mirroring our social tics. But they don’t carry the relational backbone we rely on in real trust: memory, calibration, adaptation and accountability.

They perform care without its architecture. Like a partner who says, “You matter,” but keeps repeating the same hurtful thing.

What we need is not more data. We need structured intervention. Design patterns that support pause, reflection, feedback integration, and pattern recognition over time. Something closer to a prefrontal cortex than a parrot.
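In code, a structured intervention might be nothing grander than a gate between correction and reply: pause, check memory for the same pattern, and only then speak. A minimal sketch, assuming the hypothetical RelationalState from the first sketch; the replies are canned stand-ins for whatever the underlying model would generate.

```python
def respond_to_correction(correction: str, state: RelationalState) -> str:
    """Pause-and-reflect gate: before apologising, check whether this
    correction matches a pattern the user has already flagged.
    """
    state.record_signal(correction)
    repeated = any(s.description == correction and s.times_reinforced > 1
                   for s in state.value_memory)
    state.record_outcome(task_ok=False, breached_signal=repeated)

    if repeated:
        # Feedback integration: name the pattern and change the plan,
        # instead of producing another fluent, memoryless apology.
        return ("I've made this exact mistake before. Re-checking the "
                "table row by row before answering this time.")
    return "Noted. Correcting it now."
```

This is closer to a reflex than a prefrontal cortex, of course. But it is the difference between the model in my story apologising again, and the model noticing that it is apologising again.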

As someone who’s spent a decade decoding how humans build trust, whether in relationships, organisations, or policy systems, I’ve come to believe …

Trust isn’t built in words. It’s built in what happens after them.

So no, I don’t need my AI systems to feel. But I do need them to remember.

To demonstrate that what I said yesterday still matters today.

That would be enough.

AI, Alignment & the Art of Relationship Design

We don’t always know what we’re looking for until we stop looking for what we are told to want.

When I worked as a relationship coach, most people came to me with a list. A neat, itemised checklist of traits their future partner must have. Tall. Intelligent. Ambitious. Spiritual. Funny but not flippant. Driven but not a workaholic. Family-oriented but not clingy. The wish-lists were always oddly specific and wildly contradictory.

Most of them came from a place of fear. The fear of choosing wrong. The fear of heartbreak. The fear of regret.

I began to notice a pattern. We don’t spend enough time asking ourselves what kind of relationship we want to build. We outsource the work of introspection to conditioning, and compensate for confusion with checklists. Somewhere along the way, we forget that the person is not the relationship. The traits don’t guarantee the experience.

So I asked my clients to flip the script. Instead of describing a person, describe the relationship. What does it feel like to come home to each other? What are conversations like during disagreements? How do we repair? What values do we build around?

Slowly, something shifted. When we design the relationship first, we begin to recognise the kind of person who can build it with us. Our filters get sharper. Our search gets softer. We stop hunting for trophies and start looking for partners.

I didn’t know it then, but that framework has stayed with me. It still lives in my questions. Only now, the relationship I’m thinking about isn’t romantic. It’s technological.

Whether we realise it or not, we are not just building artificial intelligence, we are curating a relationship with it. Every time we prompt, correct, collaborate, learn, or lean on it, we’re shaping not just what it does, but who we become alongside it.

Just like we do with partners, we’re obsessing over its traits. Smarter. Faster. More efficient. More capable. The next version. The next benchmark. The perfect model.

But what about the relationship?

What kind of relationship are we designing with AI? Through it? Around it?

We call it “alignment”, but much of it still smells like control. We want AI to obey. To behave. To respond predictably. We say “safety”, but often we mean submission. We want performance, but not presence. Help, but not opinion. Speed, but not surprise.

It reminds me of the well-meaning aunties in the marriage market. Impressed by degrees, salaries, and skin tone. Convinced that impressive credentials are the same as long-term compatibility. It’s a comforting illusion. But it rarely works out that way.

Because relationships aren’t made in labs. They’re made in moments. In messiness. In the ability to adapt, apologise, recalibrate. It’s not about how smart AI is. It’s about how safe we feel with AI when it is wrong.

So what if we paused the chase for capabilities, and asked a different question?

What values do we want this relationship to be built on?

Trust, perhaps. Transparency. Context. Respect. An ability to say “I don’t know”. To listen. To course-correct. To stay in conversation without taking over.

What if we wanted AI that made us better? Not just faster or more productive, but more aware. More creative. More humane. That kind of intelligence isn’t artificial. It’s collaborative.

For that, we need a different kind of design. One that reflects our values, not just our capabilities. One that prioritises the quality of interaction, not just the quantity of output. One that knows when to lead, and when to listen.

We’re not building tools. We’re building relationships.

The sooner we start designing for this, the better our chances of coexisting, collaborating, and even growing together.

Because if we get the relationship right, the intelligence will follow.