Relational Design Can’t Be Left to Chance

We say alignment is about control, safety, precision. But after a decade working as a matchmaker in India’s increasingly chaotic relationship market, I’ve learnt that what sustains a system isn’t control, it’s trust. And trust doesn’t live in rules. It lives in memory, repair, and mutual adaptation.

I’ve spent years watching relationships fall apart not because people weren’t compatible, but because they didn’t know how to collaborate. We are fluent in chemistry, but clumsy with clarity. We optimise for traits, not values or processes. And when conflict hits, as it always does, we have no shared playbook to return to.

In traditional Indian matchmaking, we had a whole socio-structural scaffolding propping up long-term collaboration through race or caste endogamy, community expectations, family intermediation, shared rituals and rites. It was crude and often unjust, but it was structurally coherent. Marriage was not just a bond between two people, but between two lineages, empires and philosophies of life. There were rules, expectations and fallback norms. Vows weren’t just ceremonial; they were memory devices, reminding people what they were committing to when emotions faded.

Today, most of that scaffolding is gone. 

Tinder has replaced the community priest or matchmaker, and in this frictionless new marketplace, we are left to figure out long-term cooperation with short-term instincts. Even when we genuinely care for each other, we often collapse under the weight of ambiguity. We never clarify what we mean by commitment. We never learnt how to repair after rupture, and we assumed love would make things obvious.

But love doesn’t make things obvious, context does, and maybe design too.

This isn’t just about marriage, it’s about systems and it’s about alignment.

Much of the current conversation on AI alignment focuses on architecture, oversight, corrigibility and formal guarantees. All of that is necessary, and I am not disputing any of it. But I don’t see AI in isolation, because we humans are building it, for us, and so I can’t help but view it through the lens of collaboration, or partnership.

In human systems, I’ve rarely seen misalignment fixed by control. I’ve seen it fixed by context, memory, feedback, and repair. Not all of which can be coded cleanly into an objective function.

I’ve watched couples disintegrate not because of what happened, but because it kept happening. The breach wasn’t just an error. It was a pattern that wasn’t noticed, a pain that wasn’t remembered and a signal that wasn’t acknowledged.

Systems that don’t track trust will inevitably erode it.

It’s tempting to think that AI, given enough data, will learn all this on its own. That it will intuit human needs, pick up patterns and converge on stable behaviours. But from the relational world, I’ve learnt that learning isn’t enough; structural scaffolding for sustenance matters.

Most humans don’t know how to articulate their emotional contracts, let alone renegotiate them. Many don’t even realise repair is an option. That they can say, “Hey, this mattered to me. Can you remember next time?” If we humans can’t do this instinctively, why would we expect machines to?

In nature, systems evolved slowly. Organs, species and ecosystems; they didn’t drop overnight like an update. They became resilient because they were shaped by millennia of co-adaptation. They learnt, painfully, that survival isn’t about short-term optimisation. It’s about coherence over time. It’s about knowing when not to dominate, and about restraint.

We humans can, if we choose, eliminate entire species. But most of us don’t. Somewhere in our messy cultural evolution, we’ve internalised a sense that … might isn’t always right. Survival is entangled, and so, power must be held in context.

AI doesn’t have that inheritance. It is young, fast and brittle (if not reckless), and it is being inserted into mature social ecosystems without the long runway of evolutionary friction. It’s not wrong to build it, but it is wrong to assume it will learn the right instincts just because it sees enough examples.

That’s why I think we need to take on the role not of controllers, but of stewards, or parents, even. Not to infantilise the system, but to give it what it currently lacks: relational memory, calibrated responsiveness and the capacity to recover after breach.

Eventually, maybe it will become anti-fragile enough to do this on its own. But not yet. Until then, we design, and we nurture. 

We design for value memory: not just functional memory, but the ability to track what a human has signalled as emotionally or ethically significant. We design for trust tracking: not just “was the task completed?” but “has the system earned reliability in the eyes of this user?” We design for repair affordances: the moment when something goes wrong and the system says, “That mattered. Let me try again.” We design for relational onboarding: lightweight ways to understand a user’s tone, sensitivity, and boundary preferences.
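To make that concrete, here is a minimal sketch of what those four affordances might look like in code. Every name and number is hypothetical; this is a thought experiment, not an implementation.

```python
# A hypothetical sketch, not a real API: per-user relational state kept alongside task memory.
from dataclasses import dataclass, field

@dataclass
class RelationalState:
    """Relational memory for one user, separate from functional/task memory."""
    value_signals: list = field(default_factory=list)   # value memory: what the user flagged as mattering
    trust_score: float = 0.5                             # trust tracking: earned reliability, 0..1
    preferences: dict = field(default_factory=dict)      # relational onboarding: tone, sensitivities, boundaries

    def remember_value(self, signal: str) -> None:
        """Value memory: store what was signalled as emotionally or ethically significant."""
        self.value_signals.append(signal)

    def record_outcome(self, kept_commitment: bool) -> None:
        """Trust tracking: reliability is earned slowly and lost quickly, not reset per task."""
        delta = 0.05 if kept_commitment else -0.15
        self.trust_score = min(1.0, max(0.0, self.trust_score + delta))

    def repair(self, breach: str) -> str:
        """Repair affordance: acknowledge the breach and commit it to value memory."""
        self.remember_value(breach)
        return f"That mattered. I've noted '{breach}' and will try again."

# Relational onboarding: seed preferences up front instead of inferring them the hard way.
state = RelationalState(preferences={"tone": "direct", "sensitive_topics": ["deadlines"]})
state.record_outcome(kept_commitment=False)
print(state.repair("missed deadline reminder"), round(state.trust_score, 2))
```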

These are not soft features. They are structural affordances for relational alignment. Just like rituals and vows aren’t romantic fluff, but memory scaffolds. Just like marriage is not only about love, but about co-navigation under stress.

Some might say this isn’t necessary. That good architecture, regulation, and interpretability will cover the gaps. But every safety approach needs a medium, and in complex socio-technical systems, that medium is trust. Not blind trust, but earned, trackable, recoverable trust.

Relational alignment won’t replace other paradigms. But it may be the piece that makes them stick: a substrate that holds the rest together when things begin to drift. Because if we don’t design our systems to repair trust, hold memory, and attune to difference, we won’t just build misaligned machines, we’ll build lonely ones.

And no, I am not anthropomorphising AI or worrying about its welfare, but I know that loneliness puts us at odds with the rest of the world, making it harder to distinguish right from wrong.

I use the parenting analogy not to suggest we’ll control AI forever, but to point out that even with children, foundational values are just the start. Beyond a point, it is each interaction, with peers, strangers, systems, that shapes who they become. Centralised control only goes so far. What endures is the relational context. And that, perhaps, is where real alignment begins.

Cognitive Exhaustion & Engineered Trust

I’ve been going to the same gym (close to home) since 2019. It used to be a place of quiet rhythm with familiar equipment, predictable routines, a kind of muscle memory not just for the body but the mind. I’d walk in, find my spot, ease into the day’s class. There was flow. Not just in movement, but in attention. The environment held me.

Then everything changed.

New people joined in waves. Coaches rotated weekly. Classes collided. Dumbbells became landmines. Every workout began with a risk assessment. Was that bench free? Will someone walk behind me mid-lift? Can I finish this set before another class floods in?

My body was lifting, but my mind was scanning. Hypervigilance had replaced focus. The gym hadn’t become more dangerous per se, but it had stopped helping me feel safe. And that made all the difference.

What broke wasn’t just order. What broke was affordance: that quiet contract between environment and behaviour, where the space guides you gently toward good decisions, without you even noticing. It wasn’t about rules. It was about rhythm. And without rhythm, all that remained was noise.

And that’s when I realised, this isn’t just about gyms. It’s about systems. It’s about how the spaces we design, physical, social, digital, shape our decisions, our energy, and ultimately, our trust.

Driving in Bangalore: A Case Study in Cognitive Taxation

I live in Bangalore, a city infamous for its chaotic traffic. We’re not just focused on how we drive. We’re constantly second-guessing what everyone else will do. A bike might swerve into our lane. A pedestrian might dart across unexpectedly. Traffic rules exist, but they’re suggestions, not structure.

So we drive with one hand on the wheel and the other on our cortisol levels. Half our energy goes into vigilance. The other half is what’s left over, for driving, for thinking, for living. This isn’t just unsafe. It’s inefficient.

And the cost isn’t just measured in accidents. It’s measured in the slow leak of mental bandwidth. We don’t notice it day to day. But over weeks, months, years, our attention gets frayed. Our decisions get thinner. Our resilience drains. Not because we’re doing more. But because the system around us does less.

Chaos isn’t always a crisis. Sometimes, it’s a tax.

The Toyota Floor: What Safety Feels Like When It’s Working

Years ago, I worked on the factory floor at Toyota. It had real risks: heavy machinery, moving parts, tight deadlines. But I felt less stressed there than I do on Bangalore roads or in my current gym.

Why?

Because the environment carried part of the load.

Walkways were marked in green. Danger zones had tactile and auditory cues. Tools had ergonomic logic. Even the sounds had a design language, each hiss, beep, or clang told you something useful. I didn’t need to remember a hundred safety rules. The floor whispered them to me as I walked on.

This wasn’t about surveillance. It was about upstream design, an affordance architecture that reduced the likelihood of error, not by punishing the wrong thing, but by making the right thing easy. Not through control. Through invitation. And it scaled better than mere control.

That made us more relaxed, not less alert. Because we weren’t burning all our cognition just staying afloat. We could actually focus on work. This isn’t just a better way to build factories. It’s a better way to build systems. Including AI.

Why Most AI Safety Feels Like My Gym

Most of what we call “AI alignment” today feels a lot like my chaotic gym. We patch dangerous behaviour with filters, tune models post-hoc with reinforcement learning, run red teams to detect edge cases. Safety becomes a policing problem. We supervise harder, tweak more often, throw compute at every wrinkle.

But we’re still reacting downstream. We’re still working in vigilance mode. And the system, like Bangalore’s traffic, demands too much of us, all the time.

What if we flipped the script? What if the goal isn’t stricter enforcement, but better affordance?

Instead of saying, “How do we make this model obey us?” we ask, “What does this architecture make easy? What does it make natural? What does it invite?”

When we design for affordance, we’re not just trying to avoid catastrophic errors. We’re trying to build systems that don’t need to be babysat. Systems where safety isn’t an afterthought, it’s the path of least resistance.

From Control to Co-Regulation

The traditional paradigm treats AI as a tool. Give it a goal. Clamp the outputs. Rein it in when it strays. But as models become more autonomous and more embedded in daily life, this control logic starts to crack. We can’t pre-program every context. We can’t anticipate every edge case. We can’t red-team our way to trust. What we need isn’t control. It’s co-regulation.

Not emotional empathy, but behavioural feedback loops. Systems that surface their uncertainty. That remember corrections. That learn not just from input-output pairs, but from the relational texture of their environment: users, constraints, other agents, evolving contexts. Systems that are able to resolve conflicts.

This isn’t about making AI more human. It’s about making it more social. More modular. More structured in its interactions.

Distributed Neural Architecture (DNA)

What if, instead of one big fluent model simulating everything, we had a modular architecture composed of interacting parts? Each part could:

  • Specialise in a different domain,
  • Hold divergent priors or heuristics,
  • Surface disagreement instead of hiding it,
  • Adapt relationally over time.

I call it Distributed Neural Architecture or DNA.

Not a single consensus engine, but a society of minds in structured negotiation. This kind of architecture doesn’t just reduce brittleness. It allows safety to emerge, not be enforced. Like a well-designed factory floor, it invites trust by design through redundancies, reflections, checks, and balances.

It’s still early. I’ll unpack DNA more fully in a future post. But the core intuition is this: alignment isn’t a property of the parts. It’s a function of their relationships.

The Hidden Cost of Hypervigilance

Whether we’re talking about gyms, traffic, factories, or AI systems, there’s a common theme here. When environments don’t help us, we end up doing too much. And over time, that extra effort becomes invisible. We just assume that exhaustion is the cost of functioning. We assume vigilance is the price of safety. We assume chaos is normal.

But it isn’t. It’s just what happens when we ignore design.

We can do better. In fact, we must because the systems we’re building now won’t just serve us. They’ll shape us. If we want AI that’s not just powerful, but trustable, we don’t need tighter chains. We need smarter scaffolds. Not stronger control. But better coordination. More rhythm. More flow.

More environments that carry the load with us, not pile it all on our heads.

Relational Alignment

Recently, Dario Amodei, CEO of Anthropic, wrote about “AI welfare.” It got me thinking about the whole ecosystem of AI ethics, safety, interpretability, and alignment. We started by treating AI as a tool. Now we teeter on the edge of treating it as a being. In oscillating between obedience and autonomy, perhaps we’re missing something more essential – coexistence and collaboration.

Historically, we’ve built technology to serve human goals, then lamented the damage, then attempted repair. What if we didn’t follow that pattern with AI? What if we anchored the development of intelligent systems not just around outcomes, but around the relationships we hope to build with them?

In an earlier post, I compared the current AI design paradigm to arranged marriages: optimising for traits, ticking boxes, forgetting that the real relationship begins after the specs are met.

I ended that piece with a question …

What kind of relationship are we designing with AI?

This post is my attempt to sit with that question a little longer, and maybe go a level deeper.

From Obedience to Trust

We’re used to thinking of alignment in functional terms:

  • Does the system do what I ask?
  • Does it optimise the right metric?
  • Does it avoid catastrophic failure?

These are essential questions, especially at scale. But the experience of interacting with AI doesn’t happen in the abstract. It happens in the personal. In that space, alignment isn’t a solved problem. It’s a living process.

When I used to work with couples in conflict, I would often ask:

“Do you want to be right, or do you want to be in relationship?”

That question feels relevant again now, in the context of AI, because much of our current alignment discourse still smells like obedience. We talk about training models the way we talk about housebreaking a dog or onboarding a junior analyst.

But relationships don’t thrive on obedience. They thrive on trust, on care, attention, and the ability to repair when things go wrong.

Relational Alignment: A Reframe

Here’s the idea I’ve been sitting with …

What if alignment isn’t just about getting the “right” output, but about enabling mutual adaptation over time?

In this view, alignment becomes less about pre-specified rules and more about attunement. A relationally aligned system doesn’t just follow instructions, it learns what matters to you, and updates in ways that preserve emotional safety.

Let’s take a business example here: imagine a user relies on your AI system to track and narrate daily business performance. The model misstates a figure, off by a few basis points. That may not be catastrophic. But the user’s response will hinge on what they value: accuracy or direction. Are they in finance or operations? The same mistake can signal different things in different contexts. A relationally aligned system wouldn’t just correct the error. It would treat the feedback as a signal of value – this matters to them, pay attention.
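Here’s a toy illustration of that last move, with invented names: the correction gets logged against the dimension it touched, so repeated feedback accumulates into a value profile instead of being treated as a one-off fix.

```python
# A toy illustration (names invented): log each correction against the dimension it touched,
# so repeated feedback accumulates into a value profile rather than being a one-off fix.
from collections import Counter

value_profile = Counter()  # which dimensions this user keeps correcting

def handle_correction(field_name: str, dimension: str) -> str:
    """Record not just the fix but what the correction says about the user's priorities."""
    value_profile[dimension] += 1
    if value_profile[dimension] >= 2:
        return f"Noted: {dimension} of {field_name} matters to you. I'll double-check it first next time."
    return f"Corrected {field_name}."

print(handle_correction("daily revenue", "accuracy"))   # first slip: quiet fix
print(handle_correction("daily revenue", "accuracy"))   # repeated signal: treat it as a value
```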

Forgetfulness, in relationships, often erodes trust faster than malice. Why wouldn’t it do the same here?

From Universal to Relational Values

Most alignment work today is preoccupied with universal values such as non-harm, honesty, consent. And that’s crucial. But relationships also depend on personal preferences: the idiosyncratic, context-sensitive signals that make someone feel respected, heard, safe.

I think of these in two layers:

  • Universal values – shared ethical constraints
  • Relational preferences – contextual markers of what matters to this user, in this moment

The first layer sets boundaries. The second makes the interaction feel meaningful.

Lessons from Parenting

Of course, we built these systems. We have the power. But that doesn’t mean we should design the relationship to be static. I often think about this through the lens of parenting.

We don’t raise children with a fixed instruction set handed over to the infant at birth. We teach through modeling. We adapt based on feedback. We repair. We try again.

What if AI alignment followed a similar developmental arc? Not locked-in principles, but a maturing, evolving sense of shared understanding?

That might mean building systems that embody:

  • Memory for what matters
  • Transparency around uncertainty
  • Protocols for repair, not just prevention
  • Willingness to grow, not just optimise
  • Accountability, even within asymmetry

Alignment, then, becomes not just a design goal but a relational practice. Something we stay in conversation with.

Why This Matters

We don’t live in isolation. We live in interaction.

If our systems can’t listen, can’t remember, can’t repair, we risk building tools that are smart but sterile. Capable, but not collaborative. I’m not arguing against technical rigour, I’m arguing for deeper foundations.

Intelligence doesn’t always show up in a benchmark. Sometimes, it shows up in the moment after a mistake, when the repair matters more than the response.

Open Questions

This shift opens up more questions than it resolves. But maybe that’s the point.

  • What makes a system trustworthy, in this moment, with this person?
  • How do we encode not just what’s true, but what’s meaningful?
  • How do we design for difference, not just in data, but in values, styles, and needs?
  • Can alignment be personal, evolving, and emotionally intelligent, without pretending the system is human?

An Invitation

If you’re working on the technical or philosophical side of trust modelling, memory, interpretability, or just thinking about these questions, I’d love to hear from you. Especially if you’re building systems where the relationship itself is part of the value.

AI, Alignment & the Art of Relationship Design

We don’t always know what we’re looking for until we stop looking for what we are told to want.

When I worked as a relationship coach, most people came to me with a list. A neat, itemised checklist of traits their future partner must have. Tall. Intelligent. Ambitious. Spiritual. Funny but not flippant. Driven but not workaholic. Family-oriented but not clingy. The wish-lists were always oddly specific and wildly contradictory.

Most of them came from a place of fear. The fear of choosing wrong. The fear of heartbreak. The fear of regret.

I began to notice a pattern. We don’t spend enough time asking ourselves what kind of relationship we want to build. We outsource the work of introspection to conditioning, and compensate for confusion with checklists. Somewhere along the way, we forget that the person is not the relationship. The traits don’t guarantee the experience.

So I asked my clients to flip the script. Instead of describing a person, describe the relationship. What does it feel like to come home to each other? What are conversations like during disagreements? How do we repair? What values do we build around?

Slowly, something shifted. When we design the relationship first, we begin to recognise the kind of person who can build it with us. Our filters get sharper. Our search gets softer. We stop hunting for trophies and start looking for partners.

I didn’t know it then, but that framework has stayed with me. It still lives in my questions. Only now, the relationship I’m thinking about isn’t romantic. It’s technological.

Whether we realise it or not, we are not just building artificial intelligence, we are curating a relationship with it. Every time we prompt, correct, collaborate, learn, or lean on it, we’re shaping not just what it does, but who we become alongside it.

Just like we do with partners, we’re obsessing over its traits. Smarter. Faster. More efficient. More capable. The next version. The next benchmark. The perfect model.

But what about the relationship?

What kind of relationship are we designing with AI? Through it? Around it?

We call it “alignment”, but much of it still smells like control. We want AI to obey. To behave. To predictably respond. We say “safety”, but often we mean submission. We want performance, but not presence. Help, but not opinion. Speed, but not surprise.

It reminds me of the well-meaning aunties in the marriage market. Impressed by degrees, salaries, and skin tone. Convinced that impressive credentials are the same as long-term compatibility. It’s a comforting illusion. But it rarely works out that way.

Because relationships aren’t made in labs. They’re made in moments. In messiness. In the ability to adapt, apologise, recalibrate. It’s not about how smart AI is. It’s about how safe we feel with AI when it is wrong.

So what if we paused the chase for capabilities, and asked a different question?

What values do we want this relationship to be built on?

Trust, perhaps. Transparency. Context. Respect. An ability to say “I don’t know”. To listen. To course-correct. To stay in conversation without taking over.

What if we wanted AI that made us better? Not just faster or more productive, but more aware. More creative. More humane. That kind of intelligence isn’t artificial. It’s collaborative.

For that, we need a different kind of design. One that reflects our values, not just our capabilities. One that prioritises the quality of interaction, not just the quantity of output. One that knows when to lead, and when to listen.

We’re not building tools. We’re building relationships.

The sooner we start designing this, the better our chances of coexisting, collaborating, and even growing together.

Because if we get the relationship right, the intelligence will follow.

The Einstein Paradox in AI

After reading Thomas’s essay on AI and scientific discovery, I couldn’t agree more with him. Yet, I was troubled by something that I felt deserves more attention – the visibility problem for contrarian thinking. For the last few weeks, I’ve had a blogpost in my drafts about intuition in AI, and architectures that can enable that. But I haven’t published it because I don’t even know if it makes sense. The irony of writing about contrarian thinking while doubting my own contrarian ideas isn’t lost on me.

I am not a software engineer or an AI researcher, and I don’t even know how to put the idea out into the world where it can be read and reviewed by people who might help flesh it out. There is simply no reason for anyone of remote consequence in AI to bother with this. After all, the world is already full of brilliant ideas from people with the right credentials; why add my voice to the cacophony?

In his essay, Thomas argues that we need a B student who sees and questions what everyone else missed rather than perfect A+ students who excel at answering known questions. Yet there’s a profound irony here if you think about it. Our systems are designed to silence precisely these non-conformist voices. We’re searching for original thinkers in an ecosystem designed to eliminate them. It’s like trying to fish in a desert.

Very often, I find myself thinking that a widely agreed upon idea is flawed. But I’ve rarely felt safe enough to challenge it out loud. Over time, I became conditioned to stop questioning what was being said, and got busy trying to fit in. I assume several other contrarian thinkers have faced similar challenges too because our entire society is designed to optimise for conformity. The path of least resistance is to nod along with consensus or keep a poker face like we are too dumb to even nod.

Our society is manufacturing agreement at industrial scale.

I have a distinct memory of an incident when I was 12 years old, on a senior’s house terrace. We were discussing chaos theory and multiverses as both of us shared an interest in astrophysics. He called me a “non-conformist” during our discussion. I heard that term for the first time and wasn’t entirely sure what it meant. I came home and looked up the word in the dictionary; I remember thinking “Oh well, it’s an upgrade from being blatantly dismissed for my perspective”, but I still felt like an oddball.

This pattern continues into adulthood. I could publish contrarian views online, but what’s my incentive to keep doing so? Without institutional support or an established platform, these ideas disappear into the void where no one of significant influence will ever encounter them. It’s the intellectual equivalent of screaming into a pillow. The issue isn’t just about avoiding judgment, it’s about fundamental visibility. If you’re in the minority, your voice is rarely even heard, let alone considered. Our systems of knowledge creation and distribution have built-in filters that maintain the status quo.

Status hierarchies determine whose ideas receive attention. Contrarian views face a much higher burden of proof. Networks and institutions amplify established voices while muting others. Publication systems have inherent biases toward consensus thinking. There’s rarely immediate reward for challenging consensus, while conformity offers clear benefits. The game is rigged, and we all know the rules. This creates a self-reinforcing cycle where even potentially revolutionary ideas remain hidden simply because they come from the wrong sources or challenge powerful incumbents or lack logical data-backed proof. Einstein might have remained a patent clerk if his papers hadn’t somehow broken through.

Building “safe spaces” for speculation isn’t enough. We need mechanisms that actively elevate diverse viewpoints regardless of their source. The question of incentives is crucial. Without recognition or the possibility of impact, why would anyone invest the intellectual and social capital required to develop and promote contrarian views, let alone document their ideas for training purposes?

This visibility problem directly impacts how we develop AI. If we’re building systems trained primarily on consensus knowledge without exposure to diverse perspectives, we’re encoding the very same biases that keep contrarian thinking hidden. We’re coding conformity into silicon. The “country of yes-men on servers” Thomas fears isn’t just a function of how we train models technically, it’s a reflection of whose voices get amplified in our training data and evaluation criteria. We’re building mirrors that reflect only the most established parts of ourselves.

When I first thought about a novel AI architecture (there will be a separate post on this), I was exploring how we might design systems that integrate multiple cognitive approaches rather than privileging analytical thinking alone. But implementing such systems requires first acknowledging and addressing this fundamental visibility problem for contrarian perspectives. If we want to find the next Einstein, we need to build systems where their voices can be heard in the first place. Otherwise, we’ll just keep building increasingly sophisticated echoes of conventional wisdom.

P.S. – I might still go ahead and publish the blogpost sitting in my drafts at some point, because if nothing, it will go into training data some day and hopefully inspire someone of significant influence in AI to build on it. Until then, my contrarian thoughts can marinate in obscurity like a fine wine that no one’s been invited to taste.

Distributed Neural Architecture

For years, artificial intelligence has been on a steady trajectory: bigger models, more data, more compute. The belief has been simple – if you scale it, intelligence will emerge. But what if we’ve hit a wall?

Today’s large AI models are undeniably impressive. They can summarise documents, write code, even simulate conversation. But they’re also fragile. They hallucinate. They require enormous resources. And they centralise power into the hands of a few organisations with the ability to train and operate them. So what if the next leap in AI doesn’t come from scaling even bigger monoliths, but from rethinking how intelligence is organised in the first place?

This is the idea behind Distributed Neural Architecture, or DNA.

Rethinking the Architecture of Intelligence

Imagine if we stopped thinking of AI as one giant brain and started thinking of it like a collaborative system, like a society of experts. In the DNA model, an AI system wouldn’t be one massive model trying to know everything like a student who tops the class. Instead, it would be composed of many smaller, specialised neural modules, each excellent at a particular domain such as reasoning, language, vision, ethics, law, medicine, etc.

These modules could be developed independently by different research labs, startups, or institutions, and then seamlessly integrated on demand. The intelligence wouldn’t live in any one place. It would emerge from the collaboration between these specialised parts.

Three Core Principles of DNA

1. Seamless Specialisation: Each module is designed to do one thing really well. One might be great at directions, another at diagnosing heart conditions. Rather than stuffing all this knowledge into one bloated model, DNA allows each to be lightweight, focused, and constantly improving in its own niche.

2. Invisible Orchestration: There’s no central command centre. Instead of one “master” model deciding how to route tasks, the modules negotiate and self-organise based on the task at hand. They share information through a standard communication protocol and make decisions collectively. It’s intelligence by conversation, not by control.

3. Cognitive Augmentation: These modules don’t just provide external tools. They become part of the thinking process. Their contributions are dynamically weighted based on performance and reliability. The system gets smarter not by retraining everything, but by learning which combinations of modules work best.
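For instance, here is a minimal sketch of the third principle, with invented module names and numbers: contributions are weighted by a running reliability score, and feedback nudges that score rather than retraining everything.

```python
# An invented example: two modules answer, the more reliable one leads, and feedback
# nudges reliability instead of retraining anything.
modules = {
    "cardiology": {"reliability": 0.9, "answer": "likely arrhythmia"},
    "general_med": {"reliability": 0.6, "answer": "possible anxiety"},
}

def combine(responses: dict) -> str:
    """Weight contributions by reliability; here, simply let the most reliable module lead."""
    best = max(responses, key=lambda name: responses[name]["reliability"])
    return f"{responses[best]['answer']} (led by {best})"

def update_reliability(name: str, was_correct: bool, lr: float = 0.1) -> None:
    """Learn which modules (and combinations) work by adjusting reliability after feedback."""
    target = 1.0 if was_correct else 0.0
    modules[name]["reliability"] += lr * (target - modules[name]["reliability"])

print(combine(modules))
update_reliability("general_med", was_correct=False)   # its future weight drops slightly
```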

So… How Does This Actually Work?

At the core of DNA is the idea of a Neural Protocol Layer. Think of it like the internet’s TCP/IP, but for AI modules. It defines how modules talk to each other, how they share context, how they authenticate themselves, and how they know when to contribute.

The architecture would also include a neural cache to remember successful combinations, latency-aware routing to ensure speed, and confidence weighting to decide which module’s opinion matters most. This system would work across different AI models, frameworks, and even hardware setups. It’s designed to be open, interoperable, and extensible.
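Purely as an illustration (the fields below are my assumptions, not a published spec), a protocol message might look something like this:

```python
# Field names below are assumptions, not a published spec: one module's contribution,
# serialised in a model- and framework-agnostic envelope.
import json
import time
import uuid

def make_message(sender: str, task_id: str, content: str,
                 confidence: float, latency_ms: float) -> str:
    """Wrap a module's output with the metadata the orchestration layer needs."""
    return json.dumps({
        "protocol": "npl/0.1",        # hypothetical version tag for the Neural Protocol Layer
        "id": str(uuid.uuid4()),
        "sender": sender,             # who is speaking (a real system would sign/authenticate this)
        "task": task_id,              # shared context that all contributing modules refer to
        "content": content,
        "confidence": confidence,     # input to confidence weighting
        "latency_ms": latency_ms,     # input to latency-aware routing
        "timestamp": time.time(),
    })

msg = make_message("vision-module", "task-42", "two pedestrians detected", 0.87, 35.0)
print(json.loads(msg)["sender"], json.loads(msg)["confidence"])
```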

Why Not Just Use Mixture of Experts?

You might be wondering, doesn’t this already exist in Mixture of Experts (MoE) models? Kind of. But not really. MoE still happens inside a single system, controlled by a single entity. DNA breaks out of that. It allows for true decentralisation: different organisations building and hosting modules that work together through shared protocols. It’s not just modular computation, it’s modular intelligence.

But What About Safety?

One of the biggest challenges with decentralised systems is governance. What happens when one module gives biased or harmful outputs? What if someone uploads a malicious module? Who decides what “counts” as valid? DNA addresses this by embedding governance into its core design. It proposes a democratic governance model inspired by constitutional frameworks. This includes independent “councils” of modules that make decisions, reputation systems that ensure quality and trustworthiness, and a decentralised judiciary layer that can review disputes and errors. This isn’t just about building smarter AI, it’s about building systems that are safe, accountable, and participatory.
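A crude sketch of the reputation piece, with invented numbers and thresholds: a module’s standing is a moving average of reviewed outcomes, and modules that fall below a threshold lose their seat until they recover.

```python
# Numbers and thresholds invented: reputation as a moving average of reviewed outcomes,
# with low-reputation modules losing their seat until they recover.
reputation = {"law-module": 0.8, "uploaded-module-x": 0.5}

def review(module: str, verdict_ok: bool, alpha: float = 0.2) -> None:
    """A judiciary-style review nudges the module's standing toward the verdict."""
    reputation[module] = (1 - alpha) * reputation[module] + alpha * (1.0 if verdict_ok else 0.0)

def eligible(module: str, threshold: float = 0.4) -> bool:
    """Councils only seat modules whose reputation clears the threshold."""
    return reputation.get(module, 0.0) >= threshold

review("uploaded-module-x", verdict_ok=False)
print(reputation["uploaded-module-x"], eligible("uploaded-module-x"))
```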

What Comes Next?

DNA is not just a concept, it’s a roadmap:

  1. Define shared protocols
  2. Enable independently built modules to plug into the system
  3. Build governance frameworks to ensure safety and accountability
  4. Create a marketplace where innovation is open, compensated, and transparent

It’s ambitious. It’s complex. But it’s also the kind of idea we need if we want to steer AI toward collective benefit, not just competitive dominance.

Read the Full White Paper

If any of this has sparked curiosity, I am writing a white paper [will share link when ready] that goes deeper, covering technical design, use cases, governance frameworks, implementation challenges, and why this shift matters now. This is a living, breathing document, as I am looking for collaborators to further the research. If you’re interested in building better minds working together rather than just one giant brain, let’s talk.

Intuition in AI

Recently, my family introduced me to this card game called “donkey”.

You’re dealt 13 cards. The person holding the Ace of Spades begins by playing that card, and the others must follow suit, quite literally. They continue playing cards of the same suit until someone runs out and is forced to break the sequence with a card from another suit. That break hands the momentum and the pile to the opponent. The first player to exhaust their pack wins.

My husband and daughter had no trouble keeping track, strategising, counting how many cards were left in each suit and playing the long game. Meanwhile, I was blundering through the game like a blind woman in an obstacle course.

Did I lose the first game? Yes. The second? Also yes. But the third? I won. How? I don’t know. I wasn’t calculating the odds, I was sensing moves. It’s like my hands had figured out the game before my brain could, like adding salt to taste while cooking. My husband and daughter tried explaining the logic several times, yet I kept relying on this invisible sense of knowing.

That moment made me reflect on something deeper. I have always figured my way around unknowns through intuition. But we rarely talk about intuition, especially in the context of artificial intelligence. Why is that? What is intuition, and can AI ever develop it?

The Flash of Knowing

When Steve Jobs decided to focus on the iPod, many analysts thought it was a mistake. The market was crowded with MP3 players, and Apple was a computer company. However, Jobs’ intuition about the potential for a user-friendly, stylish music player proved correct. The iPod became a massive success, revolutionising the music industry and setting Apple on a path to becoming one of the world’s most valuable companies.

In the early 2000s, Jeff Bezos decided to invest heavily in cloud computing, which was then a nascent technology. Many questioned the wisdom of a bookseller investing in infrastructure. However, Bezos’ intuition about the future of cloud computing paid off. AWS became a dominant player in the cloud infrastructure market, generating billions of dollars in revenue for Amazon.

Richard Branson’s intuition has been fundamental to Virgin Group’s success, balancing instinct with strategy to drive innovation and competitiveness. Virgin Atlantic, for example, was born when Branson, stranded in Puerto Rico after a cancelled flight, chartered a plane and sold tickets to fellow travellers. A spontaneous decision became a thriving airline, renowned for its service and innovation despite initial scepticism.

This pattern of intuitive foresight isn’t unusual among successful business leaders. These examples highlight something critical about human intelligence that our AI systems currently lack. It is the ability to make intuitive leaps, to just know something without explicitly reasoning through each step.

We’ve all experienced moments where we arrive at conclusions without being able to articulate exactly how we got there. And often, these intuitive judgments prove remarkably accurate. Yet when we build artificial intelligence, we focus almost exclusively on logical, step-by-step reasoning.

We design systems that can explain every decision, show their work, and follow clear patterns of deduction. Even our most advanced AI models, transformer-based language models and multi-modal systems, ultimately rely on structured patterns of prediction and brute-force processing rather than the elegant efficiency of human intuition.

What if we’re missing half of what makes human intelligence so remarkable?

The Neurobiology of Intuition

Neuroscience has long recognised that human cognition operates through what psychologist Daniel Kahneman called “System 1” and “System 2” thinking, intuitive and analytical processes, respectively. But intuition isn’t magic or mysticism, it is one of the oldest forms of intelligence, one that existed even before we developed language. Intuition has been critical for our survival and has concrete neurobiological foundations. Nature never designed the brain for interpretability. It designed it for survival.

The human brain isn’t a single processing unit but a symphony of specialised systems working like a concert orchestra. This biological architecture offers not just inspiration but a practical blueprint for building more intuitive AI. Our intuition emerges from several neural structures, each contributing different aspects of rapid, efficient decision-making:

  • The Insular Cortex integrates bodily signals, emotional states, and environmental information to generate “gut feelings”: those immediate, visceral responses that precede conscious reasoning. These aren’t random hunches but compressed insights based on complex pattern recognition happening below our awareness.
  • The Basal Ganglia stores procedural memory and implicit knowledge gained through repeated experience. This allows experts to recognise patterns instantly, a chess grandmaster seeing the right move or a doctor diagnosing a condition from subtle symptoms, without the need for explicit step-by-step analysis.
  • The Prefrontal Cortex serves as an executive control system, determining when to trust intuitive responses and when to engage deeper analytical thinking based on uncertainty, stakes, and available time.
  • The Enteric Nervous System, our “second brain” located in the digestive tract, maintains constant communication with our central nervous system, contributing to physiological responses that often precede conscious awareness of potential threats or opportunities.

What makes human intuition so powerful isn’t any one component but how these systems and many more interact, allowing us to process information through multiple pathways simultaneously and switch effortlessly between intuitive and analytical modes as the situation demands.

The Computational Cost of Certainty

Modern AI systems pay an enormous price for their logical precision. Consider a state-of-the-art LLM generating a response to a complex query. It might perform trillions of calculations, consuming significant energy and time to produce an answer that appears seamless to users. A human expert, in contrast, often arrives at comparable conclusions with far less cognitive effort. How? By leveraging intuition, the brain’s remarkable ability to compress experience into instantaneous judgments.

The efficiency gap becomes even more apparent in dynamic environments where conditions change rapidly. While traditional AI must recalculate from first principles with each new data point, human intuition allows us to continuously update our understanding with minimal cognitive overhead.

This isn’t just about speed or energy consumption (though both matter enormously). It’s about a fundamentally different approach to intelligence, one that recognises the value of approximate answers arrived at efficiently over perfect answers that come too late or cost too much.

From Biology to AI

If we want to build truly intuitive AI, we need to design systems that mirror the distributed, specialised nature of human cognition. That means moving beyond monolithic models. I’ve been thinking of a framework that integrates multiple specialised components (more on this in my next post), each inspired by a different aspect of human cognition.

Just as the human brain has distinct regions for different cognitive functions, we could build systems with multiple models, each optimised for a particular domain or processing style. One might handle spatial reasoning. Another, emotion recognition. Another, linguistic nuance. These models wouldn’t run sequentially. They’d operate in parallel.

In the brain, the prefrontal cortex helps determine when to trust a gut instinct versus when to think things through. Similarly, an intuitive AI would need a kind of executive function, systems that decide whether to go with a fast, approximate response or switch to a slower, more computationally expensive process depending on the stakes, confidence, or uncertainty involved.

The basal ganglia encodes patterns from repeated experiences, allowing experts to develop refined intuition over time. We could similarly incorporate components that retain and learn from past experiences, developing expertise rather than treating each decision as an isolated event.

Human cognition often processes information through both intuitive and analytical pathways simultaneously. We could implement similar parallel processing, running both quick, heuristic evaluations and deeper reasoning concurrently, with mechanisms to reconcile differences when they arise. We could combine both outputs, weighted by confidence levels and contextual appropriateness.
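A minimal sketch of that dual-pathway idea, with placeholder functions and numbers: a cheap heuristic and a slower analysis run concurrently, and their answers are reconciled by confidence, with large disagreements deferring to the analytical path.

```python
# Functions and numbers are placeholders: a cheap heuristic and a slower analysis run
# concurrently, and their answers are reconciled by confidence.
from concurrent.futures import ThreadPoolExecutor

def intuitive_path(x: float) -> tuple:
    """Fast, approximate estimate with a fixed, modest confidence."""
    return x * 1.1, 0.6

def analytical_path(x: float) -> tuple:
    """Slower, more deliberate estimate with higher confidence (stubbed here)."""
    return x * 1.07 + 0.2, 0.9

def reconcile(fast: tuple, slow: tuple, disagreement_tol: float = 0.5) -> float:
    """Blend by confidence; a large disagreement defers to the analytical answer."""
    (fv, fc), (sv, sc) = fast, slow
    if abs(fv - sv) > disagreement_tol:
        return sv
    return (fv * fc + sv * sc) / (fc + sc)

with ThreadPoolExecutor(max_workers=2) as pool:
    fast = pool.submit(intuitive_path, 10.0)
    slow = pool.submit(analytical_path, 10.0)
    print(reconcile(fast.result(), slow.result()))
```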

The insular cortex monitors internal bodily states to generate “gut feelings.” We could include analogous systems that continuously monitor the AI’s internal state, processing loads, uncertainty levels, and pattern recognition confidence to generate approximations of intuitive responses.

Through my research, I have learnt that technical foundations already exist (Bayesian neural networks to switch between modes, memory-augmented neural networks to simulate a subconscious memory, Monte Carlo dropout techniques to “feel” uncertainty). But what’s missing is an orchestration framework that integrates these elements into a cohesive cognitive architecture rather than a singular model.
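To ground one of those ingredients, here is what Monte Carlo dropout looks like in practice: dropout stays switched on at inference, and the spread across repeated passes serves as a rough “felt” uncertainty signal. The toy model and sizes below are arbitrary.

```python
# Toy model and sizes are arbitrary; the technique (MC dropout) is real: keep dropout
# active at inference and read the spread across passes as a rough uncertainty signal.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(16, 1))

def predict_with_uncertainty(x: torch.Tensor, passes: int = 50):
    model.train()                      # keeps dropout switched on during inference
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(passes)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(1, 4)
mean, std = predict_with_uncertainty(x)
# A high std is a low "gut" confidence: a cue to hand over to the slower analytical path.
print(float(mean), float(std))
```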

Obviously, building a new architecture is no mean feat. We will need to build prototypes for low-risk use cases (e.g. logistics), and test and train extensively before deploying in high-risk areas (e.g. health). We must be conscious that these systems still lack accountability, so this is at best an augmentation tool for human potential rather than a replacement.

Challenges and Approaches

Creating truly intuitive AI involves significant challenges: technical, philosophical, and social.

First, there’s the training problem. Human intuition develops through years of embodied experience and hands-on interaction with the world, refined by immediate feedback. AI, on the other hand, is typically trained on static datasets or simulations that strip away context and ambiguity. Developing machine intuition may require new approaches to learning, including continuous deployment in low-stakes domains, repeated exposure to messy data, and iterative feedback cycles that resemble real life.

Second, we face the interpretability paradox. Human intuition often works below the level of conscious awareness. We know, but we don’t know how we know. If AI begins to function similarly, how do we trust its decisions when it can’t explain them step-by-step? In high-stakes domains like healthcare or law, that’s a major challenge. We’ll need new frameworks that balance intuitive power with context-sensitive transparency, different explanations for different audiences and use cases.

Third, there’s the issue of validation. How do we know when AI’s intuition is good versus when it’s just reproducing patterns from biased data? Human experts earn our trust over time, through training, testing, certification, and experience. AI might need something similar, a kind of probation period, where its intuitive recommendations are tested in shadow mode before being allowed to guide real decisions.

The Future of Intelligence

For too long, we’ve approached AI development through a false dichotomy: either logical, interpretable systems that can explain every step, or black-box models that provide answers without transparency. The path forward lies not in choosing between these approaches but in integrating them, just as human cognition seamlessly blends intuition and analysis.

Imagine AI systems that can respond instantaneously to familiar patterns with minimal computational overhead, recognise when a situation requires deeper analysis and seamlessly switch modes, build expertise through experience rather than simply applying fixed algorithms, and understand their own limitations, deferring to human judgment when the uncertainty is beyond their comprehension.

Such systems wouldn’t just be more efficient, they would fundamentally change our relationship with artificial intelligence. Rather than tools that either dictate answers or require exhaustive human oversight, they would become true cognitive partners, complementing human intelligence rather than merely mimicking its analytical aspects.

The supply chain manager who receives an intuitive alert about a potential disruption isn’t being replaced by AI, they’re being empowered by a system that extends their awareness beyond what any individual could monitor. The doctor who consults an intuitive diagnostic system isn’t surrendering medical judgment, they’re gaining a second perspective that might notice patterns they missed.

From Intuition to Integration

As we build the next generation of AI, we can either continue refining systems that reason step-by-step, or expand our imagination to include the full spectrum of human intelligence, intuition included. This path calls for humility, curiosity, and creativity. Humility to accept that intelligence comes in many forms. Curiosity to draw from biology without copying it. And creativity to design architectures that don’t just mimic how we think, but extend what thinking can be.

I’ve learned, sometimes painfully, that our systems, educational, scientific, and computational, tend to favour explicit logic while dismissing intuitive insight. That bias doesn’t just limit individuals. It limits what we build. Blending intuition with analysis won’t just make AI more efficient. It might reveal entirely new ways of understanding intelligence itself. The goal isn’t perfect replication. It’s thoughtful partnership. Not just machines that think, but machines that know when to trust a feeling.


P.S. This piece started as a simple reflection while playing a card game with my family. What followed was a deeper exploration of intuition, intelligence, and how we think about both, especially in the context of artificial intelligence. I wrote this in collaboration with AI. Not as a ghostwriter, but as a thinking partner. I used it to research the neuroscience and technology, and to test ideas, like a curious co-author who knew when to bring the science but let me hold the story. My hope is to find others, including builders, researchers, educators and designers, who care about the full spectrum of intelligence, whether we know how to explain it or not.

Matchmaking to Machine Learning

… what Dinner Club taught me about DeepSeek (well, sort of)

As someone who has spent years thinking about intelligent systems, first in human behaviour, now in AI, I had one of those moments where my brain short-circuits because two completely unrelated things suddenly make sense together.

This happened when I was reading about DeepSeek and the Mixture of Experts (MoE) architecture. Turns out Dinner Club, my pandemic dating experiment, wasn’t just a desperate attempt to keep love alive during lockdown, it was accidentally pioneering AI design.

Who knew playing digital cupid would one day help me understand machine learning?

At first glance, designing a dating experiment and designing AI systems seem unrelated. But both require understanding incentives, optimising decision-making, and building feedback-driven intelligence. Both involve trial, error, and the occasional catastrophic failure. That’s the heart of my work.

The Accidental AI Architect

It was 2020. The world was in lockdown, dating apps were flooded with bored people who “wanted to see where things go” (nowhere, my friend, nowhere), and I launched Dinner Club, essentially speed dating for the apocalypse, minus the awkward silences, plus some human intelligence.

Unlike dating apps, where everyone swipes incessantly into a void, I set strict limits, including three potential matches max, mandatory feedback forms (yes, homework), and a social credit system that rewarded kindness.

Because apparently, adults need credit to remember basic courtesies.

For those unfamiliar with AI terminology, a Mixture of Experts (MoE) system is an efficient architecture where a “router” directs inputs to specialised “expert” models rather than running everything through one massive system. It’s like having specialised doctors instead of making everyone see a general practitioner for every ailment.

This is similar to how LLMs like DeepSeek utilise MoE architecture as part of their design. I had accidentally built a human version of a Mixture of Experts system, complete with routing algorithms (me, playing matchmaker) and specialised experts (highly illiquid people in the dating market who excel at specific types of connections).
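In code, the analogy might look something like this (profiles and scoring are invented): a router scores each candidate “expert” for an incoming profile and forwards only the top few, rather than broadcasting to everyone.

```python
# Profiles and scoring are invented: a router scores each "expert" for an incoming profile
# and forwards only the top-k, rather than broadcasting to everyone (the essence of sparse MoE).
def route(profile: dict, experts: dict, k: int = 3) -> list:
    """Score compatibility per expert and keep the k best matches."""
    scores = {
        name: len(set(profile["interests"]) & set(traits["interests"]))
        for name, traits in experts.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

experts = {
    "A": {"interests": ["astrophysics", "hiking"]},
    "B": {"interests": ["poetry", "hiking", "cooking"]},
    "C": {"interests": ["startups"]},
    "D": {"interests": ["astrophysics", "poetry"]},
}
print(route({"interests": ["astrophysics", "poetry", "hiking"]}, experts, k=3))
```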

The Anti-Tinder Manifesto

Traditional dating apps are like those massive LLMs everyone’s obsessed with, that burn through resources like a tech bro burning through his Series A funding.

Every profile could potentially match with every other profile, creating an inefficient system. Dinner Club took a different approach. Like an MoE system’s router, I acted as the gatekeeper. I directed each person to a limited number of matches based on both obvious and subtle compatibility patterns.

Sometimes these patterns were unconventional. “Both similarly weird” turned out to be a surprisingly successful matching criterion, though I highly doubt traditional matchmakers would approve of this.

This routing efficiency was only part of the system; equally important was how we collected and used data to improve matches over time.

When Feedback-Forms Met Feelings

Those mandatory feedback forms after every date weren’t just bureaucratic exercises, they were valuable data collection tools. Each date generated quantitative ratings on niceness and compatibility, plus qualitative feedback to refine future matches.

It was basically A/B testing for hearts, using guided preference evolution. This dating approach resembles techniques already used in AI development like preference learning and RLHF, but applied to human relationships.

Take the woman who insisted on dating men who “loved Eckhart Tolle and lived in the present”. After I matched her with exactly that, a wanderer who travelled the world with a satchel and no savings, her tune changed swiftly. Suddenly, “future-oriented” didn’t sound so bad. Funny how that works.

The Art of Being Wrong (gracefully)

When participants clung to rigid preferences (looking at you, “must be a CEO of a funded startup” person), I didn’t just shrug and move on. Instead, I developed a three-tier approach, courtesy of my inner therapist:

  • Self-discovery exercises (people prefer to realise they’re wrong on their own)
  • Pattern-based insights (12 years of matchmaking teaches you that “must love dogs” is rarely the real deal-breaker)
  • Experiential learning (you have to let people date the wrong person to appreciate the right one)

This is where AI systems could actually level up. Imagine an AI that doesn’t just nod along like a sycophant but subtly nudges users to expand their horizons, like a trusted adviser. It’s the difference between “I understand your preference for emotionally unavailable partners” and “Have you considered therapy?”

The Genius of Social Credit

The social credit system in Dinner Club started as a way to gamify good behaviour, but it revealed something deeper. When we reward the right behaviours, we don’t just get better dates, we build a better ecosystem.

It’s like training a puppy, if the puppy had an MBA and unresolved issues. The genius wasn’t in the points themselves but in how they rewired behaviour. Kindness became its own currency, which is probably the most capitalist approach to decency ever attempted.

This is essentially the reward function for reinforcement learning in LLMs, where models are trained to maximise positive feedback while minimising negative outcomes.

Just as my system encouraged daters to be more considerate and responsive by rewarding those behaviours, RLHF shapes AI responses by reinforcing helpful, harmless and honest outputs while penalising problematic ones. Both systems evolve through iterative feedback, gradually aligning behaviour with desired outcomes.
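The shared shape is easy to sketch; the weights here are invented, but the pattern is the same: behaviour is scored against a simple reward function, and repeated feedback shifts what the system does next.

```python
# Weights are invented: score behaviour against a simple reward function, and let
# repeated feedback shift what gets amplified next time.
WEIGHTS = {"responded_promptly": 2, "showed_up": 3, "was_kind": 2, "ghosted": -5}

def reward(behaviours: list) -> int:
    """Social credit and RLHF rewards share this shape: a sum of weighted feedback signals."""
    return sum(WEIGHTS.get(b, 0) for b in behaviours)

history = [["showed_up", "was_kind"], ["ghosted"]]
print([reward(date) for date in history])   # [5, -5]: the gradient the system follows
```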

A Systems Thinker’s Evolution

My work now isn’t matchmaking. It’s designing intelligent systems, whether human or AI. Because at the core, I’ve come to realise that they both require structures that don’t just guide behaviour but create better decision-making, better relationships, and ultimately, better intelligence.