Intuition in AI

Recently, my family introduced me to this card game called “donkey”.

You’re dealt 13 cards. The person holding the Ace of Spades begins by playing that card, and the others must follow suit, quite literally. They continue playing cards of the same suit until someone runs out and is forced to break the sequence with a card from another suit. That break hands the momentum and the pile to an opponent. The first player to exhaust their hand wins.

My husband and daughter had no trouble keeping track, strategising, counting how many cards were left in each suit and playing the long game. Meanwhile, I was blundering through the game like a blind woman in an obstacle course.

Did I lose the first game? Yes. The second? Also yes. But the third? I won. How? I don’t know. I wasn’t calculating the odds; I was sensing moves. It’s like my hands had figured out the game before my brain could, like adding salt to taste while cooking. My husband and daughter tried several times to explain the logic, yet I kept relying on an invisible sense of knowing.

That moment made me reflect on something deeper. I have always figured my way around unknowns through intuition. But we rarely talk about intuition, especially in the context of artificial intelligence. Why is that? What is intuition, and can AI ever develop it?

The Flash of Knowing

When Steve Jobs decided to focus on the iPod, many analysts thought it was a mistake. The market was crowded with MP3 players, and Apple was a computer company. However, Jobs’ intuition about the potential for a user-friendly, stylish music player proved correct. The iPod became a massive success, revolutionising the music industry and setting Apple on a path to becoming one of the world’s most valuable companies.

In the early 2000s, Jeff Bezos decided to invest heavily in cloud computing, which was then a nascent technology. Many questioned the wisdom of a bookseller investing in infrastructure. However, Bezos’ intuition about the future of cloud computing paid off. AWS became a dominant player in the cloud infrastructure market, generating billions of dollars in revenue for Amazon.

Richard Branson’s intuition has been fundamental to Virgin Group’s success, balancing instinct with strategy to drive innovation and competitiveness. Virgin Atlantic, for example, was born when Branson, stranded in Puerto Rico after a cancelled flight, chartered a plane and sold tickets to fellow travellers. A spontaneous decision became a thriving airline, renowned for its service and innovation despite initial scepticism.

This pattern of intuitive foresight isn’t unusual among successful business leaders. These examples highlight something critical about human intelligence that our AI systems currently lack: the ability to make intuitive leaps, to just know something without explicitly reasoning through each step.

We’ve all experienced moments where we arrive at conclusions without being able to articulate exactly how we got there. And often, these intuitive judgments prove remarkably accurate. Yet when we build artificial intelligence, we focus almost exclusively on logical, step-by-step reasoning.

We design systems that can explain every decision, show their work, and follow clear patterns of deduction. Even our most advanced AI models, transformer-based language models and multi-modal systems, ultimately rely on structured patterns of prediction and brute-force processing rather than the elegant efficiency of human intuition.

What if we’re missing half of what makes human intelligence so remarkable?

The Neurobiology of Intuition

Neuroscience has long recognised that human cognition operates through what psychologist Daniel Kahneman called “System 1” and “System 2” thinking, intuitive and analytical processes, respectively. But intuition isn’t magic or mysticism; it is one of the oldest forms of intelligence, one that existed even before we developed language. Intuition has been critical for our survival and has concrete neurobiological foundations. Nature never designed the brain for interpretability. It designed it for survival.

The human brain isn’t a single processing unit but a symphony of specialised systems working like a concert orchestra. This biological architecture offers not just inspiration but a practical blueprint for building more intuitive AI. Our intuition emerges from several neural structures, each contributing different aspects of rapid, efficient decision-making:

  • The Insular Cortex integrates bodily signals, emotional states, and environmental information to generate “gut feelings”, those immediate, visceral responses that precede conscious reasoning. These aren’t random hunches but compressed insights based on complex pattern recognition happening below our awareness.
  • The Basal Ganglia stores procedural memory and implicit knowledge gained through repeated experience. This allows experts to recognise patterns instantly, a chess grandmaster seeing the right move or a doctor diagnosing a condition from subtle symptoms, without the need for explicit step-by-step analysis.
  • The Prefrontal Cortex serves as an executive control system, determining when to trust intuitive responses and when to engage deeper analytical thinking based on uncertainty, stakes, and available time.
  • The Enteric Nervous System, our “second brain” located in the digestive tract, maintains constant communication with our central nervous system, contributing to physiological responses that often precede conscious awareness of potential threats or opportunities.

What makes human intuition so powerful isn’t any one component but how these systems and many more interact, allowing us to process information through multiple pathways simultaneously and switch effortlessly between intuitive and analytical modes as the situation demands.

The Computational Cost of Certainty

Modern AI systems pay an enormous price for their logical precision. Consider a state-of-the-art LLM generating a response to a complex query. It might perform trillions of calculations, consuming significant energy and time to produce an answer that appears seamless to users. A human expert, in contrast, often arrives at comparable conclusions with far less cognitive effort. How? By leveraging intuition, the brain’s remarkable ability to compress experience into instantaneous judgments.

The efficiency gap becomes even more apparent in dynamic environments where conditions change rapidly. While traditional AI must recalculate from first principles with each new data point, human intuition allows us to continuously update our understanding with minimal cognitive overhead.

This isn’t just about speed or energy consumption (though both matter enormously). It’s about a fundamentally different approach to intelligence, one that recognises the value of approximate answers arrived at efficiently over perfect answers that come too late or cost too much.

From Biology to AI

If we want to build truly intuitive AI, we need to design systems that mirror the distributed, specialised nature of human cognition. That means moving beyond monolithic models. I’ve been thinking of a framework that integrates multiple specialised components (more on this in my next post), each inspired by a different aspect of human cognition.

Just as the human brain has distinct regions for different cognitive functions, we could build systems with multiple models, each optimised for a particular domain or processing style. One might handle spatial reasoning. Another, emotion recognition. Another, linguistic nuance. These models wouldn’t run sequentially. They’d operate in parallel.
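To make that less abstract, here is a minimal Python sketch of the idea. Everything in it is invented for illustration: the three “experts” are placeholders for separately trained models, and the confidence numbers are made up.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for separately trained specialist models.
def spatial_expert(query):
    return {"source": "spatial", "verdict": "route is congested", "confidence": 0.4}

def emotion_expert(query):
    return {"source": "emotion", "verdict": "customer tone is anxious", "confidence": 0.7}

def linguistic_expert(query):
    return {"source": "language", "verdict": "request is ambiguous", "confidence": 0.9}

EXPERTS = [spatial_expert, emotion_expert, linguistic_expert]

def consult_all(query):
    """Run every specialist concurrently and collect their verdicts."""
    with ThreadPoolExecutor(max_workers=len(EXPERTS)) as pool:
        return list(pool.map(lambda expert: expert(query), EXPERTS))

for verdict in consult_all("Should we reroute the shipment?"):
    print(verdict)
```

The mechanics matter less than the shape: no single model owns the answer, and each specialist reports its own confidence for a later stage to weigh.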

In the brain, the prefrontal cortex helps determine when to trust a gut instinct versus when to think things through. Similarly, an intuitive AI would need a kind of executive function, systems that decide whether to go with a fast, approximate response or switch to a slower, more computationally expensive process depending on the stakes, confidence, or uncertainty involved.
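A toy version of that executive function might look like this. To be clear, the thresholds, the stakes labels, and the cost estimate below are assumptions I invented for illustration, not a tested design.

```python
def executive_gate(fast_answer, confidence, stakes, time_budget_s, slow_path,
                   confidence_floor=0.8, slow_path_cost_s=2.0):
    """Toy 'prefrontal cortex': trust the fast answer when confidence is
    high and stakes are low; otherwise pay for deeper analysis."""
    if confidence >= confidence_floor and stakes == "low":
        return fast_answer, "intuitive"
    if time_budget_s < slow_path_cost_s:
        # Not enough time to deliberate: use the fast answer, but flag it.
        return fast_answer, "intuitive (flagged for review)"
    return slow_path(), "analytical"

# The slow path is only invoked when the gate decides it is worth the cost.
answer, mode = executive_gate(
    fast_answer="reroute via hub B", confidence=0.55, stakes="high",
    time_budget_s=10.0, slow_path=lambda: "result of full optimisation",
)
print(answer, mode)  # -> result of full optimisation analytical
```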

The basal ganglia encodes patterns from repeated experiences, allowing experts to develop refined intuition over time. We could similarly incorporate components that retain and learn from past experiences, developing expertise rather than treating each decision as an isolated event.
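Here is one deliberately naive way to sketch such an experience store: a nearest-neighbour memory over hand-made feature vectors. A real system would use learned embeddings and far richer retrieval; the vectors and outcomes below are invented.

```python
import numpy as np

class ExperienceMemory:
    """Toy 'basal ganglia': store (situation, outcome) pairs and recall
    the outcomes of the most similar past situations."""

    def __init__(self):
        self.situations, self.outcomes = [], []

    def remember(self, situation_vec, outcome):
        self.situations.append(np.asarray(situation_vec, dtype=float))
        self.outcomes.append(outcome)

    def recall(self, situation_vec, k=3):
        query = np.asarray(situation_vec, dtype=float)
        distances = [np.linalg.norm(query - s) for s in self.situations]
        nearest = np.argsort(distances)[:k]
        return [self.outcomes[i] for i in nearest]

memory = ExperienceMemory()
memory.remember([0.9, 0.1], "supplier delay")   # hypothetical feature vectors
memory.remember([0.2, 0.8], "demand spike")
print(memory.recall([0.85, 0.15], k=1))          # -> ['supplier delay']
```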

Human cognition often processes information through both intuitive and analytical pathways simultaneously. We could implement similar parallel processing, running both quick, heuristic evaluations and deeper reasoning concurrently, with mechanisms to reconcile differences when they arise. We could combine both outputs, weighted by confidence levels and contextual appropriateness.
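A minimal sketch of that reconciliation step, assuming each pathway reports a numeric estimate together with a self-assessed confidence (both invented here):

```python
def reconcile(intuitive, analytical):
    """Blend a fast estimate and a slow estimate, weighted by each
    pathway's self-reported confidence."""
    (fast_value, fast_conf), (slow_value, slow_conf) = intuitive, analytical
    return (fast_value * fast_conf + slow_value * slow_conf) / (fast_conf + slow_conf)

# A fast pattern-match says risk 0.7 (confidence 0.6); the deliberate
# model says risk 0.4 (confidence 0.9).
print(reconcile((0.7, 0.6), (0.4, 0.9)))  # -> roughly 0.52
```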

The insular cortex monitors internal bodily states to generate “gut feelings.” We could include analogous systems that continuously monitor the AI’s internal state, processing loads, uncertainty levels, and pattern recognition confidence to generate approximations of intuitive responses.

Through my research, I have learnt that the technical foundations already exist: Bayesian neural networks to switch between modes, memory-augmented neural networks to simulate a subconscious memory, Monte Carlo dropout techniques to “feel” uncertainty. But what’s missing is an orchestration framework that integrates these elements into a cohesive cognitive architecture rather than a singular model.
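Monte Carlo dropout is the most tangible of these, so here is a small PyTorch sketch of how an AI might “feel” uncertainty: keep dropout active at inference, run the same input through the network many times, and read the spread of the predictions as an uncertainty signal. The tiny network and the input are dummies.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(32, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keeps dropout switched on at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(1, 8)  # a dummy input
mean, spread = mc_dropout_predict(model, x)
print(f"prediction {mean.item():.3f}, uncertainty {spread.item():.3f}")
# A large spread is the system 'feeling' unsure, a cue for the executive
# layer to escalate to slower analysis or to a human.
```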

Obviously, building a new cognitive architecture is no mean feat. We will need to build prototypes for low-risk use cases (e.g. logistics), then test and train extensively before deploying in high-risk areas (e.g. health). We must be conscious that these systems still lack accountability, so this is at best an augmentation tool for human potential rather than a replacement.

Challenges and Approaches

Creating truly intuitive AI involves significant technical, philosophical, and social challenges.

First, there’s the training problem. Human intuition develops through years of embodied experience and hands-on interaction with the world, refined by immediate feedback. AI, on the other hand, is typically trained on static datasets or simulations that strip away context and ambiguity. Developing machine intuition may require new approaches to learning, including continuous deployment in low-stakes domains, repeated exposure to messy data, and iterative feedback cycles that resemble real life.

Second, we face the interpretability paradox. Human intuition often works below the level of conscious awareness. We know, but we don’t know how we know. If AI begins to function similarly, how do we trust its decisions when it can’t explain them step-by-step? In high-stakes domains like healthcare or law, that’s a major challenge. We’ll need new frameworks that balance intuitive power with context-sensitive transparency, different explanations for different audiences and use cases.

Third, there’s the issue of validation. How do we know when AI’s intuition is good versus when it’s just reproducing patterns from biased data? Human experts earn our trust over time, through training, testing, certification, and experience. AI might need something similar, a kind of probation period, where its intuitive recommendations are tested in shadow mode before being allowed to guide real decisions.
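That probation period could start as simply as logging, side by side, what the system would have recommended and what the human actually decided, without ever letting the system act. This is just one possible shape for it; the file format and field names below are arbitrary.

```python
import csv
from datetime import datetime, timezone

def shadow_log(path, case_id, model_recommendation, human_decision):
    """Record the model's recommendation next to the human's decision;
    agreement over time builds (or breaks) the case for trust."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(), case_id,
            model_recommendation, human_decision,
            model_recommendation == human_decision,
        ])

shadow_log("shadow_log.csv", case_id="ORD-1042",
           model_recommendation="expedite", human_decision="expedite")
```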

The Future of Intelligence

For too long, we’ve approached AI development through a false dichotomy: either logical, interpretable systems that can explain every step, or black-box models that provide answers without transparency. The path forward lies not in choosing between these approaches but in integrating them, just as human cognition seamlessly blends intuition and analysis.

Imagine AI systems that can respond instantaneously to familiar patterns with minimal computational overhead; recognise when a situation requires deeper analysis and seamlessly switch modes; build expertise through experience rather than simply applying fixed algorithms; and understand their own limitations, calling on human judgment when the uncertainty is beyond their competence.

Such systems wouldn’t just be more efficient, they would fundamentally change our relationship with artificial intelligence. Rather than tools that either dictate answers or require exhaustive human oversight, they would become true cognitive partners, complementing human intelligence rather than merely mimicking its analytical aspects.

The supply chain manager who receives an intuitive alert about a potential disruption isn’t being replaced by AI, they’re being empowered by a system that extends their awareness beyond what any individual could monitor. The doctor who consults an intuitive diagnostic system isn’t surrendering medical judgment, they’re gaining a second perspective that might notice patterns they missed.

From Intuition to Integration

As we build the next generation of AI, we can either continue refining systems that reason step-by-step, or expand our imagination to include the full spectrum of human intelligence, intuition included. This path calls for humility, curiosity, and creativity. Humility to accept that intelligence comes in many forms. Curiosity to draw from biology without copying it. And creativity to design architectures that don’t just mimic how we think, but extend what thinking can be.

I’ve learnt, sometimes painfully, that our systems, educational, scientific, and computational alike, tend to favour explicit logic while dismissing intuitive insight. That bias doesn’t just limit individuals. It limits what we build. Blending intuition with analysis won’t just make AI more efficient. It might reveal entirely new ways of understanding intelligence itself. The goal isn’t perfect replication. It’s thoughtful partnership. Not just machines that think, but machines that know when to trust a feeling.


P.S. This piece started as a simple reflection while playing a card game with my family. What followed was a deeper exploration of intuition, intelligence, and how we think about both, especially in the context of artificial intelligence. I wrote this in collaboration with AI. Not as a ghostwriter, but as a thinking partner. I used it to research the neuroscience and the technology, and to test ideas, like a curious co-author who knew when to bring the science but let me hold the story. My hope is to find others, including builders, researchers, educators, and designers, who care about the full spectrum of intelligence, whether we know how to explain it or not.
