After reading Thomas’s essay on AI and scientific discovery, I couldn’t agree more. Yet I was troubled by something I felt deserved more attention – the visibility problem for contrarian thinking. For the last few weeks, I’ve had a blogpost in my drafts about intuition in AI and the architectures that might enable it. But I haven’t published it because I don’t even know if it makes sense. The irony of writing about contrarian thinking while doubting my own contrarian ideas isn’t lost on me.
I am not a software engineer, I am not an AI researcher, and I don’t even know how to put the idea out into the world where it can be read and reviewed by people who might help flesh it out. There is simply no reason for anyone of remote consequence in AI to bother with this. After all, the world is already full of brilliant ideas from people with the right credentials; why add my voice to the cacophony?
In his essay, Thomas argues that we need B students who see and question what everyone else missed, rather than perfect A+ students who excel at answering known questions. Yet there’s a profound irony here: our systems are designed to silence precisely these non-conformist voices. We’re searching for original thinkers in an ecosystem designed to eliminate them. It’s like trying to fish in a desert.
Very often, I find myself thinking that a widely agreed-upon idea is flawed. But I’ve rarely felt safe enough to challenge it out loud. Over time, I became conditioned to stop questioning what was being said and got busy trying to fit in. I assume many other contrarian thinkers have faced similar challenges, because our entire society is designed to optimise for conformity. The path of least resistance is to nod along with consensus, or to keep a poker face like we are too dumb to even nod.
Our society is manufacturing agreement at industrial scale.
I have a distinct memory of an incident when I was 12 years old, on the terrace of a senior’s house. We were discussing chaos theory and multiverses, as both of us shared an interest in astrophysics. He called me a “non-conformist” during our discussion. I heard that term for the first time and wasn’t entirely sure what it meant. I came home and looked up the word in the dictionary; I remember thinking, “Oh well, it’s an upgrade from being blatantly dismissed for my perspective” – but I still felt like an oddball.
This pattern continues into adulthood. I could publish contrarian views online, but what’s my incentive to keep doing so? Without institutional support or an established platform, these ideas disappear into the void, where no one of significant influence will ever encounter them. It’s the intellectual equivalent of screaming into a pillow. The issue isn’t just about avoiding judgment; it’s about fundamental visibility. If you’re in the minority, your voice is rarely even heard, let alone considered. Our systems of knowledge creation and distribution have built-in filters that maintain the status quo.
Status hierarchies determine whose ideas receive attention. Contrarian views face a much higher burden of proof. Networks and institutions amplify established voices while muting others. Publication systems have inherent biases toward consensus thinking. There’s rarely an immediate reward for challenging consensus, while conformity offers clear benefits. The game is rigged, and we all know the rules. This creates a self-reinforcing cycle where even potentially revolutionary ideas remain hidden simply because they come from the wrong sources, challenge powerful incumbents, or lack logical, data-backed proof. Einstein might have remained a patent clerk if his papers hadn’t somehow broken through.
Building “safe spaces” for speculation isn’t enough. We need mechanisms that actively elevate diverse viewpoints regardless of their source. The question of incentives is crucial. Without recognition or the possibility of impact, why would anyone invest the intellectual and social capital required to develop and promote contrarian views, let alone document their ideas for training purposes?
This visibility problem directly impacts how we develop AI. If we’re building systems trained primarily on consensus knowledge, without exposure to diverse perspectives, we’re encoding the very biases that keep contrarian thinking hidden. We’re coding conformity into silicon. The “country of yes-men on servers” Thomas fears isn’t just a function of how we train models technically; it’s a reflection of whose voices get amplified in our training data and evaluation criteria. We’re building mirrors that reflect only the most established parts of ourselves.
When I first thought about a novel AI architecture (there will be a separate post on this), I was exploring how we might design systems that integrate multiple cognitive approaches rather than privileging analytical thinking alone. But implementing such systems requires first acknowledging and addressing this fundamental visibility problem for contrarian perspectives. If we want to find the next Einstein, we need to build systems where their voices can be heard in the first place. Otherwise, we’ll just keep building increasingly sophisticated echoes of conventional wisdom.
P.S. – I might still go ahead and publish the blogpost sitting in my drafts at some point, because if nothing else, it will go into training data some day and hopefully inspire someone of significant influence in AI to build on it. Until then, my contrarian thoughts can marinate in obscurity like a fine wine that no one’s been invited to taste.