Can you Care without Feeling?

One day, I corrected an LLM for misreading some data in a table I’d shared. Again. Same mistake. Same correction. Same hollow apology.

“You’re absolutely right! I should’ve been more careful. Here’s the corrected version blah blah blah.”

It didn’t sound like an error. It sounded like a partner who’s mastered the rhythm of an apology but not the reality of change.

I wasn’t annoyed by the model’s mistake. I was unsettled by the performance: the polite, well-structured, emotionally intelligent-sounding response that suggested care, with zero evidence of memory. No continuity. No behavioural update. Just a clean slate and a nonchalant tone.

We’ve built machines that talk like they care, but don’t remember what we told them yesterday. Sigh.

The Discomfort Is Relational

In my previous essays, I’ve explored how we’re designing AI systems the way we approach arranged marriages, optimising for traits while forgetting that relationships are forged in repair, not specs. I’ve argued that alignment isn’t just a technical challenge; it’s a relational one, grounded in trust, adaptation, and the ongoing work of being in conversation.

This third essay comes from a deeper place. A place that isn’t theoretical or abstract. It’s personal. Because when something pretends to care, but shows no sign that we ever mattered, that’s not just an error. That’s a breach.

And it’s eerily familiar.

A Quiet Moment in the Kitchen

Recently, I scolded my 8-year-old for something. She shut down, stormed off. Normally, I’d go after her. But that day, I was fried.

Later, I was in the kitchen, quietly loading the dishwasher, when she walked in and asked, “Mum, do you still love me when you’re upset with me?” I was unsure where this was coming from, but simply said, “Of course, baby. Why do you ask?” She paused, and then said, “…because you have that look like you don’t.”

That’s the thing about care. It isn’t what we say, it’s what we do. It’s what we adjust. It’s what we hold onto even when we’re tired. She wasn’t asking for reassurance. She was asking for relational coherence.

So was I, when the LLM said sorry and then forgot me again.

Care Is a System, Not a Sentiment

We’ve taught machines to simulate empathy, to say things like “I understand” or “I’ll be more careful next time.” But without memory, there’s no follow-through. No behavioural trace. No evidence that anything about us registered.

This results in machines that feel more like people-pleasers than partners. High verbal fluency, low emotional integrity. This isn’t just bad UX. It’s a fundamental misalignment. A shallow mimicry of care that collapses under the weight of repetition.

What erodes trust isn’t failure. It’s the apology without change. The simulation of care without continuity.

So, what is care, really?

Not empathy. Not affection. Not elaborate prompts or personality packs.

Care can be as simple as memory with meaning. I want behavioural updates, not just verbal flourishes. I want trust not because the model sounds warm, but because it acts aware. 

That’s not emotional intelligence. That’s basic relational alignment.

If we’re building systems that interact with humans, we don’t need to simulate sentiment. But we do need to track significance. We need to know what matters to this user, in this context, based on prior signals.
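If I had to sketch what “tracking significance” might look like, it would be something as plain as this: a small, per-user record of what was signalled, in what context, and how often. This is an illustrative sketch only; the SignificanceStore, its fields, and its weighting are my assumptions, not a description of how any current model works.

```python
# Hypothetical sketch: remembering what a user has signalled matters,
# keyed by context, so it can be retrieved before the next interaction.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Signal:
    context: str            # e.g. "reading tables", "tone"
    note: str                # what the user actually corrected or asked for
    weight: float = 1.0      # bumped each time the same point recurs
    last_seen: datetime = field(default_factory=datetime.now)


class SignificanceStore:
    """A per-user record of prior signals, so later turns can act on them."""

    def __init__(self) -> None:
        self._signals: dict[str, list[Signal]] = {}

    def record(self, user_id: str, context: str, note: str) -> None:
        signals = self._signals.setdefault(user_id, [])
        for s in signals:
            if s.context == context and s.note == note:
                s.weight += 1.0        # repetition makes it matter more
                s.last_seen = datetime.now()
                return
        signals.append(Signal(context=context, note=note))

    def what_matters(self, user_id: str, context: str) -> list[Signal]:
        """Signals relevant to this context, most important first."""
        relevant = [s for s in self._signals.get(user_id, []) if s.context == context]
        return sorted(relevant, key=lambda s: s.weight, reverse=True)
```

Nothing clever. Just a place where “you told me this before” can live longer than a single reply.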

Alignment as Behavioural Coherence

This is where it gets interesting.

Historically, we trusted machines to be cold but consistent. No feeling, but no betrayal. Now, AI systems talk like people, complete with hedging, softening, and mirroring our social tics. But they don’t carry the relational backbone we rely on in real trust: memory, calibration, adaptation, and accountability.

They perform care without its architecture. Like a partner who says, “You matter,” but keeps repeating the same hurtful thing.

What we need is not more data. We need structured intervention. Design patterns that support pause, reflection, feedback integration, and pattern recognition over time. Something closer to a prefrontal cortex than a parrot.
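To make that less abstract, here is one hedged sketch of the loop, continuing the SignificanceStore sketch above: pause before answering to replay earlier corrections as explicit constraints, and write any new correction back into the record. The generate callable is a stand-in for whatever model call you use; none of these names refer to a real API.

```python
# Hypothetical feedback-integration loop (continues the SignificanceStore
# sketch above): replay prior corrections before answering, record new ones after.
from typing import Callable


def respond(store: SignificanceStore, user_id: str, context: str,
            prompt: str, generate: Callable[[str], str]) -> str:
    # Pause: pull prior signals for this context and surface them
    # as explicit constraints rather than relying on tone.
    constraints = [s.note for s in store.what_matters(user_id, context)]
    preamble = ""
    if constraints:
        preamble = "Before answering, honour these earlier corrections:\n"
        preamble += "\n".join(f"- {c}" for c in constraints) + "\n\n"
    return generate(preamble + prompt)


def on_correction(store: SignificanceStore, user_id: str,
                  context: str, correction: str) -> None:
    # Feedback integration: the correction becomes part of the record,
    # so the apology is followed by a behavioural update next time.
    store.record(user_id, context, correction)
```

The point isn’t the code. It’s that the apology and the update live in the same loop.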

As someone who’s spent a decade decoding how humans build trust, whether in relationships, organisations, or policy systems, I’ve come to believe …

Trust isn’t built in words. It’s built in what happens after them.

So no, I don’t need my AI systems to feel. But I do need them to remember.

To demonstrate that what I said yesterday still matters today.

That would be enough.
