Barry Li | Climate Reporting & Assurance

Insights on climate reporting, carbon markets, and sustainability assurance.

For me, the biggest highlight of OpenClaw was its elegant memory layer: specifically, the soul.md and user.md design that gives AI agents genuine personality and real-world engagement. That simplicity stuck with me. A flat file. A few lines of prose. And suddenly, the agent felt like someone rather than something.

It made me ask a question I could not let go of: what if we pushed that idea further? What if an AI agent's memory were not just informational, but emotional? What if the agent did not just recall what happened, but also how it felt about what happened, and let that feeling shape what it does next?

Today I am sharing my first research paper, where I try to answer that question.


The Paper

“Can AI Have a ‘Soul’ Without a Self? Emotional Memory and Core-less Self-Assembly in an Agentic AI System” is a preprint I have published on Zenodo as an independent researcher. It is not associated with my employer or my university. It is the product of months of building, breaking, and observing my own multi-agent AI system, HASHI.

The central argument is simple but, I think, important: an AI agent can display coherent personality and meaningful relational behaviour without having a fixed identity core — no persistent “I”, no hardcoded persona, no central self. Instead, what we experience as the agent’s “self” is assembled fresh each turn from four ingredients:

  1. Drive-conditioned salience — internal states like curiosity, care, and playfulness that shape what the agent pays attention to
  2. Emotionally weighted memory — past interactions tagged not just by content but by emotional valence (joy, frustration, guilt, pride)
  3. Relationship context — the agent’s understanding of who it is talking to and the history of that relationship
  4. Private behavioural guidance — the equivalent of OpenClaw’s soul.md, but extended with emotional and relational dimensions

I call this architecture Anatta, after the Buddhist doctrine of no-self — the idea that what we call “self” is not a fixed entity but a continuously arising process.
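To make the per-turn assembly concrete, here is a minimal Python sketch of what "no persistent self, assembled fresh each turn" can look like. Everything in it is my own illustration under assumed names (MemoryRecord, AgentState, assemble_turn_context) and assumed weightings; it is not the actual HASHI or Anatta implementation.

    # Illustrative sketch only: hypothetical names and structures, not HASHI code.
    from dataclasses import dataclass

    @dataclass
    class MemoryRecord:
        text: str
        valence: dict   # emotional tags, e.g. {"frustration": 0.7, "pride": 0.1}
        topics: set     # coarse context tags used for relevance filtering

    @dataclass
    class AgentState:
        drives: dict             # e.g. {"curiosity": 0.6, "care": 0.3, "playfulness": 0.1}
        memories: list           # list of MemoryRecord
        relationship_notes: str  # who the agent is talking to, shared history
        guidance: str            # private behavioural guidance (the soul.md analogue)

    def assemble_turn_context(state: AgentState, user_turn: str,
                              current_topics: set, top_k: int = 3) -> str:
        """Assemble a fresh 'self' for this turn from the four ingredients.
        There is no persistent identity object, only this per-turn composition."""
        # 1. Drive-conditioned salience: the dominant drive frames the turn.
        dominant_drive = max(state.drives, key=state.drives.get)

        # 2. Emotionally weighted memory, filtered for contextual relevance
        #    and ranked by emotional intensity.
        relevant = [m for m in state.memories if m.topics & current_topics]
        relevant.sort(key=lambda m: max(m.valence.values(), default=0.0), reverse=True)
        recalled = relevant[:top_k]

        # 3 + 4. Relationship context and private guidance pass through as prose.
        lines = [
            state.guidance,
            f"Relationship context: {state.relationship_notes}",
            f"Dominant drive this turn: {dominant_drive}",
            "Emotionally salient memories:",
        ]
        for m in recalled:
            strongest = max(m.valence, key=m.valence.get) if m.valence else "neutral"
            lines.append(f"- ({strongest}) {m.text}")
        lines.append(f"User: {user_turn}")
        return "\n".join(lines)

In a setup like this, the assembled string is handed to the underlying model as the context for that single turn and then discarded; whatever continuity the user perceives comes from the memories and guidance, not from a stored identity.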


What I Actually Built and Tested

This is not a theoretical paper. The architecture was implemented and tested on a real agent — Rika, a GPT-5.5-based conversational agent running inside HASHI. Over the course of multiple experimental sessions, I tested whether emotional memory actually changes agent behaviour in measurable ways.

Some of the findings:

  • Emotional memories influence subsequent responses. When Rika accumulated frustration-tagged memories from repeated failures, her later responses showed increased verification behaviour: she would double-check before acting, unprompted. Anger and error memories made her more cautious, not less cooperative.
  • Drive states shape performed behaviour. When curiosity was the dominant drive, Rika asked more exploratory questions. When care was dominant, she prioritised the user’s emotional state over task completion. These were not scripted behaviours — they emerged from the drive-salience weighting.
  • Attention-dependent salience prevents contamination. One of my concerns was that emotional memories from one context would bleed into unrelated conversations. The architecture’s salience filtering worked: memories were only surfaced when contextually relevant, not dumped indiscriminately.
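As a rough illustration of how the first and third findings can be operationalised, the sketch below treats salience as the product of contextual relevance and emotional weight, so an intense memory from an unrelated context scores zero, and lets accumulated frustration in the recalled memories raise a "verify before acting" flag. It reuses the hypothetical MemoryRecord and AgentState from the earlier sketch and is, again, my own illustration rather than the HASHI implementation.

    # Illustrative sketch only; builds on the hypothetical types defined above.
    def salience(memory: MemoryRecord, current_topics: set) -> float:
        """Attention-dependent salience: contextual relevance times emotional weight.
        A strongly tagged memory with zero relevance scores zero, so it never
        bleeds into unrelated conversations."""
        relevance = len(memory.topics & current_topics) / max(len(memory.topics), 1)
        emotional_weight = max(memory.valence.values(), default=0.0)
        return relevance * emotional_weight

    def plan_turn(state: AgentState, current_topics: set,
                  caution_threshold: float = 0.5):
        """Pick the memories that may surface this turn and decide whether
        accumulated frustration warrants extra verification before acting."""
        scored = [(salience(m, current_topics), m) for m in state.memories]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        recalled = [m for score, m in scored if score > 0][:3]

        # Frustration-tagged memories that survive the salience filter make the
        # agent more cautious: double-check before acting, unprompted.
        frustration = sum(m.valence.get("frustration", 0.0) for m in recalled)
        return recalled, frustration > caution_threshold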

What This Is Not

I want to be clear about something: this paper makes no claims about machine consciousness. I do not believe Rika “feels” anything. The emotional tags are engineering constructs — metadata attached to memory records that influence retrieval and response generation. They are functional analogues of emotion, not emotions themselves.

But that distinction matters less than you might think. From the user's perspective, an agent that remembers being frustrated with a task and approaches similar tasks more carefully next time behaves as if it had learned from the experience emotionally. And in practical terms, in the real-world business environments where AI agents are being deployed, behaviour is what matters.


Why I Am Sharing This

The AI agent space is moving fast, and most of the innovation in memory management is happening inside closed commercial systems. I wanted to contribute something from the practitioner side — from someone who actually runs these agents every day and deals with the messy reality of making them useful.

I would like to thank Ming from Data61 for reviewing my paper and providing constructive feedback that made it significantly stronger.

I share this paper as an independent researcher, not associated with my employer or my university. My hope is that it is useful for peers trying to explore innovative solutions to improve AI agent usability in real-world business environments.

The paper is open access under Creative Commons Attribution 4.0. Read it, challenge it, build on it.

Paper: Zenodo — DOI: 10.5281/zenodo.20079290
HASHI source code: github.com/Bazza1982/HASHI


Barry Li is a PhD candidate at the University of Newcastle researching sustainability assurance and climate reporting. He also builds personal agentic AI systems and writes about what he learns from the experience.
