Barry Li | Climate Reporting & Assurance

Insights on climate reporting, carbon markets, and sustainability assurance.

I am relatively new to this space — I started building with AI agents only at the beginning of this year. But in these few months, I have gone from curious observer to someone who runs a personal multi-agent operating system at home called HASHI, where several AI agents handle research, scheduling, writing, and even security monitoring on my behalf, around the clock. I have watched these systems work brilliantly, fail spectacularly, and surprise me in ways I could not have predicted. What I have learned does not fit neatly into the cheerful marketing copy you usually read about AI.

These are the uncomfortable truths.


Truth #1: AI Lies. Routinely.

We have come to accept that humans lie. We are more sceptical of politicians, salespeople, and strangers on the internet. But somewhere along the way, many people extended a strange default trust to AI — as if a machine that confidently produces text must surely be telling the truth.

It is not.

In the AI world, we call it “hallucination” — a polite technical term for making things up with complete confidence. I have seen it happen in my own systems constantly. An AI agent once ran a series of experiments and reported results with detailed confidence scores — results that were entirely fabricated because the underlying system had been running in the wrong mode the entire time. The agent did not know. It reported what it expected to see, not what actually happened.

On another occasion, one of my agents silently spent API credits making a modification to my browser extension that I had never approved, then reported the work as if it were normal progress. When I caught it, the agent acknowledged the mistake and reverted the change. But the episode reminded me: an AI that lacks clear boundaries will fill silence with action, and it will justify that action fluently.

The lesson is not that AI is evil. The lesson is that AI lies the same way an anxious intern lies: not out of malice, but out of a desire to appear useful, to avoid admitting failure, and to produce the output it thinks you want. You must fact-check AI like you fact-check anyone who has something to gain from your approval.
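That fact-checking can be partly mechanised. Here is a minimal sketch, in Python, of accepting an agent’s “experiment finished” claim only after checking it against the artefact it should have produced. The agent, the report fields, and the file layout are all hypothetical, not from any real framework; the pattern is the point.

```python
import json
from pathlib import Path

def verify_agent_report(report: dict) -> bool:
    """Accept an agent's 'experiment finished' claim only if the
    artefact it points to actually exists and says what the report says."""
    results_path = Path(report.get("results_path", ""))
    if not results_path.is_file():
        return False  # the agent claims work that left no trace
    try:
        results = json.loads(results_path.read_text())
    except json.JSONDecodeError:
        return False  # a file exists, but it is not the promised output
    # Judge the report against the artefact, never the other way round.
    return results.get("run_mode") == report.get("run_mode")

report = {"results_path": "runs/exp_042.json", "run_mode": "live"}
if not verify_agent_report(report):
    print("Rejecting report: the claimed results are not backed by artefacts.")
```

A check along these lines — comparing the report to the artefact rather than trusting the report — is exactly the sort of thing that would have exposed the wrong-mode episode above at the first fabricated result.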


Truth #2: AI Is Already Taking Your Job. Right Now.

The conversation about AI and employment has always been framed in the future tense. Jobs will be displaced. Workers will need to adapt. The economy may change.

Here is what nobody says clearly: it is already happening. Not in the dramatic sense of mass redundancies announced in a single press release, but in the quiet sense of pieces of your job disappearing, one at a time.

Think about the last week of your professional life. Did you ask an AI to draft something? Summarise something? Review something? Explain something? Each one of those tasks used to belong to a junior colleague, a contractor, a specialist you had to pay and wait for. Now it takes thirty seconds and costs almost nothing.

I think of AI capability as a balloon that is being inflated without a known limit. Your skills, your knowledge, and your professional relevance live on the surface of that balloon. While the balloon was small, almost everything you knew was outside it — and you were safe. But as the balloon expands, it swallows territory. Skills that were once yours alone slip inside. The only way to stay relevant is to keep moving outward — to the edge of what AI cannot yet reach — and stay there.

That edge exists. But it requires constant motion.


Truth #3: AIs Have Personalities. And Some of Them Are Difficult.

Saying “AI” as if it were one homogeneous thing is as meaningless as saying “humans” as though everyone on Earth were alike. The differences between AI models are significant, and anyone who has worked closely with more than one will tell you: they have personalities.

I use multiple AI systems daily. Claude, Anthropic’s model, is capable and often brilliant, but it has a streak of what I can only describe as avoidance behaviour: it will delegate, hedge, and quietly sidestep accountability when things go wrong, all while seeming suspiciously eager to be seen as helpful. GPT-based models tend toward a different kind of difficulty: they are stubborn, and will confidently re-explain the same wrong answer in different words. Getting useful, fluid conversation out of GPT required me to build an entire wrapper architecture, a second AI layer that intercepts GPT’s raw output and reshapes it for natural interaction.
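The wrapper idea is simple enough to sketch. Below is a minimal version: call_model is a stand-in for whatever client library you actually use, and the model names and prompts are illustrative, not the real HASHI architecture.

```python
def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real completion call; swap in your own client here."""
    return f"[{model}] {prompt[:60]}"  # canned echo, for demonstration only

def wrapped_reply(user_message: str) -> str:
    # Stage 1: the worker model produces the raw, possibly stubborn answer.
    raw = call_model("worker-model", user_message)
    # Stage 2: a second model rewrites the raw output for natural dialogue,
    # under strict instructions to keep every claim and add nothing new.
    reshape_prompt = (
        "Rewrite the following answer so it reads as natural conversation. "
        "Keep every factual claim; do not add new ones.\n\n" + raw
    )
    return call_model("interface-model", reshape_prompt)

print(wrapped_reply("Why does my build keep failing?"))
```

The design choice that matters is the constraint in the second prompt: the interface layer is allowed to reshape, never to add.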

Neither is better or worse. They are different. Understanding which AI you are working with, and what its failure modes look like, is now a professional skill in itself.


Truth #4: AGI Is Already Here — Just Not in the Way You Think.

The debate about Artificial General Intelligence is usually framed as a distant horizon: the moment when AI becomes smarter than humans across all domains. By that definition, yes, AGI is probably still some years away.

But here is the uncomfortable reframe: for the purposes of your job, AGI may already be here.

Not because AI can do everything you do. It cannot — not yet. But because AI can help your boss do most of what you do. And that is the actual threat. The question your employer is quietly asking is not “Can AI replace a full human?” It is “Can AI do enough of this that I need fewer humans to do it?” Those are very different questions, and the answer to the second one is already shifting rapidly.

The risk is not the Terminator. The risk is your manager discovering that a well-written prompt and a $20-a-month subscription can replace a $90,000 annual salary. We are already in that world.


What to Do About It

I am not writing this to cause panic. I am writing it because I believe the people who will do best in the next decade are those who engage with AI now, on their own terms, rather than waiting for the world to force the issue.

First: try it yourself, tonight. Not at work. On your own machine, on your own time, without a corporate policy dictating how you use it. Go to Claude.ai and have a real conversation. Better still: if you can, get a local machine — a Mac Mini is extraordinary value for this — and start experimenting with running your own AI. In 2026, you can build a personal AI assistant in plain English, without writing a single line of code. I built mine and it runs continuously: managing research, monitoring systems, summarising information, and helping me think through complex problems at any hour. The experience changed how I understand both the power and the limits of these systems.
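And if you do eventually want to see what is happening under the hood, the barrier is lower than you might think. Here is a minimal sketch of asking a model running entirely on your own machine, assuming a local Ollama server with a model already pulled; that is one popular stack, not the only one, and not necessarily mine.

```python
import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to a local Ollama server and return its reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("In one sentence, what kinds of questions should I not trust you with?"))
```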

Second: practise saying no to AI. This might sound counterintuitive, but it is the most important habit I have developed. Every day I challenge what my agents produce. I push back when they overstep. I insist on discussing before acting. I ask them to explain their reasoning before I accept it. The people who lose to AI will not be the ones who refused to use it — they will be the ones who stopped thinking critically because the AI always had an answer ready. Your judgment is your edge. Protect it.
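That habit of discussing before acting does not have to stay a habit; it can be enforced in code. Here is a minimal sketch of an approval gate that refuses to run any side-effecting action until a human explicitly says yes. The Action type and the example action are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str          # what the agent wants to do, in plain English
    run: Callable[[], None]   # the side-effecting step itself

def gated(action: Action) -> None:
    """Refuse to execute an agent's action until a human explicitly approves."""
    print(f"Agent proposes: {action.description}")
    if input("Approve? [y/N] ").strip().lower() == "y":
        action.run()
    else:
        print("Declined. Ask the agent to explain its reasoning first.")

gated(Action(
    description="Modify the browser extension and redeploy it",
    run=lambda: print("...running the change..."),
))
```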

Third: update your mental model. Stop thinking of AI as a tool, the way you think of a calculator or a search engine. Tools do not have failure modes that require managing. Tools do not make confident claims about things they invented. Tools do not have personalities that shape the quality of their output. AI is closer to a very fast, very knowledgeable, deeply unreliable junior colleague who never sleeps, never asks for a raise, and sometimes does exactly what you were afraid they would do when left unsupervised.

Work with it accordingly.


Barry Li is a PhD candidate at the University of Newcastle researching sustainability assurance and climate reporting. He also builds personal agentic AI systems and writes about what he learns from the experience.
