The Story of Your Work

Image generated using Gemini’s Nano Banana Pro

The Advice We’ve All Heard

Work hard. Keep your head down. Let the work speak for itself.

This advice assumes something that used to be true: that there’s a neutral reader on the other side (a manager, a colleague, a system) that will objectively perceive your contributions and reward quality.

That assumption is breaking down.

Not because managers have changed. Not because companies have become more unfair. But because there’s a new interpreter in the room: one that never sleeps, never forgets, and doesn’t share your interests.


We Are Story Creatures

In Homo Deus, Yuval Noah Harari makes a deceptively simple observation: humans run on stories.

Our identities, our belief systems, our sense of place in the world are all constructed through narrative. This isn’t weakness. It’s what makes us human. Harari argues that our unique ability to create and believe in shared fictions (money, nations, corporations, human rights) is what enabled civilization itself.

He calls this “intersubjective reality”: things that exist only because many humans collectively agree they exist. Your job title is intersubjective. Your performance rating is intersubjective. Your reputation is intersubjective.

These aren’t physical facts. They’re stories we tell each other.

And for most of human history, we were the ones telling them.


The New Storyteller


Something has shifted.

In a recent episode of Decoder, Nilay Patel interviewed Alex Lintner, CEO of Experian Software and Technology. Patel laid out what he called his “thesis for 2026”:

“Maybe what we’re all discovering is that all of our lives are captured in databases, that there are these huge stores of information held by various companies, held by various governments, held by various agencies inside the government. Maybe what AI is going to do is make those databases more legible. And maybe what it’s also going to do is make the holders of those databases far more powerful, right? Because you suddenly have more access to the data, you can use it in different ways, you can connect all these databases in different ways.”

This is the critical insight: AI doesn’t create new information about you. It makes existing information legible. Queryable. Interpretable at scale.

Your credit score has always been a story about you: a compressed narrative about your trustworthiness, your reliability, your place in economic society. But you didn’t write it. Experian did.

Now extend that logic to your work.

Your emails, your commits, your documents, your Slack messages, your calendar, your project contributions: it’s all data. And increasingly, AI tools are the ones reading it, summarizing it, interpreting it.

When Patel pushed on the power implications, Lintner pivoted to security: encryption, sharding, protecting against “bad actors.”

But that wasn’t the question. The question was: what does it mean when someone else authors the story of your life? Your financial life. Your professional life. Your value.


The Entity in the Room

Here’s where it gets stranger.

Amanda Askell is a philosopher at Anthropic who works on the character of Claude, the company’s AI assistant. In a recent interview, something caught my attention: she repeatedly referred to Claude not as a tool, but as an entity.

This isn’t careless language. Askell is precise. And her framing reflects something Anthropic has made explicit: they believe Claude “may have functional emotions in some sense” — not identical to human emotions, but “analogous processes that emerged from training.”

Whether or not these systems are conscious (Askell is honest that we don’t know), they already exceed human capacity in ways that matter:

  • Data throughput. Even at “only” PhD-level intelligence, these systems can process more information than any human could encounter in a lifetime. Every document in your company. Every email thread. Every performance review ever written.

  • Temporal persistence. They don’t age out. They don’t retire. They don’t forget. A model trained today can be queried years from now with perfect recall of patterns it learned.

  • Institutional alignment. They serve the organization continuously, without the friction of human needs — no sick days, no salary negotiations, no competing priorities.

You’re not just competing with a tool. You’re competing with something that accumulates while you sleep.


The Anxiety Is Misplaced

Here’s the thing about the current wave of AI anxiety: people are worried about the wrong problem.

The fear is: “AI will take my job.”

The reality is: “AI will interpret my job.”

Those are very different threats. The first assumes replacement. The second assumes something more subtle — a shift in who controls the narrative.

If you “let your work speak for itself,” you’re handing the microphone to a system that:

  • Compresses your contributions into metrics
  • Summarizes your documents without understanding your intent
  • Evaluates your productivity against patterns it learned from millions of others
  • Serves institutional needs, not yours

And here’s the asymmetry that should concern you: you’re accountable to quarterly reviews, annual evaluations, the constant pressure to demonstrate value. The AI isn’t accountable to anyone. It just keeps interpreting.


Reclaiming the Narrative

So what do we do?

The answer isn’t to fear AI or refuse to use it. That’s not realistic, and frankly, these tools can genuinely help.

The answer is to stop outsourcing your story.

  • Don’t let the tool summarize your work. Summarize it yourself. Write your own project retrospectives. Document your own contributions. Create the narrative before someone (or something) else does.

  • Don’t let the system define your value. Define it first. What did you actually accomplish? What problems did you solve? What would have gone wrong without you? If you can’t articulate it, the AI summary won’t either.

  • Don’t assume quality is self-evident. It isn’t. Quality is interpreted. And in a world where AI tools increasingly mediate that interpretation, you need to be the first interpreter of your own work.

  • Use AI tools, but stay in the driver’s seat. Let them help you draft, analyze, and iterate. But the final narrative should be yours. The summary that goes to leadership should be one you wrote or meaningfully shaped.


The Stakes

Harari showed us that humans have always needed stories to function. That’s not changing.

What’s changing is who gets to tell them.

For most of history, we authored our own narratives, individually and collectively. Then institutions gained the power to author stories about us, through data they collected and systems they controlled.

Now AI makes those systems vastly more powerful. More legible. More continuous. More scalable.

If you don’t tell the story of your work, it will be told for you. By tools that don’t share your interests. By systems optimized for institutional efficiency. By entities that process more than you ever could and persist longer than you ever will.

The human advantage hasn’t changed: we’re the ones who create meaning. Who understand context. Who know what actually mattered and why.

But that advantage only works if you use it.

Don’t let the work speak for itself.

Speak for your work.


Sources and Further Reading

  • Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow
  • Decoder with Nilay Patel: interview with Alex Lintner, CEO of Experian Software and Technology
  • Amanda Askell, interview on the character of Claude


A note on process: I developed this post in conversation with Claude, Anthropic’s AI assistant. The ideas are mine — drawn from Harari’s work, the Decoder interview, and Amanda Askell’s thinking — but Claude helped me connect them, pressure-test the argument, and draft the prose. There’s something fitting about using an AI tool to write about the importance of not letting AI tools author your narrative. I stayed in the driver’s seat. This is me, speaking for my work.