The End of Task-Based Knowledge Work: Why We Should Stop Optimizing and Start Orchestrating
A reflection on the future of knowledge work from our Z AI Accelerate discussions
The Question That Changed Everything
During this week’s IBM Z AI Accelerate meeting, I posed a question to the team that’s been keeping me up at night: Are we learning the wrong things?
We’ve all spent months getting better at prompting with various AI tools, mastering ChatGPT shortcuts, and finding clever ways to shave 5% off our daily tasks. We share tips in Slack channels. We celebrate when an AI summarizes a meeting well. We feel productive when we use Copilot to draft an email faster.
But what if we’re rearranging deck chairs on the Titanic?
Follow the Money, Not the Hype
Here’s what crystallized my thinking: IBM recently announced a major partnership with Anthropic—a company valued at nearly $200 billion that does exactly one thing: build enterprise-level AI models and agentic workflows. Not better chatbots. Not smarter autocomplete. Autonomous digital workers.
When I look at where enterprise investment is flowing, it’s not toward making Claude give better answers in a chat window. Every major announcement—from Microsoft’s AI agents to Salesforce’s Agentforce to IBM’s own initiatives—points in one direction: agentic workflows that operate autonomously.
The interactive chat window we’ve all grown comfortable with? That’s likely as good as it’s getting. The development dollars are going elsewhere.
What My Job Taught Me About the Future
As a project leader for IBM Redbooks, I recently mapped out what it would take to produce a publication using a fully agentic workflow. The exercise was revealing—and unsettling.
I broke down the entire Redbook production lifecycle into phases:
- Planning & Scoping
- Team Assembly
- Content Creation
- Review & Quality
- Production & Publishing
- Post-Publication Support
Within these phases, I identified 27 primary agents and over 70 specialized sub-agents that could handle discrete tasks. Everything from the Project Coordinator Agent that orchestrates timelines to the Errata Management Agent that tracks and publishes corrections.
Here’s the uncomfortable truth: I could define my entire job as a series of agentic workflows.
Not some of it. Not the boring parts. Nearly all of it.
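To make the decomposition concrete, here is a minimal sketch of how a job could be modeled as phases, primary agents, and sub-agents. All of the names and counts below are illustrative assumptions, not the actual Redbooks workflow inventory:

```python
from dataclasses import dataclass, field

# Hypothetical model: each lifecycle phase owns primary agents,
# and each primary agent delegates to specialized sub-agents.
# Agent names here are illustrative, not a real implementation.

@dataclass
class Agent:
    name: str
    sub_agents: list[str] = field(default_factory=list)

@dataclass
class Phase:
    name: str
    agents: list[Agent]

lifecycle = [
    Phase("Planning & Scoping", [
        Agent("Project Coordinator Agent",
              ["Timeline Sub-agent", "Scope Sub-agent"]),
    ]),
    Phase("Post-Publication Support", [
        Agent("Errata Management Agent",
              ["Correction Tracker", "Errata Publisher"]),
    ]),
]

def inventory(phases: list[Phase]) -> dict[str, int]:
    """Count primary agents and sub-agents across all phases."""
    primary = sum(len(p.agents) for p in phases)
    subs = sum(len(a.sub_agents) for p in phases for a in p.agents)
    return {"primary": primary, "sub": subs}

print(inventory(lifecycle))  # {'primary': 2, 'sub': 4}
```

The value of the exercise is less in the code than in the forcing function: once each phase is written down this way, it becomes obvious which pieces are repeatable patterns and which genuinely need a human.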
The Two Paths Diverge
This realization forced me to confront a choice that I believe every knowledge worker will face in the next 2-3 years:
Path 1: AI as Augmentation
Continue learning how to use AI tools to speed up personal workflows. Get better at prompts. Build prompt libraries. Make tasks 10-20% faster. Celebrate marginal gains.
Path 2: AI as Automation
Learn to design, deploy, monitor, and enhance autonomous agents that eliminate tasks entirely. Become an orchestrator of digital workers rather than a doer of tasks.
These paths require fundamentally different skills, and we don’t have time to master both.
Why Path 2 Is the Only Realistic Choice
1. The Investment Signal Is Clear
When companies write billion-dollar checks for agentic AI capabilities, they’re not planning to give you better productivity tools. They’re planning to restructure how work gets done.
2. Repeatability = Automation
As noted in our discussion: if you do something twice, it should probably be handled by an agent. How much of your job is truly novel versus variations on repeatable patterns?
3. The Entry-Level Job Market Tells the Story
We observed that entry-level positions have essentially disappeared. Why? Because companies realize they can give experienced workers better tools rather than hiring junior staff for task-based work. The next logical step: those “better tools” are autonomous agents.
4. The Skills Gap Problem Demands It
We have team members with 30+ years of experience and people with 2-3 years. In between? Almost no one. Traditional knowledge transfer is broken. The only scalable solution is agents that can meet users where they are. I've already started prototyping a system that not only returns accurate RAG content but also tailors it to the user's self-defined role and experience level.
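A rough sketch of what "meeting users where they are" can look like in a RAG pipeline: retrieval stays the same for everyone, and the tailoring happens in how the retrieved content is framed for the model. The function name, profile fields, and style rules below are assumptions for illustration, not the prototype itself:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    role: str               # e.g. "sysadmin", "developer"
    years_experience: int

def tailor_prompt(question: str, retrieved_chunks: list[str],
                  user: UserProfile) -> str:
    """Wrap retrieved content in instructions matched to the user's level."""
    if user.years_experience < 3:
        style = "Explain step by step and define any platform-specific terms."
    else:
        style = "Be concise; assume deep familiarity with the platform."
    context = "\n\n".join(retrieved_chunks)
    return (
        f"You are assisting a {user.role}.\n{style}\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = tailor_prompt(
    "How do I apply this fix?",
    ["Retrieved documentation chunk..."],
    UserProfile(role="sysadmin", years_experience=2),
)
print("step by step" in prompt)  # True
```

The same retrieved chunks produce a tutorial for a two-year practitioner and a terse answer for a thirty-year veteran, which is exactly the knowledge-transfer gap described above.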
What This Means for How We Work
During our discussion, a colleague made an observation that stuck with me. He noted how the terminology shifts as our relationship with AI changes:
- Task: Something you do yourself
- Tool: Something that helps you do a task
- Assistant: Something that does the task for you
- Agent: Something that produces outcomes without your involvement
- Your New Job: Managing the agents
When copy editing became something an assistant did for us, we didn’t celebrate making copy editing faster. We immediately started planning how to eliminate copy editing from our workflow entirely. That task will simply stop existing for us as individuals.
Now multiply that across every repeatable task in your job.
The Uncomfortable Question: Then What?
This is where the conversation got existential. If agents handle all the repeatable tasks, what’s left?
One colleague rightfully worried about trust and what happens when things go wrong. Another correctly pointed out that we’ll need fewer people to monitor automated tasks. I found myself wondering: What’s the business case for me?
Here’s what I think survives and actually becomes more valuable:
The Irreducibly Human Work
- Judgment calls in novel situations - When something breaks that’s never broken before
- Interpersonal skills - Running effective meetings, building consensus, supporting career growth
- Strategic vision - Deciding what should be built in the first place
- Ethical oversight - Evaluating outputs for bias, accuracy, appropriateness
- Agent design and orchestration - Understanding how to structure workflows and what success looks like
As I told the team: “I have met people who can’t run meetings, can’t set agendas, can’t be empathetic. An AI won’t do any of that stuff for you. It can replace you doing tasks, but it can’t replace you being human.”
Those soft skills we used to think were nice-to-have? They’re about to become the only real differentiator when everyone has access to the same AI capabilities.
A Concrete Example: How I’m Applying This
Rather than building better prompts for my team to use inconsistently, I’m now focused on:
- Defining clear agent workflows - Breaking down Redbook production into orchestratable components
- Building context-aware systems - Creating agents like PARIS that adapt to user expertise levels
- Establishing quality gates - Determining where human judgment is truly required vs. nice-to-have
- Documenting the undocumented - Capturing institutional knowledge before it’s too late
- Designing handoff protocols - Ensuring agents pass context effectively to each other and to humans
This is design work. It's systems thinking. It requires all the skills I use today, just applied to a different substrate.
The Learning Path Forward
If you accept this premise (that agentic workflows are inevitable and arriving faster than we think), here's what I believe we should be learning:
Stop Prioritizing:
- Perfect prompt engineering for chat interfaces
- Personal productivity hacks with AI tools
- Marginal optimizations of existing workflows
Start Prioritizing:
- Systems design for multi-agent workflows
- Quality metrics for autonomous processes
- Context window management and tokenization
- Agent monitoring and debugging
- Workflow orchestration platforms (like IBM watsonx Orchestrate, Claude's Computer Use, etc.)
- Understanding when to trust automation vs. require human oversight
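As one example of what "quality metrics for autonomous processes" might mean in practice, here is a sketch of a simple monitor: track the rate at which an agent escalates work to humans over a rolling window, and alert when it drifts too high. The threshold and class name are assumptions for illustration:

```python
from collections import deque

class EscalationMonitor:
    """Rolling-window monitor for an autonomous pipeline (illustrative)."""

    def __init__(self, window: int = 100, alert_rate: float = 0.25):
        self.outcomes = deque(maxlen=window)  # True = escalated to a human
        self.alert_rate = alert_rate

    def record(self, escalated: bool) -> None:
        self.outcomes.append(escalated)

    @property
    def rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def should_alert(self) -> bool:
        # A rising escalation rate suggests the agent is drifting outside
        # the cases it was designed for; a human should investigate.
        return self.rate > self.alert_rate

mon = EscalationMonitor(window=4)
for escalated in (False, False, True, True):
    mon.record(escalated)
print(mon.rate, mon.should_alert())  # 0.5 True
```

This is the orchestrator's version of "monitoring": not watching every output, but watching the signals that tell you when to start watching the outputs.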
The Timeline Is Shorter Than You Think
One of our team members joked about being glad to be close to retirement. But for those of us with 20+ years left in our careers, we don’t have the luxury of waiting this out.
I genuinely believe we have 2-3 years before "AI-augmented knowledge worker" and "agent orchestrator" become distinct career paths. After that, the augmentation skills won't command premium value because everyone will have them. The orchestration skills will be where differentiation, and job security, live.
The Both/And Fallacy
During the discussion, I noted that this doesn’t feel binary, but when I really think about my learning time allocation, it kind of has to be.
I can spend 10 hours learning to write better prompts for Copilot, or I can spend 10 hours learning to build an editorial agent that makes copy editing decisions autonomously. Both have value. But one has a shelf life of maybe 18 months. The other is a foundational skill for the next decade.
We’re all busy. We’re all trying to keep up with changes in our actual jobs while also learning about AI. The cruel reality is that we don’t have time to deeply master both approaches.
My Recommendation: Make the Jump Now
Start treating your job as a collection of workflows that needs agents, not a collection of tasks that needs better tools.
When you encounter a problem, don’t ask: “How can AI help me do this faster?”
Ask: “How would I design an agent to handle this autonomously, and what would I need to monitor?”
The first question keeps you on the treadmill of incremental improvement. The second question starts building the skills you’ll need when your role fundamentally changes.
Because it will change. The only question is whether you’ll be ready.
A Final Thought
As we wrapped up the meeting, someone joked again about how close we could all be to retirement. We all laughed, but the anxiety was real.
Here’s what I actually believe: The people who thrive in the next five years won’t be the ones who became slightly more productive with AI tools. They’ll be the ones who learned to see their work differently: as systems to design rather than tasks to complete.
The future isn’t about being replaced by agents. It’s about becoming the kind of professional who knows how to deploy them, monitor them, improve them, and know when to override them.
That’s a future I want to be part of. And it starts with changing what we choose to learn today.
What do you think? Are you investing in optimization or orchestration? I’d love to hear your perspective, especially if you disagree.
About This Post: This reflection came out of our October 10, 2025 Z AI Accelerate discussion, where we grappled with some uncomfortable questions about the future of knowledge work. Special thanks to my colleagues for pushing the conversation into territory most teams aren't ready to explore yet.