Secret agents
Last year, I wrote that more than 75% of the AI startups I saw were explicitly pitching job replacement in their fundraising decks (but not always in their sales decks). The majority of these were building some kind of agentic AI.
Fast forward to today, and where are we? 
AI agent as change agent
Agentic AI is designed to act autonomously, completing tasks without continuous human oversight. It is typically focused on a specific domain. For instance, agentic AI might independently respond to customer help queries or order product inventory based on recent buyer demand.
On the ground, investments in AI agents are being made at two levels:
AI investments by leaders: The leader aims to use AI to replace a whole class of work that has historically been done by people. Using AI to handle customer help queries is a perfect example; if you can use AI agents to cover more help queries, you need fewer customer service people.
AI investments by individuals: The person uses AI to automate tasks that have historically been manual and time-consuming. For instance, someone might build an agent to summarize their long email threads and flag items needing a response. 
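To make that concrete, here's a minimal sketch of what that kind of email-triage agent can look like, written in Python with the OpenAI SDK. The model name and the stand-in thread are assumptions for illustration; in practice you'd wire it up to your own inbox and model of choice.

```python
# A minimal sketch of a personal email-triage agent. Assumes the openai
# package is installed and OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()

def triage_thread(thread_text: str) -> str:
    """Summarize an email thread and flag whether it needs a response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": (
                "Summarize this email thread in two sentences, then write "
                "NEEDS REPLY or NO REPLY NEEDED on its own line.")},
            {"role": "user", "content": thread_text},
        ],
    )
    return response.choices[0].message.content

# Example usage with a stand-in thread; in real use you'd loop over
# threads pulled from your mail provider's API.
print(triage_thread("From: boss\nSubject: Q3 report\nCan you send it by Friday?"))
```

That's the whole thing: a prompt, an API call, and a loop over your inbox.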
The leadership bets get all the press because they are connected to job loss. But the bets being made (and not made) by individuals are equally revealing.
You get an agent, and you get an agent! Everybody gets an agent!
Today, when people use AI at work, it's generally embedded in their apps. This includes ChatGPT and other chatbots, but also a wide spectrum of workplace software ranging from email to CRMs to accounting suites. In these contexts, AI is already widely used.
On the other hand, most corporate workers are not building their own AI agents. Even though 1) everyone could tell you which of their work tasks is most time-consuming, and 2) building agents has never been easier, very few people are automating their own tasks. There are three reasons why.
#1: They don't know what's possible
Most people think of AI as something they use rather than something they make. They think of an agent as software built by a developer, not as a personal assistant they train. 
It's also difficult for people to imagine what an agent might look like within the context of their role, and employees are rarely encouraged to think about this question independently.
#2: They don't believe they can create trustworthy AI
Just ask any SDR pumping out AI slop: People already trust AI in all kinds of areas, including some places they shouldn't. But when it comes to creating AI agents for individual use, the fear factor is higher.
Case in point: I know an investor who has spent her career in finance. She is an intelligent and analytical person who invests in AI companies. But when I suggested that she might build her own AI agent to automate some manual diligence tasks, she pushed back. 
"But I'm not technical," she said.
"You don't need to be," I assured her. "Doing this will not only help with your day-to-day tasks, it will also give you better insight into the kinds of investments to consider."
Ultimately, she was too afraid of messing something up to give it a try. Like a lot of people, she doesn't trust an agent not to make public mistakes, send the wrong email, or share sensitive data, especially an agent that she herself has created.
People are afraid that something will go wrong, and they are even more afraid of looking stupid when it does. 
#3: The organization says no
Sometimes the "no" is direct: in highly regulated industries like finance and healthcare, an employee's ability to build AI agents may be explicitly restricted. But even when the rules officially allow employees to build their own agents, organizations may block it in more nuanced ways. The most common blocker is data security.
Imagine an environment where employees are encouraged to automate their own workflows with AI. The CEO talks about it. Employees are rewarded for this kind of work when they do it. The company has even invested in approved tools to make this kind of building approachable. 
But the reality is that the CISO isn't always bought in. Even when employees have approved AI tools, the security team often restricts the data those tools can access. In practical terms, this means no one can build anything useful.
There are also more analog ways that organizations say no to AI building even if official company policy supports it. Employees may be too buried in day-to-day tasks to have time to experiment. Experimentation isn’t recognized as real work, so time spent testing a new agent feels like stealing from one’s actual job.
In other cases, automation work just isn't rewarded. Leaders praise heroics (“I stayed up late to finish the report”) more than systems thinking (“I built an agent so I’ll never have to stay up late again”). In many places, automation isn’t tied to promotions, bonuses, or recognition. Efficiency feels like career risk rather than a bona fide path to advancement.
The bottom line: Many of the biggest opportunities for AI at work show up when you build an employee base that is enabled, empowered, and motivated to make their own jobs easier. This is the real wave of AI transformation that most organizations have yet to catch. 
Kieran
If you liked this story, why not subscribe to nerd processor and get the back issues?
My latest data stories | Build like a founder | nerdprocessor.com