In an AI world, all mistakes are people mistakes


Turtles all the way down

Every big mistake I've made in my career has been a people mistake. The same is true for every leader I know.

Contrary to what you might think, the rise of AI is going to make this truer than ever.

Case study lies

It's common for business school students to analyze business successes and failures. The goal is to identify the patterns that drive success and failure respectively. Typically, B-school case studies focus on the most impactful levers, such as new products, unique manufacturing processes, or a team's ability to innovate quickly.

Occasionally, these levers include organizational factors, like how compensation is structured or how big teams are. But even when business case studies look at org factors, the analysis is generally structural, not interpersonal.

This is striking, since every decision about a major business lever is made by an individual person. In other words, the choices you make about who holds those decisions, your personnel choices, ultimately determine your business performance.

When I think about my own successes and failures in business, all the big errors, the ones that have been messiest to come back from, have been mistakes about individual people. Things like:

  • Betting on the wrong person
  • Not engaging with conflict when I needed to
  • Making decisions about individuals based on hope rather than facts
  • Not prioritizing growth mindset highly enough in the people I hire

There are many things I wish I'd known sooner: Most yellow flags are red. The more talented the person, the more harmful it is if they're toxic. By the time you're consistently questioning someone's work or character, it's too late.

But AI is machines, not people. Right?

The theory of an agentic AI world is that AI will be increasingly able to take autonomous actions in the workplace. So it might seem strange to assert that in an AI world, all the biggest mistakes are still people mistakes. But there are three reasons that your people bets matter in an agentic AI world even more than they did before.

#1: AI doesn’t decide what to optimize. People do.

If your team asks the wrong question, sets the wrong objective, or feeds biased data into a system, the model will faithfully take the wrong action faster and at scale. When things go wrong, it isn't the AI per se that has failed. The failure belongs to whoever scoped, framed, and validated the system's inputs.

The net of this is that betting on the wrong person in an agentic AI world hoses you faster and more comprehensively than ever before.

#2: Trust and adoption are human decisions.

Even the best AI systems are worthless if no one trusts or uses them.

Resistance to change, poor communication, unclear accountability, and lack of psychological safety can all derail adoption. Culture, incentives, and leadership clarity are human dynamics, not technical gaps.

For instance, it doesn't matter if your AI forecast tool can improve accuracy by 20%. If your sales leader ignores it because the tool's forecasts run counter to their intuitions, you won't get any of the benefits.

Conversely, if your sales outreach tool produces absolute dreck but your sales leader insists that you use it anyway in a misguided efficiency play, you're doomed.

Betting on the wrong people means you'll make the wrong AI bets in the first place. The bigger your AI bets, the bigger the risk to your current operations, and the more you're counting on individual people's judgment.

#3: Ethical and strategic judgment can't be automated.

AI can execute, but it doesn’t own (or care about) consequences.

Decisions about fairness, privacy, transparency, and acceptable risk come back to human judgment. If an AI system makes a harmful or reputationally disastrous decision, it’s because people failed to set or enforce boundaries.

For example, if a generative AI campaign violates copyright because no one reviewed the model's training data policy, that's a people oversight, not a model glitch.

Don't think you're off the hook just because you're using a credible third party's AI, either. As we're starting to see in cases like Mobley v. Workday and Harper v. SiriusXM Radio, both the AI vendor and their customer may be legally accountable when things go wrong.

Because AI scales the impact of decisions so quickly, making the wrong ethical call on the human side causes far more harm than it used to.

The bottom line: AI represents a tectonic shift in how work will get done. The potential to rapidly scale the wrong system, process, or ethics is like nothing we've ever seen before. That makes it more important than ever that you make the strongest possible decisions about individual personnel.

Kieran


