In an AI world, all mistakes are people mistakes


Turtles all the way down

Every big mistake I've made in my career has been a people mistake. The same is true for every leader I know.

Contrary to what you might think, the rise of AI is going to make this truer than ever.

Case study lies

It's common for business school students to analyze business successes and failures. The goal is to identify the patterns that drive each outcome. Typically, B-school case studies focus on the most impactful levers, such as new products, unique manufacturing processes, or a team's ability to innovate quickly.

Occasionally, these levers include organizational factors, like how compensation is structured or how big teams are. But even when business case studies look at org factors, the analysis is generally structural, not interpersonal.

This is striking, since every decision about a major business lever is made by an individual person. The people you choose are the ones pulling those levers, which means the choices you make about personnel determine your ultimate business performance.

When I think about my own successes and failures in business, all the big errors, the ones that have been messiest to come back from, have been mistakes about individual people. Things like:

  • Betting on the wrong person
  • Not engaging with conflict when I needed to
  • Making decisions about individuals based on hope rather than facts
  • Not prioritizing growth mindset highly enough in the people I hire

There are many things I wish I'd known sooner: Most yellow flags are red. The more talented the person, the more harmful it is if they're toxic. By the time you're consistently questioning someone's work or character, it's too late.

But AI is machines, not people. Right?

The theory of an agentic AI world is that AI will be increasingly able to take autonomous actions in the workplace. So it might seem strange to assert that in an AI world, all the biggest mistakes are still people mistakes. But there are three reasons your people bets matter even more in an agentic AI world than they did before.

#1: AI doesn’t decide what to optimize. People do.

If your team asks the wrong question, sets the wrong objective, or feeds biased data into a system, the model will faithfully take the wrong action, faster and at scale. When things go wrong, it isn't the AI per se that has failed; the failure belongs to whoever scoped, framed, and validated the system's inputs.
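To make that concrete, here's a minimal sketch of the dynamic, with invented lead data and objective names (nothing here comes from a real product). The agent is trivially obedient: it scales whichever objective a person hands it, right or wrong.

```python
# Hypothetical sketch: an "agent" that faithfully optimizes whatever
# objective a person gives it. All names and data here are invented.

leads = [
    {"name": "Acme", "fit": 0.9},    # strong fit: worth real attention
    {"name": "Globex", "fit": 0.2},
    {"name": "Initech", "fit": 0.1},
]

def run_agent(leads, objective):
    """Return the actions that maximize the given objective."""
    actions = []
    for lead in leads:
        if objective == "emails_sent":
            # A person chose a proxy metric, so the agent spams everyone.
            actions.append(f"send 50 templated emails to {lead['name']}")
        elif objective == "deals_closed":
            # The metric the business actually cares about.
            if lead["fit"] > 0.5:
                actions.append(f"research and personally contact {lead['name']}")
    return actions

print(run_agent(leads, objective="emails_sent"))   # wrong action, at scale
print(run_agent(leads, objective="deals_closed"))  # the behavior you wanted
```

Same agent, same data; the only variable is the human judgment that set the objective.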

The net of this is that betting on the wrong person in an agentic AI world hoses you faster and more comprehensively than ever before.

#2: Trust and adoption are human decisions.

Even the best AI systems are worthless if no one trusts or uses them.

Resistance to change, poor communication, unclear accountability, and lack of psychological safety can all derail adoption. Culture, incentives, and leadership clarity are human dynamics, not technical gaps.

For instance, it doesn't matter if your AI forecast tool can improve accuracy by 20%. If your sales leader ignores it because the tool's forecasts run counter to their intuitions, you won't get any of the benefits.

Conversely, if your sales outreach tool produces absolute dreck but your sales leader insists that you use it anyway in a misguided efficiency play, you're doomed.

Betting on the wrong people means you'll make the wrong AI bets in the first place. The bigger your AI bets, the bigger the risk to your current operations, and the more you're counting on individual people's judgment.

#3: Ethical and strategic judgment can't be automated.

AI can execute, but it doesn’t own (or care about) consequences.

Decisions about fairness, privacy, transparency, and acceptable risk come back to human judgment. If an AI system makes a harmful or reputationally disastrous decision, it’s because people failed to set or enforce boundaries.

For example, if a generative AI campaign violates copyright because no one reviewed where the model's training data came from, that's a people oversight, not a model glitch.

Don't think you're off the hook just because you're using a credible third party's AI, either. As we're starting to see in cases like Mobley v. Workday and Harper v. SiriusXM Radio, both the AI vendor and their customer may be legally accountable when things go wrong.

Because AI scales the impact of decisions so much more quickly, making the wrong ethical call on the human side causes a lot more harm.

The bottom line: AI represents a tectonic shift in how work will get done. The potential to rapidly scale the wrong system, process, or ethics is like nothing we've ever seen before. That makes it more important than ever that you make the strongest possible decisions about individual personnel.

Kieran

