Vive la résistance
Over the last month, I've built six awesome apps for my personal use without looking at a line of code. This does not include the agents I've built in my day job. As a hobbyist developer, will I ever open an IDE again? Maybe not. Natural language is the new building modality.
These days, even my non-technical friends are making apps. Anyone with a credit card can easily experiment. It's simple to build hands-on (you should) or try open-source AI agents like OpenClaw (you probably shouldn't). The tools are easily available and evolving quickly.
When it comes to so-called "AI transformation," it is truly an anarchist's moment.
Top-down is a myth
When I was a CEO, I tried to lead many things top-down. Our product direction. Our sales pitch. Our market positioning. Our language models. It's actually a pretty long list. At some stage of our company development, it made sense for me to personally lead all these things. After all, I had invented them.
But as we grew, I realized that there were very few things I could successfully lead top-down. In the end, if you're a leader with any kind of scale, there are only three things you can lead top-down over the long haul: values, priorities, and what gets rewarded on your team (which has to match values and priorities or you're sunk).
Everything else is overfunctioning. If you work in the weeds rather than holding people accountable for doing good work in the weeds, you end up with a team that doesn't deliver very well. But if you leave people to work in the weeds without setting clear values and priorities, or if you comp the wrong behaviors, you also end up with a team that doesn't deliver very well.
It sounds easy on paper, but it totally is not.
All this is to say that, even before AI upended how we work day-to-day, top-down decision-making was already mostly a myth except in a very few high-stakes places.
Top-down is a double-myth when it comes to AI transformation
Over the last week, I have used Microsoft Copilot, ChatGPT, GitHub Copilot, Claude Code, Amplifier, Gemini, Udio, Researcher, Lavalier, an agent I built called Synthesizer, and probably more AI I'm forgetting. That's not even counting the many AI features baked into other software I use.
I have used AI at work and in personal projects. I have used many AI products in both settings. The tools are developing fast, many are widely available, and the lines between work computing and personal computing are blurring at a rapid rate. Practically speaking, this means that your employees can experiment with all the AI they can eat no matter what frameworks and tool choices you assert from the top.
Look, you can (and should) set privacy and security policies. You can (and should) choose to pay only for AI tools that add specific value for your business. But no matter how hard you work to prevent automated access to sensitive data in your workplace, every employee at your company can paste any internal data they want into their personal ChatGPT accounts. And, no matter what your AI usage policies say, they probably already are.
In other words, regardless of what you implement top-down, AI transformation is going to happen bottom-up one way or the other.
Enablement is stronger than compliance
I feel motivated to help make the AI transformation movement positive for labor. I think this is totally possible. But you have to accept as a given that, just as with any other groundswell movement, you won't control what's happening from the top with an iron fist.
By the time you hash out, document, and mandate your AI policies, your team has tried three more free tools. Or worse, they're too afraid to try new tools because they diligently follow your draconian policies, and your innovation falls behind.
Remember how, a couple of months ago, I published a pile of data about what was happening to job titles in AI transformation? Job titles in "AI enablement" and "AI activation" are rapidly replacing those in "AI transformation." I don't think this is an accident.
In an anarchist's moment, the most powerful thing you can do is enable people to make empowered, values-aligned decisions. As a framework for AI transformation, especially in large systems, enablement is stronger than compliance.
The bottom line: In the era of AI transformation, top-down leadership boils down to the same stuff it always has: consistent values, clear priorities, and decisions about what kinds of work get rewarded.
Kieran
If you liked this story, why not subscribe to nerd processor and get the back issues? Also, why not learn to tell data stories of your own?
My latest data stories | Build like a founder | nerdprocessor.com