To ReadMe or Not to ReadMe?
An outstanding CEO I know recently told me that he has historically been a big reader, but over the last 6 months, AI has replaced his reading habit. Rather than reading a book cover to cover, he has a conversation with AI about the concepts in the book. He considers it a more practical and effective way to learn.
I can't get this out of my head. This is an incredibly intelligent and successful AI-native leader, replacing his lifelong reading habit with AI use. As an intentional strategy.
Sit with that for a minute.
What are words for?
I read 70-80 books a year. Most of my reading list is fiction, so the above strategy would be neither useful nor desirable for me. But if the AI were good enough, I can see why this strategy would be powerful for certain non-fiction. Especially for business writing that can be a bit dry.
Of course, right now there are significant IP issues with working this way. This CEO gets the benefit of the original author's thinking without paying them for the book. Since general-purpose AIs are not paying authors to train on their content, if a lot of people work like this CEO does, it quickly stops making economic sense for people to write this kind of book.
I know some authors who have trained their own proprietary AIs and launched them as products, but I don't know anyone who has made real money doing this. It's hard to see the economics of this working out, since general-purpose AIs continue to vacuum up the author's public articles, speeches, and social posts and regurgitate them for free.
The IP issues are a big deal, but the problem with source material quality is even bigger.
Who counts as trustworthy?
In the old world, you consume an author's content in one of two ways: directly from the author, or via other people's discussion of their work. When you read the author's book or watch their presentations, the content is presented exactly as the author intends, with their name and reputation standing behind it. You hear directly from the author, and you can decide for yourself how credible they are.
Similarly, when you discuss an author's work with a group in a class or on social, you are clear on which comments come from the original author and which come from Joe Schmo the project manager at FakeCorp. Here too you can decide for yourself how much you trust each person in the conversation.
None of this applies in the new AI world. When you have conversations with an AI about an author's original ideas, everyone knows that the concepts may not be presented accurately. But it's worse than that. None of it is credible, because you can't tell which content comes from the original author and which is bolted-on commentary from randos on the internet.
Who's an expert, anyway?
It would be comforting to conclude that material synthesized by AI can never be as good as material written, spoken, and debated the old-fashioned way. I'll save this for another post, but I don't believe that's the case. Given the right source material, AI will become stellar at this kind of synthesis. That makes the source material issue even more problematic.
The vast opportunity, and long-time yikes, of the internet is the total democratization of information. On the internet, any wacko can set up an account or a website, pontificate authoritatively without substantial knowledge, and go toe-to-toe with a credentialed expert. It doesn't matter what is true; the content that gets seen the most is the content that gets clicked on the most.
This is of course not new. Online, the space between thoughtful, expert content and random comments on Reddit and X has always been narrow. This is why many schools try to teach kids how to distinguish credible content from fake stuff. Unfortunately, when general-purpose AI sucks in training data, it doesn't even try to tell the difference.
The bottom line: As any kindergarten teacher can tell you, just you saying something doesn't make it true. But you saying it and a zillion people clicking on it and AI training on it and resurfacing it makes it "true." And in a world where super smart people are increasingly relying on AI as their primary vector for learning and decision-making, that's a problem.
Kieran
If someone sent you this nerd processor and you liked it, here's a direct subscribe link with all the back issues. If you love AI + work nerdiness, please consider leaving me a tip to keep nerd processor broadly available!
My latest data stories | Build like a founder | nerdprocessor.com