Inside the Minds of Top AI Writers: What 3000+ Articles Reveal About Converging Ideas
An in-depth analysis of AI thought leadership, writing patterns, and why casting models like a team is more than a metaphor.
In my last article, I shared four of the most common traps I’ve seen people fall into when using AI. It became one of my most popular pieces on Substack.
Then I received a pointed message: my piece seemed to be lifted from another writer’s work, especially the concept of “casting your models like a team.”
I was stunned, not just by the accusation, but also by the fact that I hadn’t even subscribed to that (very good) newsletter before. I knew where my take came from: a learning circle of thousands of practitioners from varied backgrounds, constantly exchanging experiments, failures, and insights. I distilled all of that into four patterns I kept seeing again and again.
Could it really be that all of it overlapped with one author?
I reached out to the commenter for clarification but never heard back. Totally understandable; people get busy, and I’ve forgotten to reply to messages that moved me deeply, too.
I was ready to move on. Then a friend told me, “look at this article, it’s almost exactly like yours”. It’s for programmers, but the concept, structure, the advice were so similar.
I read it and nodded along the entire time. I’ve followed his work before and respect it deeply, so I wasn’t surprised he landed on similar conclusions. Those four traps I wrote about aren’t niche; to my mind, they’re the backbone of effective AI use. If someone else reached the same conclusions, especially someone with deep technical experience, that’s validation.
Imagine how that writer must have felt on discovering our overlap: what a bummer, why do we have to write the same things… And then it hit me: what if the person who commented on my piece was simply describing exactly what they saw, just like my friend did?
That sparked a bigger question:
What if we’re all converging?
What if AI researchers, engineers, educators, even creators, are all independently arriving at the same insights?
What if we’ve already passed some invisible threshold where shared use leads to shared thinking?
That made me curious.
The Goal and Plan
I wanted to:
Identify the “top” AI voices. By “top”, I mean those who’ve posted dozens or hundreds of articles over a long stretch.
Analyze their core concepts, writing styles, and points of overlap.
Track how their ideas and perspectives have evolved.
Explore how the idea of “casting models like a team” shows up, similarly or differently, across them.
To tackle this, I used a combination of tools:
o3 for strategic comparisons and hypothesis testing,
GPT-4o for general discussion and synthesis,
Cursor + Claude Sonnet 4 Thinking for large-scale data collection, analysis, and raw content extraction,
NotebookLM to distill and surface high-level insights from the collected material.
Step 1: Collect The Top AI Voices
Industry Leaders I (should) Admire
I’ll admit, I haven’t done a great job consistently following the insights of foundational figures in AI. This was my chance to correct that and build a better mental map. I focused on those I already knew or could easily explore:
Sam Altman - OpenAI
Lilian Weng - OpenAI
Andrew Ng - DeepLearning.AI
Clem Delangue - Hugging Face
Dario Amodei - Anthropic
Jeff Dean - Google
Ali Ghodsi - Databricks
While these leaders have immense influence, most don’t publish frequently enough to support deep longitudinal analysis. I guess when you’re actively shaping the future, blogging understandably takes a back seat.
Substack Thought Leaders
This part was easier. Earlier this year, I pushed out a Substack Explorer site to help me discover newsletters across specific niches. So I used it to filter for:
Newsletters with 10k+ subscribers (enough writing volume)
Content that focuses on opinion and insight, not just news curation
I then scraped and collected archives of the most promising ones. After an initial screen for clarity, consistency, and relevance, I narrowed it down to a small list, enough to satisfy my curiosity (in no specific order):
One Useful Thing by Ethan Mollick
Elevate by Addy Osmani
Write With AI by Nicolas Cole
DiamantAI by Nir Diamant
Artificial Intelligence Made Simple by Devansh
AI Supremacy by Michael Spencer
AI Disruptor by Alex McFarland
AI Snake Oil by Arvind Narayanan and Sayash Kapoor
The Sequence
Refactoring by Luca Rossi
Peter Yang’s newsletter (formerly Creator Economy)
The Dan Koe
Future/Proof
Step 2: Overall Analysis
2.1. Industry Leaders’ Opinions
Before diving into the Substack voices, I wanted to understand how high-impact leaders (those steering labs, products, and policy) frame their thinking.
Due to their limited output, I simply compared the semantic similarity of their published views:
The “AI Democratization Cluster”
Clem Delangue and Andrew Ng showed the highest similarity (0.970), closely followed by Ali Ghodsi. Common ground: democratizing AI via open source (Delangue), education (Ng), and enterprise enablement (Ghodsi).
Jeff Dean’s Isolation
Dean’s lowest similarity was with Lilian Weng (0.214). Dean reflects Google’s deliberate, research-centric ethos; Weng, from OpenAI, leans toward fast iteration and capability expansion.
Altman’s Bridge Role
Sam Altman sits somewhere in the middle, balancing research, deployment, and public discourse.
Business Model as a Predictor
Alignment in openness (Delangue, Ng, Ghodsi) correlates with similar opinions. Closed-source actors (like Dean) diverge, regardless of technical depth.
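If you want to reproduce this kind of comparison, here’s a minimal sketch, assuming each leader’s public writing has been collected into one string per person; the embedding model is my choice for illustration, not necessarily what produced the numbers above:

```python
# Pairwise semantic similarity between leaders' public writing.
# Assumes sentence-transformers and scikit-learn are installed.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

texts = {
    "Delangue": "Open-source AI will...",   # placeholder: concatenated posts
    "Ng": "AI education should...",
    "Ghodsi": "Enterprises adopt AI by...",
    "Dean": "Our research shows...",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(texts)
embeddings = model.encode([texts[n] for n in names])

sims = cosine_similarity(embeddings)  # symmetric matrix of pairwise scores
for i, a in enumerate(names):
    for b, score in zip(names[i + 1:], sims[i][i + 1:]):
        print(f"{a} vs {b}: {score:.3f}")
```

Scores like the 0.970 (Delangue vs. Ng) and 0.214 (Dean vs. Weng) above are read off exactly this kind of matrix.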
Despite different backgrounds and responsibilities, they converge on a few clear themes:
AI as transformation: Everyone agrees this is a once-in-a-century shift.
Responsible rollout: Most advocate for iterative, cautious deployment.
Human-centric purpose: Ultimately, AI should amplify human value, not replace it.
But their angles and strategic philosophies diverge.
Altman: Long-term governance and AGI readiness
Amodei: Fast-forwarding human progress, if safety is ensured
Ng: Education + open knowledge as national strategy
Weng: Technical deep dives into planning, memory, tools
Delangue: Open-source advocacy and decentralized evaluation
Ghodsi: Enterprise-focused delivery: small start, fast iteration
Each leader represents a critical node in the broader AI ecosystem, from safety to scale, research to real-world ops.
Their views provided a helpful benchmark. They’re not writing weekly essays, but when they speak, it’s worth listening.
2.2 Substack Top Voices
To complement the leader benchmark, I ran a further bulk analysis across the curated set of prolific Substack newsletters.
2.2.1 Conceptual Clusters: The Intellectual Geography
A t-SNE plot of all articles revealed clear thematic groupings.
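For those following along at home, a projection like this takes only a few lines; here’s a sketch with random toy stand-ins for the real article embeddings and newsletter labels:

```python
# 2-D t-SNE projection of article embeddings, colored by newsletter.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 384))                      # stand-in for real embeddings
labels = ["The Sequence"] * 100 + ["One Useful Thing"] * 100  # stand-in labels

coords = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(embeddings)

for newsletter in sorted(set(labels)):
    mask = np.array([l == newsletter for l in labels])
    plt.scatter(coords[mask, 0], coords[mask, 1], label=newsletter, s=8)
plt.legend()
plt.title("Thematic map of AI newsletter articles")
plt.show()
```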
The five territories in the AI Substack landscape become clear:
Technical Deep-Dive Archipelago (far right): The Sequence dominates this space with dense, research-heavy content - academic, consistent, specialized.
Practical AI Hub (center-right): Elevate, DiamantAI, and One Useful Thing lead here, blending hands-on use with clear, accessible writing. My own articles cluster nearby - hoping to bridge theory and implementation.
Critical Analysis Zone (upper left): AI Snake Oil stands apart - rigorous, skeptical, focused on ethics, policy, and systemic risk.
Creator Economy Bridge (left-center): Creator Economy focuses on how AI empowers solo builders and indie entrepreneurs.
Synthesis Sweet Spot: My own work spans multiple zones; not drift, but an attempt to connect the dots.
2.2.2 Stylometric Fingerprints: The Writing DNA
Stylometric analysis highlights not just what top voices say, but how they say it, revealing patterns in readability, sentence structure, vocabulary, and tone.
Readability
Most accessible: The Dan Koe (68.9), Write With AI (65.1), Creator Economy (59.8)
Balanced: Myself (50.0), One Useful Thing (53.9)
Most dense: The Sequence (35.5), DiamantAI (32.6)
Sentence Length
Concise: Myself (14.9), Write With AI (14.7)
Academic: One Useful Thing (20.8), The Sequence (20.2)
Lexical Diversity
High: The Sequence (0.54)
Balanced: Myself (0.42), Refactoring (0.43)
Low: Devansh (0.33), One Useful Thing (0.31)
Tone (Modal Use)
Cautious: The Dan Koe (0.021), One Useful Thing (0.021)
Confident: The Sequence (0.009), Myself (0.012)
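All four dimensions are cheap to compute. A minimal sketch, assuming plain-text articles; readability uses textstat’s Flesch Reading Ease, and the modal word list is my own rough proxy for how tone was scored:

```python
# Stylometric fingerprints: readability, sentence length, lexical
# diversity (type-token ratio), and modal-verb rate as a tone proxy.
import re
import textstat

MODALS = {"might", "may", "could", "should", "would", "must"}

def stylometrics(text: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "readability": textstat.flesch_reading_ease(text),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "lexical_diversity": len(set(words)) / max(len(words), 1),
        "modal_rate": sum(w in MODALS for w in words) / max(len(words), 1),
    }

print(stylometrics("The model might help. It could also fail in odd ways."))
```

(One caveat: raw type-token ratio penalizes longer articles, so diversity comparisons are fairest on similar-length samples.)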
I learnt that my writing lands where I’d hoped: clear, confident, and efficient. Not overly dense, but not too casual either.
2.2.3 Topic Overlap: The Uniqueness Map
The topic overlap matrix reveals just how fragmented the AI content landscape is:
Most overlap scores are only 0.1–0.2, meaning most writers share just 10–20% topical focus.
Even the highest overlap - between Write With AI and The Dan Koe - is only 0.39, despite both focusing on AI for creators.
My own overlap sits around 0.1–0.15 with most, suggesting breadth over niche focus.
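Overlap in this 0-to-1 range is what a set-based measure like Jaccard similarity over each newsletter’s top keywords produces; a toy sketch, with made-up keyword sets:

```python
# Topic overlap as Jaccard similarity over keyword sets
# (real keyword sets would come from TF-IDF or topic modeling).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

topics = {
    "Write With AI": {"prompts", "writing", "creators", "workflows"},
    "The Dan Koe": {"creators", "focus", "writing", "leverage"},
}
print(jaccard(topics["Write With AI"], topics["The Dan Koe"]))  # 0.33
```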
Now I was ready to look more closely at how opinions evolve, specifically on Substack.
Step 3: Opinion Evolution on Substack
I first tried to extract opinion evolution mathematically.
3.1. Quantitative Analysis: Useful, but Not Insightful
Working with o3, I measured:
Similarity Drift – Semantic distance over article sequence
Complexity Evolution – Variance in topic embeddings
Topic Consistency – Shifts in subject matter
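Similarity drift is the easiest of the three to make concrete; a sketch, assuming article embeddings sorted by publish date (random toy data stands in here):

```python
# Similarity drift: cosine distance between consecutive articles,
# in chronological order. Higher values = bigger jumps in content.
import numpy as np

def similarity_drift(article_embeddings: np.ndarray) -> np.ndarray:
    a, b = article_embeddings[:-1], article_embeddings[1:]
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )
    return 1.0 - cos

drift = similarity_drift(np.random.rand(50, 384))  # toy stand-in data
print(f"mean drift: {drift.mean():.3f}")
```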
Here are some examples for Write With AI (charts omitted in this version).
While the quantitative metrics showed how content changed mathematically, I still didn’t know how their actual opinions evolved.
What I really wanted to see was language like: “He used to believe... now he sees AI as...” or “Her stance shifted from skepticism to strategic integration.”
So I needed a more descriptive approach, something that could tell the story behind the metrics.
3.2. Qualitative Evolution: The Real Shift
I’ve been using Cursor extensively, whether for scanning entire codebases or digging through shared folders, and it’s been incredibly helpful. So I applied the same approach here.
With the full article archives already collected for each Substack writer, I crafted a master prompt to serve as a consistent instruction set for Cursor, guiding it folder by folder. The goal: to extract a clear, descriptive narrative of how each writer’s opinions evolved over time.
I defined the objective, provided the evolution criteria, and ran the process across each complete archive. The result?
Roughly 200 lines of distilled opinion analysis per newsletter, exactly what I needed to move beyond the metrics and uncover the story behind the shifts (an example output is shown in screenshot below).
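The full prompt isn’t included here (see the P.S.), but a condensed, illustrative reconstruction of the shape such a master prompt can take:

```
You are analyzing one newsletter's complete archive, folder by folder,
in chronological order. For each year of articles:
1. Summarize the dominant topics and the author's stated opinions.
2. Note any reversals, softenings, or hardenings of earlier positions.
3. Quote one or two sentences as evidence for each claimed shift.
Then write a distilled narrative covering: overall arc, key shifts,
confidence trajectory, and sentiment trajectory.
```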
Each writer had their own trajectory. Here’s a distilled overview:
3.2.1. The Sequence
Arc: From research curation to thought leadership and technical commentary
Key Shifts:
Aggregation → Deep insight and platform segmentation (Research, Opinion, Edge)
External linking → Original, critical frameworks
Confidence: Moderate → Very high
Sentiment: Cautiously optimistic → Technically realistic
Key Insight: Exceptional adaptability; became a multi-stream intellectual engine in the AI space
3.2.2. AI Snake Oil (Narayanan & Kapoor)
Arc: From debunking hype to building frameworks and engaging with policy
Key Shifts:
Reactive critiques → Proactive theorizing (e.g., “AI as normal tech”)
Scholarly lens → Societal-scale critique
Confidence: High → Very high
Sentiment: Alarmed academic → Authoritative policy voice
Key Insight: Evolved into the gold standard of AI criticism through depth, consistency, and rigor
3.2.3. Elevate (Addy Osmani)
Arc: From developer motivation to nuanced AI integration strategies
Key Shifts:
Broad self-dev → Engineering practice → AI-enhanced workflows
General audience → Senior technical professionals
Confidence: High → Very high
Sentiment: Motivated → Pragmatic → Experienced
Key Insight: Merged growth mindset with grounded engineering wisdom as AI entered the developer’s toolkit
3.2.4. Write With AI (Nicolas Cole)
Arc: From tool-centric to systems and strategy-oriented writing
Key Shifts:
Prompt tips → Workflow building → Business outcomes
Hobbyists → Content teams → Entrepreneurs
Confidence: Optimistic → Strategic
Sentiment: Curious → Confident → Tactical
Key Insight: Became a blueprint provider for AI-enabled business growth
3.2.5. DiamantAI (Nir Diamant)
Arc: From academic theory to implementer education
Key Shifts:
Dense, niche research → Accessible, modular tutorials
AI experts → Intermediate devs → New tech professionals
Confidence: High in theory → High in teaching
Sentiment: Controlled → Empowering
Key Insight: Excelled at translating complexity into clarity, becoming a go-to AI educator
3.2.6. Refactoring (Luca Rossi)
Arc: From founder reflections to AI-integrated tech leadership
Key Shifts:
Anecdotal stories → Structured playbooks → AI-first team dynamics
Tactical lessons → Strategic frameworks (e.g., “tech capital”)
Confidence: Medium → Very high
Sentiment: Reflective → Constructive → Forward-looking
Key Insight: A masterclass in how technical leadership evolves with AI as both tool and strategy
3.2.7. AI Supremacy (Michael Spencer)
Arc: From excitement about tools to geopolitical systems analysis
Key Shifts:
AI productivity apps → Infrastructure → China vs. U.S. AI positioning
Western consumer focus → Global industrial view
Confidence: Moderate → Authoritative
Sentiment: Excited → Strategic and analytical
Key Insight: Offers rare, consistent geopolitical framing of the AI arms race
3.2.8. Peter Yang (Creator Economy)
Arc: From personal musings to frameworks for creator-led AI leverage
Key Shifts:
Lifestyle blog → Career tips → AI-forward solo entrepreneurship
Passive content → Strategic frameworks and tooling
Confidence: Low → High
Sentiment: Curious → Professional → Visionary
Key Insight: Showed how to transform creator experimentation into systematic AI leverage
3.2.9. One Useful Thing (Ethan Mollick)
Arc: From theorist to practitioner to systems-level commentator
Key Shifts:
AI skepticism → Embrace → Institutional experimentation
Personal improvement → Societal transformation
Confidence: High → Very high
Sentiment: Neutral → Urgently optimistic
Key Insight: A rare example of academic agility, adapting to tools through usage, not just theory
3.2.10. Artificial Intelligence Made Simple (Devansh)
Arc: From skepticism about model complexity to strategic systems thinking
Key Shifts:
Simplicity advocacy → Coordination-aware architecture
Technical critique → Socio-technical governance
Confidence: High in critique → High in structural realism
Sentiment: Skeptical → Analytical → Strategic
Key Insight: Developed a deep systems lens, viewing AI as both infrastructure and narrative scaffolding for coordination
Shared Trends:
They all moved through a similar evolution:
Curiosity → Frameworks
Tools → Strategy
Human-centric → System-oriented
Step 4: Deep Dive: “Cast AI Models Like a Team”
I was specifically called out for the phrase “cast models like a team,” so I dug into how others described multi-model workflows.
I asked Cursor to analyze how different writers approach multi-model workflows, focusing on:
What frameworks they use
Where they overlap
How their approaches differ
Besides Mollick, I studied Addy Osmani, Devansh, AI Disruptor (Alex McFarland), and AI Adopters (Kamil Banc).
Despite coming from different domains (engineering, content, orchestration, business), they all share one foundational belief:
AI models should be treated as specialized team members, not interchangeable tools.
The shared principles are:
Role-Based Specialization: Models are assigned tasks based on unique strengths, not generic utility.
Workflow Integration: Emphasis on structured pipelines, not one-off prompts.
Quality via Orchestration: Cross-checks, verification, and layered use yield better results.
Cost-Consciousness: Resource allocation matters; premium models are used only when they add value.
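To make those principles concrete, here’s a minimal sketch of what “casting models like a team” can look like in code; the role names, model names, and the `call_model` helper are all illustrative assumptions, not anyone’s actual setup:

```python
# Role-based model casting: route each stage of a task to the model
# whose strengths fit, reserving the premium model for the final pass.
ROLES = {
    "brainstorm": "fast-cheap-model",  # volume of ideas over polish
    "draft": "general-model",          # solid first pass
    "critique": "reasoning-model",     # cross-check and verification
    "polish": "premium-model",         # quality only where it pays off
}

def call_model(model: str, prompt: str) -> str:
    """Placeholder for your actual API client."""
    return f"[{model}] response to: {prompt[:40]}..."

def run_pipeline(task: str) -> str:
    ideas = call_model(ROLES["brainstorm"], f"List angles for: {task}")
    draft = call_model(ROLES["draft"], f"Draft a piece using:\n{ideas}")
    notes = call_model(ROLES["critique"], f"Critique this draft:\n{draft}")
    return call_model(ROLES["polish"], f"Revise.\nDraft:\n{draft}\nNotes:\n{notes}")

print(run_pipeline("why multi-model workflows beat single-model prompting"))
```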
There are a few key differences in emphasis, shaped by each writer’s domain.
Personally, I use the same “intern” metaphor at work a lot, but my practical approach aligns most closely with Devansh’s layered orchestration.
It turns out I wasn’t alone. Nearly everyone who works deeply with multi-model AI ends up independently framing their approach this way. I truly believe:
The metaphor of “casting AI like a team” isn’t just useful.
It’s inevitable for serious builders.
Reflection: What This Means for Builders
When I was first accused of copying, I felt defensive. But after digging into this analysis, I’m genuinely grateful for the nudge.
It pushed me into a space I’d long been curious about but hadn’t fully explored. It forced me to examine not just ideas, but how they evolve, how opinions converge, then diverge in use, application, and nuance. Watching each newsletter’s growth journey unfold was, frankly, fascinating.
Without that moment of discomfort, I doubt I would’ve taken the initiative to collect and analyze over 3,000 articles across the AI landscape.
More than anything, it taught me to approach other people’s work with greater respect and intellectual rigor. There’s so much to learn, not just from what they say, but how their thinking takes shape.
The truth about newsletter growth:
It’s easy to forget: even Ethan Mollick’s earliest posts had just a handful of likes. Most of these now-prominent newsletters took months, sometimes years, to gain traction.
Meanwhile, I’ve quit YouTube, blogs, and social media more times than I can count. There’s no shame in that. The key is to find a format that fits, stick with it, and keep creating, even when no one’s watching.
A great newsletter isn’t built in a day. Ethan’s early posts had limited engagement, just like yours, just like mine.
Yes, it’s the same old advice: persevere. We all know it. We just don’t always do it. And the real lesson? Find where your voice resonates, and keep going, even if the room is quiet.
The part that scared me most:
Some newsletters had so much content that I genuinely worried whether AI could even process the volume. When collecting them, I found myself hoping, “Please let this be parsable.”
To make sense of it all, I ran similarity tests, comparing my own articles against those from all these newsletters. The article I was originally accused over showed meaningful overlap with nearly every newsletter I studied.
I’ve since compiled those overlapping articles into a reading list. My goal now is to dive into each one, understand the topics they explore, identify what resonates, and study what made them effective. That list keeps growing…
What struck me the most:
If dozens of top AI minds independently converge on the same concept, it doesn’t mean any of us are unoriginal. It means the idea is strong.
There’s power in shared intuition. When the best practitioners all land in the same place, pay attention.
Confession:
This entire analysis, from collecting materials to trial-and-error testing to compiling the final version, took under 20 hours. So yes, it’s far from exhaustive.
There are many fascinating angles I didn’t include. Some analyses I ran but didn’t fully explain. And I didn’t replicate every comparison across all newsletters.
Plus, with limited access to some subscriptions, some datasets may have been incomplete or inconsistently processed. The results are enough to surface strong patterns, but they’re not perfect.
One last note:
I don’t have a “things I enjoyed” list this week; the newsletters I mentioned are what I enjoyed. Check them out, they’re genuinely worth reading.
P.S. I didn’t include all the prompts, technical details, or full analysis results here. If you’re interested in those specifics, let me know.
I haven’t decided how best to publish the complete study yet. A full version might read like a dissertation, and I don’t want to bury you in academic overload. Maybe I’ll upload it to my site, or compile everything into a Notion page for those who want to dig deeper.
P.P.S. Do you have a landscape you want to investigate like this?