👋 Hey friends,
A few months ago, I was on a call with a founder who said something that really stuck with me:
“Our first 10 users weren’t customers. They were co-founders we didn’t pay.”
That line captures one of the most overlooked truths about building AI products right now.
Most builders start too broad. They dream of creating a “copilot for everyone” — flexible, intelligent, universal. But if you look closely, the products that actually make it? They start painfully narrow.
They don’t build for everyone.
They build for one person — so deeply that it feels like magic.
And that’s the foundation of what I call The Power User Loop — a repeatable process used by top AI builders to learn faster, design smarter, and scale without losing focus.

TL;DR
The best AI products don’t start with scale.
They start with clarity — one archetype, one workflow, and one tight feedback loop that compounds learning over time.
In this edition, we’ll explore:
Why building for everyone breaks most AI products — and how narrowing your focus creates better outcomes
How small, obsessed user groups evolve into entire markets — the quiet path from niche to adoption
The Power User Loop Framework — a step-by-step model for turning feedback into product momentum
The One-Workflow Rule — how focusing on a single repeatable task accelerates clarity and retention
The Weekly Power User Playbook — a tactical way to apply these ideas starting today
— Naseema Perveen
IN PARTNERSHIP WITH ATTIO
Introducing the first AI-native CRM
Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.
With AI at the core, Attio lets you:
Prospect and route leads with research agents
Get real-time insights during customer calls
Build powerful automations for your complex workflows
Join industry leaders like Granola, Taskrabbit, Flatfile and more.
⚠️ Why Building for Everyone Breaks AI Products
AI gives us infinite possibility — which sounds empowering until it becomes paralyzing.
With today’s tools, you can prototype an AI feature before lunch, fine-tune a model by dinner, and launch a “copilot” by the weekend. The speed is intoxicating. But here’s the trap: that speed convinces you that breadth equals progress.

You start imagining your product serving everyone — founders, marketers, students, designers, entire teams. The problem? Each of those groups speaks a different language, feels different pains, and expects different outcomes.
When you chase everyone at once, three dangerous things happen:
1. Your model gets confused.
Each workflow demands different data, prompting, and tuning. A designer might want a visual workflow; a founder might want summaries; a student might want structure and clarity. The same model can’t serve all three without compromise.
What you end up building is a tool that sort of helps everyone — and deeply satisfies no one.
In AI, “average” performance across use cases isn’t average. It’s failure.
Great AI products feel magically specific. They don’t try to generalize intelligence; they narrow it until it feels human.
That’s why the best early products focus on one archetype and one workflow. They go deep on the nuances of that user’s pain — the prompts, phrasing, and data that make the model’s response feel alive and personal.
2. Your feedback gets noisy.
When your user base is too broad, every signal becomes static.
You might get a message saying, “The UX feels clunky.” But is that because your product is genuinely broken — or because that user was never your target audience to begin with?
Early-stage AI feedback is already chaotic. Users bring different expectations, and models evolve weekly. The only way to get meaningful insights is to limit the variability.
If ten people with the same job tell you something’s confusing, that’s a pattern.
If ten random people tell you ten different things, that’s a distraction.
A clear user archetype makes every piece of feedback sharper. It’s the difference between listening to noise and tuning into a frequency.
3. Your conviction fades.
The hardest part of building AI isn’t shipping models — it’s maintaining belief through ambiguity.
When your user base is scattered, your metrics will be too. One group loves your output; another finds it useless. You start adding features to please everyone, running in circles instead of learning.
Without clear signals, conviction dies fast. You begin to chase every new request, every “Can you also make it do X?” — instead of studying how your core users actually behave.
Conviction isn’t just emotional. It’s strategic. It’s what helps you say “no” to good ideas that would break focus.
AI amplifies both focus and chaos.
The difference lies in how narrowly you start.
As one founder put it to me:
“When we built for one user, everything felt clear. When we built for everyone, everything felt broken.”
Data Snapshot
According to McKinsey’s 2025 AI Adoption Report:
42% of AI projects never reach production — mostly due to unclear scope or lack of user adoption.
However, companies that launched focused pilots with one defined user group achieved 3× higher success rates.
The takeaway?
Building small isn’t playing safe. It’s engineering for survival.
The Power User Loop Framework
Here’s how every successful AI product scales its learning without scaling chaos:
1. Observe one power user working inside your product.
2. Synthesize the pattern behind their friction, not the individual complaint.
3. Ship one micro-improvement that removes it.
4. Close the loop: show that user what changed and why.
5. Repeat weekly.
This loop compounds like interest.
Each improvement tightens alignment between product and user.
That’s how ten users can teach you what a thousand surveys can’t.
The Power User Playbook
Here’s how to put the Power User Loop into action — one week at a time.
Monday:
Talk to one user live. Watch them use your product.
Ask where they hesitate, retype, or switch tabs.
Tuesday:
Summarize what you learned. Share the clip with your team.
Circle patterns — not complaints.
Wednesday:
Ship one micro-improvement that reduces friction.
Thursday:
Announce it publicly: “You asked, we shipped.”
Tag the user who inspired it.
Friday:
Reflect. What did this week’s feedback reveal about your real workflow?
Do this for eight weeks. You’ll know exactly who your product serves — and what it truly does.
Common Pitfalls to Avoid
Even when founders understand the value of focus, the same traps appear again and again. These mistakes aren’t about bad strategy — they’re about impatience, ego, or misreading early signals.

Here’s what I see most often (and how to avoid each one).
1. Building a “general copilot.”
If your pitch says “for everyone,” you’re already off track.
Every week, there’s another deck promising a universal copilot — “for every workflow, every team, every use case.” The intention makes sense. AI feels flexible enough to serve everyone. But flexibility is not a product strategy.
The best products in this space started hyper-specific. They picked one problem, one workflow, and one archetype to obsess over — then layered out from there.
Here’s the test:
If you can’t describe your first user in a sentence (“a marketing manager who writes weekly reports”), you’re not focused enough.
A product that tries to help everyone usually ends up serving no one deeply enough to matter.
Try this instead: Write your positioning as if your product were custom-built for one person. If that person feels like you’re inside their head, you’re on the right track.
2. Listening but not learning.
Collecting feedback is easy. Translating it into design is rare.
Founders love to say, “We’re listening to users.” But listening alone doesn’t move the product forward — synthesizing feedback does.
If you have 50 pieces of user feedback, your job isn’t to act on all of them. It’s to identify the three that reveal something structural — a pattern that points to a broken assumption or a missing capability.
The best founders don’t just ask, “What did users say?” They ask, “What are they really trying to tell me?”
For example:
When users say “the model feels inconsistent,” they might actually be asking for predictability, not power.
When users ask for more customization, they might really mean “I don’t trust it to make decisions for me.”
Try this: Reframe feedback sessions as co-design conversations. Instead of asking what users want, observe how they work. Where do they hesitate? What do they correct? That’s where the learning lives.
3. Scaling too soon.
Don’t raise money to reach 10,000 users until you can keep 100 delighted.
Scaling magnifies everything — the good and the bad. If your product isn’t sticky with 100 people, it won’t magically become sticky with 10,000.
Premature scale leads to bloat: too many users, too many edge cases, too little signal. Your support load increases faster than your insight quality.
Growth is supposed to compound learning, not bury it.
Instead of obsessing over scale metrics, obsess over engagement depth. Ask:
Do users come back without being reminded?
Are they building their own workflows around your product?
Do they share it organically, not because of incentives but because it’s become part of their daily rhythm?
If you don’t have “evangelists,” you’re not ready for scale.
Try this: Focus on your stickiness metrics before your acquisition metrics. Measure love, not volume.
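To make "measure love, not volume" concrete, here's a minimal sketch of one common stickiness measure, average DAU over MAU, computed from a hypothetical event log. The data and field names are stand-ins, not a prescribed schema:

```python
# A minimal stickiness sketch. Assumes a hypothetical event log of
# (user_id, date) pairs; swap in your own analytics export.
from collections import defaultdict
from datetime import date

events = [
    ("ana", date(2025, 6, 2)), ("ana", date(2025, 6, 3)),
    ("ben", date(2025, 6, 2)), ("ana", date(2025, 6, 9)),
]

active_days = defaultdict(set)  # user_id -> set of days they showed up
for user, day in events:
    active_days[user].add(day)

monthly_actives = len(active_days)
observed_days = {day for _, day in events}
avg_daily_actives = sum(len(d) for d in active_days.values()) / len(observed_days)

stickiness = avg_daily_actives / monthly_actives  # avg DAU / MAU
print(f"Stickiness: {stickiness:.0%}")  # ~67% here: users keep returning
```

A ratio that climbs week over week, without re-engagement emails, is the "come back without being reminded" signal in numeric form.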
4. Ignoring quiet power users.
Your loudest users might not be your most insightful. The best ones rarely shout.
The users sending you the longest Slack threads or the loudest feature requests often represent edge cases. They’re useful — but they don’t define the heart of your product.
Your most valuable users are often the quiet ones who show up daily, rely on your product deeply, and rarely complain. They're quiet precisely because your product fits them.
If you don’t deliberately seek them out, you’ll miss the clearest signals.
Try this:
Track engagement time, not just messages.
Identify your top 5% of “always active” users.
Reach out personally — not with surveys, but with curiosity. Ask how they use it, what they ignore, and what they’d miss most if it disappeared.
Their answers will tell you more about your true product-market fit than any dashboard ever could.
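If it helps to make that concrete, here's a rough sketch of pulling the quiet top 5% out of a usage log. The data, field names, and the feedback threshold are all hypothetical:

```python
# A rough sketch for surfacing quiet power users: rank everyone by
# distinct active days, keep the top 5%, then filter for low feedback
# volume. All data and thresholds here are illustrative assumptions.
from math import ceil

days_active = {"quiet_dev": 61, "loud_pm": 58, "casual_1": 7, "casual_2": 4}
feedback_sent = {"loud_pm": 40, "quiet_dev": 1}  # messages/tickets filed

ranked = sorted(days_active, key=days_active.get, reverse=True)
top_n = max(1, ceil(len(ranked) * 0.05))  # the "always active" 5%
power_users = ranked[:top_n]

quiet = [u for u in power_users if feedback_sent.get(u, 0) < 5]
print("Reach out personally to:", quiet)  # -> ['quiet_dev']
```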
Bottom line:
Every founder says they’re focused until focus starts costing them optionality.
The ones who win are the ones willing to choose.
Choose a user.
Choose a workflow.
Choose the kind of feedback that sharpens your product instead of distracting it.
Because in the early stages of AI, the smaller your circle of obsession, the faster your product learns.
Why Focus Matters Even More in AI
AI systems crave context.
The more specific your users and workflows, the more coherent your data and outcomes.
The broader your use cases, the noisier your signals — and the slower your improvement loop.
That’s why horizontal AI tools often hit ceilings.
They scale users, not understanding.
The winners — Harvey, Hippocratic, Typeface — went deep before they went wide.
Their clarity became their competitive advantage.
Or as one VC put it to me:
“In AI, focus is the new scale.”

Case Study #1: Relevance AI’s “Workflow One”
When Relevance AI started, they didn’t target “anyone doing analytics.” They went after one persona: user-research teams drowning in interview transcripts.
They built one simple workflow:
Turn dozens of qualitative interviews into summarized insights using AI.
It wasn’t fancy. It didn’t require multimodal models or complex prompts. But it solved a painful, recurring bottleneck.
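For intuition, a workflow like that can be sketched as a simple map-reduce over transcripts. This is a loose illustration under my own assumptions, not Relevance AI's actual pipeline, and `call_llm` is a placeholder for whatever model API you use:

```python
# A loose "transcripts -> insights" sketch: summarize each interview,
# then synthesize across them. Not Relevance AI's actual implementation.
from typing import List

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in; wire up your model provider here.
    return f"[model output for: {prompt[:40]}...]"

def summarize_interviews(transcripts: List[str]) -> str:
    # Map: compress each interview into its key pain points.
    notes = [
        call_llm(f"List the 3 key pain points in this interview:\n{t}")
        for t in transcripts
    ]
    # Reduce: merge per-interview notes into recurring themes.
    return call_llm(
        "Synthesize these notes into themes, noting how many "
        "interviews mention each:\n" + "\n---\n".join(notes)
    )

print(summarize_interviews(["Transcript A...", "Transcript B..."]))
```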
Their first 10 customers weren’t beta testers. They were co-creators. They sent feedback daily, annotated results, and even helped refine the labeling process.
That tight loop helped Relevance AI uncover what mattered most: accuracy and interpretability.
Within six months, those same customers were asking,
“Can we use this for customer feedback? HR surveys? Sales calls?”
They didn’t pivot into those areas — they evolved into them.
One workflow led to ten. One loop became a flywheel.
Case Study #2: Superhuman’s Human Loop
Superhuman became famous for its onboarding — but few realize it started out of necessity, not luxury.
Rahul Vohra personally onboarded the first 100 users, one by one, on 30-minute calls.
He wasn’t selling. He was learning.
Three questions guided every session:
What’s your current workflow?
What frustrates you most?
What would make you feel “10× faster”?
After 100 interviews, he had something no AI model could generate: pattern recognition.
Users didn’t want automation; they wanted flow.
They didn’t want to type less; they wanted to think less about typing.
That one insight redefined Superhuman’s mission:
“We’re not building an email client. We’re selling time.”
Every product decision since — from shortcuts to split inboxes — traces back to those first 100 conversations.
Case Study #3: Harvey’s Legal Obsession
When Harvey AI launched, it could’ve been “a copilot for knowledge workers.”
Instead, it chose the toughest vertical: law.
Lawyers are meticulous. They care about accuracy, confidentiality, and precedent. They hate hallucinations.
By embedding within a single law firm early on, Harvey learned the hard edges of reliability and privacy.
That deep trust became its differentiator — and eventually, its advantage in selling to other firms.
Its founders once said:
“We didn’t build a legal model. We built a trust model.”
That trust later let them expand horizontally into adjacent professional services — audit, consulting, and compliance — with instant credibility.
Why This Works: The Psychology Behind It
Humans imitate conviction.
When a small, passionate group falls in love with a product, their energy spreads faster than ads ever could.
It's a kind of identity contagion: we adopt tools that make us feel like the people we admire.
That’s why power users aren’t just testers. They’re evangelists.
Their passion gives your product credibility before marketing does.
Bonus Insight: The Network Effect of Learning
Each power user contributes not just engagement but data density.
Their usage patterns refine prompts, fine-tune embeddings, and improve contextual understanding.
The tighter your user loop, the smarter your product gets — naturally.
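One small, concrete version of "usage patterns refine prompts": recycle a power user's accepted edits as few-shot exemplars in the next prompt. The record format below is my own assumption, purely for illustration:

```python
# Hedged sketch: treat a power user's edits as few-shot exemplars.
# The log format here is assumed for illustration, not a real schema.
corrections = [{
    "input": "Summarize: 12 interviews on onboarding friction",
    "user_edit": "7/12 stalled at SSO setup; 3 cited unclear pricing.",
}]

few_shot = "\n\n".join(
    f"Input: {c['input']}\nGood answer: {c['user_edit']}" for c in corrections
)

def build_prompt(task: str) -> str:
    """Prepend exemplars distilled from real power-user edits."""
    return f"{few_shot}\n\nInput: {task}\nGood answer:"

print(build_prompt("Summarize: 9 interviews on export bugs"))
```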
That’s how vertical AI companies like Hippocratic and Harvey outperform bigger LLM players in narrow fields.
They train on less data — but it's better data.
That’s what makes their models feel alive in context.
What You Can Do This Week
Here’s a short sprint you can start today:
1. Pick 3 active users — DM them and ask: "What's one part of your workflow that still feels painful?"
2. Watch a real session. Use Loom, Zoom, or a live call. Write down every pause, confusion, or re-prompt.
3. Craft your "One Workflow Sentence." 12 words max. If you can't describe it clearly, you haven't focused enough.
4. Ship one improvement based on a single user's behavior.
5. Share it publicly. Tell your audience what you fixed and why. You'll attract others who share the same problem.
6. Repeat weekly. Every small cycle adds clarity.
The AI Builder’s Checklist
Before shipping your next feature, ask:
✅ Does this strengthen our core workflow?
✅ Does it serve the same user archetype?
✅ Will it produce richer data or better signals?
✅ Can I track the improvement clearly?
✅ Would my power users celebrate this?
If any answer is “no,” it’s a distraction disguised as progress.
Reflection
Who’s your Power User Zero?
That one person who perfectly represents the pain you’re solving?
If you had to rebuild your entire product around them — would it make the rest of your users happier, too?
Closing Reflection
When we talk about AI, we talk about scale — billions of queries, millions of users, massive reach.
But real traction doesn’t start with numbers.
It starts with intensity.
Ten users who can’t live without you are worth more than ten thousand who barely remember you.
Every great AI product — from the smallest copilot to the largest foundation model — begins the same way:
“Finally, this gets me.”
Find that person.
Build for them.
Then let the loop do the rest.
See you next time,
— Naseema ✨
What’s the hardest early-stage trap to avoid when building AI products?
That's all for now. Thanks for staying with us! If you have specific feedback, let us know by leaving a comment or emailing us. We're here to serve you!
Join 130k+ AI and Data enthusiasts by subscribing to our LinkedIn page.
Become a sponsor of our next newsletter and connect with industry leaders and innovators.



