Most PMs are using AI. Very few are getting real leverage from it. The gap isn't about prompting skill — it's not about which model you're using or how many courses you've taken. It's about whether you have a system. And almost nobody does.
There's a specific moment I look for in conversations with PMs. I ask them: "How are you using AI in your work right now?"
The answer falls into one of two categories.
Category A: "I use it for drafting. I'll paste in some research and ask it to help me write a PRD section. Or I'll ask it to summarize user interview transcripts. Stuff like that."
Category B: "I have context files loaded with my product, users, and competitive landscape. I run skills that apply frameworks I'd normally do manually. I have an agent that runs competitive analysis every week and drops a brief in my folder."
Category A is roughly 90% of the PMs I talk to. Category B is maybe 10%.
The interesting thing: Category A PMs are often technically capable. They've taken AI courses. They know about prompting. They've tried multiple models. They're not behind on AI knowledge. They're just not getting leverage.
Category B PMs often aren't the most technically sophisticated people on their teams. They're getting leverage not because they know more about AI — but because they built a system.
What the 10% Are Doing Differently
The gap between Category A and Category B comes down to three things:
1. They have shared context.
Category A PMs start every AI session from scratch. Open Claude. Paste in the relevant background. Get output. Close the session. Tomorrow: repeat.
Category B PMs have a set of context files — company.md, product.md, personas.md, competitors.md — that live in their workspace and load automatically when Claude starts. Every AI interaction starts from a foundation of product knowledge. They never re-explain their business.
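What "load automatically" looks like depends on your tooling. As a minimal sketch, assuming a Claude Code-style setup where a CLAUDE.md memory file in the workspace root is read at session start and can import other files (the context/ directory name is my own, illustrative choice):

```markdown
# CLAUDE.md — read automatically when a session starts in this workspace

Before any task, load the PM context below.

@context/company.md
@context/product.md
@context/personas.md
@context/competitors.md
```

Once this exists, every session begins already knowing the business. The files are the asset; the wiring is a few lines.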
This single difference compounds enormously over time. The Category A PM might spend 15-20 minutes per session re-establishing context before doing the actual work. Multiply that across 4-5 AI sessions per day, 5 days a week, 50 weeks a year: even at the low end, 15 minutes × 4 sessions × 5 days × 50 weeks = 250 hours of context re-establishment per year.
2. They run skills, not raw prompts.
Category A PMs type free-form prompts. "Help me write a competitive analysis of Amplitude vs Mixpanel." The prompt quality determines the output quality, and prompt quality varies dramatically depending on the day, the PM's energy level, and how much they remember from their last successful attempt.
Category B PMs run structured skills — commands that encode PM frameworks end-to-end. /competitive-profile-builder doesn't just ask for a competitive analysis. It applies a specific methodology: positioning comparison, feature gap mapping, pricing strategy analysis, go-to-market assessment. The framework is baked in. The output is consistent and complete whether you run it on Monday or Friday, whether you're energized or exhausted.
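A skill, in this sense, is just a framework written down once. As an illustrative sketch — the file location and invocation here assume a Claude Code-style custom command, where `$ARGUMENTS` is whatever you type after the command name, and the exact wording is hypothetical — /competitive-profile-builder might be nothing more than:

```markdown
<!-- .claude/commands/competitive-profile-builder.md (hypothetical) -->
Build a competitive profile of $ARGUMENTS. Read context/product.md and
context/competitors.md first, then work through each step:

1. Positioning comparison: their story vs. ours, in their words and ours.
2. Feature gap map: what they have that we lack, and the reverse.
3. Pricing strategy: model, tiers, and what that signals about their target customer.
4. Go-to-market assessment: channels, sales motion, segment focus.

Output a structured brief with a one-paragraph executive summary on top.
```

Run it as /competitive-profile-builder Amplitude vs Mixpanel and the methodology comes with it, every time, regardless of how you're feeling that day.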
3. They've automated their repeating intelligence work.
Category A PMs have recurring research tasks they do manually: weekly competitive scans, tracking which competitors are shipping features, synthesizing user feedback. They probably don't do these consistently because they take time they don't have.
Category B PMs have automated these. A scheduled agent runs their competitive scan every Monday and drops a structured brief before standup. User feedback analysis runs at the end of every sprint. They consume intelligence rather than produce it.
The Compounding Effect of a PM System
Here's what makes the Category B approach so much more powerful than it might initially seem: it compounds.
Every context file that gets enriched makes the next AI interaction better. Every skill that gets tuned to your specific domain produces better output this month than it did last month. Every automated workflow that runs on schedule builds institutional knowledge you don't have to re-create.
A Category A PM is perpetually starting over. Each AI session resets to zero context, zero framework, zero institutional memory.
A Category B PM is building a system that gets smarter over time. The AI knows more about their product in month six than in month one. Not because the model changed, but because the context was built up.
The compounding effect is why the productivity gap between the 10% and the 90% isn't a fixed head start. It widens month after month, and over 12-18 months it becomes very hard to close.
Why Most PMs Are in Category A
The reason most PMs are in Category A isn't laziness or lack of sophistication. It's that nobody designed a Category B system for them.
Building context files from scratch is upfront work. Sitting down to write your company.md, product.md, personas.md, and competitors.md takes 2-4 hours. That's time a PM doesn't have in a normal sprint. So it never happens.
Finding and building PM-specific skills takes PM framework knowledge. Most PMs don't know how to encode Teresa Torres's continuous discovery framework into a reusable skill. They're not sure what fields to include in a good competitive profile. The barrier to building high-quality PM skills is high if you're starting from scratch.
Automation requires infrastructure. Scheduling agents to run on a cadence requires knowing what infrastructure to use, how to configure it, and how to make outputs end up somewhere useful. That's a technical setup problem most PMs aren't equipped to solve alone.
The system gap is real. It's not a personal failing of individual PMs. It's a structural problem: the tools exist, but the system that makes the tools useful for PMs doesn't come pre-built.
What Closing the Gap Actually Takes
Getting from Category A to Category B isn't about learning more AI skills. It's about making three investments:
Investment 1: Context setup. (~3 hours, one time) Build the four core context files: company.md (mission, positioning, stage), product.md (current state, key metrics, roadmap), personas.md (real jobs-to-be-done language), and competitors.md (the landscape plus your actual differentiation). Load them into your workspace once. Update them as things change.
This is the highest-leverage investment you can make. Everything else compounds on top of it.
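To make that concrete, here's an illustrative skeleton for one of the four files. The headings are suggestions, not a required schema; what matters is that the answers are specific to your business:

```markdown
# company.md

## Mission
One sentence on why the company exists.

## Positioning
Who we serve, what alternative we replace, and why we win.

## Stage
e.g., Series B, ~120 people, moving upmarket.

## Current strategic priorities
- Priority 1, and the metric it moves
- Priority 2, and the metric it moves
```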
Investment 2: Skill library. (~1 hour to set up, ongoing) Install or build skills for your three most frequent PM workflows. For most PMs: PRD generation, competitive analysis, user interview synthesis. You don't need 70 skills immediately. You need three skills that work well.
The criteria for a good skill: it encodes a framework you'd apply manually, it uses your context files, and it produces output you can actually use without extensive editing.
Investment 3: One scheduled agent. (~2 hours to set up) Pick one recurring intelligence task you do manually. Competitive monitoring is usually the best starting point because the value is immediate and obvious. Set up an agent to run it on a schedule and drop output somewhere you'll actually read it.
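The plumbing can be as plain as a cron job. A minimal sketch, assuming a CLI with a non-interactive mode (here, Claude Code's `claude -p`); every path, prompt, and schedule below is illustrative:

```bash
# Every Monday at 7:00, run the competitive scan and write a dated brief
# where you'll actually read it before standup. (In crontab, % must be
# escaped as \%, hence the \%F in the date format.)
0 7 * * 1  cd ~/pm-workspace && claude -p "Run /competitive-profile-builder for our top 3 competitors and note what changed since last week" > briefs/competitive-$(date +\%F).md
```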
This step is optional in the short term but mandatory in the long term. It's what separates a PM who uses AI from a PM whose work is powered by AI.
The Honest Reckoning
Here's the uncomfortable truth: the PMs who are in Category B right now aren't going back. They're running competitive analysis in minutes that takes their peers hours. They're walking into leadership reviews with PRDs that have already been stress-tested by AI agents. They're consuming weekly intelligence briefs they didn't have to produce.
That compounding advantage doesn't shrink over time. It grows.
If you're in Category A — using AI reactively, one-shot, resetting context every session — this isn't a criticism. It's a description of where most PMs are. The tools were legitimately not there 18 months ago to make Category B work well.
But the tools are there now. The question is whether you build the system that uses them.
The mySecond PM Operating System is the Category B system — pre-built context templates, 70+ PM skills, scheduled agent infrastructure, and onboarding that loads your context in one session. Designed for the PM who's done experimenting and wants the system that compounds.