
Stop Manually Applying PM Frameworks. Here's How AI Does It Better.

Ron Yang · March 28, 2026 · 9 min read

PM frameworks like Teresa Torres's Opportunity Solution Trees, Marty Cagan's problem-first PRDs, RICE scoring, and Jobs-to-be-Done break down not because they're flawed, but because manual application is inconsistent. Embedding these frameworks into AI skills that run against your product context produces higher-quality, more consistent outputs than any PM can achieve manually — in a fraction of the time.

You know the frameworks. You also know what actually happens when you try to apply them.

You pull up Torres's opportunity mapping structure, open a blank Miro board, and start trying to remember how assumptions map to opportunities. Thirty minutes in, you've got a half-built tree that doesn't connect to your interview data. You tell yourself you'll finish it later. You won't.

"Defining the roadmap, innovation, MRE, PRD from scratch. We have some structured process that is improving through use. It feels more painful than necessary."

The frameworks aren't the problem. The manual application is.


The Real Problem With PM Frameworks

Every PM has a shelf of mental models they believe in but don't consistently use. The gap between knowing a framework and applying it correctly — every time, across every deliverable — is where quality breaks down.

Manual framework application fails for three reasons:

  1. Inconsistency. You apply RICE rigorously on Monday, then eyeball priorities on Thursday when you're behind.
  2. Context loss. The framework needs your personas, your competitive landscape, your product constraints. By the time you've assembled that context, you've lost the thread.
  3. Decay. Frameworks work best when they compound — when your opportunity tree feeds your PRD which feeds your roadmap. Manually maintaining those connections is a full-time job nobody has.

The answer isn't better discipline. It's embedding the frameworks into a system that applies them automatically, with your product context, every time you run a command.


5 Frameworks You Should Stop Applying Manually

1. Teresa Torres's Continuous Discovery — Automated Opportunity Mapping

Torres's Continuous Discovery framework is powerful — but the synthesis step is where most PMs stall. You end up with interview notes in one place, opportunities in another, and an OST that's perpetually "in progress."

When you run /opportunity-solution-tree, the skill applies Torres's framework automatically — mapping a business outcome to user opportunities framed as problems (not solutions), generating testable solutions for each opportunity, and connecting findings to your existing personas and research. We cover interview analysis in depth in our guide to analyzing user interviews with AI.
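To make the synthesis target concrete, here's a minimal sketch of an opportunity solution tree as a data structure, in the spirit of Torres's framework. The class and field names are illustrative assumptions, not mySecond's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    description: str
    assumption_tests: list[str] = field(default_factory=list)  # experiments to validate it

@dataclass
class Opportunity:
    problem: str  # always framed as a user problem, never a solution
    evidence: list[str] = field(default_factory=list)  # interview quotes, research links
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    business_outcome: str  # the single outcome at the root of the tree
    opportunities: list[Opportunity] = field(default_factory=list)

tree = OpportunitySolutionTree(
    business_outcome="Increase trial-to-paid conversion",
    opportunities=[Opportunity(
        problem="Users can't tell which plan fits their team size",
        evidence=["Interview #4: 'I had no idea which tier we needed'"],
        solutions=[Solution("Plan recommendation quiz", ["Fake-door test on pricing page"])],
    )],
)
```

Encoding the outcome-opportunity-solution hierarchy as data is what lets each new interview finding attach to an existing node instead of starting another blank Miro board.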


2. Marty Cagan's Problem-First PRD Structure

Cagan's approach demands that every PRD start with the problem, not the solution. In practice, most PRDs still start with a feature description and work backward.

The /prd-generator skill enforces Cagan's structure as a constraint, not a suggestion — leading with the problem statement grounded in your actual user pain points, success metrics tied to your goals, and competitive context from your loaded profiles. For a detailed before/after comparison, see why your AI-generated PRDs are generic and how to fix it.
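As an illustration of "structure as a constraint": a problem-first template can make the solution field impossible to fill in before the problem fields exist. This is a hypothetical sketch, not the actual /prd-generator template:

```python
from dataclasses import dataclass

@dataclass
class ProblemFirstPRD:
    # Cagan's ordering enforced structurally: problem fields are required,
    # the solution field is optional and comes last
    problem_statement: str        # grounded in observed user pain, not a feature pitch
    affected_personas: list[str]  # who hits this problem, from loaded persona context
    success_metrics: list[str]    # how we'll know the problem is solved
    proposed_solution: str = ""   # may legitimately start empty

prd = ProblemFirstPRD(
    problem_statement="Admins spend ~2 hours/week manually reconciling seat counts",
    affected_personas=["IT Admin"],
    success_metrics=["Reconciliation time under 10 minutes"],
)
```

The point is that a PRD without a problem statement simply can't be constructed, which is what separates a constraint from a suggestion.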


3. Gibson Biddle's DHM Model — Automated Competitive Analysis

What it does manually: Biddle's Delight, Hard-to-Copy, Margin-Enhancing (DHM) model forces you to evaluate every product initiative against three criteria. Does it delight customers? Is it hard for competitors to copy? Does it improve margins? It's a brilliant strategic filter — when you remember to use it.

How AI automates it: The /competitive-profile-builder skill structures competitive analysis through the DHM lens. When you analyze a competitor, the output isn't just a feature comparison table. It maps where competitors delight users, identifies what's defensible versus easily replicated, and surfaces margin dynamics.

Combined with your loaded product context, this means competitive analysis that connects directly to your strategic positioning — not a generic SWOT matrix you'll never look at again.

What changes: Competitive reviews shift from "what features do they have" to "where are they delightful, where are they vulnerable, and what can we build that's hard to copy." That's a fundamentally different strategic conversation.
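For concreteness, here's a minimal sketch of the DHM filter as a structured assessment. The 1-5 scale and the passes() threshold are illustrative assumptions; Biddle's three questions map directly to the three fields:

```python
from dataclasses import dataclass

@dataclass
class DHMAssessment:
    initiative: str
    delight: int       # 1-5: how much does this delight customers?
    hard_to_copy: int  # 1-5: how defensible is it against competitors?
    margin: int        # 1-5: does it improve unit economics?
    rationale: dict[str, str]  # the evidence behind each score

    def passes(self, threshold: int = 3) -> bool:
        # A strategic filter: flag any initiative weak on one of the three dimensions
        return min(self.delight, self.hard_to_copy, self.margin) >= threshold

audit = DHMAssessment(
    initiative="Usage-based analytics dashboard",
    delight=4, hard_to_copy=2, margin=3,
    rationale={"hard_to_copy": "Three competitors shipped similar dashboards last year"},
)
print(audit.passes())  # False: weak on defensibility, which is the conversation worth having
```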


4. RICE/ICE Prioritization — Automated With Your Actual Data

What it does manually: RICE (Reach, Impact, Confidence, Effort) and ICE (Impact, Confidence, Ease) are the most commonly cited prioritization frameworks in product management. They're also the most commonly fudged. PMs fill in scores based on gut feel, argue about whether something is a "3" or a "4" on impact, and end up with a spreadsheet that confirms whatever they already believed.

How AI automates it: When you run /roadmap-builder with your product context loaded, prioritization scoring draws from actual data:

  • Reach connects to your real user segments and persona definitions
  • Impact is evaluated against your stated product goals and success metrics
  • Confidence is scored based on available evidence — interview data, usage patterns, competitive signals
  • Effort is informed by your product's current technical state

The framework still requires your judgment. But instead of starting from a blank spreadsheet and making up numbers, you start from a structured assessment grounded in your product reality. You review and adjust the scores, rather than fabricating them.
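The arithmetic behind RICE is simple: multiply reach, impact, and confidence, then divide by effort. Here's a minimal sketch using Intercom's original scales, with a hypothetical evidence field standing in for the context-grounded rationale described above:

```python
from dataclasses import dataclass

@dataclass
class RiceScore:
    feature: str
    reach: float       # users affected per quarter, from real segment sizes
    impact: float      # 0.25 / 0.5 / 1 / 2 / 3, judged against stated goals
    confidence: float  # 0.0-1.0, based on available evidence
    effort: float      # person-months, informed by current technical state
    evidence: dict[str, str]  # the "why" behind each input, for stakeholder questions

    @property
    def score(self) -> float:
        return (self.reach * self.impact * self.confidence) / self.effort

feature_x = RiceScore(
    feature="Bulk export", reach=1200, impact=1.0, confidence=0.8, effort=2.0,
    evidence={"confidence": "Requested in 6 of 20 interviews; no usage data yet"},
)
print(f"{feature_x.feature}: {feature_x.score:.0f}")  # Bulk export: 480
```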

Before (manual RICE):

A PM opens a spreadsheet. Fills in scores 1-5 based on instinct. Debates with the team about whether "Impact" means revenue impact or user impact. Ships the thing the loudest stakeholder wanted anyway.

After (embedded framework):

The scoring pulls from your defined personas, goals, and competitive context. When a stakeholder questions why Feature X scored lower, you can point to the evidence behind each score — not your subjective ranking from Tuesday's meeting.


5. Jobs-to-be-Done Interview Analysis — Automated Synthesis

What it does manually: JTBD analysis requires identifying the functional, emotional, and social jobs users are trying to accomplish. It means parsing interview transcripts for hire/fire moments, progress-making forces, and anxieties. Most PMs either skip the structured analysis entirely or do it inconsistently across interviews.

How AI automates it: The /user-interview-analyzer skill applies JTBD extraction as a core layer of every interview analysis. For each transcript, it identifies:

  • Functional jobs (what the user is trying to accomplish)
  • Emotional jobs (how they want to feel)
  • Goals and frustrations mapped to your product's existing persona definitions
  • Follow-up questions to fill gaps in the next interview

The JTBD framework isn't an optional add-on. It's embedded in how the skill processes every interview, ensuring consistent extraction whether you analyze one transcript or twenty.
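As a sketch of what consistent extraction looks like, imagine every transcript producing the same shape of output. The field names here are hypothetical, but they mirror the layers listed above:

```python
from dataclasses import dataclass, field

@dataclass
class JtbdExtraction:
    transcript_id: str
    functional_jobs: list[str] = field(default_factory=list)   # what they're trying to do
    emotional_jobs: list[str] = field(default_factory=list)    # how they want to feel
    persona_mappings: dict[str, list[str]] = field(default_factory=dict)  # persona -> goals/frustrations
    follow_up_questions: list[str] = field(default_factory=list)  # gaps for the next interview

interview_07 = JtbdExtraction(
    transcript_id="interview-07",
    functional_jobs=["Consolidate quarterly metrics into one report"],
    emotional_jobs=["Feel confident the numbers won't be challenged"],
    persona_mappings={"Ops Lead": ["Frustration: manual copy-paste between tools"]},
    follow_up_questions=["What happens today when the numbers are challenged?"],
)
```

Because every transcript produces the same shape, twenty analyses are directly comparable, which is what makes cross-interview synthesis possible.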

"User research, validate products and solutions, time-consuming and constant update of PRDs."

When you're managing multiple products, the consistency of framework application matters more than the sophistication of any single analysis. Embedded frameworks guarantee that consistency.


Why Embedded Frameworks Beat Prompt Engineering

There's a tempting shortcut: just paste a framework description into your ChatGPT prompt. "Use Teresa Torres's Opportunity Solution Tree framework to analyze this transcript." You'll get something that looks right. Headers in the right places. Terminology that matches.

But there are two things prompt-based frameworks can't do.

First, they can't connect to your context. A prompt-engineered Torres analysis doesn't know your personas. It doesn't know that the "switching cost anxiety" a user mentioned maps to a competitive gap you've already identified. It produces a structurally correct but contextually empty output.

Second, they can't compound. When frameworks are embedded in a system — where your interview analysis feeds your opportunity map, which informs your PRD, which shapes your roadmap — each output builds on the last. Prompt engineering starts from zero every time.
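In pseudocode terms, the difference is whether each invocation receives the accumulated context or starts empty. This is a conceptual sketch with a stubbed run_skill, not mySecond's actual architecture:

```python
from typing import Any

def run_skill(name: str, input_doc: str, context: dict[str, Any]) -> str:
    # Stub standing in for a skill invocation; a real system would call the model here
    return f"[{name} output grounded in {sorted(context)}]"

# Embedded skills share one persistent context; each output joins it,
# so every later skill starts from everything produced so far.
context: dict[str, Any] = {"personas": "...", "competitors": "...", "goals": "..."}

analysis = run_skill("user-interview-analyzer", "transcript.txt", context)
context["interview_analysis"] = analysis

context["opportunities"] = run_skill("opportunity-solution-tree", analysis, context)

prd = run_skill("prd-generator", context["opportunities"], context)
# A pasted prompt is the same call with context={} on every run: nothing compounds.
```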

This is the difference between a framework you know and a framework that's part of your operating system. One requires discipline and memory. The other just works.


How This Works in Practice

mySecond embeds these frameworks across 70+ PM skills, all running on your loaded product context. You don't configure which framework to use. You don't paste framework descriptions into prompts. You run a command — /prd-generator, /user-interview-analyzer, /roadmap-builder, /competitive-profile-builder — and the right framework is applied with the right context, every time.

The frameworks are the infrastructure. Your judgment is what matters on top.


Frequently Asked Questions

Does automating frameworks mean PMs stop thinking critically?

No. Embedded frameworks handle the structural and synthesis work — the parts that are repetitive and error-prone when done manually. Your judgment still drives which problems to solve, which priorities to ship, and which research to pursue. The framework ensures you're making those decisions with complete, well-structured inputs instead of scattered notes and half-remembered mental models.

Which PM frameworks work best with AI automation?

Structured analytical frameworks automate well because they follow consistent, repeatable patterns. RICE and ICE scoring, JTBD interview extraction, Teresa Torres's opportunity mapping, and Gibson Biddle's DHM analysis all translate directly into AI skills. Highly contextual frameworks like stakeholder negotiation or organizational influence don't automate the same way, because they depend on real-time human dynamics that AI can't observe.

Can I customize which frameworks the skills use?

Yes. Every skill is a markdown file you own and can modify. You can adjust the embedded framework to match your team's specific criteria, combine multiple frameworks into a single skill, or swap one methodology for another. The system is your infrastructure — not a locked product you can't change.

How is this different from using ChatGPT with a framework prompt?

Two critical differences. First, embedded frameworks run against your persistent product context — your personas, competitors, goals, and product state. ChatGPT starts from zero every session. Second, embedded frameworks connect across skills. Your interview analysis informs your PRD, which informs your roadmap. Prompt-based frameworks produce isolated outputs that don't compound over time.

How long does it take to set up embedded frameworks?

If you're using mySecond's PM Operating System, the frameworks are already embedded in the 70+ skills included. Setup means loading your product context — company, product, personas, competitors, and goals — which takes 30-45 minutes. After that, every skill applies the relevant framework automatically against your context. There's no framework configuration step because the frameworks are the skills.



mySecond embeds proven PM frameworks into 70+ reusable skills — so your team gets consistent, context-aware output every time. Browse the skills at mysecond.ai/skills.


Ron Yang is a product leader and the founder of mySecond, the PM Operating System built on Claude. He builds PM infrastructure for product teams at growing companies.