PMM Strategy ✦ Human-written

The Analyst Relations Playbook

Most companies treat analyst relations as a Q4 fire drill. The ones that consistently land Leader treat it as a twelve-month discipline. Here is the playbook I have never written down — until now.

⚡ 60-Second Summary

Kuber Sharma has worked on more than 30 Gartner Magic Quadrant cycles across multiple enterprise software categories, consistently helping companies achieve and maintain Leader positioning. This essay lays out the full-year AR operating model he has refined over two decades — from the reading discipline that starts in January to the Peer Insights cadence that runs all year, the RFI programme management that most teams underestimate, and the relationship architecture that turns analysts into long-term thought partners rather than annual judges. The core argument: AR is not a communications function. It is a positioning architecture discipline, and the companies that treat it that way are the ones that win.

I have worked on more than thirty Gartner Magic Quadrant cycles. Some of those I helped move from Niche Player to Leader. Some I helped stay Leader when the competition was gaining ground and the category was shifting underneath us. A few times, I helped create a category that did not exist yet and positioned the company as the leader of it before anyone else realised the quadrant was coming.

I have never written this playbook down. I have given pieces of it in conversations with PMM teams, in advisory calls with enterprise AI startups, and in hallway conversations at Gartner Symposium. But the full thing — from January to December, all the tactical and strategic work that actually moves quadrants — that has lived in my head and my operational notes for twenty years.

This is the essay I wish someone had given me when I ran my first MQ cycle. Not the how-to-schedule-a-briefing guide. There are a hundred of those. This is the operating model underneath — the year-round discipline that separates companies that earn the Leader spot from companies that campaign for it and wonder why it does not work.

Why most companies get AR wrong

The fundamental error is treating analyst relations as a campaign. Most companies spin up AR activity in Q3, scramble through the evaluation in Q4, and then go quiet until the next cycle starts. I have a name for this pattern: the renewal problem. It is the AR equivalent of showing customer love only during a contract renewal — the analyst sees through it immediately, and it damages rather than builds the relationship.

The companies I have seen consistently land in the Leader quadrant treat AR as a twelve-month discipline. Not because they are obsessive, but because that is what the evaluation actually rewards. The Magic Quadrant is not a test you cram for. It is the output of a relationship you tend all year — a relationship built on the analyst's confidence that you understand the market, that you are executing against a credible vision, and that your customers validate the story you are telling.

That confidence does not get built in a single briefing. It gets built across four quarters of showing up, being useful, and demonstrating consistency between what you say and what you do.

Here is what that looks like, quarter by quarter — and then the specific tactical moves that most AR guides leave out.

Q1: The reading discipline that changes everything

How should a PMM start the year for analyst relations?

January is when you build the foundation that everything else depends on. The single most important thing you can do in Q1 is read everything your analysts have published in the last six months. I mean everything. Research notes, market guides, Hype Cycle placements, Cool Vendors reports, Predicts pieces, competitive landscapes. The analysts covering your MQ — and the two or three adjacent MQs that matter to your category — are publishing constantly. They are telling you, in writing, what they care about, how they think the market is evolving, and what criteria they will likely weight in the next evaluation cycle.

Most PMMs skim the titles and maybe read the executive summary. The ones who win Leader read every piece, highlight the specific language analysts use to describe market shifts, and build a running document of themes. You are looking for signals in three areas.

First, what criteria are they telegraphing will matter more in the next cycle? Analysts signal this months in advance. They will start writing about composability, or AI-native architectures, or platform breadth, or orchestration capabilities six to nine months before those themes show up as formal evaluation criteria. If you are reading their output, you have a roadmap story ready. If you are not, you will be surprised by the criteria — and that surprise will show.

Second, what language do they use? This matters more than most PMMs realise. If the lead analyst describes the market evolution as a shift from task automation to process orchestration, and your positioning still says you help companies automate tasks better, you have already lost. The framing mismatch signals that you are behind the market conversation. You do not want to parrot the analyst's language — that feels performative. But you want your framing to be in conversation with theirs, to show that you see the same market dynamics they see, even if you have a different point of view on where things are heading.

Third, where do you disagree with them? This is counterintuitive, but some of the most productive analyst conversations I have had were ones where I respectfully challenged a published point of view. Analysts do not want vendors who agree with everything they write. They want thought partners. If you have read their work deeply enough to engage with it critically — to say, I see why you framed it this way, but here is what we are seeing in customer conversations that suggests a different trajectory — you have differentiated yourself from the vast majority of vendors who show up and just pitch.

The other Q1 action: schedule your first check-in of the year. This is not the MQ briefing. This is a relationship call. No slides. No demo. A conversation about what you are seeing in the market, where you are investing, and what you are hearing from customers. The goal is to establish a cadence — one that signals AR is a year-round priority, not a seasonal scramble.

Do this in January, when most vendors are still recovering from sales kickoff. Being first signals seriousness. That signal matters more than you think.

Q2: Building the proof that matters before anyone asks for it

When should you start building Peer Insights reviews?

Now. And you should not stop until December.

Here is a truth that most PMMs learn too late: Gartner Peer Insights reviews are not a nice-to-have. They are an increasingly weighted input to the Magic Quadrant evaluation. In many MQs, Gartner has replaced traditional customer reference calls entirely — Peer Insights is now the primary vehicle for reference information. Beyond the MQ itself, Peer Insights feeds into a separate Voice of the Customer report that uses its own methodology to evaluate vendors based entirely on end-user feedback. That report has become a credibility signal in its own right.

The specifics matter here. To be eligible for Gartner's Customers' Choice designation, you need a minimum of twenty eligible reviews within the eligibility window, which typically spans eighteen months. You need an average overall rating of 4.2 stars or above. And Gartner recognises a maximum of seven vendors per market. These are not arbitrary numbers. They mean that a thin, recent-only set of reviews is not enough. You need depth, consistency, and temporal distribution.
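
To make those thresholds concrete, here is a minimal sketch of how a team might track whether its review base clears the headline numbers. It is illustrative only, not a Gartner tool: the window length, review count, and rating threshold come from the figures above, and the actual Customers' Choice methodology includes further eligibility rules.

  from datetime import date, timedelta

  # Illustrative thresholds taken from the figures above.
  ELIGIBILITY_WINDOW_DAYS = 548   # roughly eighteen months
  MIN_REVIEWS = 20
  MIN_AVG_RATING = 4.2

  def clears_headline_thresholds(reviews, as_of=None):
      """reviews: list of (submitted_on: date, overall_rating: float) pairs."""
      as_of = as_of or date.today()
      window_start = as_of - timedelta(days=ELIGIBILITY_WINDOW_DAYS)
      in_window = [rating for submitted_on, rating in reviews if submitted_on >= window_start]
      if len(in_window) < MIN_REVIEWS:
          return False
      return sum(in_window) / len(in_window) >= MIN_AVG_RATING

  # Hypothetical usage: two reviews is well short of the count threshold.
  print(clears_headline_thresholds([(date(2024, 5, 10), 4.6), (date(2024, 9, 2), 4.1)],
                                   as_of=date(2025, 1, 15)))   # -> False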

The mistake I see over and over is the Q3 scramble. Companies realise the evaluation window is approaching and launch an urgent campaign to get customers to leave reviews. Fifteen reviews land in September, nothing before that. Analysts can see the timestamp clustering. It signals exactly what it is: a coordinated push rather than organic customer sentiment. It does not help your credibility. It may actively hurt it.

What works is a steady, year-round drumbeat. Starting in Q2, identify five to eight customers per quarter who have had genuine success with your platform and would be willing to share their experience. Work with your customer success team to make the ask warm and easy. Spread it out across the year. By the time the formal evaluation begins, you have a rich, temporally distributed set of reviews that reads as organic — because it is.

One more thing on Peer Insights that most PMMs miss: read the reviews your competitors are getting. Religiously. This is free competitive intelligence of remarkable quality. You will learn what customers love about competing products, what frustrates them, and — more valuable than either — what language real enterprise buyers use to describe the problem space. That language should directly inform your positioning. And when the analyst asks you what your customers say about a specific capability, being able to cite Peer Insights themes — in the language your customers actually use — is a level of market awareness that most vendors never demonstrate.

When should you start building the MQ demo?

Also now. And I need to address something that has fundamentally changed the demo game in recent years: in more than half of Magic Quadrants, Gartner has eliminated the live presentation entirely. The default in many evaluations is that the analyst requests pre-recorded demo videos instead.

This changes the preparation calculus significantly. A live presentation allowed for real-time interaction — you could read the room, adjust your emphasis, answer questions on the fly. A pre-recorded demo offers none of that. Each video, and there may be up to twelve individual recordings, must fit within a strict time limit — often seven minutes. You are making your case to a camera, knowing the analyst will watch it after the fact, probably alongside a dozen other vendors' recordings on the same day.

This means your demo must be tighter, more precise, and more ruthlessly edited than a live presentation ever needed to be. Every second counts. Every module must be engineered to answer one specific evaluation criterion, with the evidence front-loaded rather than built up to. The production quality matters too — not in a polished-marketing-video sense, but in a clear audio, clean screen capture, logical flow sense.

"The companies that fumble their analyst demos are not showing bad products. They are showing unrehearsed ones."

I still build what I call a demo menu: a set of six to eight modular demo scenarios, each three to five minutes long when recorded, each focused on a single capability. Whether the format is live or pre-recorded, the preparation philosophy is the same — you build them now and rehearse them over the next two quarters. By the time you are in the evaluation window, every demo module should be so well practised that the person delivering it could do it in their sleep.

Your Q2 analyst check-in follows the same format as Q1 — conversational, no hard sell — but now you have more to share. Early customer wins. Product launches. Roadmap clarity. The analyst should start to see a pattern: you are executing against the story you told them in January. Consistency across quarters is the most underrated AR tactic I know. Analysts evaluate dozens of vendors. The ones who tell a coherent story that evolves credibly from quarter to quarter stand out in ways that no single brilliant briefing can replicate.

Q3: The preparation that separates Leader from Visionary

What should the Q3 analyst check-in accomplish?

By Q3, if you have done the first two quarters well, your analyst knows your narrative, has seen your trajectory, and has formed a mental model of where you fit in the market. The Q3 check-in is where you shift from relationship mode to preparation mode. It is more structured: here is our vision, here is our execution evidence, here is our roadmap. And you ask the analyst a direct question: what would you want to see in the formal evaluation?

Not every analyst will answer that directly. But most will give you signals. They will ask follow-up questions in specific areas — and those areas are exactly where you need to be strongest. Listen to what they probe. That is your preparation priority list.

"If you are surprised by the evaluation criteria, you have not been paying attention."

How should you manage the RFI process?

This is the part that catches most teams off guard. The Gartner MQ RFI — the formal questionnaire that forms the backbone of your evaluation submission — is not a form you fill out over a weekend. Across the cycles I have been involved in, the RFI consistently requires 150 to 200 or more hours of cross-functional work. It covers everything: product capabilities, differentiators, vision, market strategy, customer retention, services and support, financial performance. Think of it as a sales RFP of two hundred questions, except the buyer is the most influential research firm in enterprise technology.

The programme management discipline around the RFI is where many teams fail — not because they lack good answers, but because they lack good process. Here is what works.

First, establish a dedicated cross-functional team early, with product marketing as the project lead. You need product management for capability and roadmap questions, engineering for technical depth, customer success for retention and satisfaction data, sales for go-to-market evidence, and finance for business health metrics. Draft a RACI matrix before the RFI arrives so that when it does, every question has an owner, a reviewer, and a deadline.

Second, build internal deadlines that are two weeks ahead of every Gartner deadline. This is non-negotiable. The RFI responses need cross-functional review, executive sign-off, and often legal clearance on financial disclosures. Two weeks of buffer is the difference between a thoughtful submission and a rushed one.
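
To make both of these points concrete, here is a minimal sketch of the programme skeleton. Every question area, assignment, and date in it is hypothetical; it only shows the shape of the discipline: each question area has an owner and a reviewer before the RFI arrives, and each internal deadline sits two weeks ahead of the corresponding Gartner date.

  from datetime import date, timedelta

  # Hypothetical RFI programme skeleton. The functions named follow this essay;
  # the specific question areas, assignments, and dates are illustrative.
  BUFFER = timedelta(weeks=2)

  # Gartner-side deadlines (hypothetical dates).
  gartner_deadlines = {
      "RFI submission": date(2025, 9, 12),
      "Demo recordings": date(2025, 10, 3),
  }

  # Internal deadlines sit two weeks ahead of every Gartner deadline.
  internal_deadlines = {name: d - BUFFER for name, d in gartner_deadlines.items()}

  # Each RFI question area gets an owner and a reviewer before the RFI arrives.
  rfi_ownership = {
      "Product capabilities and roadmap": ("Product management", "PMM lead"),
      "Technical architecture":           ("Engineering", "Product management"),
      "Retention and satisfaction":       ("Customer success", "PMM lead"),
      "Go-to-market evidence":            ("Sales", "PMM lead"),
      "Financial performance":            ("Finance", "Executive sponsor"),
  }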

Third — and this is a nuance I have learned the hard way — do not be constrained by the questionnaire. The RFI is comprehensive, but it is the same RFI that every vendor in the market receives. It is not tailored to your story. You should answer every question completely, but you should also go beyond the questions to present your point of view on the market, your differentiated narrative, and the evidence that supports it. Attach relevant artifacts: case studies, architecture diagrams, customer outcome data, third-party validation. The vendors who treat the RFI as a compliance exercise get compliance-level scores. The vendors who treat it as a storytelling opportunity — while still answering every question concisely and with specific metrics — are the ones who move quadrants.

Does it matter when you schedule your MQ demo?

Yes, and this is a tactic I rarely see discussed. When you are offered a window for your formal evaluation briefing or demo submission, do not just take whatever slot is available. Think about sequencing.

In my experience, going first is disadvantageous. The analyst is still calibrating their mental model. Everything you show will be implicitly compared to whatever comes next, and the analyst is actively looking for differentiators they have not seen yet. Going last is also risky — by then the analyst has formed strong opinions, is fatigued from weeks of evaluations, and you are fighting against confirmation bias. The sweet spot is the middle-to-late window, where the analyst has enough context to appreciate your differentiators but has not locked in their rankings.

You cannot always control this. But when you can influence the timing, it matters.

How should you prepare reference customers for analyst calls?

Even as Peer Insights takes over much of the reference function, many analysts still request direct customer conversations — particularly for vendors they are considering for a significant position change. When this happens, it is not the time to improvise.

Identify three to five customers who can speak articulately about business outcomes rather than just product features, who represent a range of industries and use cases to demonstrate platform breadth, and who have been prepared — not scripted — to address the specific criteria that matter in this evaluation.

The distinction between preparing and scripting is critical. Analysts can detect a coached customer conversation immediately, and it damages your credibility worse than a mediocre reference. What works is a twenty-minute preparation call where you remind the customer what the evaluation is about, suggest two or three themes they might want to highlight from their experience, and make clear that it is a conversation, not a performance. The best reference calls I have seen are the ones where the customer talks about their problems and their outcomes in their own words, with genuine enthusiasm and genuine candour about what could be better. That authenticity is worth more than any rehearsed talking point.

One tactical note: when the analyst requests references, offer double the number they ask for. If they want three, provide six across a mix of industries and company sizes. This gives the analyst choices and signals confidence — you are not scrambling to find three customers who will say nice things. You have a deep bench of advocates.

Q4: Executing from a position of strength

What makes the formal evaluation feel different when you have done the work all year?

Everything. If you have been running this operating model since January, the formal evaluation feels like a structured conversation with someone who already knows your story — because they do. You are not introducing yourself. You are not explaining your market. You are demonstrating execution against a narrative the analyst has been tracking for nine months.

This is the unfair advantage of the year-round approach, and it is difficult to overstate how much it changes the dynamic. The vendors who show up cold in Q4 are pitching. The ones who have been showing up all year are continuing a dialogue. The analyst already has a mental model of your strengths, your trajectory, and your market position. Your job in the formal evaluation is to confirm and deepen that model — not to build it from scratch.

What does the ideal MQ presentation look like?

Whether you are delivering a live presentation or submitting pre-recorded materials, the structural principles are the same. The single biggest mistake I see is the sixty-slide deck. Companies try to speed-run through every feature, every customer logo, every performance metric. The analyst's attention collapses by slide twelve because they are not processing information anymore — they are enduring a monologue.

What works is a tight narrative — fifteen to twenty slides for a live session, or a structured set of recordings for a pre-recorded format — built around a single story arc. The most effective presentations I have been involved in follow a structure I think of as Context, Conflict, Resolution. Context: here is how the market is evolving and what enterprise buyers need. Conflict: here is the gap between what buyers need and what the current generation of solutions provides. Resolution: here is how our approach closes that gap, with evidence from customers who have seen it work.

That structure works because it is the structure analysts themselves use when they write research. You are making it easy for them to translate your story into their framework.

Build in breathing room for questions. In a live session, the best MQ presentations feel like a structured dialogue — the analyst interrupting with questions, probing specific areas, testing their hypotheses about the market against your evidence. That is not a failure of your presentation flow. That is the presentation working as designed. For pre-recorded formats, anticipate the questions the analyst would ask and address them proactively within the recording. Think of each video module as your answer to a question the analyst has not asked yet but will.

How rehearsed should the MQ demo be?

Rehearsed until the person giving it is bored. I mean this literally. When the demo has been run so many times that the presenter finds it tedious, that is when it becomes natural. That is when they can handle an unexpected question at step four without losing the thread of the narrative. That is when the demo feels confident rather than anxious.

Time it. Cut it down. Cut it again. The best MQ demos I have seen were under twenty minutes for live formats and under seven minutes per module for pre-recorded ones. They showed exactly what the analyst needed to see and nothing more. Excess is not thoroughness. It is noise.

"The MQ is not a test you cram for. It is a relationship you tend all year."

Full-Year AR Checklist
Q1 — January–March
  • Read all analyst research from the last 6 months
  • Build a running doc of emerging criteria signals and language
  • Identify analyst POVs worth engaging critically
  • Schedule Q1 relationship call — no slides, no demo
  • Identify secondary analyst and adjacent MQ analysts
Q2 — April–June
  • Launch Peer Insights cadence — 5–8 customers this quarter
  • Read competitors' Peer Insights reviews
  • Build demo menu — 6–8 modular scenarios, 3–5 min each
  • Schedule Q2 relationship call — share early wins and launches
  • Use at least one inquiry — save it for a question worth asking
Q3 — July–September
  • Schedule Q3 prep check-in — ask what the analyst wants to see
  • Assemble RFI cross-functional team + draft RACI matrix
  • Set internal deadlines 2 weeks ahead of Gartner's
  • Lock demo scenarios — begin weekly rehearsals
  • Pre-brief 6+ reference customers (2× what the analyst asks for)
Q4 — Evaluation window
  • Submit RFI — answer every question, exceed the questionnaire
  • Target middle-to-late eval window — not first, not last
  • Deliver demo — 15–20 slides live, 7 min/module pre-recorded
  • Confirm references prepared, not scripted
  • Use remaining inquiry time — ask about post-eval market direction

The relationship architecture

What separates transactional AR from the kind that compounds?

Everything I have described so far is process — the operational mechanics of running an MQ cycle well. But the process only works if it sits on top of a genuine relationship with the analyst. And this is where most AR programmes fall apart, because most companies treat the analyst relationship as a means to an end rather than an end in itself.

The best analyst relationships I have built over twenty years share a common quality: they are two-way. The vendor brings the analyst market data, customer insights, and a credible point of view on where the market is heading. The analyst brings the vendor unfiltered market feedback, competitive context, and a preview of how buyers are thinking about the category. That exchange, when it is genuine, creates a flywheel that compounds over years.

What actually builds trust is showing up with something useful and expecting nothing immediate in return. Share a piece of primary market data that is genuinely interesting — not cherry-picked to make you look good, but informative about the market. Offer to connect the analyst with one of your engineers or product leaders for a technical deep-dive on a topic they are researching — even if that topic is not directly related to your evaluation. Provide thoughtful, specific feedback on a piece of research they have published, including areas where you think they have it wrong.

This is how you become a source rather than a subject. A source is someone the analyst actively wants to hear from because the conversation advances their understanding of the market. A subject is someone the analyst has to talk to as part of their evaluation process. The distinction is enormous, and it shows up in the quality of the write-up you receive.

How do you build relationships across the analyst team?

Every MQ has a lead analyst, but there is usually a secondary analyst and sometimes a broader contributing team. The lead analyst gets all the vendor attention. The secondary analyst often has significant influence on the evaluation but hears from far fewer companies. A thirty-minute call with the secondary analyst — where you share genuine market insights rather than a pitch — can be disproportionately impactful. This is not about gaming the process. It is about ensuring that more of the evaluation team has direct context on your story, rather than relying on the lead analyst to relay it.

Similarly, do not limit your engagement to the analyst who covers your specific MQ. If that analyst also covers two adjacent markets, engaging with their broader research agenda builds intellectual rapport that carries into your evaluation. It signals that you are a market thinker, not just a vendor optimising for a placement.

Competing on strengths, not against weaknesses

How should you handle competitive conversations with analysts?

This deserves its own section because it is one of the most consequential — and most frequently botched — aspects of analyst engagement.

The instinct, when sitting across from an analyst who also briefs your biggest competitor, is to point out what the competitor does poorly. Do not do this. I have seen it backfire dozens of times across thirty MQ cycles, and it fails for the same reason every time: the analyst talks to every vendor in the market. They will hear the competitor's version of the same story. If you have been negative, you do not damage the competitor's credibility — you damage your own.

The right frame is to compete on your strengths, not against their weaknesses. Think about what you want the analyst to be able to say about you in the MQ write-up. Not what you want them to say about your competitor — what you want them to say about you. Instead of saying "Competitor X cannot do multi-step orchestration," say "Our approach to multi-step orchestration works this way, and here are three customers who have deployed it in production with these outcomes." You have made the same competitive point without ever mentioning the competitor. And you have given the analyst something they can use: a proof point with evidence.

When the analyst directly asks you about a competitor — and they will — acknowledge the competitor's strengths before explaining where your approach diverges. "They have built a strong business in X, and they do Y well. Where we differ is in Z, and the reason that difference matters for enterprises is..." This kind of answer builds enormous credibility because it demonstrates that you understand the competitive landscape honestly rather than through the lens of your own marketing materials.

The tactics that compound

There is a set of specific moves that I have seen work across dozens of MQ cycles. They are not widely discussed, and individually they seem minor. Cumulatively, they are the difference between a good AR programme and a great one.

Use inquiries, not just briefings. A briefing is you talking to the analyst. An inquiry is you asking the analyst a question — and the analyst is contractually obligated to give you their honest assessment. Most companies dramatically over-index on briefings and barely use their inquiry time. This is backwards. Inquiries let you ask specific questions about market trends, competitive positioning, and evaluation criteria — and get candid answers you will never hear in a briefing format. Use them strategically. Save them for the questions only the analyst can answer: how are you seeing buyer priorities shift? What capability gaps are you hearing about most? Where do you think this category is heading that most vendors are not prepared for?

Become a source, not just a subject. Analysts are perpetually working on research that is in draft or under development — topics they are exploring but have not published yet. In your quarterly check-ins, ask what they are working on. If you have data, customer stories, or a point of view that is relevant to their in-progress research, offer it. Many analysts also produce regular podcasts, blogs, and video content. Offering access to your subject-matter experts, proprietary data, or exclusive insights for these channels builds a relationship that extends well beyond the MQ evaluation.

Share primary market data. Analysts are perpetually hungry for primary data about market trends. If you have aggregated, anonymised data from your customer base that reveals something interesting about the market — adoption rates, deployment patterns, use case distribution, failure modes — share it. Not as a sales tactic. As a contribution to their research. I have seen this single move change the tenor of an entire analyst relationship. You go from being a vendor who wants a better score to being a company that helps the analyst understand the market. That shift pays dividends well beyond a single evaluation cycle.

Do not make it a tick-box exercise. Every tactic I have described — the quarterly check-ins, the Peer Insights programme, the reading discipline, the demo preparation, the reference customer management — can be executed as a compliance exercise. None of that will work. The difference between the companies that earn Leader and the companies that perform the motions of pursuing Leader is the difference between genuine engagement and process compliance. Analysts can feel it. And over a twelve-month cycle, the cumulative effect of showing up authentically versus showing up performatively is the difference between a write-up that says "strong vision with consistent execution" and one that says "comprehensive capabilities with limited differentiation."

What I would do differently

No playbook is perfect, and I have made every mistake I have described in this essay at least once.

I have been too slow to start the reading discipline in Q1 and been surprised by criteria shifts I should have anticipated. I have clustered Peer Insights reviews in Q3 and seen the analyst's scepticism in real time. I have let a demo go under-rehearsed because we ran out of preparation time, and watched the presenter lose their thread when the analyst asked an unexpected question at minute eight. I have submitted RFI responses that were technically complete but narratively flat — every question answered, but the overall story lost in the compliance exercise.

The most expensive mistake, though, was underestimating how much time it takes to build internal alignment around the AR investment. The reading, the quarterly check-ins, the demo preparation, the Peer Insights programme, the RFI coordination, the reference customer preparation — this is real operational work that requires commitment from product marketing, product management, customer success, engineering, and executive leadership. In my experience, the RFI alone requires 150 to 200 hours of cross-functional effort. The full twelve-month programme is a significant resource commitment.

The organisations where this operating model works best are the ones where the PMM leading AR has explicit executive support — from their CMO and CPO — to invest the time and to pull cross-functional resources when needed. The organisations where it fails are the ones that treat AR as a side-of-desk activity for a mid-level PMM who is also running three launches and a competitive programme. That PMM will do their best, and their best will not be enough, because the year-round discipline requires dedicated focus that fragmented attention cannot provide.

The real lesson from thirty Magic Quadrants

The operating model I have described is not complicated. Read deeply. Show up consistently. Be genuinely useful. Build your proof all year. Manage the RFI like the strategic asset it is. Compete on your strengths with evidence, not against competitors with accusations. Prepare obsessively. Respect the analyst's intelligence and their time. And do all of it because you find it genuinely valuable, not because it is on a checklist.

The reason most companies do not do this is not that it is hard to understand. It is that it requires sustained commitment in quarters where there is no visible payoff. The reading you do in January does not produce a measurable result until October. The Peer Insights review you seed in April does not visibly move the needle until it shows up in a temporally distributed set of reviews nine months later. The quarterly check-in in May feels like time you could be spending on pipeline generation.

The companies that earn the Leader spot are the ones willing to invest in a process whose payoff is diffuse and delayed. That is, ultimately, the real competitive advantage in analyst relations. Not better products, not better decks, not better demos — although all of those matter. The advantage is the organisational patience to treat AR as what it actually is: a twelve-month discipline that compounds.

I have seen it work thirty times. It works.


Kuber Sharma has worked on 30+ Gartner Magic Quadrant cycles across multiple enterprise software categories, consistently helping companies achieve and maintain Leader positioning. He writes Positioned, a newsletter on AI-era product marketing strategy, and is Sr. Director of Product Marketing at UiPath. Previously at Microsoft Azure, Salesforce, and Tableau.
