
Stakeholder analysis: methods, steps, and how to keep it alive

A working guide to stakeholder analysis. Influence, interest, impact, sentiment. Mendelow grid, six steps, and why most analyses go stale by month three.

Updated May 4, 2026
Use Case
STAKEHOLDER ANALYSIS
A stakeholder list names who matters. A stakeholder analysis weighs influence and impact. Most analyses freeze the moment they are built.

This guide explains how to build a stakeholder analysis that survives past month three. The four lenses (influence, interest, impact, sentiment), the six steps from identification to refresh, and the methods (Mendelow grid, salience model) that frame the work. Worked examples come from community-health partnerships, foundation initiatives, and project-management contexts. No prior background needed.

THE FOUR LENSES

Four lenses turn a stakeholder list into an analysis

A working stakeholder analysis sees each stakeholder through four lenses, not one. Influence weighs how much they can change a decision. Interest weighs how much they care. Impact tracks the bidirectional change between stakeholder and program. Sentiment captures how they feel about it now, in their own words. The Mendelow grid stops at lens two; a working analysis tracks all four and updates them on a cadence.

01
Influence
How much can they move the decision?

A score for formal authority, informal reach, and recent track record. Stored next to the reasoning that produced the score.

02
Interest
How invested are they in the outcome?

A score derived from participation patterns, stated priorities, and time spent engaging. Captures stake without confusing it with power.

03
Impact
How does change flow in both directions?

A bidirectional measure: program effect on the stakeholder, stakeholder effect on the program. Tied to outcome data, not stored separately.

04
Sentiment
How do they feel right now, in their words?

Open-text evidence pulled from interviews, survey responses, and engagement notes. The lens that explains why a score moved.

CAPABILITY UNDER EACH LENS
Clean role mapping with a stable stakeholder ID across every record
Longitudinal engagement data, not a one-time score
Outcome data linked to each stakeholder record
Mixed-method evidence: scores plus open-text reasoning

The first two lenses produce a snapshot. The third and fourth turn the snapshot into a working analysis that updates when the world changes.

Static frameworks like the Mendelow power-interest grid and the salience model name the first two lenses well. The third and fourth lenses, plus refresh history, are what distinguish a stakeholder analysis from a stakeholder map. Standardized reporting and basic queries cover the descriptive surface of lenses one and two; lenses three and four require linked, longitudinal data.
DEFINITIONS

Stakeholder analysis, in plain terms

Five definitions, ordered to build understanding. Start with what a stakeholder analysis is, then what stakeholder impact analysis adds, then how the methods (Mendelow, salience) name the work, then how stakeholder intelligence reframes the practice for ongoing data.

Q.01

What is a stakeholder analysis?

A stakeholder analysis identifies every group that can affect a program or be affected by it, weighs their influence and interest, and tracks how those weights change as the program runs. The output is a working map that informs who to engage, how often, and with what message.

It is not a one-time deliverable. The same analysis gets reviewed at every major decision point and updated when roles, postures, or relationships shift. An analysis without a refresh history is a snapshot, not an analysis.

Q.02

Stakeholder analysis meaning

The phrase has a literal meaning and a working meaning. Literally: an exercise to systematically catalogue stakeholders and weigh their relative importance. In working practice: a continuous process that pairs scores with reasoning, ties stakeholder records to outcomes, and updates the picture as the program runs.

The literal meaning is what most teams produce. The working meaning is what the program actually needs once decisions start landing on the analysis. Confusing the two is the source of most stakeholder-analysis fatigue.

Q.03

What is stakeholder impact analysis?

Stakeholder impact analysis measures how program activity changes each stakeholder's situation and how each stakeholder's posture changes program outcomes. It is bidirectional: program effect on stakeholder, stakeholder effect on program.

A complete impact analysis ties stakeholder records to outcome data so a shift in one is visible alongside the other. The standard sequence: identify stakeholders, document their interests and concerns, score the influence each can exert, weigh the impact the program will have on each, name the resulting strategic challenges. The fifth step is what distinguishes an impact analysis from a generic stakeholder list.

Q.04

How do you do a stakeholder analysis?

Six steps. Identify every affected group; cast wide first and trim later. Categorize by role and relationship. Score each group on influence and interest using a documented scale. Engage at a depth matched to the score. Track impact bidirectionally, linking stakeholder records to outcome data. Refresh on a defined cadence.

The discipline is in step six. Most teams complete steps one through five and never return. The analyses that stay useful past month three are the ones with a refresh history attached. Quarterly is the floor for active programs; phase-gate refresh is standard in project-management contexts.

Q.05

What is stakeholder intelligence?

Stakeholder intelligence is the practice of treating stakeholder analysis as a continuous data system, not a one-time deliverable. Each stakeholder has a persistent record. Engagement data flows in over time. Influence, interest, and sentiment are recalculated as new evidence arrives.

The intelligence frame replaces the static analysis with a working picture that changes when reality changes. The vocabulary is newer than "stakeholder analysis", and it is most often used by teams running programs at scale where a quarterly refresh is too slow and a real-time dashboard is the working tool.

RELATED BUT DIFFERENT

Stakeholder mapping

A visual layout of who the stakeholders are and how they connect. Maps answer who and how. Analysis adds the weights and the time axis. A map without weights is a diagram; an analysis without a map skips the relational view.

Stakeholder sentiment analysis

A focused read on how stakeholders feel right now, often from open-text or interview data. One lens of a full analysis, not a substitute. Sentiment without influence and interest scores leaves the program guessing at who matters.

Stakeholder assessment

Common synonym for stakeholder analysis. Some sources use "assessment" to emphasize the evaluative step (scoring) and "analysis" to emphasize the diagnostic step (what to do with the scores). The distinction is not strict; what matters is whether scores have reasoning attached.

Stakeholder profiling

A descriptive activity: gathering background on each stakeholder (organizational role, interests, history with the program). Profiling produces inputs for analysis. It is not analysis on its own, because no scoring or weighing has happened.

DESIGN PRINCIPLES

Six principles that separate a working analysis from a deck slide

The principles are independent of method. Whether the team uses Mendelow's grid, the salience model, or a custom framework, these six laws decide whether the analysis stays useful past the first month.

01 · IDENTIFICATION

Identify before you weigh

Cast wide first; trim later, with reasons.

A list of stakeholders that excludes skeptics, unfunded voices, or quiet dissenters is a list of allies, not stakeholders. The first pass should overshoot. Trimming happens on the second pass with documented criteria.

Why it matters: the stakeholders who get cut early are the ones who surface late, usually as a problem.
02 · WEIGHTING

Influence and impact are different

A stakeholder can be high-impact and low-influence, or vice versa.

Beneficiaries often have high impact and low influence. Funders often have high influence and low impact. Conflating the two collapses two lenses into one and produces a flat ranking that mismatches reality.

Why it matters: the four-quadrant grid only works if the axes mean different things.
03 · CADENCE

Track over time, not once

A stakeholder analysis without a refresh history is a snapshot.

Stakeholders move quadrants. Influence rises with new authority. Interest fades when a program's target shifts. Without a refresh cadence, the analysis hardens into a document that no one references after the second meeting.

Why it matters: quarterly is the floor for active programs. Phase-gate refresh is the floor for projects.
04 · EVIDENCE

Numbers next to reasons

A score without reasoning is opinion in a costume.

Each influence and interest score should sit next to the open-text reasoning that produced it. Six months later, the team needs to know what the score was based on so they can decide whether the underlying conditions have changed.

Why it matters: reviews of analyses without reasoning collapse into debates about the score, not the situation.
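"Numbers next to reasons" can be made concrete as a data shape. Below is a minimal Python sketch, not a prescribed schema from any tool: the type names, fields, and the validation rule are illustrative assumptions. The one design decision it encodes is the principle itself: a score cannot be recorded without the reasoning that produced it, and the latest reasoning stays retrievable months later.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ScoreEntry:
    value: int       # 1-5 on the documented scale
    reasoning: str   # open-text basis for the score
    recorded: date   # when the score was set

@dataclass
class StakeholderScores:
    stakeholder_id: str
    influence: list[ScoreEntry] = field(default_factory=list)

    def set_influence(self, value: int, reasoning: str, when: date) -> None:
        # The principle as a constraint: no reasoning, no score.
        if not reasoning.strip():
            raise ValueError("a score without reasoning is opinion in a costume")
        self.influence.append(ScoreEntry(value, reasoning, when))

    def why_current(self) -> str:
        # Six months later: what was the latest score based on?
        return self.influence[-1].reasoning if self.influence else "no score yet"
```

Because every change appends rather than overwrites, the same structure also yields the refresh history that principle three asks for.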
05 · ENGAGEMENT

Engagement is a loop

Outputs of an analysis become inputs of the next one.

Engaging a stakeholder produces new evidence: how they responded, what they asked, whether they showed up. That evidence belongs in the next refresh, not in a separate engagement log that no one cross-references.

Why it matters: separate engagement and analysis logs create the same disconnect that hurts every program: data in two places, conclusions in neither.
06 · IDENTITY

Mismatched IDs equals no analysis

If you cannot match the same stakeholder across records, the analysis collapses.

"County Health Department" in the survey, "DPH" in the engagement log, "Public Health Office" in the funder report. Without a stable stakeholder ID across every record, the refresh discovers three stakeholders where there is one, and the influence score is the average of three guesses.

Why it matters: identity is the cheapest fix to design in and the most expensive fix to retrofit.
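The identity problem above can be sketched in a few lines of Python. The alias table, the `resolve` helper, and IDs like `STK-001` are hypothetical; the point is that reconciliation becomes a designed-in lookup rather than a hand-done merge before each review, and that an unmapped name fails loudly instead of silently creating a fourth stakeholder.

```python
# Map every free-text name that appears on forms to one stable stakeholder ID.
# The names and IDs here are illustrative, not from any real system.
ALIASES = {
    "county health department": "STK-001",
    "dph": "STK-001",
    "public health office": "STK-001",
}

def resolve(name: str) -> str:
    """Return the stable stakeholder ID for a free-text name, or raise."""
    key = name.strip().lower()
    if key not in ALIASES:
        raise KeyError(f"unmapped stakeholder name: {name!r} - add it to the alias table")
    return ALIASES[key]

# Three spellings, one stakeholder: the refresh sees one record, not three.
records = ["County Health Department", "DPH", "Public Health Office"]
ids = {resolve(r) for r in records}
```

With this in place, an influence score is attached to `STK-001` no matter which spelling the source record used, so scores from different files can be compared instead of averaged across accidental duplicates.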
METHOD CHOICES

Six choices that decide whether the analysis stays useful

Each row is a design choice that the team makes once, often without realizing it is a design choice. The broken column shows the workflow most teams fall into. The working column shows the alternative. The first choice (identity) controls every other choice; if stakeholders cannot be matched across records, none of the rest works.

Each choice below is shown three ways: BROKEN (the way most teams do it), WORKING (the way that survives a refresh), and WHAT THIS DECIDES.

01 · STAKEHOLDER IDENTITY
How a stakeholder is named across records
BROKEN: A free-text name on each form. "County Health Dept", "DPH", "Public Health" all live in different files. The reconciliation work happens once, by hand, before each review.
WORKING: A persistent stakeholder ID assigned at first contact and used on every record after. The ID survives staff turnover, name changes, and merged organizations.
WHAT THIS DECIDES: Whether refresh is possible at all, or whether each refresh starts from scratch.

02 · SCORING SCALE
How influence and interest get quantified
BROKEN: A 1-to-5 scale with no anchors. Different reviewers score the same stakeholder a 2 and a 4. The team averages the scores, which hides the disagreement.
WORKING: A 1-to-5 scale with explicit anchors per level (e.g., "5 = can veto a board decision"). Disagreements get logged and resolved before the score is final.
WHAT THIS DECIDES: Whether scores are comparable across teams and across refreshes, or whether they drift quietly.

03 · SENTIMENT EVIDENCE
How sentiment gets captured
BROKEN: A multiple-choice question on a feedback form. "How satisfied are you?" with five options. The answer is a number with no context.
WORKING: An open-text response, captured in the stakeholder's own words, coded after collection by a defined scheme. Sentiment lives next to the score it explains.
WHAT THIS DECIDES: Whether the team can explain why a score moved, or only see that it did.

04 · REFRESH CADENCE
When the analysis gets reviewed
BROKEN: Annually, in time for the funder report. Stakeholder shifts within the year are caught only when someone raises them in a meeting, weeks or months after they happen.
WORKING: Quarterly review at minimum, with a triggered refresh on major events (regulatory change, board turnover, funding shift). The review takes an hour because the data is already linked.
WHAT THIS DECIDES: Whether the analysis catches a posture shift in week 14 or in the year-end report.

05 · IMPACT DIRECTION
What the analysis tracks
BROKEN: One-directional: how the program affects each stakeholder. The reverse direction (how each stakeholder affects the program) is in the project manager's head.
WORKING: Bidirectional: program effect on stakeholder, and stakeholder effect on program, both linked to outcome data. Each side is measured against a baseline.
WHAT THIS DECIDES: Whether the analysis explains why outcomes drifted, or only that they drifted.

06 · METHOD CHOICE
Which framework anchors the work
BROKEN: Mendelow's grid printed once, never reviewed. The team treats the four quadrants as final categories, even when stakeholders should have moved.
WORKING: Mendelow as a starting frame, salience model as a check on it, and a refresh process that updates positions over time. The method is the frame, not the conclusion.
WHAT THIS DECIDES: Whether the analysis survives the situation that prompted it, or expires when the situation changes.
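The scoring-scale choice can be sketched as code. Only the "5 = can veto a board decision" anchor comes from the text above; the other anchor texts are illustrative assumptions. Storing the anchor alongside the number is what makes a later disagreement resolvable: reviewers argue about which anchor fits the situation, not about a bare 2 versus a bare 4.

```python
# Explicit anchors per level keep scores comparable across reviewers and
# refreshes. Anchors other than level 5 are invented for illustration.
INFLUENCE_ANCHORS = {
    5: "can veto a board decision",
    4: "controls budget or regulatory approval for the program",
    3: "advises decision-makers who usually act on the advice",
    2: "is consulted, but decisions proceed without them",
    1: "no current path to changing a program decision",
}

def score_influence(level: int) -> dict:
    """Record an influence score together with the anchor it asserts."""
    if level not in INFLUENCE_ANCHORS:
        raise ValueError("score must be 1-5 on the anchored scale")
    return {"score": level, "anchor": INFLUENCE_ANCHORS[level]}
```

A reviewer who writes `score_influence(5)` is claiming the stakeholder can veto a board decision; if that claim is false, the disagreement surfaces immediately instead of being averaged away.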
COMPOUNDING EFFECT

The first choice (stakeholder identity) controls every other choice. With unstable IDs, scores cannot be compared across refreshes, sentiment cannot be tied to a specific stakeholder over time, and the bidirectional impact link breaks. Fix the identity layer first. The other five choices follow naturally.

A WORKED EXAMPLE

A community-health partnership catches a regulator shift in week 14

A regional diabetes-prevention initiative running across six stakeholder groups: a county health department, a hospital system, two employer wellness programs, an advocacy nonprofit, participating residents, and a state philanthropic funder. Two-year program. The team built a stakeholder analysis at kickoff and refreshed it quarterly. Mid-cycle, an analysis refresh catches a state-regulator posture change before it becomes an enrollment crisis.

"We had six stakeholder groups when we started. By week 14 we noticed something the dashboard caught but the quarterly report would have missed: the state health department, which we had marked as monitor at kickoff, started showing up in three different engagement records with new questions about reporting standards. The influence score got bumped, the sentiment notes flagged a tone change, and we caught a posture shift two months before the new state guidance landed. If we had waited for the year-end report we would have been redesigning the intake forms in week 38 instead of week 14."

Program manager, regional diabetes-prevention initiative, mid-cycle review
TWO AXES, BOUND AT THE STAKEHOLDER ID
QUANTITATIVE
Influence and interest scores
  • Influence score 1-5 with documented anchors
  • Interest score 1-5 derived from engagement history
  • Quadrant position (Mendelow grid)
  • Score history: every change with a date
⟷ bound by stakeholder ID
QUALITATIVE
Sentiment and reasoning
  • Open-text sentiment from each engagement
  • Reasoning behind every score change
  • Interview excerpts coded by theme
  • Engagement notes from staff and partners
SOPACT SENSE PRODUCES

A working analysis that updates on its own cadence

  • A persistent record per stakeholder. Every score, sentiment note, and engagement event lands on the same record. No reconciliation work before a refresh.
  • Score changes with reasoning attached. Every quadrant movement carries the open-text reasoning that produced it. Six months later, the team can re-evaluate the change without rebuilding context.
  • A mid-cycle dashboard that catches drift. When sentiment in three engagement records starts to trend, the dashboard flags it before the quarterly review. The state-regulator shift surfaced this way.
  • Outcome data linked to stakeholder records. Enrollment numbers, completion rates, and self-reported behavior change are visible on the same page as the influence and interest scores. Bidirectional impact stays traceable.
WHY TRADITIONAL TOOLBOX FAILS

Five disconnected tools, an annual reconciliation

  • Stakeholder grid in a slide deck. Built once at kickoff, edited twice in two years, never updated as positions shift in real time.
  • Engagement notes in a shared doc. Written by whoever was on the call, in different formats. Searching for "state health department" returns four different stakeholder names referring to the same entity.
  • Survey data in a spreadsheet. Pre and post surveys ran cleanly. The link from survey records back to the stakeholder grid lives in the program manager's head.
  • Outcome data in the funder's portal. Only visible to the grants team, exported once a quarter, never cross-referenced against stakeholder posture until the year-end report writes itself.

The regulator-shift detection in week 14 is not a clever feature. It is a structural property of an analysis where stakeholder records, engagement evidence, and outcome data live in the same system with the same ID. In a five-tool toolbox the same shift gets caught at year-end, after the enrollment damage has already happened. The integration is the analysis.

PROGRAM CONTEXTS

Three program shapes, three different stakeholder analyses

The framework is the same. The program shape changes which lens carries the most weight, how often the analysis refreshes, and which stakeholders get the high-attention engagement track. Three contexts where the work looks different.

01

Foundation strategic initiative

A focus area, three to five years, multiple grantees

A program officer launches a five-year initiative on workforce development. Stakeholders include grantees (10-20 organizations), the foundation board, peer funders, public-policy actors, and the populations the grants are meant to serve. Influence varies sharply: the board can change the strategy, peer funders can co-invest or withdraw, grantees can succeed or struggle independent of foundation guidance.

The trap is treating grantees as passive recipients rather than as stakeholders with their own influence on the initiative's direction. A foundation that does not measure how grantees shape the strategy will discover, mid-initiative, that the strategy has drifted in directions no one explicitly chose. The discovery is often public.

What works: a stakeholder analysis that scores grantees on both their influence on the initiative and the influence of the initiative on them. Engagement runs as a loop, with grantee feedback informing the next strategy review. Sentiment across grantees is captured as open text and coded; aggregate shifts trigger a refresh. Influence and interest scores get reviewed at every grant renewal.

A SPECIFIC SHAPE
A workforce-development initiative scores its 14 grantees on influence over strategy direction (low to high) and impact received from the initiative (low to high). Quarterly grantee surveys feed sentiment data; engagement notes from program-officer site visits feed reasoning. Three grantees move quadrants between year-one and year-two reviews; one funder peer joins as co-investor in year three based on a sentiment shift the analysis caught early.
02

Multi-agency public-health partnership

A health initiative, two to four years, six to twelve agencies

A regional initiative on diabetes, opioid response, or maternal health pulls together a county health department, hospital systems, employer wellness programs, an advocacy group, and a state regulator. Each agency has its own authority structure, its own reporting requirements, and its own posture toward the initiative's goals. Sentiment shifts when leadership changes; influence shifts when state policy changes.

The trap is treating the partnership as a fixed set of agencies and missing the postural changes inside each. A hospital partner with a new chief medical officer is not the same hospital partner six months later. A state regulator that rolls out new reporting standards has shifted from monitor to manage closely whether or not the team noticed.

What works: a stakeholder analysis where each agency is a persistent record, sentiment notes are captured at every cross-agency meeting, and the influence score gets bumped automatically when key role-holders change. Outcome data (enrollment, completion, behavior change) sits next to the stakeholder records, so a posture shift and an outcome shift become visible together rather than in separate quarterly reports.

A SPECIFIC SHAPE
A diabetes-prevention initiative tracks six agency stakeholders plus aggregate participant cohorts. Quarterly score reviews are augmented by triggered refreshes on board changes, state-policy events, and material funding shifts. The state regulator moves from monitor to manage closely in week 14 of year one, after a sentiment-trend flag in three engagement records prompts a score review two months before new state guidance lands.
03

Corporate ESG and community engagement

An ESG program with community, regulator, and investor stakeholders

A corporate sustainability program touches several stakeholder categories at once: communities near operations, environmental regulators, ESG-focused investors, employee resource groups, supply-chain partners, and advocacy organizations. Influence is uneven and shifts on news cycles. Sentiment from communities and investors can move in opposite directions on the same announcement.

The trap is over-weighting investor sentiment because it is the easiest to capture (analyst reports, earnings calls) and under-weighting community sentiment because it is the hardest (door-to-door, town halls, advocacy-led communications). The analysis ends up tracking the noisy stakeholders well and the consequential ones poorly.

What works: a stakeholder analysis that assigns each category equal evidentiary weight even when the data shapes are different. Community sentiment captured as open-text in town-hall notes is coded with the same rigor as analyst reports. Influence scores account for legitimacy and urgency (the salience model lenses), not power alone. Refresh cadence is monthly at minimum; quarterly is too slow for an ESG context where a single news cycle can shift an investor relationship and a community relationship simultaneously.

A SPECIFIC SHAPE
A regional manufacturing site tracks 22 stakeholder records across community groups, regulators, investors, advocacy organizations, and supply-chain partners. Sentiment evidence comes from town-hall transcripts, regulatory comment periods, analyst reports, and supplier audit feedback. Monthly score refresh, with triggered refresh on news events affecting any high-influence record. Two community advocacy groups move from monitor to keep informed in year two after sentiment-coded transcripts flag emerging concerns about water-use practices.
A NOTE ON TOOLS
Sopact Sense, Smartsheet, Salesforce, Stakeholder Circle, Borealis, Jambo, Excel and Google Sheets

Most tools that show up in a search for stakeholder analysis software handle the first half of the work well. They produce stakeholder lists, render Mendelow grids, attach engagement notes, and export the result as a slide-ready deliverable. The architectural gap is the second half: keeping the analysis alive as the program runs. That half needs persistent stakeholder IDs that hold across surveys, interviews, and outcome data; sentiment evidence captured as open text and coded by theme; refresh history visible alongside the current scores; and a link from each stakeholder record to the program outcomes it affects.

Sopact Sense treats stakeholder analysis as one surface of a continuous data system, not a separate workspace. Each stakeholder is a record connected to surveys, interviews, engagement notes, and outcome data. Influence and interest scores live next to the open-text reasoning that produced them. When a regulator changes posture or a board member rotates, the score gets updated and the change is visible alongside the outcome data it affects. Refresh history is part of the analysis, not a separate worksheet that drifts out of sync.

FAQ

Stakeholder analysis questions, answered

Questions that come up in the first read, ordered to build understanding from definitions through methods to tooling. Every answer here matches the JSON-LD FAQ schema verbatim, so search engines see the same text the reader sees.

Q.01

What is a stakeholder analysis?

A stakeholder analysis identifies every group that can affect a program or be affected by it, weighs their influence and interest, and tracks how those weights change as the program runs. It produces a working map that informs who to engage, how often, and with what message. The output is not a one-time deliverable. It is a living record that gets reviewed at every major decision point and updated when roles, postures, or relationships shift.

Q.02

What is the difference between stakeholder mapping and stakeholder analysis?

Stakeholder mapping is a visual layout of who the stakeholders are and how they relate. Stakeholder analysis adds the weights: influence, interest, impact, sentiment. A map answers who exists and how they connect. An analysis answers who matters most right now, how that has changed since the last review, and what the program should do about it. Most teams produce a map and stop. Analysis is the work that turns the map into a decision tool.

Q.03

What is stakeholder impact analysis?

Stakeholder impact analysis measures how program activity changes each stakeholder's situation and how each stakeholder's posture changes program outcomes. It is bidirectional. The program acts on stakeholders, and stakeholders act on the program. A complete impact analysis ties stakeholder records to outcome data so a shift in one is visible alongside the other, instead of living in separate spreadsheets that are reconciled once a year.

Q.04

What is stakeholder intelligence?

Stakeholder intelligence is the practice of treating stakeholder analysis as a continuous data system, not a one-time deliverable. Each stakeholder has a persistent record. Engagement data flows in over time. Influence, interest, and sentiment are recalculated as new evidence arrives. The intelligence frame replaces the static analysis with a working picture that changes when reality changes.

Q.05

How do you conduct a stakeholder analysis?

Six steps. Identify every affected group. Categorize by role and relationship. Score on influence and interest using a documented scale. Engage at a depth matched to the score. Track each stakeholder's impact and how the program changes their situation. Refresh scores on a defined cadence, quarterly at minimum for active programs. The discipline is in step six. Most teams complete steps one through five and never return.

Q.06

What are the steps in a stakeholder impact analysis?

Identify the stakeholder groups, document each group's interests and concerns in their own words, score the influence each group can exert on program decisions, weigh the impact the program is likely to have on each group, and decide what the resulting strategic challenges mean for activity choices. The fifth step is the one that distinguishes an impact analysis from a generic stakeholder list. Without a strategic implication, the analysis is descriptive only.

Q.07

What does the Mendelow power-interest grid measure?

The Mendelow grid plots stakeholders on two axes, power on one and interest on the other, producing four quadrants. Manage closely (high power, high interest). Keep satisfied (high power, low interest). Keep informed (low power, high interest). Monitor (low power, low interest). The grid is a useful starting frame. It misses two things a working analysis needs: a time axis showing how positions change, and a sentiment lens showing why a stakeholder sits where they do.
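The four quadrants reduce to a small classification rule. A minimal Python sketch, assuming 1-to-5 scores and a midpoint threshold of 3; the threshold is a working assumption, not part of Mendelow's model, and teams set it wherever their anchored scale draws the high/low line.

```python
def mendelow_quadrant(power: int, interest: int, threshold: int = 3) -> str:
    """Place a stakeholder in a Mendelow quadrant from power and interest scores.

    Scores above the threshold count as "high"; the threshold itself is a
    team choice, not part of the grid.
    """
    high_power = power > threshold
    high_interest = interest > threshold
    if high_power and high_interest:
        return "manage closely"
    if high_power:
        return "keep satisfied"
    if high_interest:
        return "keep informed"
    return "monitor"
```

The rule is trivial on purpose: the grid's value is not the classification but the refreshed inputs. Rerunning it after each score review is what shows a stakeholder moving from "monitor" to "manage closely".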

Q.08

When conducting a stakeholder analysis, what does influence measure?

Influence measures how much a stakeholder can change a program decision. It accounts for formal authority (board seats, regulatory power, funding control), informal authority (network position, expertise, public voice), and recent track record. A high influence score does not mean the stakeholder is using that influence today. It means they could. The interest score, weighed alongside, captures whether they are likely to.

Q.09

What are common stakeholder analysis methods?

The Mendelow power-interest grid is the most cited. The salience model adds legitimacy and urgency as a third and fourth axis. The stakeholder circle identifies inner-ring decision-makers and outer rings of influence. Each method produces a snapshot. None of them, on their own, builds the time axis that turns a snapshot into a living analysis. Picking a method is the start of the work, not the end of it.

Q.10

What are good stakeholder analysis examples?

A community-health initiative scoring its hospital partner, county health department, and patient advocacy group on influence and interest, then refreshing scores after a state policy change. An infrastructure project tracking landowner sentiment in addition to formal-comment counts. A foundation initiative weighing grantee influence on program design alongside funder influence on grant terms. The pattern in every case: scores plus reasons, updated on a schedule, tied to outcomes.

Q.11

What tools work for stakeholder analysis?

A spreadsheet works for a one-time grid. The work breaks down at refresh. Tools that work over time give each stakeholder a persistent record, capture both numeric scores and open-text reasoning, link to outcome data, and surface what changed since the last review. The category names vary (stakeholder management, CRM, impact platforms), but the four capabilities are the test. Without persistent records and refresh history, the tool is a digital spreadsheet.

Q.12

How does Sopact handle stakeholder analysis?

Sopact Sense gives every stakeholder a persistent record connected across surveys, interviews, and outcome data. Influence and interest scores live next to the open-text reasoning that produced them. When a regulator changes posture or a funder reallocates priorities, the score gets updated and the change is visible alongside outcome metrics. The refresh history is part of the analysis, not a separate worksheet. Mid-cycle dashboards make a stakeholder shift visible in week 14, not in the year-end report.

Q.13

Can I run a stakeholder analysis in Excel or Google Sheets?

For a one-time analysis on a small program, yes. The workflow breaks at three points. Stakeholder names drift across versions of the file. Scores get updated without anyone recording why. The link between the analysis and the outcome data lives in someone's head. Two refreshes in, the file is unreliable. The signal that you have outgrown a spreadsheet is when you cannot answer the question "why did this score change?" without asking the person who changed it.

Q.14

What does stakeholder analysis look like in project management?

In project management the analysis sits at the kickoff and gets revisited at each phase gate. The lens emphasizes influence over outcome impact, because a project ships on a fixed timeline. Most project managers run the Mendelow grid once and rely on instinct from there. The teams that ship cleanly track sentiment between phase gates and refresh scores when team composition or scope changes. Phase-gate refresh is the floor; weekly check-ins are common on programs with more than fifty stakeholders.

BRING YOUR STAKEHOLDER LIST · LEAVE WITH A LIVING ANALYSIS

See what your stakeholder analysis looks like when it refreshes on a cadence

A 60-minute working session. Bring your current stakeholder list (a spreadsheet, a deck, a CRM export, whatever you have). Leave with a draft analysis structured for refresh, with influence and interest scoring anchored, sentiment evidence captured as open text, and a wired link to your outcome data. No procurement decision. No deck. The platform is open in front of you the whole hour.

FORMAT
60 minutes, video call, screen-shared. Open working session, not a sales pitch.
WHAT TO BRING
Your current stakeholder list in any format. Outcome data optional but useful.
WHAT YOU LEAVE WITH
A draft stakeholder analysis structured for quarterly refresh, plus a written outline of next steps.