I facilitated this workshop, authored the facilitation script, and structured the core design activities. These materials guided live discussions, kept the group focused on user needs, and kept each session on schedule. At the end of each session, both participants and room coaches evaluated my facilitation and our activities.
I coached this workshop, observing the delivery of my facilitation script and more directly guiding the design activities I'd structured. I rated the facilitator's performance and the workshop activities, while my own was evaluated by participants and the other room coach at the end of each session.
Many large enterprises have the resources, and often the conviction, to insist on building their own in-house tools, believing these custom solutions will perfectly solve their unique challenges. Yet in practice, most rely on specialized consultants to produce expensive technical recommendations and high-fidelity mockups. These "solutions" often serve more to appease project sponsors than to validate real user needs.
Over time, these high-fidelity assets can balloon into weeks of stakeholder presentations and meticulously detailed Gantt charts, with each milestone increasingly disconnected from the project's true user outcomes. In the end, these teams wind up with a highly organized yet unvalidated backlog that reflects little more than assumptions from the project's inception.
Guide workshop participants to: frame the problem around real user needs, turn their riskiest assumptions into testable hypotheses, and validate early concepts with representative users.
A collaborative problem-framing session aimed at aligning project sponsors and the workshop facilitation team on the project's direction and scope. We used its insights to shape the assumptions survey, which both reflected and validated the focus areas defined in that session.
Under the guidance of our design research specialists, we took a quantitative approach to mapping each stakeholder's assumptions. This method goes beyond "gut-feel" prioritization by normalizing every data point on a shared scale. We also factored in a standard measure of agreement among respondents to highlight which assumptions had widespread consensus.
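To make that concrete, here is a minimal sketch of how such scoring could work. The survey data, a 1–5 rating scale, and the field names are hypothetical stand-ins for our actual instrument: ratings are min-max normalized onto a shared 0–1 scale, and agreement is expressed as 1 minus the normalized spread across respondents.

```python
from statistics import mean, pstdev

# Hypothetical 1-5 survey ratings, one list per assumption.
# (Illustrative placeholders, not our actual instrument.)
responses = {
    "Users trust AI-generated recommendations": [5, 4, 5, 5, 4],
    "Field staff will enter data on mobile":    [2, 5, 3, 1, 4],
    "Reports are reviewed weekly":              [4, 4, 3, 4, 4],
}

SCALE_MIN, SCALE_MAX = 1, 5
MAX_SPREAD = (SCALE_MAX - SCALE_MIN) / 2  # worst case: an even split between 1s and 5s

def normalize(score: float) -> float:
    """Map a raw 1-5 rating onto a shared 0-1 scale."""
    return (score - SCALE_MIN) / (SCALE_MAX - SCALE_MIN)

for assumption, ratings in responses.items():
    score = normalize(mean(ratings))              # normalized average rating
    agreement = 1 - pstdev(ratings) / MAX_SPREAD  # 1.0 = full consensus
    print(f"score={score:.2f}  agreement={agreement:.2f}  {assumption}")
```

Population standard deviation is used here so agreement bottoms out at exactly zero when respondents split evenly between the scale's extremes.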
We began by establishing the project’s real context: who uses the tool, why they rely on it, and which tasks matter most. Stakeholders collaborated with real (or proxy) end-users to map out key pain points, culminating in a proto-persona and journey map centered on a representative user role.
We gauged success by confirming that sponsors, engineers, and design leads were aligned on the user’s primary concerns and goals. By the end of this phase, each team had identified at least two “moments that matter,” creating clear opportunities to reduce friction and validate assumptions in later workshop steps.
We took the moments that matter most and used them to write testable hypotheses—structured statements like, "We believe we can achieve [business outcome] if users like [proto-persona] can get [user outcome] from [feature]." This forced sponsors, engineers, and designers to align on which assumptions truly mattered. Rather than letting a wish list dictate the backlog, we prioritized the highest-risk, highest-value ideas first.
We gauged success by confirming each hypothesis was written unambiguously, ensuring it tied directly to user needs and delivered a measurable business outcome. Each team selected at least one hypothesis to focus on, producing a concrete backlog of top-priority features to explore.
As each team homed in on one critical user flow or key interaction to validate, a product design lead (me) worked directly with them to develop streamlined, interactive prototypes.
Before investing in a full-scale research program, we used hallway testing, recruiting representative users from within the organization, to gather timely, targeted feedback on each concept. In Lean UX terms, this is where we transform our riskiest, most valuable assumptions into testable hypotheses and specify success criteria that clarify exactly what users need to do to show whether we're on the right track.
We judged each workshop's success by each team's ability to learn quickly: they balanced speed and rigor by creating a concise test plan and defining a measurable outcome tied to their hypothesis (e.g., "Users must complete the task in under 2 minutes" or "80% of testers must find the AI recommendations intuitive"). At the end of the workshop, each team presented actionable insights based on real user reactions, along with concrete next steps, ensuring that every decision moving forward was grounded in evidence, not guesswork.
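As a small illustration of how such criteria can be checked, here is a sketch with hypothetical test results and thresholds mirroring the examples above; it is not the teams' actual analysis, just one way to make the pass/fail call explicit.

```python
# Hypothetical hallway-test results (illustrative numbers only).
task_times_sec  = [95, 110, 130, 88, 102]          # one entry per tester
found_intuitive = [True, True, False, True, True]  # post-task survey answers

TIME_LIMIT_SEC = 120     # "complete the task in under 2 minutes"
INTUITIVE_TARGET = 0.80  # "80% of testers find the AI recommendations intuitive"

on_time = sum(t < TIME_LIMIT_SEC for t in task_times_sec) / len(task_times_sec)
intuitive = sum(found_intuitive) / len(found_intuitive)

print(f"On-time completions: {on_time:.0%}")
print(f"Found intuitive: {intuitive:.0%} "
      f"({'PASS' if intuitive >= INTUITIVE_TARGET else 'FAIL'})")
```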
Every participant team produced an assumption backlog, ranking each belief by risk and value (see the sketch after this list). This ensured resources went to testing the highest-impact concerns first.
Both organizations involved actual or proxy end-users early and often—through empathy mapping, journey discussions, or targeted prototype testing.
In mapping user flows, each team pinpointed at least two "moments that matter" where the tool or AI solution could significantly reduce friction.
Teams translated assumptions into the template "We believe we can achieve [Business Outcome] if users like [Proto-Persona] can [User Outcome] with [Feature]," aligning design, engineering, and business around measurable outcomes.
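Returning to the assumption backlogs mentioned above, a minimal sketch of that risk/value ranking might look like the following. The entries and the simple risk-times-value product are assumptions for illustration, not the teams' exact formula.

```python
# Hypothetical backlog entries, risk and value already on a shared 0-1 scale.
backlog = [
    {"belief": "Users trust AI-generated recommendations", "risk": 0.8, "value": 0.9},
    {"belief": "Field staff will enter data on mobile",    "risk": 0.6, "value": 0.5},
    {"belief": "Reports are reviewed weekly",              "risk": 0.2, "value": 0.4},
]

# Combine risk and value so the riskiest, highest-value beliefs are tested first.
for item in sorted(backlog, key=lambda b: b["risk"] * b["value"], reverse=True):
    print(f"{item['risk'] * item['value']:.2f}  {item['belief']}")
```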
We compressed our original two-week Lean UX workshop into a five-day remote format with pre-work and daily outcomes. This let cross-functional teams engage without stepping away from delivery.
At Allstate, where delivery followed a waterfall process, product work ran ahead of development using scenarios and synthetic data. PG&E extended the model, deepening hypothesis development alongside test-driven development and their Agile practices.
While the workshop’s primary aim was to de-risk design by validating assumptions early, the final deliverables went beyond sticky notes and user maps. Each team received polished UI artifacts derived directly from the highest-priority hypotheses. These interactive screens, which aligned to actual data availability by delivery phase (as illustrated in our milestone-based rollout), gave clients something concrete to share internally—sharpening executive conversations and helping win support for follow-on investment.