feat(agent): MVE Experiment Designer #976
Conversation
feat(instructions): introduce MVE coaching conventions for Experiment Designer
chore(collections): include Experiment Designer in experimental collections
chore(collections): update experimental collection YAML to reference new agent and instructions
Generated by Copilot
Codecov Report ✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:
@@ Coverage Diff @@
##             main     #976      +/-   ##
==========================================
- Coverage   87.42%   86.03%    -1.40%
==========================================
  Files          44       30       -14
  Lines        7803     5326     -2477
==========================================
- Hits         6822     4582     -2240
+ Misses        981      744      -237
@mattdot ... can you look at the hifi and lofi prototype builders in design thinking and see if these cover your needs first?
@WilliamBerryiii not quite. It kind of proposes testing assumptions, but it doesn't do so with the scientific rigor I'd expect from a true MVE. It feels more like it's proposing a vibe check of the assumptions rather than an experiment result that we have rock-solid confidence in.
One last set of questions (I should have asked earlier but had to think about it) ... where do you think this goes from a collections perspective after it's run in the experimental phase? More coding-focused? Data Science too? Should this agent's artifact (the experiment.md) be handed off to the PRD-builder and/or Task Researcher for the implementation phase? You've got more experience in this space: are the experiments you're running more of a "rough PRD" scale, or more of a "if we had enough tokens, we could probably get this through a task researcher run" 😂 ... This really comes down to whether you want the experiment to run PRD -> *-Backlog-Manager for entry into the backlog, or go right to coding (or both).
The output of this is really a plan and hypothesis for an experiment. Once you actually run the experiment, its results would be used much like other research: as inputs to a PRD or ADR. For the collections, I could see this in the Data Science and Project Planning collections.
@mattdot - should I update this to exit with a handoff document for the ADO and GH backlog managers? Do you anticipate that the experiment generates work items, or do we go right to task researcher/planner/implementor/reviewer for workflow execution?
Pull Request
Description
Adds a new conversational coaching agent that guides users through designing a Minimum Viable Experiment (MVE). The agent follows a structured, phase-based process — from problem discovery and hypothesis formation through viability vetting to a complete experiment plan. It helps users translate unknowns and assumptions into crisp, testable hypotheses, evaluates experiment feasibility, and produces actionable MVE plans with session tracking via .copilot-tracking. Includes the agent definition (experiment-designer.agent.md) and companion instructions (experiment-designer.instructions.md) covering MVE domain knowledge, vetting criteria, and experiment type reference.
Related Issue(s)
Closes #973
Type of Change
Select all that apply:
Code & Documentation:
Infrastructure & Configuration:
AI Artifacts:
- Used the prompt-builder agent and addressed all feedback
- Instructions (.github/instructions/*.instructions.md)
- Prompts (.github/prompts/*.prompt.md)
- Agents (.github/agents/*.agent.md)
- Skills (.github/skills/*/SKILL.md)
Other:
- Scripts (.ps1, .sh, .py)
Sample Prompts (for AI Artifact Contributions)
User Request:
Execution Flow:
Phase 1 — Problem & Context Discovery: Agent asks probing questions about the problem statement, customer context, business case, unknowns, and constraints. Creates a tracking directory at .copilot-tracking/mve/{date}/{experiment-name}/ and writes context.md.
Phase 2 — Hypothesis Formation: Agent guides user to translate unknowns into testable hypotheses using the format "We believe [assumption]. We will test this by [method]. We will know we are right/wrong when [measurable outcome]." Prioritizes hypotheses by risk and impact. Writes hypotheses.md.
Phase 3 — MVE Vetting & Red Flag Check: Agent applies four vetting criteria (business sense, crisp problem statement, Responsible AI, clear next steps) and checks against nine red flag patterns (demos, skipping ahead, solved problems, mini-MVP, etc.). Writes vetting.md. If fundamental problems are found, the agent returns to Phase 1 or 2.
Phase 4 — Experiment Design: Agent helps choose experiment type, define technical approach, set measurable success/failure criteria per hypothesis, scope timeline to weeks, and plan post-experiment evaluation. Writes experiment-design.md.
Phase 5 — MVE Plan Output: Agent consolidates all phase outputs into a single mve-plan.md document for stakeholder review. Iterates based on user feedback, returning to earlier phases if needed.
Output Artifacts:
context.md — Problem statement, customer context, business justification
hypotheses.md — Prioritized testable hypotheses with assumption/method/outcome
vetting.md — Vetting criteria results and red flag assessment
experiment-design.md — Approach, scope, timeline, resources, success criteria
mve-plan.md — Consolidated plan document for stakeholder review
Business Case
{Why this experiment matters, what decision it informs}
Success Indicators:
The .copilot-tracking/mve/{date}/{experiment-name}/ directory contains all five markdown artifacts (context.md, hypotheses.md, vetting.md, experiment-design.md, mve-plan.md)
Each hypothesis follows the three-part format: assumption, test method, measurable outcome
Hypotheses are prioritized by risk and impact with clear rationale
Vetting results explicitly address all four criteria and flag any red flags encountered
Success and failure criteria are defined per hypothesis with quantitative thresholds
The experiment is scoped to weeks (not months) with explicit out-of-scope boundaries
mve-plan.md includes next steps for both validated and invalidated outcomes
The agent challenged vague problem statements or untestable hypotheses rather than accepting them uncritically
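The first success indicator above, that all five artifacts exist in the tracking directory, can be verified mechanically. A minimal sketch, assuming the .copilot-tracking layout described in this PR (the helper name is illustrative):

```python
from pathlib import Path

# The five artifacts the MVE workflow is expected to produce
REQUIRED_ARTIFACTS = {
    "context.md",
    "hypotheses.md",
    "vetting.md",
    "experiment-design.md",
    "mve-plan.md",
}

def missing_artifacts(tracking_dir: str) -> set[str]:
    """Return the required MVE artifacts not yet present in the tracking directory."""
    present = {p.name for p in Path(tracking_dir).glob("*.md")}
    return REQUIRED_ARTIFACTS - present
```

An empty result means the session produced a complete artifact set; anything else names the phases still to run.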
For detailed contribution requirements, see:
Testing
I've used it for a few MVE opportunities to help refine our hypotheses and plan our MVE.
Checklist
Required Checks
AI Artifact Contributions
- /prompt-analyze to review contribution
- prompt-builder review
Required Automated Checks
The following validation commands must pass before merging:
- npm run lint:md
- npm run spell-check
- npm run lint:frontmatter
- npm run validate:skills
- npm run lint:md-links
- npm run lint:ps
- npm run plugin:generate (can't run dev container, hoping ci/cd pipeline checks these :) )
Security Considerations
Additional Notes