
feat(agent): MVE Experiment Designer#976

Open
mattdot wants to merge 10 commits into microsoft:main from mattdot:main

Conversation


mattdot (Member) commented Mar 11, 2026

Pull Request

Description

Adds a new conversational coaching agent that guides users through designing a Minimum Viable Experiment (MVE). The agent follows a structured, phase-based process — from problem discovery and hypothesis formation through viability vetting to a complete experiment plan. It helps users translate unknowns and assumptions into crisp, testable hypotheses, evaluates experiment feasibility, and produces actionable MVE plans with session tracking via .copilot-tracking. Includes the agent definition (experiment-designer.agent.md) and companion instructions (experiment-designer.instructions.md) covering MVE domain knowledge, vetting criteria, and experiment type reference.

Related Issue(s)

Closes #973

Type of Change

Select all that apply:

Code & Documentation:

  • Bug fix (non-breaking change fixing an issue)
  • New feature (non-breaking change adding functionality)
  • Breaking change (fix or feature causing existing functionality to change)
  • Documentation update

Infrastructure & Configuration:

  • GitHub Actions workflow
  • Linting configuration (markdown, PowerShell, etc.)
  • Security configuration
  • DevContainer configuration
  • Dependency update

AI Artifacts:

  • Reviewed contribution with prompt-builder agent and addressed all feedback
  • Copilot instructions (.github/instructions/*.instructions.md)
  • Copilot prompt (.github/prompts/*.prompt.md)
  • Copilot agent (.github/agents/*.agent.md)
  • Copilot skill (.github/skills/*/SKILL.md)

Note for AI Artifact Contributors:

  • Agents: Research, indexing/referencing other projects (using standard VS Code GitHub Copilot/MCP tools), planning, and general implementation agents likely already exist. Review .github/agents/ before creating new ones.
  • Skills: Must include both bash and PowerShell scripts. See Skills.
  • Model Versions: Only contributions targeting the latest Anthropic and OpenAI models will be accepted. Older model versions (e.g., GPT-3.5, Claude 3) will be rejected.
  • See Agents Not Accepted and Model Version Requirements.

Other:

  • Script/automation (.ps1, .sh, .py)
  • Other (please describe):

Sample Prompts (for AI Artifact Contributions)

User Request:

  • "I have an idea for [feature/product/approach] but I'm not sure if it will work. Help me design an experiment to validate it before we commit to building it."
  • "We need to test whether [assumption] is true before starting development"
  • "Help me design an MVE for [project/feature]"
  • "Our customer wants us to build X, but there are unknowns around data feasibility / architecture / LLM capability — can we experiment first?"
  • "I want to validate my hypothesis about [topic] with a structured experiment"

Execution Flow:

Phase 1 — Problem & Context Discovery: Agent asks probing questions about the problem statement, customer context, business case, unknowns, and constraints. Creates a tracking directory at .copilot-tracking/mve/{date}/{experiment-name}/ and writes context.md.
Phase 2 — Hypothesis Formation: Agent guides user to translate unknowns into testable hypotheses using the format "We believe [assumption]. We will test this by [method]. We will know we are right/wrong when [measurable outcome]." Prioritizes hypotheses by risk and impact. Writes hypotheses.md.
Phase 3 — MVE Vetting & Red Flag Check: Agent applies four vetting criteria (business sense, crisp problem statement, Responsible AI, clear next steps) and checks against nine red flag patterns (demos, skipping ahead, solved problems, mini-MVP, etc.). Writes vetting.md. If fundamental problems found, returns to Phase 1 or 2.
Phase 4 — Experiment Design: Agent helps choose experiment type, define technical approach, set measurable success/failure criteria per hypothesis, scope timeline to weeks, and plan post-experiment evaluation. Writes experiment-design.md.
Phase 5 — MVE Plan Output: Agent consolidates all phase outputs into a single mve-plan.md document for stakeholder review. Iterates based on user feedback, returning to earlier phases if needed.
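As a rough sketch of the tracking layout the phases above write into, the directory could be scaffolded like this (the date format, file seeding, and function name are illustrative assumptions, not part of the agent definition):

```shell
# Sketch only: scaffold the .copilot-tracking/mve/{date}/{experiment-name}/
# layout from the PR description. The {date} format (%Y-%m-%d) is an
# assumption; the agent itself decides when each artifact is written.
scaffold_mve_tracking() {
  name="$1"
  dir=".copilot-tracking/mve/$(date +%Y-%m-%d)/$name"
  mkdir -p "$dir"
  # Seed the five phase artifacts with placeholder headings if absent.
  for f in context.md hypotheses.md vetting.md experiment-design.md mve-plan.md; do
    [ -f "$dir/$f" ] || printf '# %s\n' "${f%.md}" > "$dir/$f"
  done
  echo "$dir"
}
```

Usage: `dir=$(scaffold_mve_tracking checkout-latency)` prints the created path; each phase then overwrites its own file.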

Output Artifacts:

  • context.md — Problem statement, customer context, business justification
  • hypotheses.md — Prioritized testable hypotheses with assumption/method/outcome
  • vetting.md — Vetting criteria results and red flag assessment
  • experiment-design.md — Approach, scope, timeline, resources, success criteria
  • mve-plan.md — Consolidated plan document for stakeholder review

Example context.md template:

<!-- markdownlint-disable-file -->
# MVE Context: {experiment-name}

## Problem Statement
{User's refined problem statement}

## Customer & Stakeholder Context
{Customer details, priority level, sponsors}

## Known Constraints
{IP, data access, timeline constraints}

## Assumptions & Unknowns
- Unknown 1: ...
- Assumption 1: ...

## Business Case

{Why this experiment matters, what decision it informs}

Success Indicators:

  • The .copilot-tracking/mve/{date}/{experiment-name}/ directory contains all five markdown artifacts (context.md, hypotheses.md, vetting.md, experiment-design.md, mve-plan.md)
  • Each hypothesis follows the three-part format: assumption, test method, measurable outcome
  • Hypotheses are prioritized by risk and impact with clear rationale
  • Vetting results explicitly address all four criteria and flag any red flags encountered
  • Success and failure criteria are defined per hypothesis with quantitative thresholds
  • The experiment is scoped to weeks (not months) with explicit out-of-scope boundaries
  • mve-plan.md includes next steps for both validated and invalidated outcomes
  • The agent challenged vague problem statements or untestable hypotheses rather than accepting them uncritically
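The first success indicator is mechanically checkable. A minimal sketch, assuming the directory layout from the description (the function name is hypothetical):

```shell
# Sketch only: confirm an MVE tracking directory holds all five artifacts.
# Returns 0 when complete, 1 otherwise; missing files are listed on stderr.
check_mve_artifacts() {
  dir="$1"
  missing=0
  for f in context.md hypotheses.md vetting.md experiment-design.md mve-plan.md; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $dir/$f" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Usage: `check_mve_artifacts .copilot-tracking/mve/2026-03-11/my-experiment && echo complete`.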

For detailed contribution requirements, see:

Testing

I've used it for a few MVE opportunities to help refine our hypotheses and plan our MVE.

Checklist

Required Checks

  • [x] Documentation is updated (if applicable)
  • [x] Files follow existing naming conventions
  • [x] Changes are backwards compatible (if applicable)
  • [N/A] Tests added for new functionality (if applicable)

AI Artifact Contributions

  • Used /prompt-analyze to review contribution
  • [x] Addressed all feedback from prompt-builder review
  • [x] Verified contribution follows common standards and type-specific requirements

Required Automated Checks

The following validation commands must pass before merging:

  • Markdown linting: npm run lint:md
  • Spell checking: npm run spell-check
  • Frontmatter validation: npm run lint:frontmatter
  • Skill structure validation: npm run validate:skills
  • Link validation: npm run lint:md-links
  • PowerShell analysis: npm run lint:ps
  • Plugin freshness: npm run plugin:generate

(I can't run the dev container locally, so I'm hoping the CI/CD pipeline covers these checks :) )
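For running the checks outside the dev container, a small wrapper could execute them in order and stop at the first failure. This is a sketch; it assumes the npm scripts listed above exist in the repository's package.json:

```shell
# Sketch only: run each required validation script in sequence,
# bailing out on the first failure so the broken check is obvious.
run_required_checks() {
  for script in lint:md spell-check lint:frontmatter validate:skills lint:md-links lint:ps plugin:generate; do
    echo "==> npm run $script"
    npm run "$script" || { echo "FAILED: $script" >&2; return 1; }
  done
}
```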

Security Considerations

  • [x] This PR does not contain any sensitive or NDA information
  • [N/A] Any new dependencies have been reviewed for security issues
  • [N/A] Security-related scripts follow the principle of least privilege

Additional Notes

mattdot added 4 commits March 10, 2026 16:13
feat(instructions): introduce MVE coaching conventions for Experiment Designer

chore(collections): include Experiment Designer in experimental collections

chore(collections): update experimental collection YAML to reference new agent and instructions

🔧 - Generated by Copilot
mattdot requested a review from a team as a code owner March 11, 2026 20:11

codecov-commenter commented Mar 11, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 86.03%. Comparing base (17bbe7a) to head (b283627).


@@            Coverage Diff             @@
##             main     #976      +/-   ##
==========================================
- Coverage   87.42%   86.03%   -1.40%     
==========================================
  Files          44       30      -14     
  Lines        7803     5326    -2477     
==========================================
- Hits         6822     4582    -2240     
+ Misses        981      744     -237     
| Flag | Coverage Δ |
| --- | --- |
| pester | 86.03% <ø> (ø) |
| pytest | ? |

Flags with carried forward coverage won't be shown.
see 14 files with indirect coverage changes

@WilliamBerryiii
Member

@mattdot ... can you look at the hifi and lofi prototype builders in design thinking and see if this covers your needs first?


mattdot commented Mar 12, 2026

> @mattdot ... can you look at the hifi and lofi prototype builders in design thinking and see if this covers your needs first?

@WilliamBerryiii not quite. It kind of proposes testing assumptions, but it doesn't really do it with the scientific rigor I'd expect from a true MVE. It feels more like it's proposing a vibe check of the assumptions rather than an experiment result that we have rock solid confidence in.


WilliamBerryiii commented Mar 13, 2026

> > @mattdot ... can you look at the hifi and lofi prototype builders in design thinking and see if this covers your needs first?
>
> @WilliamBerryiii not quite. It kind of proposes testing assumptions, but it doesn't really do it with the scientific rigor I'd expect from a true MVE. It feels more like it's proposing a vibe check of the assumptions rather than an experiment result that we have rock solid confidence in.

One last set of questions (I should have asked earlier but had to think about it) ... where do you think this goes from a collections perspective after it's run in the experimental phase? More Coding Focused? Data Science too?
https://microsoft.github.io/hve-core/docs/getting-started/install#collection-packages

Should this agent's artifact (the experiment.md) be handed off to the PRD-builder and/or Task Researcher for the implementation phase? You've got more experience in this space, are the experiments you're running more of a "rough PRD" scale or more of a "if we had enough tokens, we could probably get this through a task researcher run" 😂 ... This really comes down to do you want the experiment to run PRD -> *-Backlog-Manager for entry into the backlog or go right to coding (or both).

@WilliamBerryiii WilliamBerryiii added this to the v3.2.0 milestone Mar 13, 2026

mattdot commented Mar 13, 2026

> > > @mattdot ... can you look at the hifi and lofi prototype builders in design thinking and see if this covers your needs first?
>
> > @WilliamBerryiii not quite. It kind of proposes testing assumptions, but it doesn't really do it with the scientific rigor I'd expect from a true MVE. It feels more like it's proposing a vibe check of the assumptions rather than an experiment result that we have rock solid confidence in.
>
> One last set of questions (I should have asked earlier but has to think about it) ... where do you think this goes from a collections perspective after it's run in the experimental phase? More Coding Focused? Data Science too? https://microsoft.github.io/hve-core/docs/getting-started/install#collection-packages
>
> Should this agent's artifact (the experiment.md) be handed off to the PRD-builder and/or Task Researcher for the implementation phase? You've got more experience in this space, are the experiments you're running more of a "rough PRD" scale or more of a "if we had enough tokens, we could probably get this through a task researcher run" 😂 ... This really comes down to do you want the experiment to run PRD -> *-Backlog-Manager for entry into the backlog or go right to coding (or both).

The output of this is really a plan and hypothesis to go do an experiment on. Once you actually do the experiment, the results of the experiment would be used much like other research could be used, as inputs to PRD or ADR.

For the collections, I could see this in the Data Science and Project Planning collections.

@WilliamBerryiii
Member

@mattdot - should I update this to exit with a hand-off document for the ADO and GH backlog managers? Do you anticipate that the experiment generates work items or do we go right to task researcher/planner/implementor/reviewer for workflow execution?



Development

Successfully merging this pull request may close these issues.

feat(agents): Minimum Viable Experiment Designer
