The current Specification-Driven Development (SDD) framework in Spec Kit is highly effective at automating the path from "Intent" to "Delivery". However, it currently lacks a native mechanism to:

- Distinguish between building for learning (uncertainty) and building for delivery (certainty). For example, requirements marked [NEEDS CLARIFICATION] are assumed to be cleared ahead of implementation. How could we allow for a learning plan that gains clarity through the agentic loop (e.g. via learning stories, spikes, etc.)?
- Encourage closing the feedback loop with real-world validation after an implementation slice is delivered. (The current flow leads to "working, tested software" but not necessarily "useful software".)
How could we weave into Spec Kit ideas like:
- Focusing on outcomes/behaviors (who does what, by how much) rather than outputs/deliverables?
- A risk/uncertainty/clarification taxonomy, e.g. level of uncertainty and type of uncertainty (desirability, viability to run efficiently, feasibility to build, time to build)?
- Derisking through discovery when appropriate?
- Maintaining the spec as a living document throughout the discovery/derisking process?
Value Proposition
- Reduced Waste: Prevents over-engineering features with low product conviction.
- Improved Alignment: Forces a shared understanding of what we know vs. what we are guessing.
- Full Life-cycle SDD: Moves SDD from a technical execution framework to a product development operating model.
Would love to get feedback on this direction and whether it aligns with the long-term vision of Spec Kit.
PS
Some ideas for how to go about this below:
1. Conviction-Based Specification
Update the `/speckit.specify` command and the `spec-template.md` to:
- Assess and declare a Conviction Level (High, Medium, Low).
- Replace generic "Assumptions" with categorized Leap of Faith Assumptions (LOFAs) (Value, Usability, Feasibility, Viability).
- Include "Hypotheses to Validate" within User Stories for low/medium conviction features.
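As a rough illustration, the template additions might look something like the sketch below (section names and inline values are hypothetical, not existing Spec Kit syntax):

```markdown
## Conviction Level

Conviction: Low   <!-- High | Medium | Low -->

## Leap of Faith Assumptions (LOFAs)

- **Value**: Users will prefer inline editing over the current modal flow.
- **Usability**: First-time users can complete the flow unaided.
- **Feasibility**: The diff engine can run within the latency budget.
- **Viability**: Serving cost stays within the current infra budget.

## Hypotheses to Validate  <!-- required for Low/Medium conviction -->

- H1: At least 30% of active users try the feature in week one.
```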
2. Strategic Planning (Learning vs. Delivery)
Update the `/speckit.plan` command to branch its strategy based on conviction:
- Low/Medium Conviction -> a Learning-Oriented Plan that prioritizes telemetry, scaffolding, and experiment validation.
- High Conviction -> a Delivery-Oriented Plan that prioritizes robustness and scale.
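For a low-conviction feature, the generated plan's framing might shift along these lines (a sketch only; the headings and wording are illustrative):

```markdown
## Plan Strategy: Learning  <!-- derived from Conviction: Low -->

- Build the thinnest slice that can falsify H1 (e.g. behind a feature flag).
- Instrument telemetry for the "who does what, by how much" outcome metric.
- Defer hardening, scale work, and edge cases until after validation.
```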
3. Closing the Loop: The /speckit.validate Command
Add a new command to finalize the "SDD Cycle":
/speckit.validate: Takes real-world signals/telemetry/feedback and compares them against the LOFAs in the spec.
- Outputs a Validation Report that recommends a Pivot, Persevere, or Kill decision.
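To make the decision logic concrete, here is a minimal sketch of how a validate step could score LOFAs against observed signals. All names and thresholds are illustrative assumptions, not existing Spec Kit APIs:

```python
from dataclasses import dataclass

@dataclass
class LOFA:
    category: str    # Value | Usability | Feasibility | Viability
    hypothesis: str  # what we believed when writing the spec
    target: float    # threshold the real-world signal must reach
    observed: float  # signal gathered after the slice shipped

def recommend(lofas: list[LOFA]) -> str:
    """Turn validation results into a Pivot / Persevere / Kill recommendation."""
    validated = sum(1 for l in lofas if l.observed >= l.target)
    ratio = validated / len(lofas)
    if ratio >= 0.8:
        return "Persevere"  # most assumptions held: continue delivering
    if ratio >= 0.4:
        return "Pivot"      # mixed signals: revise the spec and re-plan
    return "Kill"           # core assumptions failed: stop investing

lofas = [
    LOFA("Value", "30% of users try the feature in week one", 0.30, 0.05),
    LOFA("Usability", "7-day retention among adopters", 0.20, 0.25),
]
print(recommend(lofas))  # → Pivot
```

In practice the thresholds and weighting would come from the spec itself, so the recommendation stays traceable back to the declared LOFAs.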
Proposed Solution
How could we weave into Spec Kit ideas like:
- Focusing on outcomes/behaviors (who does what, by how much) rather than outputs/deliverables?
- A risk/uncertainty/clarification taxonomy, e.g. level of uncertainty and type of uncertainty (desirability, viability to run efficiently, feasibility to build, time to build)?
- Derisking through discovery when appropriate?
- Maintaining the spec as a living document throughout the discovery/derisking process?
Alternatives Considered
No response
Component
Spec templates (BDD, Testing Strategy, etc.)
AI Agent (if applicable)
None
Use Cases
When working on products/features with medium/high uncertainty, especially related to desirability/value.
Acceptance Criteria
No response
Additional Context
No response