Automated sprint release notes generation for multiple product engineering teams using GitHub Projects and Claude AI.
Sprint Summary is a production-ready Node.js CLI tool that generates intelligent weekly sprint reports:
- GitHub Integration: Fetches completed tickets from GitHub Projects with full context analysis
- Intelligent Issue Analysis: Analyzes completion signals (merged PRs, commits, resolution patterns) with confidence scoring
- AI-Powered Reports: Generates context-aware summaries using Claude (Anthropic's LLM)
- Flexible Reporting: Brief (default) or verbose reports with historical comparison capabilities
- Human Review Workflow: Interactive terminal-based report approval and iteration process
- Installation
  npm install
- Setup Environment
  cp .env.example .env  # Edit .env with your API keys
- Initialize Configuration
  npm run start init
- Generate Summary
  # Brief report (default) for single team
  npm run start report --team copilot
  # Verbose report for all teams
  npm run start report --all --verbose
- Node.js 18+
- Anthropic API key
- GitHub Personal Access Token with repo and project permissions
ANTHROPIC_API_KEY=your_anthropic_api_key
GITHUB_TOKEN=your_github_token
The report commands are the production-ready, full-featured commands with advanced analysis, human review workflow, and historical comparison capabilities.
Generate intelligent sprint report for a specific team with comprehensive GitHub analysis.
# Most common usage - brief report with historical comparison and human review
npm run start report --team copilot
# Brief report for specific date range (skips historical comparison)
npm run start report --team copilot --from 2025-07-28 --to 2025-08-08
# Verbose report with detailed technical insights
npm run start report --team copilot --verbose
# Report with goals from file
npm run start report --team copilot --goals-file sprint-goals.txt
# Skip human review for automated workflows
npm run start report --team copilot --no-review
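For unattended runs (for example a weekly scheduled job), the command above can be wrapped in a small script. This is a minimal sketch, not part of the tool itself; it only wraps the documented CLI invocation and assumes the CLI exits non-zero on failure.

```js
// run-weekly-report.js: hypothetical wrapper for unattended sprint report generation.
const { execSync } = require('node:child_process');

try {
  // --no-review skips the interactive approval step so the run never blocks on input.
  execSync('npm run start report --all --no-review', { stdio: 'inherit' });
} catch (err) {
  // Assumption: the CLI exits with a non-zero code when report generation fails.
  console.error('Sprint report generation failed:', err.message);
  process.exit(1);
}
```

Schedule it with whatever runner you already use (cron, CI, etc.); the script adds nothing beyond a failure exit code.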
Generate reports for all configured teams with parallel processing.
# Brief reports for all teams with historical comparison
npm run start report --all
# Verbose reports for all teams with specific date range
npm run start report --all --from 2025-07-28 --to 2025-08-08 --verbose
# All teams with goals file and additional output
npm run start report --all --goals-file team-goals.md --output ./extra-report.md
Verify all integrations and system health.
# Check GitHub API, Anthropic API, and storage access
npm run start report --health
Fine-tune report generation with specialized options.
# Extended historical comparison (default: 2 weeks)
npm run start report --team copilot --comparison-weeks 4
# Multiple output locations
npm run start report --team copilot --output ./reports/extra-copy.md
# Combined advanced options
npm run start report --all --verbose --goals-file goals.txt --comparison-weeks 3 --no-review
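Several of the report and generate commands accept --goals-file. This README does not specify the file's format; as an assumption, a short plain list of sprint goals is used here purely for illustration:

```text
Ship the v2 export pipeline behind a feature flag
Reduce flaky integration tests in the reporting suite
Close out the remaining accessibility audit issues
```

The examples above also pass a Markdown file (team-goals.md), so the exact format is presumably flexible.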
Simple goal-focused reports using the legacy processing pipeline. Goals input is always required (interactive prompt or goals file).
# Legacy single team (always prompts for goals)
npm run start generate --team copilot
# Legacy all teams with goals file
npm run start generate-all --goals-file sprint-goals.txt
# Legacy with date range and verbose output
npm run start generate --team copilot --from 2025-07-01 --to 2025-07-31 --verbose --format json
# Initial setup and configuration
npm run start init

| Feature | report Commands | generate Commands |
|---|---|---|
| Architecture | ReportPipeline (modern, full-featured) | SprintProcessor (legacy, simple) |
| Goals Input | Optional via --goals-file | Always required (interactive or file) |
| Human Review | Interactive approval workflow | None |
| Historical Comparison | Smart comparison with previous reports | None |
| Issue Analysis | Advanced completion signal detection | Basic |
| Date Range Logic | Conditional comparison skipping | Standard |
| Output Formats | Markdown with metadata storage | Markdown or JSON |
| Health Checking | Built-in system diagnostics | None |
| Performance | Optimized for large projects | Basic |
| Use Case | Production daily reports | Simple goal-focused summaries |
Options for the report commands:

| Option | Description | Example |
|---|---|---|
| --team <id> | Generate report for specific team | --team copilot |
| --all | Generate reports for all teams | --all |
| --verbose | Generate detailed report (default: brief) | --verbose |
| --from <date> | Start date (YYYY-MM-DD) | --from 2025-07-28 |
| --to <date> | End date (YYYY-MM-DD) | --to 2025-08-08 |
| --goals-file <file> | Sprint goals file path | --goals-file goals.txt |
| --review | Enable human review workflow (default) | --review |
| --no-review | Disable human review workflow | --no-review |
| --comparison-weeks <num> | Historical comparison period (default: 2) | --comparison-weeks 4 |
| --output <path> | Additional output file path | --output ./extra.md |
| --health | Check system health and integrations | --health |
Options for the generate commands:

| Option | Description | Example |
|---|---|---|
| --team <id> | Target team ID (required for single team) | --team copilot |
| --goals-file <file> | Sprint goals file (otherwise prompts interactively) | --goals-file goals.txt |
| --verbose | Generate detailed report (default: brief) | --verbose |
| --from <date> | Start date (YYYY-MM-DD) | --from 2025-07-28 |
| --to <date> | End date (YYYY-MM-DD) | --to 2025-08-08 |
| --format <format> | Output format: markdown (default) or json | --format json |
Teams are configured in config/teams.json:
{
"teams": [
{
"id": "team-alpha",
"name": "Alpha Team",
"github_project_url": "https://github.com/orgs/company/projects/1"
}
]
}
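Each entry's id is the value passed to --team (so a team configured with "id": "copilot" matches the --team copilot examples above), and --all covers every entry. A two-team example follows; the URLs are placeholders and no fields beyond those shown above are assumed:

```json
{
  "teams": [
    {
      "id": "copilot",
      "name": "Copilot Team",
      "github_project_url": "https://github.com/orgs/company/projects/1"
    },
    {
      "id": "team-alpha",
      "name": "Alpha Team",
      "github_project_url": "https://github.com/orgs/company/projects/2"
    }
  ]
}
```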
Reports are automatically saved with organized naming and metadata:
output/
├── sprint-summary-copilot-2025-08-08.md # Team-specific reports
├── sprint-summary-all-teams-2025-08-08.md # Multi-team reports
└── sprint-summary-copilot-2025-07-28-to-2025-08-08.md # Date range reports
Historical reports are maintained for comparison and audit purposes:
reports/ # Git-ignored directory
├── copilot/
│ ├── 2025-08-08.json # Report metadata with GitHub links
│ └── 2025-08-08.md # Report content
└── datasets/
├── 2025-08-08.json
└── 2025-08-08.md
- Brief Reports: Concise summaries with completed issues and filtered items
- Verbose Reports: Comprehensive analysis with technical insights and impact assessment
- Historical Comparison: Automatic detection of repeated work across weekly reports
- Confidence Scoring: Low-confidence issues flagged separately with percentage scores (illustrated in the sketch after this list)
- GitHub Integration: Direct links to issues, PRs, and commits for verification
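The confidence scoring above is described only in terms of its completion signals (merged PRs, commits, resolution patterns). As a rough illustration of the idea, not the tool's actual algorithm, a weighted-signal score could look like the sketch below; the signal names, weights, and threshold behavior are assumptions:

```js
// Illustrative only: one possible way to turn completion signals into a confidence score.
// The real analyzer's signals, weights, and flagging threshold are not documented here.
function completionConfidence(issue) {
  let score = 0;
  if (issue.hasMergedPR) score += 0.6;        // merged PR referencing the issue
  if (issue.hasLinkedCommits) score += 0.25;  // commits referencing the issue
  if (issue.closedAsCompleted) score += 0.15; // closed with a "completed" resolution
  return Math.min(score, 1);
}

// Issues below some threshold would be flagged separately with their percentage score.
const confidence = completionConfidence({
  hasMergedPR: false,
  hasLinkedCommits: true,
  closedAsCompleted: true,
});
console.log(`${Math.round(confidence * 100)}% confidence`); // prints "40% confidence"
```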
- CLI Layer: Commander.js-based interface with comprehensive command options and help
- GitHub Integration: Complete Projects v2 API integration with issue analysis and timeline processing
- Issue Analysis: Conservative completion signal detection with confidence scoring
- LLM Integration: Context-aware report generation using Claude with historical comparison
- Human Review: Interactive terminal workflow for report approval and iteration
- Storage System: File-based report storage with metadata tracking and comparison capabilities
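As a conceptual sketch of how those pieces could fit together (this is not the actual source; every identifier below is a hypothetical placeholder):

```js
// Conceptual sketch only: each stub stands in for one of the components described above.
const fetchProjectItems = async (team, range) => [];                   // GitHub Projects v2 integration
const analyzeCompletionSignals = (issues) => issues;                    // completion-signal analysis with confidence scores
const loadPreviousReports = async (team) => [];                         // storage system / historical comparison data
const generateReportWithClaude = async (context) => '# Sprint report';  // LLM integration (Claude)
const interactiveReview = async (report) => report;                     // human review workflow in the terminal
const saveReport = async (team, report) => {};                          // file-based storage with metadata

async function runReportPipeline({ team, range, goals, review = true }) {
  const issues = await fetchProjectItems(team, range);
  const analyzed = analyzeCompletionSignals(issues);
  const history = await loadPreviousReports(team);
  let report = await generateReportWithClaude({ analyzed, history, goals });
  if (review) report = await interactiveReview(report);
  await saveReport(team, report);
  return report;
}
```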
- Production Ready: Handles large GitHub projects (1600+ issues) with performance optimization
- Conservative Analysis: Differentiates real work completion from administrative closures
- Flexible Reporting: Brief daily reports or detailed comprehensive analysis
- Date Range Intelligence: Skips historical comparison when specific dates are provided
- Error Handling: Comprehensive error handling with graceful degradation and recovery suggestions
- Clone the repository
- Install dependencies:
  npm install
- Copy environment template:
  cp .env.example .env
- Add your API keys to .env
- Initialize configuration:
  npm run start init
# Run full test suite
npm test
# Run specific test categories
npm test -- --testNamePattern="Brief|Verbose|Date.*Range"
# Run integration tests with real GitHub data
npm run test:integration
# Development mode
npm run dev
# Linting
npm run lint
# Integration test setup
npm run setup:integration
License: MIT