
Commit 718983a

feat(cli): add CLI interface with start and validate commands (#34)
* feat(cli): add CLI interface with start and validate commands

  Add a new command-line interface to improve user experience and provide better server management capabilities. This change includes:

  - Add new CLI module using Click framework
  - Implement 'start' command with configurable host, port, and responses file
  - Add 'validate' command to check responses.yml structure
  - Update config to support environment variable for responses file path
  - Update README with comprehensive CLI documentation
  - Add CLI entry point in pyproject.toml

  The CLI now supports:

  - mockllm start [--responses FILE] [--host HOST] [--port PORT] [--reload]
  - mockllm validate <responses_file>
  - mockllm --version
  - mockllm --help

  Bump version for pypi publish

  Breaking Changes: None
  Dependencies Added: click>=8.1.0

* Add CLI capability
1 parent e0e59ef commit 718983a
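The commit message describes a Click-based CLI module exposing `start` and `validate` commands, but the CLI source itself is not shown in this view. As a rough, non-authoritative sketch of how such a module could be structured (module paths, option defaults, the environment-variable name, and the validation rule below are assumptions for illustration, not the actual mockllm implementation):

```python
# Hypothetical sketch only: the real mockllm CLI is not part of this commit view.
# Names such as mockllm.server:app, MOCKLLM_RESPONSES_FILE, and the 'responses'
# top-level key are guesses used for illustration.
import os
import sys

import click
import yaml


@click.group()
@click.version_option(package_name="mockllm")  # backs `mockllm --version`
def cli() -> None:
    """Mock LLM server command-line interface."""


@cli.command()
@click.option("--responses", default="responses.yml", help="Path to the responses file.")
@click.option("--host", default="127.0.0.1", help="Host to bind to.")
@click.option("--port", default=8000, type=int, help="Port to listen on.")
@click.option("--reload", is_flag=True, help="Enable auto-reload for development.")
def start(responses: str, host: str, port: int, reload: bool) -> None:
    """Start the mock server (mirrors `mockllm start`)."""
    # The commit notes the config can read the responses path from an
    # environment variable; the variable name here is a guess.
    os.environ.setdefault("MOCKLLM_RESPONSES_FILE", responses)
    import uvicorn

    # The README previously ran uvicorn directly; the CLI wraps the same call.
    uvicorn.run("mockllm.server:app", host=host, port=port, reload=reload)


@cli.command()
@click.argument("responses_file", type=click.Path(exists=True))
def validate(responses_file: str) -> None:
    """Check that a responses YAML file is well-formed (mirrors `mockllm validate`)."""
    with open(responses_file) as f:
        data = yaml.safe_load(f)
    if not isinstance(data, dict) or "responses" not in data:
        click.echo(f"{responses_file}: missing top-level 'responses' mapping", err=True)
        sys.exit(1)
    click.echo(f"{responses_file}: OK")


if __name__ == "__main__":
    cli()
```

Wired up through the pyproject.toml entry point mentioned in the commit message (for example, a `[tool.poetry.scripts]` line mapping `mockllm` to this group), the `mockllm start` and `mockllm validate` invocations documented in the README diff below would resolve to these commands.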

File tree

5 files changed (+183, -19)


README.md (+35, -10)
@@ -60,7 +60,6 @@ For streaming responses, the lag is applied per-character with slight random var
 
 The server automatically detects changes to `responses.yml` and reloads the configuration without restarting the server.
 
-
 ## Installation
 
 ### From PyPI
@@ -91,26 +90,52 @@ poetry install --without dev # Install without development dependencies
 
 ## Usage
 
-1. Set up the responses.yml
+### CLI Commands
+
+MockLLM provides a command-line interface for managing the server and validating configurations:
+
+```bash
+# Show available commands and options
+mockllm --help
+
+# Show version
+mockllm --version
+
+# Start the server with default settings
+mockllm start
 
+# Start with custom responses file
+mockllm start --responses custom_responses.yml
+
+# Start with custom host and port
+mockllm start --host localhost --port 3000
+
+# Validate a responses file
+mockllm validate responses.yml
+```
+
+### Quick Start
+
+1. Set up the responses.yml:
 ```bash
 cp example.responses.yml responses.yml
 ```
 
-2. Start the server:
+2. Validate your responses file (optional):
 ```bash
-poetry run python -m mockllm
+mockllm validate responses.yml
 ```
-Or using uvicorn directly:
+
+3. Start the server:
 ```bash
-poetry run uvicorn mockllm.server:app --reload
+mockllm start --responses responses.yml
 ```
 
-The server will start on `http://localhost:8000`
+The server will start on `http://localhost:8000` by default.
 
-3. Send requests to the API endpoints:
+### API Endpoints
 
-### OpenAI Format
+#### OpenAI Format
 
 Regular request:
 ```bash
@@ -137,7 +162,7 @@ curl -X POST http://localhost:8000/v1/chat/completions \
 }'
 ```
 
-### Anthropic Format
+#### Anthropic Format
 
 Regular request:
 ```bash

0 commit comments

Comments
 (0)