Add /llm POST Endpoint #27
Open
Description:

Implements a new `POST /llm` endpoint for generic LLM inference using Groq Cloud (LLaMA 3.1). Request and response models are separated into `src/core/api/models/llm/request.py` and `src/core/api/models/llm/response.py`. Tests are organized in `tests/llm/unit/`, `tests/llm/integration/`, and `tests/llm/e2e/`. Also fixes test discovery and async mocking issues.

Changes:
- Added `LLMRequest` in `src/core/api/models/llm/request.py`.
- Added `LLMResponse` in `src/core/api/models/llm/response.py`.
- Updated `src/core/api/routes/llm.py` to use the new import paths.
- Updated `src/main.py` to include the LLM router.
- Added `tests/llm/unit/test_llm_models.py`.
- Added `tests/llm/integration/test_llm_routes.py`.
- Added `tests/llm/e2e/test_llm_workflow.py`.
- Added `tests/llm/conftest.py` for shared fixtures.
- Updated `pytest.ini` to configure pytest-asyncio, set `pythonpath = .`, and add `testpaths = tests/llm` and an `e2e` marker.
- Added `docs/llm.md` documenting `/llm` and the `AsyncMock`-based testing approach.

Testing:
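The request/response model split listed above can be sketched as follows. This is a minimal illustration using plain dataclasses; the actual files presumably define Pydantic models for FastAPI validation, and the field names and defaults shown here are assumptions, not the PR's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical shapes for the split models; field names and defaults are
# illustrative assumptions, not the PR's actual schema.

@dataclass
class LLMRequest:  # sketch of src/core/api/models/llm/request.py
    prompt: str
    model: str = "llama-3.1-8b-instant"  # assumed default model name
    max_tokens: int = 512
    temperature: float = 0.7

@dataclass
class LLMResponse:  # sketch of src/core/api/models/llm/response.py
    content: str
    model: str
    usage: dict = field(default_factory=dict)  # e.g. token counts

req = LLMRequest(prompt="Summarize this PR.")
resp = LLMResponse(content="A new /llm endpoint.", model=req.model)
print(resp.model)
```

Keeping the two models in separate modules, as the PR does, lets the route, tests, and docs import only the shape they need.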
- Unit tests cover the `LLMRequest` and `LLMResponse` models.
- Integration tests exercise the `/llm` endpoint with mocked Groq responses.
- All tests pass locally (`pytest tests/llm/`).

Checklist:
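The mocked-Groq testing noted above hinges on mocking an async client call, which is what `AsyncMock` provides. A minimal, self-contained sketch of the pattern (the helper function and response shape are illustrative stand-ins, not the PR's actual code):

```python
import asyncio
from unittest.mock import AsyncMock

# AsyncMock auto-creates awaitable child mocks, so a nested attribute chain
# like client.chat.completions.create can be configured directly.
mock_client = AsyncMock()
mock_client.chat.completions.create.return_value = {
    "choices": [{"message": {"content": "mocked reply"}}]
}

async def call_llm(client, prompt: str) -> str:
    # Illustrative stand-in for the route handler's call into the Groq client.
    result = await client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": prompt}],
    )
    return result["choices"][0]["message"]["content"]

reply = asyncio.run(call_llm(mock_client, "hi"))
print(reply)  # mocked reply
```

A plain `MagicMock` here would fail with "object is not awaitable" when the handler `await`s the call, which is the kind of async-mocking issue the PR description says it fixed.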