docs/source/release_notes.rst

Release date: May 19, 2025
* AI Quick Actions: Use defined-metadata to include configuration for fine-tuned models.
* AI Quick Actions: Support for embedding models in a multi-model deployment.
* AI Quick Actions: Fixed a bug in multi-model deployment so that the model artifact JSON is used directly, instead of accessing the service bucket, when creating a new grouped model.
* AI Quick Actions: Add support for using fine-tuned models in a multi-model deployment.
* AI Quick Actions: Enhancement to load GPU shape information for multi-model deployment from an OCI Object Storage bucket and a local JSON file.
* AI Quick Actions: Telemetry improvements, including use of a thread pool instead of creating an unbounded number of threads for telemetry.
* AI Quick Actions: Support for the ``list`` API for compute capacity reservations to onboard Bring-your-own-reservations (BYOR).
* AI Quick Actions: Fixed a bug so that multiple deployment parameters can now be specified.
* AI Quick Actions: Enhances the model deployment logic for the vLLM architecture version.
* AI Quick Actions: Enhances functionality to retrieve deployment configuration for fine-tuned models.
* AI Quick Actions: Additional support for Llama-4 models for multi-model deployment.