Releases: oracle/accelerated-data-science
ADS 2.8.7
- Added support for leveraging pools in Data Flow applications.
- Added support for token-based authentication.
- Revised the help information for `opctl` commands.
ADS 2.8.6
- Resolved an issue in `ads opctl build-image job-local` where the build of the `job-local` image would get stuck. Updated the Python version to 3.8 in the base environment of the `job-local` image.
- Fixed a bug that prevented the support of defined tags for Data Science job runs.
- Fixed a bug in the `entryscript.sh` of `ads opctl` that attempted to create a temporary folder in the `/var/folders` directory.
- Added support for defined tags in the Data Flow application and application run.
- Deprecated the old `ModelDeploymentProperties` and `ModelDeployer` classes and their corresponding APIs.
- Enabled the uploading of large model artifacts for the `ModelDeployment` class.
- Implemented validation for the shape name and shape configuration details in Data Science jobs and Data Flow applications.
- Added the capability to create an `ADSDataset` using the Pandas accessor.
- Provided a prebuilt watch command for monitoring Data Science jobs with `ads opctl`.
- Eliminated the legacy `ads.dataflow` package from ADS.
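The shape validation mentioned above can be illustrated with a minimal sketch. This is a hypothetical helper, not the actual ADS validation code: the general idea is that flex shapes require an explicit OCPU/memory configuration, while fixed shapes must not carry one.

```python
def validate_shape(shape_name, shape_config=None):
    """Illustrative shape check (not the actual ADS implementation)."""
    if shape_name.endswith(".Flex"):
        # Flex shapes need an explicit OCPU/memory configuration.
        if not shape_config or not {"ocpus", "memory_in_gbs"} <= shape_config.keys():
            raise ValueError(
                f"Flex shape {shape_name!r} requires 'ocpus' and 'memory_in_gbs'."
            )
    elif shape_config:
        # Fixed shapes come with preset resources; a config is a mistake.
        raise ValueError(f"Shape {shape_name!r} does not accept a shape configuration.")


validate_shape("VM.Standard.E4.Flex", {"ocpus": 2, "memory_in_gbs": 32})  # OK
validate_shape("VM.Standard2.1")  # OK
```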
2.8.5
ADS
- Added support for the `key_content` attribute in `ads.set_auth()` for API key authentication.
- Fixed a bug in `ModelEvaluator` where it returned incorrect ROC AUC characteristics.
- Fixed a bug in the `ADSDataset.suggest_recommendations()` API, which returned an error when the target wasn't specified.
- Fixed a bug in the `ADSDataset.auto_transform()` API, which suggested incorrect sampling for imbalanced data.
2.8.4
ADS
- Added support for creating an `ADSDataset` from a pandas dataframe.
- Added support for multi-model deployment using Triton.
- Added support for local testing of model deployments in the `ads opctl` CLI.
- Added support in the `ads opctl` CLI for generating starter YAML specifications for the Data Science Job, Data Flow Application, Data Science Model Deployment, and ML Pipeline services.
- Added support for invoking model prediction locally with `predict(local=True)`.
- Added support for attaching a customized `score.py` when preparing a model.
- Added a status check for the model deployment delete/activate/deactivate APIs.
- Added support for training and verifying `SparkPipelineModel` in Data Flow.
- Added support for generating `score.py` for GPU model deployments.
- Added support for setting defined tags in Data Science jobs.
- Improved the model deployment progress bar.
- Fixed a bug when using the `ads opctl` CLI to run jobs locally.
- Fixed a bug in the Data Flow magic when `archive_uri` is used in the dataflow config.
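For reference, a starter YAML specification of the kind the `ads opctl` CLI generates for a Data Science job looks roughly like the sketch below. All values are placeholders, and the exact fields emitted by the CLI may differ from this hand-written example:

```yaml
kind: job
spec:
  name: my-job
  infrastructure:
    kind: infrastructure
    type: dataScienceJob
    spec:
      compartmentId: ocid1.compartment.oc1..<unique_id>
      projectId: ocid1.datascienceproject.oc1..<unique_id>
      shapeName: VM.Standard.E4.Flex
      shapeConfigDetails:
        ocpus: 1
        memoryInGBs: 16
      blockStorageSize: 50
  runtime:
    kind: runtime
    type: python
    spec:
      scriptPathURI: my_script.py
      conda:
        type: service
        slug: generalml_p38_cpu_v1
```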
2.8.3
ADS
- Added support for custom containers (Bring Your Own Container, or BYOC) and environment variables for `ads.model.GenericModel`.
- Added default values for configuring parameters in `ads.model.ModelDeployment`, such as the default flex shape, OCPUs, memory in GBs, bandwidth, and instance count.
- Added support for `ads.jobs.NotebookRuntime` to use a directory as the job artifact.
- Added support for `ads.jobs.PythonRuntime` and `ads.jobs.GitPythonRuntime` to use a shell script as the entrypoint.
2.8.2
ADS
- Removed support for Python 3.7.
- Improved `DataScienceModel.create()` to support a timeout argument and to automatically extract the region from the signer and signer config.
- Supported Jupyter notebooks as the `entrypoint` when defining Data Science jobs with `PythonRuntime` and `GitPythonRuntime`.
- Supported environment variable substitution in Data Science job names and output URIs.
- Supported JSON serialization of lists/dictionaries when assigning them as Data Science job environment variables.
- Supported saving the notebook to the output URI even if the job run failed, when running a Data Science job using `NotebookRuntime`.
- Added a `job.build()` method to the Data Science job to load default values from the environment.
- Added a `DataScienceJob.fast_launch_shapes()` method to list the fast launch shapes available for Data Science jobs.
- Added the `HuggingFacePipelineModel` class to support prepare, save, deploy, and predict for HuggingFace pipelines.
- Updated the Data Science job run YAML representation to include configurations inherited from the job.
- Fixed custom conda environments not showing in the Data Science job YAML specification.
- Fixed an issue where model saving failed in a notebook session without ipywidgets installed.
- Fixed an "Unknown archive format" error in `ads.jobs.PythonRuntime` when the source code folder name ends with "zip". The supported archive formats are: "zip", "tar.gz", "tar", and "tgz".
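The JSON-serialization behavior for environment variables can be sketched in plain Python. This is illustrative only (ADS's actual serialization code may differ): environment variables must be strings, so list and dictionary values are encoded with `json.dumps` before being set on the job.

```python
import json


def to_env_vars(variables):
    """Convert a mapping to string-valued environment variables,
    JSON-encoding any list or dictionary values (illustrative sketch)."""
    env = {}
    for name, value in variables.items():
        if isinstance(value, (list, dict)):
            env[name] = json.dumps(value)  # lists/dicts become JSON strings
        else:
            env[name] = str(value)  # everything else is stringified
    return env


env = to_env_vars({"MODE": "train", "FEATURES": ["age", "income"], "LR": 0.1})
# env["FEATURES"] == '["age", "income"]'
```

On the consumer side, the job's code would call `json.loads` on such a variable to recover the original list or dictionary.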
2.8.1
ADS
- Fixed a bug in `ads opctl run` when the `--auth` flag is passed and the image is built by ADS.
- Fixed a bug in `GenericModel.save()` when the work requests are not successfully populated.
- Fixed a bug in `DataScienceModel.create()` when the provenance metadata is not provided.
2.8.0
ADS
- Added support for the machine learning pipelines feature.
- Fixed a bug in `fetch_training_code_details()`: when the git commit is an empty string, set it to `None` to avoid a service error.
- Fixed a bug in `fetch_training_code_details()`: use the folder of `training_script_path` as the artifact directory, instead of `.`.
2.7.3
ADS
- Added support for the model version set feature.
- Added a `--job-info` option to the `ads opctl run` CLI to save job run information to a YAML file.
- Added the `AuthContext` class. It supports API key configuration, resource principal, and instance principal authentication, as well as predefined signers, callable signers, and API key configurations from specified locations.
- Added a `restart_deployment()` method to the framework-specific classes to update the model deployment associated with the model.
- Added `activate()` and `deactivate()` methods to the model deployment classes.
- Fixed a bug in `to_sql()`: the string length for the column created in an Oracle Database table was counted in characters, not bytes.
- Fixed a bug where any exception that occurred in a notebook cell printed "ADS Exception" even when ADS code was not responsible for the error.
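The `to_sql()` fix above hinges on the difference between character count and byte count: Oracle column sizes such as `VARCHAR2(n BYTE)` are byte-based, so multi-byte UTF-8 strings need their encoded length rather than `len()`. A minimal illustration in plain Python (not the ADS code itself):

```python
s = "Müller"  # contains one two-byte UTF-8 character, "ü"

chars = len(s)                    # 6 characters
nbytes = len(s.encode("utf-8"))   # 7 bytes

# Sizing a byte-semantics VARCHAR2 column by character count would make it
# one byte too small for this value; the fix sizes by encoded byte length.
assert chars == 6
assert nbytes == 7
```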