OpenVINO™ Execution Provider for ONNXRuntime 5.3
Description:
The OpenVINO™ Execution Provider for ONNXRuntime v5.3 release is based on the OpenVINO™ 2024.1 release and the ONNXRuntime 1.18.0 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
OpenVINO™ version upgraded to 2024.1. This provides functional bug fixes and new features over the previous release.
This release supports ONNXRuntime 1.18.0 with the latest OpenVINO™ 2024.1 release.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites and on how to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Modifications:
- Supports OpenVINO 2024.1.
- Supports NPU as a device option.
- Device and precision are now specified separately: the device is set to CPU, GPU, or NPU, and the inference precision is set as a provider option (see the sketch after this list). The combined CPU_FP32 and GPU_FP32 options are deprecated.
- Precompiled blobs can now be imported into OpenVINO.
- OVEP Windows logging support for NPU: NPU profiling information can be obtained from a debug build of OpenVINO.
- Packages support NPU on Windows.
- Supports setting model priority through a runtime provider option.
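A minimal Python sketch of the separated device/precision style described above; the model path model.onnx is a placeholder, and the option values shown are illustrative (the option names follow the configuration-options documentation linked at the end of these notes):

import onnxruntime as ort

# Device and precision are passed as separate provider options
# (replacing the deprecated combined CPU_FP32 / GPU_FP32 style).
session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{
        "device_type": "NPU",      # CPU, GPU, or NPU
        "precision": "FP16",       # inference precision as a provider option
        "model_priority": "HIGH",  # priority via a runtime provider option
    }],
)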
Samples:
https://github.com/microsoft/onnxruntime-inference-examples
Python Package:
https://pypi.org/project/onnxruntime-openvino/
Installation and usage instructions on Windows:
pip install onnxruntime-openvino
# If using the Python openvino package to set up the OpenVINO runtime environment:
pip install openvino==2024.1.0
# Add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()
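As a quick sanity check after the steps above, the following sketch confirms that ONNXRuntime can see the OpenVINO™ Execution Provider:

import onnxruntime as ort
import onnxruntime.tools.add_openvino_win_libs as utils

utils.add_openvino_libs_to_path()

# "OpenVINOExecutionProvider" should appear in this list if the setup succeeded.
print(ort.get_available_providers())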
ONNXRuntime API usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options