From 8f84b41953a03e61e30b3eb765f336aa1763e9c0 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Thu, 6 Feb 2025 11:18:58 +0100
Subject: [PATCH] [DOCS] Updating references to OV docs (#3250)

Updating links to refer to the 2025 version of the docs

Co-authored-by: sgolebiewski-intel
---
 README.md                                       | 2 +-
 docs/Installation.md                            | 2 +-
 docs/ModelZoo.md                                | 2 +-
 .../weights_compression/Usage.md                | 6 +++---
 .../torch/sparsity/movement/MovementSparsity.md | 8 ++++----
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 65bbb7f3e5c..32ee76e4106 100644
--- a/README.md
+++ b/README.md
@@ -514,4 +514,4 @@ You can opt-out at any time by running the following command in the Python envir
 
 `opt_in_out --opt_out`
 
-More information available on [OpenVINO telemetry](https://docs.openvino.ai/2024/about-openvino/additional-resources/telemetry.html).
+More information is available on [OpenVINO telemetry](https://docs.openvino.ai/2025/about-openvino/additional-resources/telemetry.html).

diff --git a/docs/Installation.md b/docs/Installation.md
index 7130e784762..d03d7f76191 100644
--- a/docs/Installation.md
+++ b/docs/Installation.md
@@ -5,7 +5,7 @@ We suggest to install or use the package in the [Python virtual environment](htt
 
 NNCF supports multiple backends. Follow the corresponding installation guides and ensure your system meets the required specifications for your chosen backend:
 
-- OpenVINO™: [Install Guide](https://docs.openvino.ai/2024/get-started/install-openvino.html), [System Requirements](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/system-requirements.html)
+- OpenVINO™: [Install Guide](https://docs.openvino.ai/2025/get-started/install-openvino.html), [System Requirements](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html)
 - ONNX: [Install Guide](https://onnxruntime.ai/docs/install/)
 - PyTorch: [Install Guide](https://pytorch.org/get-started/locally/#start-locally)
 - TensorFlow: [Install Guide](https://www.tensorflow.org/install/)

diff --git a/docs/ModelZoo.md b/docs/ModelZoo.md
index 9dad44aaacc..8555ad42643 100644
--- a/docs/ModelZoo.md
+++ b/docs/ModelZoo.md
@@ -2,7 +2,7 @@
 
 Ready-to-use **Compressed LLMs** can be found on [OpenVINO Hugging Face page](https://huggingface.co/OpenVINO#models). Each model card includes NNCF parameters that were used to compress the model.
 
-**INT8 Post-Training Quantization** ([PTQ](../README.md#post-training-quantization)) results for public Vision, NLP and GenAI models can be found on [OpenVino Performance Benchmarks page](https://docs.openvino.ai/2024/about-openvino/performance-benchmarks.html). PTQ results for ONNX models are available in the [ONNX](#onnx) section below.
+**INT8 Post-Training Quantization** ([PTQ](../README.md#post-training-quantization)) results for public Vision, NLP and GenAI models can be found on the [OpenVINO Performance Benchmarks page](https://docs.openvino.ai/2025/about-openvino/performance-benchmarks.html). PTQ results for ONNX models are available in the [ONNX](#onnx) section below.
 
 **Quantization-Aware Training** ([QAT](../README.md#training-time-compression)) results for PyTorch and TensorFlow public models can be found below.
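For context on the PTQ results linked in the ModelZoo hunk above, the following is a minimal editorial sketch of the standard NNCF INT8 post-training quantization flow for an OpenVINO model. It is not part of the patched files; the model paths, the `calibration_loader` iterable, and `transform_fn` are hypothetical placeholders.

```python
import nncf
import openvino as ov

# Read the FP32 OpenVINO model (path is a placeholder).
model = ov.Core().read_model("model.xml")


# transform_fn extracts a single model input from one item of the
# (hypothetical) calibration data source, e.g. a torch DataLoader.
def transform_fn(data_item):
    images, _ = data_item
    return images


# calibration_loader is assumed to be defined by the user.
calibration_dataset = nncf.Dataset(calibration_loader, transform_fn)

# INT8 post-training quantization with default settings.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")
```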
diff --git a/docs/usage/post_training_compression/weights_compression/Usage.md b/docs/usage/post_training_compression/weights_compression/Usage.md
index bcf89c9fc80..bd16fe06ef6 100644
--- a/docs/usage/post_training_compression/weights_compression/Usage.md
+++ b/docs/usage/post_training_compression/weights_compression/Usage.md
@@ -676,9 +676,9 @@ Accuracy/footprint trade-off for `microsoft/Phi-3-mini-4k-instruct`:
 
 ### Additional resources
 
-- [LLM Weight Compression](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html)
-- [Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html)
-- [Inference with Hugging Face and Optimum Intel](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide/llm-inference-hf.html)
+- [LLM Weight Compression](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html)
+- [Large Language Model Inference Guide](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html)
+- [Inference with Hugging Face and Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html)
 - [Optimum Intel documentation](https://huggingface.co/docs/optimum/intel/inference)
 - [Large Language Models Weight Compression Example](https://github.com/openvinotoolkit/nncf/blob/develop/examples/llm_compression/openvino/tiny_llama)
 - [Tuning Ratio and Group Size Example](https://github.com/openvinotoolkit/nncf/blob/develop/examples/llm_compression/openvino/tiny_llama_find_hyperparams)

diff --git a/nncf/experimental/torch/sparsity/movement/MovementSparsity.md b/nncf/experimental/torch/sparsity/movement/MovementSparsity.md
index f98a3aa622b..800b466827c 100644
--- a/nncf/experimental/torch/sparsity/movement/MovementSparsity.md
+++ b/nncf/experimental/torch/sparsity/movement/MovementSparsity.md
@@ -2,7 +2,7 @@
 
 [Movement Pruning (Sanh et al., 2020)](https://arxiv.org/pdf/2005.07683.pdf) is an effective learning-based unstructured sparsification algorithm, especially for Transformer-based models in transfer learning setup. [Lagunas et al., 2021](https://arxiv.org/pdf/2109.04838.pdf) extends the algorithm to sparsify by block grain size, enabling structured sparsity which can achieve device-agnostic inference acceleration.
 
-NNCF implements both unstructured and structured movement sparsification. The implementation is designed with a minimal set of configuration for ease of use. The algorithm can be applied in conjunction with other NNCF algorithms, e.g. quantization-aware training and knowledge distillation. The optimized model can be deployed and accelerated via [OpenVINO](https://docs.openvino.ai/2024/index.html) toolchain.
+NNCF implements both unstructured and structured movement sparsification. The implementation is designed with a minimal set of configuration options for ease of use. The algorithm can be applied in conjunction with other NNCF algorithms, e.g. quantization-aware training and knowledge distillation. The optimized model can be deployed and accelerated via the [OpenVINO](https://docs.openvino.ai/2025/index.html) toolchain.
 
 For usage explanation of the algorithm, let's start with an example configuration below which is targeted for BERT models.
 
@@ -37,11 +37,11 @@ This diagram is the sparsity level of BERT-base model over the optimization life
 
 1. **Unstructured sparsification**: In the first stage, model weights are gradually sparsified at the grain size specified by `sparse_structure_by_scopes`. This example will result in _BertAttention layers (Multi-Head Self-Attention)_ being sparsified in 32×32 blocks, whereas _BertIntermediate and BertOutput layers (Feed-Forward Network)_ will be sparsified by row or column, respectively. The sparsification follows a predefined warmup schedule where users only have to specify the start `warmup_start_epoch`, the end `warmup_end_epoch`, and the sparsification strength, proportional to `importance_regularization_factor`. Users might need some heuristics to find a satisfactory trade-off between sparsity and task performance. For more details on how movement sparsification works, please refer to the original papers [1, 2].
 
-2. **Structured masking and fine-tuning**: At the end of first stage, i.e. `warmup_end_epoch`, the sparsified model cannot be accelerated without tailored HW/SW but some sparse structures can be totally discarded from the model to save compute and memory footprint. NNCF provides mechanism to achieve structured masking by `"enable_structured_masking": true`, where it automatically resolves the structured masking between dependent layers and rewinds the sparsified parameters that does not participate in acceleration for task modeling. In the example above, the sparsity level has dropped after `warmup_end_epoch` due to structured masking and the model will continue to fine-tune thereafter. Currently, the automatic structured masking feature was tested on **_BERT, DistilBERT, RoBERTa, MobileBERT, Wav2Vec2, Swin, ViT, CLIPVisual_** architectures defined by [Hugging Face's transformers](https://huggingface.co/docs/transformers/index). Support for other architectures is not guaranteed. Users can disable this feature by setting `"enable_structured_masking": false`, where the sparse structures at the end of first stage will be frozen and training/fine-tuning will continue on unmasked parameters. Please refer next section to realize model inference acceleration with [OpenVINO](https://docs.openvino.ai/2024/index.html) toolchain.
+2. **Structured masking and fine-tuning**: At the end of the first stage, i.e. at `warmup_end_epoch`, the sparsified model cannot be accelerated without tailored HW/SW support, but some sparse structures can be discarded entirely from the model to reduce its compute and memory footprint. NNCF provides a mechanism to achieve structured masking via `"enable_structured_masking": true`, which automatically resolves the structured masking between dependent layers and rewinds the sparsified parameters that do not participate in acceleration for task modeling. In the example above, the sparsity level drops after `warmup_end_epoch` due to structured masking, and the model continues to fine-tune thereafter. Currently, the automatic structured masking feature has been tested on the **_BERT, DistilBERT, RoBERTa, MobileBERT, Wav2Vec2, Swin, ViT, CLIPVisual_** architectures defined by [Hugging Face's transformers](https://huggingface.co/docs/transformers/index). Support for other architectures is not guaranteed. Users can disable this feature by setting `"enable_structured_masking": false`, in which case the sparse structures present at the end of the first stage are frozen and training/fine-tuning continues on the unmasked parameters. Please refer to the next section to realize model inference acceleration with the [OpenVINO](https://docs.openvino.ai/2025/index.html) toolchain; a minimal configuration sketch also follows this list.
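To make the two-stage schedule above concrete, here is a minimal configuration sketch, written as a Python dict for `NNCFConfig.from_dict`. It is an editorial illustration, not part of the patched file: the input shape, epoch bounds, regularization factor, and scope regexes are assumptions for a BERT-like model, mirroring the parameters named above.

```python
from nncf import NNCFConfig

# Minimal movement-sparsity config sketch for a BERT-like model.
# All numeric values and scope regexes are illustrative assumptions.
nncf_config = NNCFConfig.from_dict(
    {
        "input_info": [{"sample_size": [1, 128], "type": "long"}],  # token ids
        "compression": {
            "algorithm": "movement_sparsity",
            "params": {
                "warmup_start_epoch": 1,  # stage 1 begins
                "warmup_end_epoch": 4,  # stage 1 ends; structured masking applies
                "importance_regularization_factor": 0.01,  # sparsification strength
                "enable_structured_masking": True,
            },
            "sparse_structure_by_scopes": [
                # 32x32 blocks for multi-head self-attention
                {"mode": "block", "sparse_factors": [32, 32], "target_scopes": "{re}.*BertAttention.*"},
                # per-row / per-column sparsity for the feed-forward network
                {"mode": "per_dim", "axis": 0, "target_scopes": "{re}.*BertIntermediate.*"},
                {"mode": "per_dim", "axis": 1, "target_scopes": "{re}.*BertOutput.*"},
            ],
        },
    }
)
```

Such a config would be passed, together with the model, to `nncf.torch.create_compressed_model` to start the two-stage optimization.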
 
-## Inference Acceleration via [OpenVINO](https://docs.openvino.ai/2024/index.html)
+## Inference Acceleration via [OpenVINO](https://docs.openvino.ai/2025/index.html)
 
-Optimized models are compatible with OpenVINO toolchain. Use `compression_controller.export_model("movement_sparsified_model.onnx")` to export model in onnx format. Sparsified parameters in the onnx are in value of zero. Structured sparse structures can be discarded during ONNX translation to OpenVINO IR using [Model Conversion](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-to-ir.html) with utilizing [pruning transformation](https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api.html#transform). Corresponding IR is compressed and deployable with [OpenVINO Runtime](https://docs.openvino.ai/2024/openvino-workflow/running-inference.html). To quantify inference performance improvement, both ONNX and IR can be profiled using [Benchmark Tool](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html).
+Optimized models are compatible with the OpenVINO toolchain. Use `compression_controller.export_model("movement_sparsified_model.onnx")` to export the model in ONNX format. Sparsified parameters in the ONNX model have a value of zero. Structured sparse structures can be discarded during ONNX translation to OpenVINO IR using [Model Conversion](https://docs.openvino.ai/2025/openvino-workflow/model-preparation/convert-model-to-ir.html) with the [pruning transformation](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/compressing-models-during-training/filter-pruning.html). The resulting IR is compressed and deployable with [OpenVINO Runtime](https://docs.openvino.ai/2025/openvino-workflow/running-inference.html). To quantify the inference performance improvement, both the ONNX model and the IR can be profiled using the [Benchmark Tool](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/benchmark-tool.html).
 
 ## Getting Started
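As a getting-started illustration of the export-convert-profile flow described above, here is a minimal editorial sketch, not part of the patched file. It assumes a trained `compression_controller` returned by `create_compressed_model`; the file names are placeholders, and discarding sparse structures via the pruning transformation may require additional conversion options (see the Model Conversion link above).

```python
import subprocess

import openvino as ov

# Export the sparsified PyTorch model to ONNX;
# masked parameters are exported with zero values.
compression_controller.export_model("movement_sparsified_model.onnx")

# Convert the ONNX model to OpenVINO IR and save it.
ov_model = ov.convert_model("movement_sparsified_model.onnx")
ov.save_model(ov_model, "movement_sparsified_model.xml")

# Profile the IR (or the ONNX model) with OpenVINO's Benchmark Tool.
subprocess.run(["benchmark_app", "-m", "movement_sparsified_model.xml"], check=True)
```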