This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Structured pruning for yolov5s model #2359

Closed
@Imran686

Description

I did unstructured pruning on yolov5s and exported the model to ONNX, but it did not give meaningful improvements in inference speed or model size. I would now like to do structured pruning with the SparseML library. How can that be done? I tried using the modifier from the following link: https://github.com/neuralmagic/sparseml/blob/main/src/sparseml/pytorch/sparsification/pruning/modifier_pruning_structured.py.

I created a .yaml file as follows:

```yaml
# Structured pruning
modifiers:
  - !StructuredPruningModifier
    param_groups = [
      # Convolutional layers
      {
        'name': 'conv',
        'params': [
          'model.model.model.0.conv.weight', 'model.model.model.1.conv.weight',
          'model.model.model.2.cv1.conv.weight', 'model.model.model.2.cv2.conv.weight',
          ........
        ],
        'prune_ratio': 0.5  # Example pruning ratio for conv layers
      },
      # Batch normalization layers
      {
        'name': 'bn',
        'params': [
          'model.model.model.0.bn.weight', 'model.model.model.0.bn.bias',
          'model.model.model.1.bn.weight', 'model.model.model.1.bn.bias',
          ........
        ],
        'prune_ratio': 0.3  # Example pruning ratio for batch norm layers
      }
    ]
    mask_type: filter
    init_sparsity: 0.05
    final_sparsity: 0.8
    start_epoch: 0.0
    end_epoch: 10.0
    update_frequency: 1.0
    params: ALL_PRUNABLE
    leave_enabled: True
    inter_func: cubic
```

But I am unable to apply it to my model. The following is the command I used for fine-tuning the unstructured pruned model:

```shell
!sparseml.yolov5.train \
  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none \
  --recipe zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned75_quant-none \
  --teacher-weights zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/base-none \
  --data coco128.yaml \
  --hyp hyps/hyp.scratch-low.yaml --cfg yolov5s.yaml --patience 0 --gradient-accum-steps 4
```

How would I do the same in the case of structured pruning?
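One likely problem with the recipe shown above is the `param_groups = [ ... ]` block: it uses Python dict/list syntax with `=`, which is not valid YAML, so the recipe would fail to parse before the modifier is ever applied. As a hedged sketch only (the field names are taken from the recipe in the question; the `__ALL_PRUNABLE__` token and the exact field set are assumptions that should be verified against the installed SparseML version), a minimal parseable structured-pruning recipe might look like:

```yaml
# Minimal structured (filter) pruning recipe -- sketch only; verify field
# names against the StructuredPruningModifier in your SparseML version.
modifiers:
  - !StructuredPruningModifier
    params: __ALL_PRUNABLE__  # assumed token; the recipe above used ALL_PRUNABLE
    mask_type: filter         # structured masks over whole filters
    init_sparsity: 0.05
    final_sparsity: 0.8
    start_epoch: 0.0
    end_epoch: 10.0
    update_frequency: 1.0
    leave_enabled: True
    inter_func: cubic
```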
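Separately from SparseML, the basic effect of structured (filter) pruning can be illustrated with PyTorch's built-in `torch.nn.utils.prune.ln_structured`, which zeroes entire output filters of a convolution by their Ln norm. This is a minimal sketch on a toy layer, not the YOLOv5 workflow:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy conv layer standing in for a YOLOv5 conv block (hypothetical shapes).
conv = nn.Conv2d(3, 8, kernel_size=3)

# Structured pruning: zero 50% of the output filters, selected by L2 norm
# along the output-channel dimension (dim=0).
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)

# Whole filters (slices of the mask along dim 0) are now zeroed.
mask = conv.weight_mask
zeroed_filters = (mask.sum(dim=(1, 2, 3)) == 0).sum().item()
print(zeroed_filters)  # prints 4
```

Note that zeroed filters alone do not shrink the file or speed up inference; the pruned channels must actually be removed from the graph (with downstream conv/BatchNorm shapes adjusted) before the ONNX export sees any benefit.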
