
TFOpLambda not supported in INT8 Quantization Aware Training (Mobilenetv3) #1145

@pedrofrodenas

Description


Describe the bug

I cannot quantize MobileNetV3 from Keras 2 (tf_keras) because the hard-swish activation function is implemented as a TFOpLambda layer.

System information

tensorflow version: 2.17
tf_keras version: 2.17
tensorflow_model_optimization version: 0.8.0

TensorFlow Model Optimization version installed from pip

Python version: Python 3.9.19

Describe the expected behavior

Quantization aware training can be applied to keras.applications.MobileNetV3Small using tfmot.quantization.keras.quantize_model

Describe the current behavior

When a layer is a TFOpLambda, the following error is raised:

AttributeError: Exception encountered when calling layer "tf.__operators__.add" (type TFOpLambda).

'list' object has no attribute 'dtype'

Call arguments received by layer "tf.__operators__.add" (type TFOpLambda):
• x=['tf.Tensor(shape=(None, 112, 112, 16), dtype=float32)']
• y=3.0
• name=None
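For context on where the AttributeError comes from: the quantize wrapper hands the TFOpLambda layer its inputs as a single-element list (note `x=['tf.Tensor(...)']` above), and the op then tries to read `.dtype` on that list instead of on the tensor inside it. A minimal stand-alone sketch of that failure mode, with no TensorFlow dependency (`FakeTensor` and `add_three` are hypothetical stand-ins, not tfmot or Keras classes):

```python
class FakeTensor:
    """Stand-in for a tf.Tensor that only carries a dtype attribute."""
    def __init__(self, dtype):
        self.dtype = dtype

def add_three(x):
    # Mimics an op that inspects its input's dtype before computing,
    # the way TFOpLambda does when adding the constant 3.0 in hard-swish.
    return f"cast 3.0 to {x.dtype}"

tensor = FakeTensor("float32")
print(add_three(tensor))   # works on a bare tensor: cast 3.0 to float32

wrapped = [tensor]         # the quantize wrapper passes inputs as a list
try:
    add_three(wrapped)
except AttributeError as e:
    print(e)               # 'list' object has no attribute 'dtype'
```

This is why the error mentions `list` rather than a tensor type: the dtype lookup happens before the list is ever unpacked.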

Code to reproduce the issue

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tf_keras as keras

model = keras.applications.MobileNetV3Small(
        input_shape=(224, 224, 3),
        alpha=1.0,
        minimalistic=False,
        include_top=True,
        weights="imagenet",
        input_tensor=None,
        classes=1000,
        pooling=None,
        dropout_rate=0.2,
        classifier_activation="softmax",
        include_preprocessing=True,
    )


import tensorflow_model_optimization as tfmot

quantize_model = tfmot.quantization.keras.quantize_model

# q_aware stands for quantization aware.
q_aware_model = quantize_model(model)
