
Didn't find op for builtin opcode 'SHAPE' & Failed to get registration from op code SHAPE (TFMIC-42) #99

Open
ImpulseHu opened this issue Nov 10, 2024 · 1 comment

ImpulseHu commented Nov 10, 2024

Checklist

  • Checked the issue tracker for similar issues to ensure this is not a duplicate
  • Read the documentation to confirm the issue is not addressed there and your configuration is set correctly
  • Tested with the latest version to ensure the issue hasn't been fixed

How often does this bug occur?

always

Expected behavior

As the title says, I created a project and ran it, but an error occurred.

By the way, I would like to know which version of TensorFlow was used to convert the tflite models in these examples [helloworld/micro_speech/person_detection]?

Actual behavior (suspected bug)

The following is the ESP-side code snippet:

void setup() {
    // Map the model into a usable data structure. This doesn't involve any
    // copying or parsing, it's a very lightweight operation.
    model = tflite::GetModel(g_model);
    if (model->version() != TFLITE_SCHEMA_VERSION) {
        MicroPrintf("Model provided is schema version %d not equal to supported "
                    "version %d.", model->version(), TFLITE_SCHEMA_VERSION);
        return;
    }

    // Pull in only the operation implementations we need.
    static tflite::MicroMutableOpResolver<2> resolver;
    if (resolver.AddUnidirectionalSequenceLSTM() != kTfLiteOk) {
        return;
    }
    if (resolver.AddFullyConnected() != kTfLiteOk) {
        return;
    }

    // Build an interpreter to run the model with.
    static tflite::MicroInterpreter static_interpreter(
        model, resolver, tensor_arena, kTensorArenaSize);
    interpreter = &static_interpreter;

    // Allocate memory from the tensor_arena for the model's tensors.
    TfLiteStatus allocate_status = interpreter->AllocateTensors();
    if (allocate_status != kTfLiteOk) {
        MicroPrintf("AllocateTensors() failed");
        return;
    }

    // Obtain pointers to the model's input and output tensors.
    input = interpreter->input(0);
    output = interpreter->output(0);
}

The following is the Python code snippet that builds the model and saves it as a tflite file:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_model():
    model = Sequential()
    model.add(LSTM(64, return_sequences=True, input_shape=(60, 6)))
    model.add(LSTM(32))
    model.add(Dense(8))
    model.add(Dense(gesture_classes_num, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    return model

def save_as_tflite(model, filename):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
    converter.experimental_new_converter = True
    converter.allow_custom_ops = True

    tflite_model = converter.convert()
    with open(filename, 'wb') as f:
        f.write(tflite_model)

Error logs or terminal output

Didn't find op for builtin opcode 'SHAPE'
Failed to get registration from op code SHAPE

AllocateTensors() failed

Steps to reproduce the behavior

The current environment is as follows:
esp-idf: 5.2.0
esp-tflite-micro: 1.3.2
chip: ESP32-S3 with 16 MB flash and 8 MB PSRAM

python: 3.10.0
tensorflow: 2.16.1

Project release version

1.3.2

System architecture

Intel/AMD 64-bit (modern PC, older Mac)

Operating system

Linux

Operating system version

Windows 11

Shell

ZSH

Additional context

No response

@github-actions github-actions bot changed the title Didn't find op for builtin opcode 'SHAPE' & Failed to get registration from op code SHAPE Didn't find op for builtin opcode 'SHAPE' & Failed to get registration from op code SHAPE (TFMIC-42) Nov 10, 2024
@vikramdattu (Collaborator) commented

Hello @ImpulseHu, tflite-micro is compatible with v1.0 ops as well as 2.x. It is, however, focused on int8-optimised kernels, so you should quantise the model. I would make some changes to your conversion code to quantise it; you can find the relevant code here: https://ai.google.dev/edge/litert/models/post_training_integer_quant#convert_using_integer-only_quantization
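
For reference, here is a minimal sketch of that integer-only conversion for the model above, adapted from the linked guide. The representative_dataset generator and its (1, 60, 6) sample shape are assumptions based on the input_shape=(60, 6) in build_model(); replace the random data with real training samples:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Hypothetical calibration data; use ~100 real samples in practice.
    for _ in range(100):
        yield [np.random.rand(1, 60, 6).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # model from build_model()
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict conversion to int8 builtin ops so every kernel is available in
# tflite-micro; drop SELECT_TF_OPS and allow_custom_ops entirely.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()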

Once you quantise the model and embed it in the program, you can then pull in the OPs the model uses. To find these OPs, you can use the Netron visualiser or run a script that lists them for you, as sketched below.
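
For example, TensorFlow ships a model analyzer that prints the graph of a converted flatbuffer; a short sketch (the 'model.tflite' path is a placeholder):

import tensorflow as tf

# Prints the model graph, including every builtin operator it contains,
# which tells you exactly which Add<Op>() calls the resolver needs.
tf.lite.experimental.Analyzer.analyze(model_path='model.tflite')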

The model should then run without issues. If you instead want to register the OPs manually, in your case you will at least need to add the SHAPE OP (the one named in the error) in addition to the existing two, as follows:

    // Pull in only the operation implementations we need.
    static tflite::MicroMutableOpResolver<3> resolver; // 3 = Number of OPs to be registered
    if (resolver.AddUnidirectionalSequenceLSTM() != kTfLiteOk) {
        return;
    }
    if (resolver.AddFullyConnected() != kTfLiteOk) {
        return;
    }
    if (resolver.AddShape() != kTfLiteOk) {
        return;
    }
