I'm getting errors converting Google's BigTransfer (BiT) model to TensorFlow Lite

I would like to use the model from Google's BigTransfer paper on device.

Paper: https://arxiv.org/abs/1912.11370

Code: https://github.com/google-research/big_transfer/blob/master/colabs/big_transfer_tf2.ipynb

Here is my TF Lite code:

def representative_data_gen():
  # Yield a representative sample of inputs so the converter can calibrate
  # quantization ranges.
  for x, _ in validation_ds.take(QUANTIZATION_REPRESENTATIVE_DATASET_SIZE):
    yield [x]

converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen

tflite_model = converter.convert()

Here is the error I get:

<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.SquaredDifference {device = ""}

It looks like TensorFlow Lite can't convert the group_norm layers because there is no built-in TFLite op for tf.SquaredDifference, which group norm uses to compute the variance (see the sketch below). Any ideas on how to convert Google's BiT model to TensorFlow Lite?
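
For context, here is a minimal sketch of where that op comes from, assuming the GroupNormalization layer computes the variance in the usual way (the shapes and group count below are just placeholders):

import tensorflow as tf

# Toy input: batch of 1, 8x8 feature map, 32 channels split into 4 groups.
x = tf.random.normal([1, 8, 8, 32])
grouped = tf.reshape(x, [1, 8, 8, 4, 8])

# Group norm computes mean/variance per group; the variance step is what
# introduces tf.SquaredDifference into the exported graph.
mean = tf.reduce_mean(grouped, axis=[1, 2, 4], keepdims=True)
variance = tf.reduce_mean(tf.math.squared_difference(grouped, mean),
                          axis=[1, 2, 4], keepdims=True)
normalized = (grouped - mean) / tf.sqrt(variance + 1e-5)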

1 answer

  • answered 2021-01-11 21:57 Meghna Natraj

    Try using TF Select ops (also called Flex ops). Note that the TFLite interpreter you use for inference must also be built with Select TF ops support.
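
    A minimal sketch of how the converter code from the question could be adapted (reusing the same MODEL_DIR, validation_ds, and QUANTIZATION_REPRESENTATIVE_DATASET_SIZE placeholders):

    import tensorflow as tf

    def representative_data_gen():
      for x, _ in validation_ds.take(QUANTIZATION_REPRESENTATIVE_DATASET_SIZE):
        yield [x]

    converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data_gen

    # Let ops without a built-in TFLite kernel (e.g. tf.SquaredDifference)
    # fall back to the TensorFlow (Flex) runtime.
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite ops
        tf.lite.OpsSet.SELECT_TF_OPS,    # Select TF (Flex) ops
    ]

    tflite_model = converter.convert()

    with open('bit_model.tflite', 'wb') as f:
      f.write(tflite_model)

    On mobile, this typically means shipping the Select TF ops runtime alongside the standard TFLite library (for example, the tensorflow-lite-select-tf-ops dependency on Android), which increases binary size.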