Preface
TensorFlow Lite provides all the tools needed to convert TensorFlow models and run them on mobile, embedded, and IoT devices. To deploy a TensorFlow model on such devices, it first has to be converted into a TFLite model.
Implementation
1. The conversion entry points differ slightly depending on the model format
# Converting a SavedModel to a TensorFlow Lite model.
converter = lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Converting a tf.Keras model to a TensorFlow Lite model.
converter = lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Converting ConcreteFunctions to a TensorFlow Lite model.
converter = lite.TFLiteConverter.from_concrete_functions([func])
tflite_model = converter.convert()
2. Complete implementation
import tensorflow as tf

saved_model_dir = './mobilenet/'
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True
tflite_model = converter.convert()
open('model_tflite.tflite', 'wb').write(tflite_model)
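After conversion it is worth sanity-checking the generated file. Below is a minimal sketch (assuming the model_tflite.tflite produced above and a random placeholder input) that loads it with tf.lite.Interpreter and runs one inference:

import numpy as np
import tensorflow as tf

# Load the converted model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path='model_tflite.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print('input :', input_details[0]['shape'], input_details[0]['dtype'])
print('output:', output_details[0]['shape'], output_details[0]['dtype'])

# Feed a dummy tensor matching the input shape and run one inference.
dummy = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
print('result shape:', result.shape)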
Here, from_saved_model has the following signature:
@classmethod
from_saved_model(
    cls,
    saved_model_dir,
    signature_keys=None,
    tags=None
)
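For a SavedModel with several signatures or a non-default tag set, those optional arguments can be passed explicitly. A small sketch; 'serving_default' and the 'serve' tag shown here are just the usual defaults:

import tensorflow as tf

saved_model_dir = './mobilenet/'
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir,
    signature_keys=['serving_default'],  # which SignatureDef(s) to convert
    tags={'serve'})                      # MetaGraphDef tag set to load
tflite_model = converter.convert()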
In addition:
For more complex SavedModels, the optional parameters that can be passed into TFLiteConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.lite.TFLiteConverter). (Note that those parameter names belong to the TF 1.x converter; the 2.x API shown above only takes signature_keys and tags.)
For how to inspect the ops used by a model, see here.
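As a rough alternative, the op types used by a SavedModel can also be listed from Python by walking the graph of its serving ConcreteFunction (a sketch, reusing the saved_model_dir from above):

import tensorflow as tf

saved_model_dir = './mobilenet/'
model = tf.saved_model.load(saved_model_dir)
func = model.signatures['serving_default']

# Collect the distinct op types used by the serving graph.
op_types = sorted({op.type for op in func.graph.get_operations()})
print(op_types)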
Output of help(tf.lite.TFLiteConverter):

Help on class TFLiteConverterV2 in module tensorflow.lite.python.lite:

class TFLiteConverterV2(TFLiteConverterBase)
 |  TFLiteConverterV2(funcs, trackable_obj=None)
 |
 |  Converts a TensorFlow model into TensorFlow Lite model.
 |
 |  Attributes:
 |    allow_custom_ops: Boolean indicating whether to allow custom operations.
 |      When false any unknown operation is an error. When true, custom ops are
 |      created for any op that is unknown. The developer will need to provide
 |      these to the TensorFlow Lite runtime with a custom resolver.
 |      (default False)
 |    target_spec: Experimental flag, subject to change. Specification of target
 |      device.
 |    optimizations: Experimental flag, subject to change. A list of optimizations
 |      to apply when converting the model. E.g. `[Optimize.DEFAULT]
 |    representative_dataset: A representative dataset that can be used to
 |      generate input and output samples for the model. The converter can use the
 |      dataset to evaluate different optimizations.
 |    experimental_enable_mlir_converter: Experimental flag, subject to change.
 |      Enables the MLIR converter instead of the TOCO converter.
 |
 |  Example usage:
 |
 |    ```python
 |    # Converting a SavedModel to a TensorFlow Lite model.
 |    converter = lite.TFLiteConverter.from_saved_model(saved_model_dir)
 |    tflite_model = converter.convert()
 |
 |    # Converting a tf.Keras model to a TensorFlow Lite model.
 |    converter = lite.TFLiteConverter.from_keras_model(model)
 |    tflite_model = converter.convert()
 |
 |    # Converting ConcreteFunctions to a TensorFlow Lite model.
 |    converter = lite.TFLiteConverter.from_concrete_functions([func])
 |    tflite_model = converter.convert()
 |    ```
 |
 |  Method resolution order:
 |      TFLiteConverterV2
 |      TFLiteConverterBase
 |      builtins.object
 |
 |  Methods defined here:
 |
 |  __init__(self, funcs, trackable_obj=None)
 |      Constructor for TFLiteConverter.
 |
 |      Args:
 |        funcs: List of TensorFlow ConcreteFunctions. The list should not contain
 |          duplicate elements.
 |        trackable_obj: tf.AutoTrackable object associated with `funcs`. A
 |          reference to this object needs to be maintained so that Variables do not
 |          get garbage collected since functions have a weak reference to
 |          Variables. This is only required when the tf.AutoTrackable object is not
 |          maintained by the user (e.g. `from_saved_model`).
 |
 |  convert(self)
 |      Converts a TensorFlow GraphDef based on instance variables.
 |
 |      Returns:
 |        The converted data in serialized format.
 |
 |      Raises:
 |        ValueError:
 |          Multiple concrete functions are specified.
 |          Input shape is not specified.
 |          Invalid quantization parameters.
 |
 |  ----------------------------------------------------------------------
 |  Class methods defined here:
 |
 |  from_concrete_functions(funcs) from builtins.type
 |      Creates a TFLiteConverter object from ConcreteFunctions.
 |
 |      Args:
 |        funcs: List of TensorFlow ConcreteFunctions. The list should not contain
 |          duplicate elements.
 |
 |      Returns:
 |        TFLiteConverter object.
 |
 |      Raises:
 |        Invalid input type.
 |
 |  from_keras_model(model) from builtins.type
 |      Creates a TFLiteConverter object from a Keras model.
 |
 |      Args:
 |        model: tf.Keras.Model
 |
 |      Returns:
 |        TFLiteConverter object.
 |
 |  from_saved_model(saved_model_dir, signature_keys=None, tags=None) from builtins.type
 |      Creates a TFLiteConverter object from a SavedModel directory.
 |
 |      Args:
 |        saved_model_dir: SavedModel directory to convert.
 |        signature_keys: List of keys identifying SignatureDef containing inputs
 |          and outputs. Elements should not be duplicated. By default the
 |          `signatures` attribute of the MetaGraphdef is used. (default
 |          saved_model.signatures)
 |        tags: Set of tags identifying the MetaGraphDef within the SavedModel to
 |          analyze. All tags in the tag set must be present. (default set(SERVING))
 |
 |      Returns:
 |        TFLiteConverter object.
 |
 |      Raises:
 |        Invalid signature keys.
 |
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from TFLiteConverterBase:
 |
 |  __dict__
 |      dictionary for instance variables (if defined)
 |
 |  __weakref__
 |      list of weak references to the object (if defined)
Problem:
Converting the MobileNet model produced with tf_saved_model to TFLite succeeds, so why does converting another, custom-designed model fail?
Traceback (most recent call last):
  File "pb2tflite.py", line 9, in <module>
    tflite_model = converter.convert()
  File "~/.local/lib/python3.7/site-packages/tensorflow_core/lite/python/lite.py", line 428, in convert
    "invalid shape '{1}'.".format(_get_tensor_name(tensor), shape_list))
ValueError: None is only supported in the 1st dimension. Tensor 'images' has invalid shape '[None, None, None, None]'.
FaceBoxes model nodes:
(tf_test) ~/workspace/test_code/github_test/faceboxes-tensorflow$ saved_model_cli show --dir model/detector/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['images'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, -1, -1)
        name: serving_default_images:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 100, 4)
        name: StatefulPartitionedCall:0
    outputs['num_boxes'] tensor_info:
        dtype: DT_INT32
        shape: (-1)
        name: StatefulPartitionedCall:1
    outputs['scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 100)
        name: StatefulPartitionedCall:2
  Method name is: tensorflow/serving/predict
MobileNet model nodes:
~/workspace/test_code/github_test/faceboxes-tensorflow/mobilenet$ saved_model_cli show --dir ./ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: serving_default_input_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['act_softmax'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1000)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
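The same shape information can also be read from Python without saved_model_cli; a quick sketch using the detector path from above:

import tensorflow as tf

model = tf.saved_model.load('./model/detector/')
sig = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

# FaceBoxes should report a fully dynamic input, roughly
# TensorSpec(shape=(None, None, None, None), dtype=tf.float32) for 'images',
# while the MobileNet SavedModel has a fixed (None, 224, 224, 3) input.
print(sig.structured_input_signature)
print(sig.structured_outputs)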
A helpful expert pointed out that TFLite runs a static graph, so the H/W/C values of the input must be fixed (thanks!). Comparing the two listings above makes the issue clear: MobileNet's input is (-1, 224, 224, 3), whereas the FaceBoxes detector's input is fully dynamic, (-1, -1, -1, -1). So the next question is: how do we specify H/W/C?
import tensorflow as tf

saved_model_dir = './model/detector/'
model = tf.saved_model.load(saved_model_dir)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 512, 512, 3])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
# converter.experimental_new_converter = True
tflite_model = converter.convert()
open('model_tflite_facebox.tflite', 'wb').write(tflite_model)
This raises the following error:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime.
If those are native TensorFlow operators, you might be able to use the extended runtime by
passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when
calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them
you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True
when calling tf.lite.TFLiteConverter().
Here is a list of builtin operators you are using:
ADD, AVERAGE_POOL_2D, CONCATENATION, CONV_2D, MAXIMUM, MINIMUM, MUL, NEG, PACK, RELU,
RESHAPE, SOFTMAX, STRIDED_SLICE, SUB, UNPACK.
Here is a list of operators for which you will need custom implementations:
TensorListFromTensor, TensorListReserve, TensorListStack, While.
TensorFlow Lite already has many built-in operators, and the set keeps growing, but some TensorFlow operators are still not natively supported by TensorFlow Lite. These unsupported operators create friction when converting a model.
import tensorflow as tf

saved_model_dir = './model/detector/'
model = tf.saved_model.load(saved_model_dir)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 512, 512, 3])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
# converter.experimental_new_converter = True
tflite_model = converter.convert()
open('model_tflite_facebox.tflite', 'wb').write(tflite_model)
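Note that a model converted with SELECT_TF_OPS needs the Flex delegate at inference time: on Android/iOS the select-TF-ops library has to be linked in, while the Python interpreter that ships with the full tensorflow pip package should be able to resolve those ops by itself. A minimal loading check, assuming the conversion above succeeded:

import tensorflow as tf

# Load the Flex-enabled model; with the full tensorflow pip package the
# bundled interpreter can usually resolve the select TF ops as well.
interpreter = tf.lite.Interpreter(model_path='model_tflite_facebox.tflite')
interpreter.allocate_tensors()
print([d['shape'] for d in interpreter.get_input_details()])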
There are still some issues...
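One more thing worth trying for the remaining failure is the MLIR-based converter (the experimental_new_converter flag commented out above); it is meant to handle control-flow constructs such as While and the TensorList ops better than the old TOCO path. A hedged variant of the same script, not verified on this model:

import tensorflow as tf

saved_model_dir = './model/detector/'
model = tf.saved_model.load(saved_model_dir)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 512, 512, 3])

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.experimental_new_converter = True   # use the MLIR converter
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,             # fall back to TF ops when needed
]
tflite_model = converter.convert()
open('model_tflite_facebox.tflite', 'wb').write(tflite_model)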
References
2. stackoverflow_how-to-create-a-tflite-file-from-saved-model-ssd-mobilenet
3. tfv1 model file conversion
5. tf_saved_model
8. ops_select
(End)