Quantizing with Custom Layers - 2.0 English

Vitis AI User Guide (UG1414)


TensorFlow 2 provides many common built-in layers for building machine learning models, as well as easy ways for you to write your own application-specific layers, either from scratch or as a composition of existing layers. `Layer` is one of the central abstractions in tf.keras, and subclassing `Layer` is the recommended way to create custom layers. Refer to the TensorFlow user guide for more information.

vai_q_tensorflow2 supports new custom layers created via subclassing. This includes quantizing models that contain custom layers, as well as experimental support for quantizing the custom layers themselves with custom quantize strategies.

Note: Custom models created by subclassing tf.keras.Model are not supported by vai_q_tensorflow2 in this release. Flatten such models into layers first.

Quantizing models with custom layers

vai_q_tensorflow2 provides interfaces to load the custom layers that some models contain. For example:

import tensorflow as tf
from tensorflow import keras

class MyCustomLayer(keras.layers.Layer):

    def __init__(self, units=32, **kwargs):
        super(MyCustomLayer, self).__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal", trainable=True, name='w')
        self.b = self.add_weight(
            shape=(self.units,), initializer="zeros", trainable=True, name='b')

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

    def get_config(self):
        base_config = super(MyCustomLayer, self).get_config()
        config = {"units": self.units}
        return dict(list(base_config.items()) + list(config.items()))

# Here is a float model with custom layer "MyCustomLayer"; use the custom_objects argument of tf.keras.models.load_model to load it.
float_model = tf.keras.models.load_model('float_model.h5', custom_objects={'MyCustomLayer': MyCustomLayer})

Here, the float model contains a custom layer named "MyCustomLayer", so the custom_objects argument of the tf.keras.models.load_model API is needed to load it. Similarly, the VitisQuantizer class provides a custom_objects argument to handle custom layers, as the following code shows. The custom_objects argument is a dict mapping each custom layer's class name to its class, in the form {"custom_layer_class_name": custom_layer_class}; multiple custom layers are separated by commas. Moreover, set add_shape_info to True in the quantize_model API when quantizing models with custom layers, so that shape inference information is added for them.

from tensorflow_model_optimization.quantization.keras import vitis_quantize
# Register the custom layer to VitisQuantizer by custom_objects argument.
quantizer = vitis_quantize.VitisQuantizer(float_model, custom_objects={'MyCustomLayer': MyCustomLayer})
quantized_model = quantizer.quantize_model(calib_dataset=calib_dataset, calib_step=100, calib_batch_size=10, add_shape_info=True)
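To illustrate the shape of the custom_objects dict with more than one custom layer, consider the following minimal sketch. The classes here are plain stand-ins (in a real model they would subclass tf.keras.layers.Layer), and the second class name is purely illustrative:

```python
# Stand-in classes; in practice these are tf.keras.layers.Layer subclasses.
class MyCustomLayer:
    pass

class MyOtherLayer:  # hypothetical second custom layer, for illustration only
    pass

# custom_objects maps each custom layer's class name (a string) to the class
# object itself. Multiple custom layers are simply additional key/value pairs.
custom_objects = {
    'MyCustomLayer': MyCustomLayer,
    'MyOtherLayer': MyOtherLayer,
}
```

The same dict can be passed unchanged to both tf.keras.models.load_model and VitisQuantizer, so defining it once avoids the two registrations drifting apart.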

During quantization, these custom layers are wrapped by CustomLayerWrapper and kept unquantized. You can find a complete example here.

Note: When calling the dump_model API to dump golden results for data checking during deployment, set dump_float=True to dump float weights and activations for the custom layers, because they are not quantized.

(Experimental) Quantizing custom layers with custom quantize strategy

With the default quantize strategy, custom layers are not quantized and remain float during quantization, because they are not in the list of APIs supported by vai_q_tensorflow2. An interface named custom_quantize_strategy is provided for advanced users to build custom quantize strategies and run quantization experiments. The custom quantize strategy is either a Dict object containing the quantize strategy items or a JSON file of that Dict.

The default quantize strategy provides an example of the quantize strategy format, and the custom quantize strategy follows the same format. Items in the custom quantize strategy override same-named items in the default strategy, while new items are added to the quantize strategy.
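The override-and-extend behavior can be sketched with plain Python dicts. The key names below (user_quantize_config, input_bit, weight_bit) are illustrative placeholders, and the actual merge logic inside vai_q_tensorflow2 may differ in detail; this only demonstrates the semantics described above:

```python
# Illustrative default strategy and custom strategy; the key names are
# placeholders, not necessarily the real vai_q_tensorflow2 config keys.
default_strategy = {
    "quantize_registry_config": {
        "user_quantize_config": {"output_bit": 8},
        "input_bit": 8,
    }
}

custom_strategy = {
    "quantize_registry_config": {
        "user_quantize_config": {"output_bit": 16},  # same item: overrides
        "weight_bit": 8,                             # new item: gets added
    }
}

def merge(default, custom):
    """Recursively overlay the custom strategy onto the default strategy."""
    merged = dict(default)
    for key, value in custom.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)
        else:
            merged[key] = value
    return merged

result = merge(default_strategy, custom_strategy)
# result keeps input_bit=8 from the default, takes output_bit=16 from the
# custom strategy, and gains the new weight_bit=8 item.
```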

With this feature, you can quantize the 'MyCustomLayer' layer from the previous example:

# Define quantizer with custom quantize strategy, which quantizes the w and b weights and output 0 of MyCustomLayer objects.
my_quantize_strategy = {
    "quantize_registry_config": {
        "layer_quantize_config": [{
            "layer_type": "__main__.MyCustomLayer",
            "quantizable_weights": ["w", "b"],
            "weight_quantizers": [
                {"quantizer_type": "LastValueQuantPosQuantizer", "quantizer_params": {"bit_width": 8, "method": 1, "round_mode": 0}},
                {"quantizer_type": "LastValueQuantPosQuantizer", "quantizer_params": {"bit_width": 8, "method": 1, "round_mode": 0}}
            ],
            "quantizable_outputs": ["0"],
            "output_quantizers": [
                {"quantizer_type": "LastValueQuantPosQuantizer", "quantizer_params": {"bit_width": 8, "method": 1, "round_mode": 1}}
            ]
        }]
    }
}
quantizer = vitis_quantize.VitisQuantizer(model, custom_objects={'MyCustomLayer': MyCustomLayer}, custom_quantize_strategy=my_quantize_strategy)

# The rest of the quantization process is the same as before; normal PTQ is shown here as an example
quantized_model = quantizer.quantize_model(calib_dataset=calib_dataset, calib_step=100, calib_batch_size=10)
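Since the custom quantize strategy can also be supplied as a JSON file of the same Dict, one way to prepare such a file is to serialize the in-memory dict with the standard json module. This is a minimal sketch; the file name is illustrative:

```python
import json

# A trimmed-down strategy dict (same format as the full example above).
my_quantize_strategy = {
    "quantize_registry_config": {
        "layer_quantize_config": [{
            "layer_type": "__main__.MyCustomLayer",
            "quantizable_weights": ["w", "b"],
        }]
    }
}

# Write the strategy to a JSON file; the resulting path could then be passed
# as the custom_quantize_strategy argument instead of the in-memory dict.
with open("my_quantize_strategy.json", "w") as f:
    json.dump(my_quantize_strategy, f, indent=2)

# Round-trip check: the file parses back to the same dict.
with open("my_quantize_strategy.json") as f:
    loaded = json.load(f)
```

Keeping the strategy in a file makes it easier to version and reuse across quantization experiments without editing the calling script.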