Generating the Quantized Model - 1.4.1 English

Vitis AI User Guide (UG1414)

Document ID: UG1414
Release Date: 2021-12-13
Version: 1.4.1 English

After the successful execution of the vai_q_tensorflow command, the following output files are generated in the ${output_dir} location:

  • quantize_eval_model.pb is used for evaluation on CPUs/GPUs and can be used to simulate the results on hardware. Run import tensorflow.contrib.decent_q explicitly to register the custom quantize operation, because tensorflow.contrib is now lazily loaded.
Table 1. vai_q_tensorflow Output Files

  No. | Name                   | Description
  1   | deploy_model.pb        | Quantized model for the Vitis AI compiler (extended TensorFlow format), targeting DPUCZDX8G implementations.
  2   | quantize_eval_model.pb | Quantized model for evaluation (also the Vitis AI compiler input for most DPU architectures, such as DPUCAHX8H, DPUCAHX8L, and DPUCADF8H).
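Before handing the results to the Vitis AI compiler, it can be useful to confirm that the quantizer actually produced the files listed in Table 1. The helper below is a minimal sketch (the function name `missing_quantizer_outputs` is hypothetical, not part of Vitis AI); only the two filenames come from this document.

```python
import tempfile
from pathlib import Path

# Expected vai_q_tensorflow outputs, per Table 1.
EXPECTED_OUTPUTS = {"deploy_model.pb", "quantize_eval_model.pb"}

def missing_quantizer_outputs(output_dir):
    """Return the expected output files that are absent from output_dir."""
    present = {p.name for p in Path(output_dir).glob("*.pb")}
    return EXPECTED_OUTPUTS - present

# Example: a directory containing both files passes the check.
with tempfile.TemporaryDirectory() as output_dir:
    for name in EXPECTED_OUTPUTS:
        (Path(output_dir) / name).touch()
    assert missing_quantizer_outputs(output_dir) == set()
```

If the set returned is non-empty, the quantization step did not complete as expected and the compiler invocation should not proceed.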