Caffe Workflow

Vitis AI User Guide (UG1414)

Document ID: UG1414
Release Date: 2021-02-03
Version: 1.3 English

To generate the quantized inference model and reference result, follow these steps:

  1. Generate the quantized inference model by running the following command.
    vai_q_caffe quantize -model float/test_quantize.prototxt \
    -weights float/trainval.caffemodel                       \
    -output_dir quantize_model                               \
    -keep_fixed_neuron                                       \
    2>&1 | tee ./log/quantize.log

    The following files are generated in the quantize_model folder.

    • deploy.caffemodel
    • deploy.prototxt
    • quantize_train_test.caffemodel
    • quantize_train_test.prototxt
  2. Generate the reference result by running the following command.
    DECENT_DEBUG=5 vai_q_caffe test -model quantize_model/dump.prototxt \
    -weights quantize_model/quantize_train_test.caffemodel              \
    -test_iter 1                                                        \
    2>&1 | tee ./log/dump.log

    This creates the dump_gpu folder and files as shown in the following figure.
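
    The dump_gpu folder typically contains the network input (for example, data.bin, which is reused as the input to the DPU runner in step 4) and one .bin reference file per layer output; the exact file names depend on the layer names in your model. A quick way to see what was generated:

    # List the per-layer reference results produced by vai_q_caffe
    ls dump_gpu/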

  3. Generate the DPU xmodel by running the following command.
    vai_c_caffe --prototxt quantize_model/deploy.prototxt       \
    --caffemodel quantize_model/deploy.caffemodel               \
    --arch /opt/vitis_ai/compiler/arch/DPUCAHX8H/U50/arch.json  \
    --output_dir compile_model                                  \
    --net_name resnet50
  4. Generate the DPU inference result by running the following command.
    env XLNX_ENABLE_DUMP=1 XLNX_ENABLE_DEBUG_MODE=1            \
        xilinx_test_dpu_runner ./compile_model/resnet50.xmodel \
        ./dump_gpu/data.bin 2>result.log 1>&2

    For xilinx_test_dpu_runner, the usage is as follows:

    xilinx_test_dpu_runner <model_file> <input_data>

    After the above command runs, the DPU inference results and the comparison log result.log are generated. The DPU inference results are located under the dump folder.
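
    The dump folder is organized into per-subgraph subfolders (for example, dump/subgraph_fc1000/, which is referenced in the next step). To get an overview of what was dumped and to review the comparison log written by the runner:

    # Browse the per-subgraph DPU dumps
    ls dump/
    # Review the comparison log produced by the runner
    less result.log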

  5. Crosscheck the reference result and the DPU inference result.

    The crosscheck mechanism first verifies that the input(s) to a layer are identical to the reference, and then verifies that the output(s) are identical as well. This can be done with commands such as diff, vimdiff, and cmp. If two files are identical, diff and cmp return nothing on the command line.
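
    For example, to compare a reference dump with the corresponding DPU dump (the file names below are placeholders; substitute the actual paths from dump_gpu and dump):

    # cmp prints nothing when the two files are bit-exact
    cmp dump_gpu/<layer_name>.bin dump/<subgraph_name>/output/<dump_file>.bin
    # diff is also silent for identical files and reports a difference otherwise
    diff dump_gpu/<layer_name>.bin dump/<subgraph_name>/output/<dump_file>.bin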

    1. Check the inputs of the DPU and the GPU to make sure they use the same input data, for example with cmp as shown below.
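      The GPU reference input is dump_gpu/data.bin (the same file passed to xilinx_test_dpu_runner in step 4). The corresponding DPU input dump is assumed here to sit under the input folder of the matching subgraph in dump; the exact subgraph and file names are model dependent.
      # Returns nothing if the DPU consumed exactly the same input data as the GPU reference
      # (the DPU input dump path below is an assumption; adjust it to your dump folder)
      cmp dump_gpu/data.bin dump/<first_subgraph_name>/input/<input_dump_file>.bin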
    2. Use the xir tool to generate a picture showing the network structure.
      Usage: xir svg <xmodel> <svg>
      Note: In the Vitis AI docker environment, execute the following command to install the required library.
      sudo apt-get install graphviz

      The following figure shows part of the ResNet50 model structure generated by the xir svg command.
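
      For example, for the xmodel compiled in step 3 (the output SVG file name is arbitrary):
      # Render the compiled graph structure to an SVG image for inspection
      xir svg compile_model/resnet50.xmodel resnet50.svg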

    3. View the xmodel structure image and find the name of the model's last layer.

      For this model, the name of the last layer is `subgraph_fc1000_fixed_(fix2float)`.

      1. Search for the keyword fc1000 under dump_gpu and dump. You will find the reference result file fc1000.bin under dump_gpu and the DPU inference result 0.fc1000_inserted_fix_2.bin under dump/subgraph_fc1000/output/.
      2. Diff the two files (see the example commands after the note below).

        If the last layer's crosscheck fails, crosscheck each layer in order, starting from the first, until you find the layer where the mismatch first occurs.

      Note: For layers that have multiple inputs or outputs (for example, res2a_branch1), check the correctness of the inputs first and then check the outputs.
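
      For example, the two fc1000 result files named above can be located and compared as follows; cmp prints nothing when the DPU result matches the reference:

      # Locate the reference and DPU dumps for the last layer
      find dump_gpu dump -name "*fc1000*"
      # Bit-exact comparison of the reference result and the DPU result
      cmp dump_gpu/fc1000.bin dump/subgraph_fc1000/output/0.fc1000_inserted_fix_2.bin
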
    4. Submit the files to Xilinx if the DPU crosscheck fails.

      If a certain layer proves to be wrong on the DPU, prepare the following files as one package for further analysis by Xilinx and send it with a detailed description of the issue.

      • Float model and prototxt file
      • Quantized model files, including deploy.caffemodel, deploy.prototxt, quantize_train_test.caffemodel, and quantize_train_test.prototxt
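
      For example, assuming the files above are still at the paths used earlier in this workflow (the archive name below is only a placeholder), they can be bundled into a single package with tar:

      # Bundle the float and quantized model files into one archive for analysis
      tar -czvf crosscheck_package.tar.gz                          \
          float/test_quantize.prototxt float/trainval.caffemodel   \
          quantize_model/deploy.prototxt quantize_model/deploy.caffemodel \
          quantize_model/quantize_train_test.prototxt quantize_model/quantize_train_test.caffemodel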