When you use your own models, note that your model framework must be within the scope supported by the Vitis™ AI Library. The following is a step-by-step introduction to deploying a retrained YOLOv3 Caffe model to the ZCU102 platform with the Vitis AI Library.
- Download the corresponding docker image from https://github.com/Xilinx/Vitis-AI.
- Load and run the docker.
- Create a folder on the host side and place the float model in it, then use the AI Quantizer tool to quantize the model. For more details, see Vitis AI User Guide in the Vitis AI User Documentation (UG1431).
- Use the AI Compiler tool to compile the model and generate the ELF file, such as yolov3_custom.elf. For more information, see Vitis AI User Guide in the Vitis AI User Documentation (UG1431).
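The quantize and compile steps above can be sketched as the shell fragment below. The tool names vai_q_caffe and vai_c_caffe are the Vitis AI Caffe quantizer and compiler, but the exact flags and the DPU arch JSON path vary by Vitis AI release, so treat the options and paths here as assumptions and check each tool's --help inside the docker.

```shell
#!/bin/sh
# Sketch: quantize then compile a Caffe model inside the Vitis AI docker.
# Flag names and the arch JSON path are assumptions that vary by release.
NET=yolov3_custom

quantize_model() {
  if command -v vai_q_caffe >/dev/null 2>&1; then
    vai_q_caffe quantize \
      -model "float/$NET.prototxt" \
      -weights "float/$NET.caffemodel" \
      -output_dir quantize_results
  else
    echo "vai_q_caffe not found: run inside the Vitis AI docker"
  fi
}

compile_model() {
  if command -v vai_c_caffe >/dev/null 2>&1; then
    vai_c_caffe \
      --prototxt quantize_results/deploy.prototxt \
      --caffemodel quantize_results/deploy.caffemodel \
      --arch "${ARCH_JSON:-/opt/vitis_ai/compiler/arch/dpuv2/ZCU102/ZCU102.json}" \
      --output_dir compile_results \
      --net_name "$NET"
  else
    echo "vai_c_caffe not found: run inside the Vitis AI docker"
  fi
}

quantize_model
compile_model
```

The guarded `command -v` checks simply make the sketch degrade gracefully when run outside the docker; inside the docker the real tools are invoked.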
- Create the meta.json file, as shown in the following.
{
    "target": "DPUv2",
    "lib": "libvart-dpu-runner.so",
    "filename": "yolov3_custom.elf",
    "kernel": [ "yolov3_custom" ],
    "config_file": "yolov3_custom.prototxt"
}
- Create the yolov3_custom.prototxt file, as shown in the following.
model {
  name: "yolov3_custom"
  kernel {
    name: "yolov3_custom"
    mean: 0.0
    mean: 0.0
    mean: 0.0
    scale: 0.00390625
    scale: 0.00390625
    scale: 0.00390625
  }
  model_type : YOLOv3
  yolo_v3_param {
    num_classes: 20
    anchorCnt: 3
    conf_threshold: 0.3
    nms_threshold: 0.45
    biases: 10
    biases: 13
    biases: 16
    biases: 30
    biases: 33
    biases: 23
    biases: 30
    biases: 61
    biases: 62
    biases: 45
    biases: 59
    biases: 119
    biases: 116
    biases: 90
    biases: 156
    biases: 198
    biases: 373
    biases: 326
    test_mAP: false
  }
}
Note: The <model_name>.prototxt file only takes effect when you use AI Library API_1. When you use AI Library API_2, the model parameters must be loaded and read by the program itself. See ~/Vitis-AI/Vitis-AI-Library/demo/yolov3/demo_yolov3.cpp for details.
- Create the demo_yolov3.cpp file. See ~/Vitis-AI/Vitis-AI-Library/demo/yolov3/demo_yolov3.cpp for reference.
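To illustrate what an API_1 program looks like, here is a minimal sketch modeled on the demo sources. It assumes the Vitis AI Library YOLOv3 interface (vitis::ai::YOLOv3::create and run); verify the header name and the result fields against the release you are using.

```cpp
// Minimal API_1 sketch (assumes the Vitis AI Library YOLOv3 interface;
// check the headers of your release for exact names and fields).
#include <iostream>
#include <opencv2/opencv.hpp>
#include <vitis/ai/yolov3.hpp>

int main(int argc, char* argv[]) {
  if (argc < 3) {
    std::cerr << "usage: " << argv[0] << " <model_name> <image>\n";
    return 1;
  }
  // The model name must match the folder under
  // /usr/share/vitis_ai_library/models (e.g. yolov3_custom); the library
  // then reads <model_name>.prototxt for thresholds, anchors, and so on.
  auto yolo = vitis::ai::YOLOv3::create(argv[1]);
  cv::Mat img = cv::imread(argv[2]);
  auto results = yolo->run(img);
  for (const auto& box : results.bboxes) {
    // Box coordinates are normalized to the image width/height.
    std::cout << "label=" << box.label << " score=" << box.score
              << " x=" << box.x << " y=" << box.y
              << " w=" << box.width << " h=" << box.height << "\n";
  }
  return 0;
}
```

This is the API_1 path: the detection parameters come from the prototxt, so the program itself contains no post-processing configuration.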
- Create a build.sh file as shown below, or copy one from the AI Library demo and modify it.
#!/bin/sh
CXX=${CXX:-g++}
$CXX -std=c++11 -O3 -I. -o demo_yolov3 demo_yolov3.cpp -lopencv_core -lopencv_video -lopencv_videoio -lopencv_imgproc -lopencv_imgcodecs -lopencv_highgui -lglog -lxnnpp-xnnpp -lvitis_ai_library-model_config -lprotobuf -lvitis_ai_library-dpu_task
- Exit the docker tool system and start the docker runtime system.
- Cross compile the program to generate the executable file demo_yolov3.
$sh -x build.sh
- Create the model folder under /usr/share/vitis_ai_library/models on the target side.
#mkdir /usr/share/vitis_ai_library/models/yolov3_custom
Note that /usr/share/vitis_ai_library/models is the default location for the program to read the model. You can also place the model folder in the same directory as the executable program.
- Copy the yolov3_custom.elf, yolov3_custom.prototxt, and meta.json files to the target and put them under /usr/share/vitis_ai_library/models/yolov3_custom.
$scp yolov3_custom.elf yolov3_custom.prototxt meta.json root@IP_OF_BOARD:/usr/share/vitis_ai_library/models/yolov3_custom
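Before copying, it can help to verify that the model folder contains the three files the library looks for. The following is an illustrative local sketch, not part of the official flow: check_model_dir is a hypothetical helper, and the empty placeholder files stand in for your real ones.

```shell
#!/bin/sh
# Sketch: stage the model folder locally and verify that the three files
# the AI Library expects (.elf, .prototxt, meta.json) are all present.
# check_model_dir is a hypothetical helper, not a Vitis AI tool.
NET=yolov3_custom
STAGE="${STAGE:-./stage}/$NET"

mkdir -p "$STAGE"
: > "$STAGE/$NET.elf"        # placeholders; use your real files here
: > "$STAGE/$NET.prototxt"
: > "$STAGE/meta.json"

check_model_dir() {
  dir="$1"; name="$2"; missing=0
  for f in "$name.elf" "$name.prototxt" "meta.json"; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "model folder complete: $dir"
  fi
  return "$missing"
}

check_model_dir "$STAGE" "$NET"
```

The same check can be run on the board against /usr/share/vitis_ai_library/models/yolov3_custom after the scp step.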
- Copy the executable program to the target board using scp.
$scp demo_yolov3 root@IP_OF_BOARD:~/
- Execute the program on the target board. Before running the program, make sure the AI Library is installed on the target board, and prepare the images you want to test.
#./demo_yolov3 yolov3_custom sample.jpg