Legacy DNNDK Examples - 1.1 English

Vitis AI User Guide (UG1414)

Document ID
UG1414
Release Date
2020-03-23
Version
1.1 English

To keep forward compatibility, Vitis AI still supports DNNDK for developing deep learning applications on the edge DPU. The legacy DNNDK C++/Python examples for the ZCU102 and ZCU104 are available at https://github.com/Xilinx/Vitis-AI/tree/master/mpsoc/vitis_ai_dnndk_samples

After downloading the samples from GitHub to the host machine, copy them into the /workspace/vitis_ai/ folder. These examples can be built with the Arm GCC cross-compilation toolchain.

Follow the steps below to set up the host cross-compilation environment for the DNNDK examples:

  1. Download petalinux_sdk.sh from https://www.xilinx.com/bin/public/openDownload?filename=sdk.sh.
  2. Run the command below to install the Arm GCC cross-compilation toolchain:
    ./petalinux_sdk.sh
  3. Run the command below to set up the environment:
    source /opt/petalinux/2019.2/environment-setup-aarch64-xilinx-linux
  4. Download the DNNDK runtime package vitis-ai_v1.1_dnndk.tar.gz from https://github.com/Xilinx/Vitis-AI and install it into the PetaLinux system sysroot:
    tar -xzvf vitis-ai_v1.1_dnndk.tar.gz
    cd vitis-ai_v1.1_dnndk
    sudo ./install.sh $SDKTARGETSYSROOT

The DNNDK runtime package vitis-ai_v1.1_dnndk.tar.gz downloaded in this step also needs to be copied to the ZCU102 or ZCU104 board. Then follow the steps below to set up the environment on the board:

tar -xzvf vitis-ai_v1.1_dnndk.tar.gz
cd vitis-ai_v1.1_dnndk
./install.sh

The following table briefly describes all DNNDK examples.

Table 1. DNNDK Examples
Example Name Models Framework Notes
resnet50 ResNet50 Caffe Image classification with Vitis AI advanced C++ APIs.
resnet50_mt ResNet50 Caffe Multi-threading image classification with Vitis AI advanced C++ APIs.
tf_resnet50 ResNet50 TensorFlow Image classification with Vitis AI advanced Python APIs.
mini_resnet_py Mini-ResNet TensorFlow Image classification with Vitis AI advanced Python APIs.
inception_v1 Inception-v1 Caffe Image classification with Vitis AI advanced C++ APIs.
inception_v1_mt Inception-v1 Caffe Multi-threading image classification with Vitis AI advanced C++ APIs.
inception_v1_mt_py Inception-v1 Caffe Multi-threading image classification with Vitis AI advanced Python APIs.
mobilenet MobileNet Caffe Image classification with Vitis AI advanced C++ APIs.
mobilenet_mt MobileNet Caffe Multi-threading image classification with Vitis AI advanced C++ APIs.
face_detection DenseBox Caffe Face detection with Vitis AI advanced C++ APIs.
pose_detection SSD, Pose detection Caffe Pose detection with Vitis AI advanced C++ APIs.
video_analysis SSD Caffe Traffic detection with Vitis AI advanced C++ APIs.
adas_detection YOLO-v3 Caffe ADAS detection with Vitis AI advanced C++ APIs.
segmentation FPN Caffe Semantic segmentation with Vitis AI advanced C++ APIs.
split_io SSD TensorFlow DPU split IO memory model programming with Vitis AI advanced C++ APIs.
debugging Inception-v1 TensorFlow DPU debugging with Vitis AI advanced C++ APIs.
tf_yolov3_voc_py YOLO-v3 TensorFlow Object detection with Vitis AI advanced Python APIs.

You must follow the descriptions in the following table to prepare several images before running the samples on the evaluation boards.

Table 2. Images Preparation for DNNDK Samples
Image Directory Note
vitis_ai_dnndk_samples/dataset/image500_640_480/ Download several images from the ImageNet dataset and scale them to the same 640×480 resolution.
vitis_ai_dnndk_samples/image_224_224/ Download one image from the ImageNet dataset and scale it to 224×224 resolution.
vitis_ai_dnndk_samples/image_32_32/ Download several images from the CIFAR-10 dataset (https://www.cs.toronto.edu/~kriz/cifar.html).
vitis_ai_dnndk_samples/resnet50_mt/image/ Download one image from the ImageNet dataset.
vitis_ai_dnndk_samples/mobilenet_mt/image/ Download one image from the ImageNet dataset.
vitis_ai_dnndk_samples/inception_v1_mt/image/ Download one image from the ImageNet dataset.
vitis_ai_dnndk_samples/debugging/decent_golden/dataset/images/ Download one image from the ImageNet dataset and save it as cropped_224x224.jpg.
vitis_ai_dnndk_samples/tf_yolov3_voc_py/image/ Download one image from the VOC dataset (http://host.robots.ox.ac.uk/pascal/VOC/) and save it as input.jpg.
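The image scaling described above can be scripted on the host. The following helper is an illustrative sketch, not part of the DNNDK samples; it assumes Pillow is installed, and the source images (from ImageNet, CIFAR-10, or VOC) have already been downloaded to a local directory of your choice.

```python
from pathlib import Path
from PIL import Image

def prepare_images(src_dir, dst_dir, size):
    """Scale every .jpg image in src_dir to `size` and write it to dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    written = []
    for img_path in sorted(Path(src_dir).glob("*.jpg")):
        with Image.open(img_path) as im:
            im.resize(size).save(dst / img_path.name)
        written.append(dst / img_path.name)
    return written

# Example: prepare images for the resnet50 sample (640x480); the source
# directory "downloads/imagenet" is a placeholder for your own downloads.
# prepare_images("downloads/imagenet",
#                "vitis_ai_dnndk_samples/dataset/image500_640_480", (640, 480))
```

The same helper can be reused for the 224×224 and 32×32 directories by changing the `size` argument.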

This section illustrates how to run the DNNDK examples, again using the ZCU102 board as the reference. The samples are located under the /workspace/vitis_ai/vitis_ai_dnndk_samples/ directory. After all the samples are built with the Arm GCC cross-compilation toolchain by running the script ./build.sh zcu102 in each sample's folder, it is recommended to copy the whole /workspace/vitis_ai/vitis_ai_dnndk_samples directory to the ZCU102 board directory /home/root/. Alternatively, you can copy a single DPU hybrid executable from the Docker container to the evaluation board; in that case, the dependent image folder (dataset) or video folder (video) must be copied along with it, and the expected folder structure must be preserved. For the ZCU104 board, run ./build.sh zcu104 for each DNNDK sample instead.

For the sake of simplicity, the directory /home/root/vitis_ai_dnndk_samples/ is replaced by $dnndk_sample_base in the following descriptions.

ResNet-50

$dnndk_sample_base/resnet50 contains an image classification example that uses the Caffe ResNet-50 model. It reads the images under the $dnndk_sample_base/dataset/image500_640_480 directory and outputs the classification result for each input image. Launch it with the ./resnet50 command.

Video Analytics

An object detection example is located under the $dnndk_sample_base/video_analysis directory. It reads image frames from a video file and annotates detected vehicles and pedestrians in real-time. Launch it with the command ./video_analysis video/structure.mp4 (where video/structure.mp4 is the input video file).

ADAS Detection

An example of object detection for ADAS (Advanced Driver Assistance Systems) applications using the YOLO-v3 network model is located under the $dnndk_sample_base/adas_detection directory. It reads image frames from a video file and annotates detected objects in real-time. Launch it with the ./adas_detection video/adas.avi command (where video/adas.avi is the input video file).

Semantic Segmentation

An example of semantic segmentation is located in the $dnndk_sample_base/segmentation directory. It reads image frames from a video file and annotates them in real-time. Launch it with the ./segmentation video/traffic.mp4 command (where video/traffic.mp4 is the input video file).

Inception-v1 with Python

$dnndk_sample_base/inception_v1_mt_py contains a multithreaded image classification example of the Inception-v1 network developed with the advanced Python APIs. With the command python3 inception_v1_mt.py 4, it runs with four threads. The throughput (in fps) is reported after it completes.
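The multi-thread throughput measurement pattern used here (N worker threads each processing a batch of frames, fps = total frames / wall-clock time) can be sketched generically. The run_one_frame workload below is a hypothetical stand-in; the real example runs DPU tasks through the DNNDK APIs instead.

```python
import threading
import time

def measure_fps(workload, thread_num, frames_per_thread):
    """Run `workload` frames_per_thread times in each of thread_num threads
    and return the overall throughput in frames per second."""
    def worker():
        for _ in range(frames_per_thread):
            workload()

    threads = [threading.Thread(target=worker) for _ in range(thread_num)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    return (thread_num * frames_per_thread) / elapsed

# Stand-in workload simulating one frame of inference work.
def run_one_frame():
    time.sleep(0.001)

fps = measure_fps(run_one_frame, thread_num=4, frames_per_thread=50)
print(f"throughput: {fps:.1f} fps")
```

Varying `thread_num` and re-measuring is exactly how you would find the best thread count for a given board, as the note below explains.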

Unlike the C++ examples, the Inception-v1 model is first compiled to a DPU ELF file and then transformed into the DPU shared library libdpumodelinception_v1.so with the following command on the evaluation board. The wildcard dpu_inception_v1_*.elf includes all DPU ELF files, in case Inception-v1 is compiled into several DPU kernels by the VAI_C compiler. Refer to the DPU Shared Library section for more details.

aarch64-xilinx-linux-gcc -fPIC -shared \
dpu_inception_v1_*.elf -o libdpumodelinception_v1.so

Within the Vitis AI cross-compilation environment on the host, use the following command for this purpose instead.

source /opt/petalinux/2019.2/environment-setup-aarch64-xilinx-linux
aarch64-xilinx-linux-gcc \
--sysroot=$SDKTARGETSYSROOT \
-fPIC -shared dpu_inception_v1_*.elf -o libdpumodelinception_v1.so
Note: The thread number that yields the best throughput for the multithreaded Inception-v1 example varies among evaluation boards because they are equipped with different DPU compute power and core counts. Use dexplorer -w to view the DPU signature information for each evaluation board.

miniResNet with Python

$dnndk_sample_base/mini_resnet_py contains the image classification example of the TensorFlow miniResNet network developed with Vitis AI advanced Python APIs. With the command python3 mini_resnet.py, the top-5 labels and their corresponding probabilities are displayed. miniResNet is described in the Practitioner Bundle, the second book of the Deep Learning for CV with Python series. It is a customization of the original ResNet-50 model and is also well explained in the ImageNet Bundle, the third book of the same series.
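The top-5 reporting step can be illustrated with a small helper in plain Python, independent of the DNNDK APIs: given a probability vector and a matching label list, it returns the five most likely labels. The labels and probabilities below are made-up illustration data.

```python
def top_k(probs, labels, k=5):
    """Return the k (label, probability) pairs with the highest probability."""
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Illustration only: a 6-class probability vector and its labels.
probs = [0.02, 0.50, 0.10, 0.08, 0.25, 0.05]
labels = ["cat", "dog", "car", "tree", "bird", "fish"]
for label, p in top_k(probs, labels):
    print(f"{label}: {p:.2f}")
# prints dog, bird, car, tree, fish in descending probability order
```

In the real sample, `probs` would be the softmax output of the miniResNet model and `labels` the class names of the dataset.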

YOLO-v3 with Python

$dnndk_sample_base/tf_yolov3_voc_py contains the object detection example of the TensorFlow YOLO-v3 network developed with Vitis AI advanced Python APIs. With the command python3 tf_yolov3_voc.py, the resulting image after object detection is displayed.
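Annotating the result image with detection boxes can be sketched with Pillow. This is an illustrative snippet, not the sample's actual code: the box coordinates below are hypothetical, whereas the real boxes come from YOLO-v3 post-processing of the DPU output.

```python
from PIL import Image, ImageDraw

def annotate(image, boxes):
    """Draw one red rectangle per (left, top, right, bottom) detection box."""
    draw = ImageDraw.Draw(image)
    for box in boxes:
        draw.rectangle(box, outline=(255, 0, 0), width=2)
    return image

# Hypothetical detection output on a blank 416x416 canvas.
img = Image.new("RGB", (416, 416), "black")
annotate(img, [(50, 60, 200, 180)])
# The sample would display or save this annotated image.
```

The real example would load input.jpg, run inference, and pass the decoded bounding boxes to a routine like this before displaying the result.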