Before running the Vitis™ AI Library examples on Edge or on Cloud, download vitis_ai_library_r1.1_image.tar.gz and vitis_ai_library_r1.1_video.tar.gz from https://github.com/Xilinx/Vitis-AI. The images and videos used in the following examples can be found in these two packages.
For Edge
There are two ways to compile a program: cross-compile it on the host, or compile it directly on the target board. Both methods have advantages and disadvantages. In this section, we compile and run the examples directly on the target board.
- Copy the sample and demo directories from the host to the target using scp with the following command.
$scp -r ~/Vitis-AI/Vitis-AI-Library/overview root@IP_OF_BOARD:~/
- Enter the directory of an example on the target board and then compile it. Take facedetect as an example.
#cd ~/overview/samples/facedetect
- Run the example.
#./test_jpeg_facedetect densebox_640_360 sample_facedetect.jpg
If the executable program does not exist, run the following command to compile and generate it; a sketch of what the sample does internally is shown after this step.
#bash -x build.sh
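For reference, the following is a minimal sketch of what a JPEG face-detection sample roughly does with the Vitis AI Library C++ API. The model name and output file name match the command above; the exact structure of the shipped test_jpeg_facedetect source may differ, and the treatment of the result rectangles as coordinates relative to the image size is an assumption based on the FaceDetect result description.
// Hypothetical sketch of a JPEG face-detection sample (not the shipped source).
#include <opencv2/opencv.hpp>
#include <vitis/ai/facedetect.hpp>

int main(int argc, char* argv[]) {
  // argv[1]: model name, e.g. densebox_640_360; argv[2]: input image file.
  auto image = cv::imread(argv[2]);
  auto det = vitis::ai::FaceDetect::create(argv[1]);
  auto result = det->run(image);  // run the DPU model on one image
  for (const auto& r : result.rects) {
    // Assumption: rects hold coordinates relative to the input image size.
    cv::rectangle(image,
                  cv::Rect(static_cast<int>(r.x * image.cols),
                           static_cast<int>(r.y * image.rows),
                           static_cast<int>(r.width * image.cols),
                           static_cast<int>(r.height * image.rows)),
                  cv::Scalar(0, 255, 0), 2);
  }
  cv::imwrite("sample_facedetect_result.jpg", image);
  return 0;
}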
- View the running results.
There are two ways to view the results. One is to read the information printed to the terminal; the other is to download the output image, sample_facedetect_result.jpg, and view it, as shown in the following figure.
Figure 1. Face Detection Example
- To run the video example, run the following command:
#./test_video_facedetect densebox_640_360 video_input.mp4 -t 8
video_input.mp4: the name of the input video file. You need to prepare the video file yourself.
-t: <num_of_threads>
- To test the program with a USB camera as input, run the following command (see the sketch after the following note for how a file name or camera index is opened):
#./test_video_facedetect densebox_640_360 0 -t 8
0: the first USB camera device node. If you have multiple USB cameras, the value might be 1, 2, 3, and so on.
-t: <num_of_threads>
Important: All the video examples require a Linux windowing system to work properly. When logging in to the board using an SSH terminal, enable X11 forwarding with the following command (assuming in this example that the host machine IP address is 192.168.0.10):
#export DISPLAY=192.168.0.10:0.0
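The video and USB-camera invocations above differ only in their input argument. The following single-threaded sketch (an illustration only, not the shipped multi-threaded test_video_facedetect) shows how such an argument can be opened either as a camera index or as a file name with OpenCV.
// Hypothetical single-threaded sketch; the shipped sample uses a
// multi-threaded pipeline selected with -t, which is not reproduced here.
#include <cctype>
#include <string>
#include <opencv2/opencv.hpp>
#include <vitis/ai/facedetect.hpp>

int main(int argc, char* argv[]) {
  std::string input = argc > 1 ? argv[1] : "0";
  cv::VideoCapture cap;
  if (input.size() == 1 && std::isdigit(static_cast<unsigned char>(input[0])))
    cap.open(input[0] - '0');          // "0", "1", ... : USB camera device node
  else
    cap.open(input);                   // otherwise: a video file such as video_input.mp4
  auto det = vitis::ai::FaceDetect::create("densebox_640_360");
  cv::Mat frame;
  while (cap.read(frame)) {
    auto result = det->run(frame);
    for (const auto& r : result.rects) {
      cv::rectangle(frame,
                    cv::Rect(static_cast<int>(r.x * frame.cols),
                             static_cast<int>(r.y * frame.rows),
                             static_cast<int>(r.width * frame.cols),
                             static_cast<int>(r.height * frame.rows)),
                    cv::Scalar(0, 255, 0), 2);
    }
    cv::imshow("facedetect", frame);   // requires a working X11 display (see the note above)
    if (cv::waitKey(1) == 27) break;   // press Esc to quit
  }
  return 0;
}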
- To test the performance of the model, run the following command:
#./test_performance_facedetect densebox_640_360 test_performance_facedetect.list -t 8 -s 60
-t: <num_of_threads>
-s: <num_of_seconds>
For more parameter information, run the program with -h. The following image shows the result of the performance test with 8 threads.
Figure 2. Face Detection Performance Test Result
- To check the version of the Vitis AI Library, run the following command:
#vitis_ai
- To run the demo, refer to Application Demos.
For Cloud
- Load and run the Docker container.
$./docker_run.sh -X xilinx/vitis-ai-cpu:<x.y.z>
- Enter the directory of the sample and then compile it. Take facedetect as an example.
$cd /workspace/vitis-ai/vitis_ai_library/overview/samples/facedetect
$bash -x build.sh
- Run the sample.
$./test_jpeg_facedetect densebox_640_360 sample_facedetect.jpg
- If you want to run the program in batch mode, in which the DPU processes multiple images at once to improve processing performance, you have to compile the entire Vitis AI Library as described in the "Setting Up the Host" section. The batch programs are then generated under build_dir_default. Enter build_dir_default and, taking facedetect as an example, execute the following command (a hedged sketch of batch processing at the API level follows the command).
$./test_facedetect_batch densebox_640_360 <img1_url> [<img2_url> ...]
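As an illustration of what batch mode means at the API level, the sketch below feeds several images to the detector in one call. It assumes a std::vector<cv::Mat> overload of run() that returns one result per input image; whether your installed version provides this overload should be verified against the headers, so treat this strictly as an assumption.
// Hypothetical batch-mode sketch; assumes FaceDetect::run() accepts a
// std::vector<cv::Mat> and returns one result per input image.
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <vitis/ai/facedetect.hpp>

int main(int argc, char* argv[]) {
  // argv[1]: model name; argv[2..]: image files, as in test_facedetect_batch.
  auto det = vitis::ai::FaceDetect::create(argv[1]);
  std::vector<cv::Mat> images;
  for (int i = 2; i < argc; ++i) images.push_back(cv::imread(argv[i]));
  auto results = det->run(images);  // the DPU processes the batch at once
  for (size_t i = 0; i < results.size(); ++i) {
    std::cout << argv[i + 2] << ": " << results[i].rects.size()
              << " face(s) detected" << std::endl;
  }
  return 0;
}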
- To run the video example, run the following command:
$./test_video_facedetect densebox_640_360 video_input.mp4 -t 8
video_input.mp4: the name of the input video file. You need to prepare the video file yourself.
-t: <num_of_threads>
- To test the performance of the model, run the following command:
#./test_performance_facedetect densebox_640_360 test_performance_facedetect.list -t 8 -s 60
-t: <num_of_threads>
-s: <num_of_seconds>
For more parameter information, run the program with -h. Note that the performance test program automatically runs in batch mode. A simplified sketch of how such a multi-threaded throughput test can be structured is shown below.
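For orientation only, the following sketch shows one way a throughput test along the lines of -t and -s can be structured: each worker thread owns its own runner and calls run() in a loop, and the main thread reports frames per second after the time limit. This is not the shipped test_performance program, and the single test image stands in for the image list the real test reads.
// Hypothetical throughput-test sketch, not the shipped test_performance program.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>
#include <opencv2/opencv.hpp>
#include <vitis/ai/facedetect.hpp>

int main() {
  const int num_threads = 8;   // corresponds to -t
  const int num_seconds = 60;  // corresponds to -s
  cv::Mat image = cv::imread("sample_facedetect.jpg");

  std::atomic<long> frames{0};
  std::atomic<bool> stop{false};
  std::vector<std::thread> workers;
  for (int i = 0; i < num_threads; ++i) {
    workers.emplace_back([&] {
      // Each worker creates its own runner so inferences can proceed in parallel.
      auto det = vitis::ai::FaceDetect::create("densebox_640_360");
      while (!stop) {
        det->run(image);
        ++frames;
      }
    });
  }
  std::this_thread::sleep_for(std::chrono::seconds(num_seconds));
  stop = true;
  for (auto& w : workers) w.join();
  std::cout << "FPS = " << static_cast<double>(frames) / num_seconds << std::endl;
  return 0;
}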