To run an example for the Alveo U200 and U250 Data Center Accelerator
Cards, use these steps:
- Load and run the Docker container.
$./docker_run.sh -X xilinx/vitis-ai-cpu:<x.y.z>
- Download and untar the model package (vai_lib_u200_u250_models.tar.gz); a quick check of the extracted files is sketched after these steps.
$cd /workspace/Vitis-AI-Library
$wget -O vai_lib_u200_u250_models.tar.gz https://www.xilinx.com/bin/public/openDownload?filename=vai_lib_u200_u250_models.tar.gz
$sudo tar -xvf vai_lib_u200_u250_models.tar.gz --absolute-names
Note: All models are downloaded to the /usr/share/vitis_ai_library/models directory. Currently supported networks are classification, facedetect, facelandmark, reid, and yolov3.
- To download a minimal validation set for Imagenet2012 using Collective Knowledge (CK), refer to the Alveo examples.
- Set up the environment (an expanded example is shown after these steps).
$source /workspace/alveo/overlaybins/setup.sh
$export LD_LIBRARY_PATH=$HOME/.local/${target_info}/lib/:$LD_LIBRARY_PATH
- Make sure to compile the entire Vitis AI Library according to the For Cloud (Alveo U50LV/U55C Cards, Versal VCK5000 Card) section, then run the classification image test example (fully concrete invocations of the two test programs are sketched after these steps).
$HOME/build/build.${target_info}/${project_name}/test_classification <model_dir> <img_path>
For example:
$~/build/build.Ubuntu.18.04.x86_64.Release/Vitis-AI-Library/classification/test_classification inception_v1 <img_path>
- Run the classification accuracy test example.
$HOME/build/build.${target_info}/${project_name}/test_classification_accuracy <model_dir> <img_dir_path> <output_file>
For example:
$~/build/build.Ubuntu.18.04.x86_64.Release/Vitis-AI-Library/classification/test_classification_accuracy inception_v1 <img_dir_path> <output_file>
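After the model package has been extracted, a quick sanity check is to list the models directory mentioned in the note above. This is only a verification sketch; the exact set of model subdirectories depends on the package release.
$ls /usr/share/vitis_ai_library/models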
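As a concrete illustration of the environment setup step, assume ${target_info} expands to Ubuntu.18.04.x86_64.Release, the value that appears in the example build paths above; your build may use a different target string.
$source /workspace/alveo/overlaybins/setup.sh
$export LD_LIBRARY_PATH=$HOME/.local/Ubuntu.18.04.x86_64.Release/lib/:$LD_LIBRARY_PATH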
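A fully concrete invocation of the classification image test might look as follows, assuming a hypothetical test image named sample.jpg in the current directory and the inception_v1 model from the downloaded package (the image name is illustrative, not part of the package).
$~/build/build.Ubuntu.18.04.x86_64.Release/Vitis-AI-Library/classification/test_classification inception_v1 sample.jpg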
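Similarly, the accuracy test takes a directory of validation images and a file to which results are written. Assuming a hypothetical directory ~/imagenet_val populated from the Imagenet2012 minimal validation set mentioned above, and an output file named result.txt (both names are illustrative):
$~/build/build.Ubuntu.18.04.x86_64.Release/Vitis-AI-Library/classification/test_classification_accuracy inception_v1 ~/imagenet_val result.txt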