Getting Started with Vitis AI Profiler - 1.4 English

Vitis AI User Guide (UG1414)

  • System Requirements
    • Hardware
      1. MPSoC (DPUCZDX8G) supported
      2. Alveo (DPUCAHX8H) supported
    • Software
      1. VART v1.2 or later
      2. Vitis AI Library v1.2 or later
  • Installing
    • Deploy the web server (Vitis AI Profiler) on a PC or local server
      1. Clone the Vitis AI Profiler project from the GitHub repository, then enter its directory.
        # cd /path-to-vitis-ai/Vitis-AI-Profiler
      2. Requirements
        • Python 3.6+
        • Flask, which you can install with:
          # pip3 install --user flask
      3. Start the web server on the PC or a local network server.
        # python3 [--ip []] [--port [8008]]

        By default (run with no arguments) the server listens on port 8008, so you can access the web page at http://localhost:8008.
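Before starting the server, you can sanity-check the two requirements listed above. A minimal sketch (the Flask check only reports whether the package is importable, it does not fail):

```shell
# Check that the Python interpreter is 3.6+ (required by the profiler server)
python3 -c 'import sys; assert sys.version_info >= (3, 6), sys.version'

# Report whether Flask is installed, without aborting if it is missing
python3 -c 'import importlib.util as u; print("flask ok" if u.find_spec("flask") else "flask missing: pip3 install --user flask")'
```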

    • Preparing the debug environment for vaitrace on the MPSoC platform
      1. Configure and Build Petalinux
        • Run petalinux-config -c kernel and enable the following options for the Linux kernel:
          General architecture-dependent options ---> [*] Kprobes
          Kernel hacking  ---> [*] Tracers  --->
          				[*]   Kernel Function Tracer
          				[*]   Enable kprobes-based dynamic events
          				[*]   Enable uprobes-based dynamic events
        • Run petalinux-config -c rootfs and enable the following for the root filesystem:
          user-packages  --->  modules   --->
          				[*]   packagegroup-petalinux-self-hosted
        • Run petalinux-build
      2. Install the VART runtime; vaitrace will be installed to /usr/bin/xlnx/vaitrace.
      3. Create a symbolic link
        # ln -s /usr/bin/xlnx/vaitrace/ /usr/bin/vaitrace
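The kernel menuconfig selections above correspond roughly to the following kernel .config entries (a sketch; exact symbol names can vary between kernel versions):

```
CONFIG_KPROBES=y
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
CONFIG_KPROBE_EVENTS=y
CONFIG_UPROBE_EVENTS=y
```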
  • Starting trace with vaitrace:

    Use the “test_performance” program of Vitis AI Library’s yolov3 sample as an example

    • Download and Install Vitis AI Library
    • Enter the directory of the yolov3 sample
      # cd /usr/share/vitis_ai_library/samples/yolov3/
    • Start testing and tracing

      See the README file; the test command is:

      # vaitrace -t 5 -o ~/trace_yolo_v3.xat  ./test_performance_yolov3 yolov3_voc ./test_performance_yolov3.list
      1. The first argument [-t 5] traces for 5 seconds.
      2. The second argument [-o ~/trace_yolo_v3.xat] saves the tracing data to the home directory as trace_yolo_v3.xat. If not specified, the tracing data file (.xat) is saved in the same directory as the executable.
      3. Copy the .xat file from the home directory to your PC (via scp or NFS).
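The steps above can be sketched as one shell sequence. DRY_RUN=echo previews the commands without running them; unset it on the target board to execute for real. The scp destination user@host is a placeholder for the PC running the profiler server:

```shell
# Preview mode: prefix every command with echo; unset DRY_RUN on the real board.
DRY_RUN=echo

$DRY_RUN cd /usr/share/vitis_ai_library/samples/yolov3/
# Trace for 5 seconds and write the trace data to ~/trace_yolo_v3.xat
$DRY_RUN vaitrace -t 5 -o ~/trace_yolo_v3.xat \
    ./test_performance_yolov3 yolov3_voc ./test_performance_yolov3.list
# Copy the .xat file to the PC for upload into the profiler web page
$DRY_RUN scp ~/trace_yolo_v3.xat user@host:~/
```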
  • Upload and Get Result in Vitis AI Profiler
    • Open the Vitis AI Profiler in your browser (by default, http://localhost:8008).
    • Click the 'Browse' button to select the .xat file, then click 'Upload'.
      Figure 1. .xat File Upload Page
    • There are four parts on the Profiler UI:
      • Timeline: The timeline is categorized by hardware component, so users can judge hardware utilization at a single glance. All Vitis AI Library related tasks are highlighted; for efficiency, most other processes are filtered out while tracing, so the trace information for those other threads may be inaccurate.

        For CPU tasks in the timeline, each color indicates a different thread; clicking a color block in the timeline jumps to and shows its details in the tree-grid table below.

      • Throughput: Shows the real-time Vitis AI Library inference FPS.
      • AXI traffic: Shows the total AXI read and write bandwidth; all six AXI ports are included for Zynq MPSoC. Not yet supported for the cloud solution.
      • Hardware Information: Shows CPU/memory/CU related information.