Event Profile APIs for Graph Inputs and Outputs - 2024.1 English

AI Engine Tools and Flows User Guide (UG1076)

Document ID: UG1076
Release Date: 2024-06-27
Version: 2024.1 English
You can collect profile statistics of your design by calling event APIs in your PS host code. These event APIs are available both during simulation and when you run the design in hardware.
Note: The event APIs introduced in this section are not compatible with the profiling methods described in the previous section. The event APIs are inserted in the host code, and the designer is responsible for controlling when profiling starts and stops.

The AI Engine has hardware performance counters and can be configured to count hardware events for measuring performance metrics. You can use the event API together with the graph control API to profile certain performance metrics during a controlled period of graph execution. The event API supports only platform I/O ports (PLIO & GMIO) to measure performance metrics such as platform I/O port bandwidth, graph throughput, and graph latency.
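For example, a graph throughput measurement can be taken over a controlled run of the graph. The following host-code sketch assumes a graph instance gr with a PLIO output port out, an example iteration count, and an example output size; these names and values are placeholders for illustration, not part of a specific example design. The io_stream_start_to_bytes_transferred_cycles option counts cycles from the first output sample until the specified number of bytes has been transferred, from which throughput can be derived.

#include "graph.h"   // assumed to declare the user graph class and the instance "gr" (placeholder)
#include <cstdio>

int main(void) {
  const int      ITERATIONS = 100;                  // example iteration count
  const unsigned OUT_BYTES  = ITERATIONS * 256 * 4; // 256 int32 output samples per iteration (example)

  gr.init();

  // Count cycles from the first output sample until OUT_BYTES bytes have passed
  // through the platform I/O port gr.out (placeholder port name).
  event::handle handle = event::start_profiling(
      gr.out, event::io_stream_start_to_bytes_transferred_cycles, OUT_BYTES);

  gr.run(ITERATIONS);
  gr.wait();

  long long cycles = event::read_profiling(handle); // read the performance counter
  event::stop_profiling(handle);                    // release the performance counter

  // Derive throughput; the cycle-to-time conversion depends on the AI Engine clock
  // frequency of the target device (1 GHz assumed here for illustration).
  double throughput_MBps = (double)OUT_BYTES / (cycles * 1e-3);
  printf("Output throughput: %f MB/s\n", throughput_MBps);

  gr.end();
  return 0;
}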

The event API tracks events that occur on the stream switches of the nets that cross the AI Engine - PL interfaces. These events include idle, running, and stall, as shown in the following figure.

Figure 1. Events on Nets
  • When no data is passing through the stream switch, the stream switch is in an idle state.
  • When data is passing through the stream switch, the stream switch is in a running state.
  • When all the FIFOs on the net are full, the stream switch is in a stall state.
  • When the data transfer resumes, the stream switch returns to a running state.

The following figure shows an example graph in which data is sent from the mm2s PL kernel to the AI Engine, and from the AI Engine to the s2mm PL kernel.

Figure 2. Example Graph
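A graph with this topology could be declared as follows. This is only an illustrative sketch: the kernel my_kernel, the PLIO names, the data files, and the header kernels.h are placeholders, not the exact kernels shown in the figure.

#include <adf.h>
#include "kernels.h"   // assumed to declare my_kernel (placeholder)
using namespace adf;

class example_graph : public graph {
public:
  input_plio  in;    // fed by the mm2s PL kernel through the AI Engine - PL interface
  output_plio out;   // drives the s2mm PL kernel through the AI Engine - PL interface
  kernel k;

  example_graph() {
    in  = input_plio::create("DataIn",  plio_32_bits, "data/input.txt");
    out = output_plio::create("DataOut", plio_32_bits, "data/output.txt");

    k = kernel::create(my_kernel);
    connect(in.out[0], k.in[0]);   // net crossing the PL -> AI Engine interface
    connect(k.out[0], out.in[0]);  // net crossing the AI Engine -> PL interface

    source(k) = "my_kernel.cc";
    runtime<ratio>(k) = 0.9;
  }
};

example_graph gr;   // graph instance used in the host-code sketches in this section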

Different ports can be routed through the same AI Engine - PL interface column, in which case they share the performance counters in that interface column. You can check the array view in the Vitis IDE to see which columns the ports are routed through. The following figure shows the array view of the example above; the stream switches circled in red are where the event API monitors events.

Figure 3. Example Array View
Note:
  • The input buffer buf0 in the AI Engine is ready to accept data from the mm2s PL kernel after the graph has been initialized. As soon as the mm2s PL kernel starts, it sequentially fills the PING-PONG buffers and the FIFOs inside the stream switch connected to buf0. The data transported to and from these buffers does not depend on graph.run().
  • Each column of the AI Engine - PL interface has two performance counters. Because the number of performance counters is limited, call event::stop_profiling() to release them when a measurement is complete.
  • There is some overhead when calling the graph and profiling APIs. The profiling results are read using event::read_profiling(), and they can vary if the performance counters are not stopped before event::read_profiling() is called. A sketch of this read-then-stop sequence follows this note.
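For example, graph latency can be measured as the cycle difference between the first input sample and the first output sample, with the counters released as soon as the result is read. The port names and iteration count below follow the placeholder graph and sketch above; io_stream_start_difference_cycles is the profiling option assumed for this measurement.

// Count the cycles between the first sample on gr.in and the first sample on gr.out.
event::handle lat_handle = event::start_profiling(
    gr.in, gr.out, event::io_stream_start_difference_cycles);

gr.run(ITERATIONS);   // same iteration count as in the earlier sketch
gr.wait();

long long latency_cycles = event::read_profiling(lat_handle);
event::stop_profiling(lat_handle);  // release the performance counters for other measurements

printf("First-in to first-out latency: %lld cycles\n", latency_cycles);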
Important: The ADF event APIs can be used in the AI Engine simulation, hardware emulation, and hardware flows. In the hardware emulation and hardware flows, adf::registerXRT() is required before using the event profile ADF APIs. For example:
#include "adf/adf_api/XRTConfig.h"
......
auto device = xrt::device(0); //device index=0
auto uuid = device.load_xclbin(xclbinFilename);
auto dhdl = xrtDeviceOpenFromXcl(device);
adf::registerXRT(dhdl, uuid.get());
event::handle handle = event::start_profiling(......);
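After adf::registerXRT(), the profiling handle is used in the same way as in simulation. A possible continuation of the snippet, reusing the placeholder graph instance and iteration count from the earlier sketches, is:

gr.run(ITERATIONS);
gr.wait();
long long cycles = event::read_profiling(handle);
event::stop_profiling(handle);   // release the performance counter
xrtDeviceClose(dhdl);            // close the device handle when profiling is done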