Buffer vs. Stream in Data Communication - 2024.1 English

AI Engine-ML Kernel and Graph Programming Guide (UG1603)


AI Engine kernels in the data flow graph operate on data streams: infinitely long sequences of typed values. These streams can be broken into separate blocks called buffers and processed by a kernel, which consumes input blocks of data and produces output blocks of data. An initialization function can be specified to run before the kernel starts processing input data. The kernel can read scalars or vectors from memory; however, the valid vector length for each read and write operation must be either 128 or 256 bits. Input and output data buffers are locked for kernels before they are executed. Because the input data buffer must be filled with input data before the kernel starts, a buffer interface increases latency compared to a stream interface. The kernel can perform random access within a buffer of data, and a buffer margin can be specified for algorithms that require some number of bytes from the previous block of samples.
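As a sketch of the buffer style of access described above (kernel name, data type, and buffer size are illustrative, not from this guide), a buffer-based kernel written with the AIE API might look like the following. It requires the AMD Vitis AI Engine toolchain to compile.

```cpp
#include <adf.h>
#include <aie_api/aie.hpp>

// Illustrative kernel: adds 1 to every sample of a 256-sample int32 buffer.
// The vector iterators perform 8 x int32 = 256-bit loads and stores, one of
// the legal vector access widths for buffer reads and writes.
void add_one(adf::input_buffer<int32>& in, adf::output_buffer<int32>& out)
{
    auto in_it  = aie::begin_vector<8>(in);   // 256-bit vector reads
    auto out_it = aie::begin_vector<8>(out);  // 256-bit vector writes
    for (unsigned i = 0; i < 256 / 8; ++i) {
        aie::vector<int32, 8> v = *in_it++;
        *out_it++ = aie::add(v, (int32)1);    // elementwise vector + scalar
    }
}
```

The buffer dimensions themselves are set on the kernel ports in the graph code, so the same kernel body can service different block sizes.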

Kernels can also access the data streams in a sample-by-sample fashion. Streams are used for continuous data and are read and written with blocking or non-blocking calls; the cascade stream supports only blocking access. The AI Engine-ML supports one 32-bit stream input port and one 32-bit stream output port. The valid vector length for reading or writing data streams must be either 32 or 128 bits (four consecutive stream read instructions in the latter case). Packet streams are useful when the number of independent data streams in the program exceeds the number of hardware stream channels or ports available.
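A minimal stream-based counterpart (kernel name and sample count are placeholders; compiles only with the Vitis AI Engine toolchain) could be sketched as:

```cpp
#include <adf.h>
#include <aie_api/aie.hpp>

// Illustrative stream kernel: reads int32 samples one at a time and writes a
// running sum. readincr/writeincr are blocking, which is how back pressure
// propagates through stream connections.
void running_sum(input_stream<int32>* in, output_stream<int32>* out)
{
    int32 acc = 0;
    for (unsigned i = 0; i < 256; ++i) {
        acc += readincr(in);   // 32-bit blocking stream read
        writeincr(out, acc);   // 32-bit blocking stream write
    }
    // For 128-bit accesses, readincr_v<4>(in) returns an aie::vector<int32, 4>
    // assembled from four consecutive 32-bit stream reads.
}
```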

A PLIO port attribute is used to make external stream connections that cross the AI Engine-ML to PL boundary. The PLIO port can be connected to an AI Engine-ML buffer via DMA S2MM or MM2S channels, or directly to AI Engine-ML stream interfaces. In the case of a direct connection to the AI Engine, the bandwidth is limited to 32 bits per cycle, but latency is minimal. Because both the S2MM and MM2S DMAs have two channels, the DMA path doubles the bandwidth compared with a direct connection to the AI Engine. However, the ping or pong buffer must be filled before the kernel can start, so buffer interfaces from/to the PL usually have a larger latency than a stream interface when both fit the design architecture.
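A hedged sketch of a graph that makes such PLIO connections (file names, PLIO widths, kernel name, and buffer size are all placeholders; the buffer ports imply the DMA path described above):

```cpp
#include <adf.h>
using namespace adf;

// Placeholder kernel with buffer ports (body defined elsewhere).
void add_one(input_buffer<int32>& in, output_buffer<int32>& out);

class SimpleGraph : public graph {
public:
    input_plio  in;
    output_plio out;
    kernel      k;

    SimpleGraph() {
        // 64-bit PLIO ports; the text files supply/capture simulation data.
        in  = input_plio::create("DataIn",  plio_64_bits, "data/input.txt");
        out = output_plio::create("DataOut", plio_64_bits, "data/output.txt");

        k = kernel::create(add_one);
        source(k) = "kernels/add_one.cc";
        runtime<ratio>(k) = 0.9;

        dimensions(k.in[0])  = {256};       // block size for the buffer ports
        dimensions(k.out[0]) = {256};

        connect(in.out[0], k.in[0]);        // PL -> buffer (via DMA S2MM)
        connect(k.out[0], out.in[0]);       // buffer -> PL (via DMA MM2S)
    }
};
```

Declaring the kernel ports as streams instead of buffers would select the direct 32-bit-per-cycle connection rather than the DMA path.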

The following table summarizes the differences in buffer and stream connections between kernels.

Table 1. Buffer vs. Stream Connections
| Connection | Margin | Packet Switching | Back Pressure | Lock | Max Throughput by VLIW (per cycle) | Multicast as a Source |
|------------|--------|------------------|---------------|------|------------------------------------|-----------------------|
| Buffer     | Yes    | Yes              | Yes¹          | Yes  | 2*256-bit load + 1*256-bit store   | Yes                   |
| Stream     | No     | Yes              | Yes           | No   | 1*32-bit read + 1*32-bit write     | Yes                   |

  1. Buffer back pressure, acquired or not, occurs on the whole buffer of data.
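The "Multicast as a Source" column means one output port can drive several input ports. In graph code (kernel names are placeholders) that fan-out is simply repeated connections from the same source:

```cpp
// Illustrative multicast: one kernel output feeding two downstream kernels.
// Both buffer and stream ports can fan out this way; each destination
// receives the same data.
connect(producer.out[0], consumer_a.in[0]);
connect(producer.out[0], consumer_b.in[0]);  // same source, second sink
```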

Graph code is written in C++ and resides in a separate file from the kernel source files. The compiler places the AI Engine kernels into the AI Engine-ML array, taking care of memory requirements and making all the necessary connections for data flow. Multiple kernels with low core usage can be placed into a single tile.
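As a sketch of how core usage guides placement (kernel names, source paths, and ratios are illustrative), declaring runtime ratios that sum to less than 1.0 lets the compiler co-locate kernels on one tile:

```cpp
#include <adf.h>
using namespace adf;

// Placeholder stream kernels, defined in their own source files.
void stage_one(input_stream<int32>* in, output_stream<int32>* out);
void stage_two(input_stream<int32>* in, output_stream<int32>* out);

class TwoKernelGraph : public graph {
public:
    kernel k1, k2;

    TwoKernelGraph() {
        k1 = kernel::create(stage_one);
        k2 = kernel::create(stage_two);
        source(k1) = "kernels/stage_one.cc";
        source(k2) = "kernels/stage_two.cc";
        runtime<ratio>(k1) = 0.3;   // each kernel needs ~30% of a core...
        runtime<ratio>(k2) = 0.3;   // ...so the compiler may share one tile
        connect(k1.out[0], k2.in[0]);
    }
};
```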

For a complete overview of graph programming with AI Engine tools, refer to the AI Engine Tools and Flows User Guide (UG1076).