Introduction

AI Engine Tools and Flows User Guide (UG1076)

Document ID: UG1076
Release Date: 2023-12-04
Version: 2023.2 English

The AMD Versal programmable Network on Chip (NoC) is an AXI-interconnecting network used for sharing data between IP endpoints in the programmable logic (PL), the processing system (PS), the AI Engine, and other integrated blocks. This device-wide infrastructure is a high-speed, integrated data path with dedicated switching. The NoC system is a large-scale interconnection of NoC master units (NMUs), NoC slave units (NSUs), and NoC packet switches (NPSs), each controlled and programmed through the NoC programming interface (NPI). There are 16 NMUs/NSUs on the VC1902 device, each capable of 16 GB/s of throughput in each direction.

For more information on NoC architecture, see NoC Architecture in the Versal Adaptive SoC Programmable Network on Chip and Integrated Memory Controller LogiCORE IP Product Guide (PG313).

The main data interfaces to and from the AI Engine are the PL streaming interface (PLIO) and GMIO. A GMIO makes external memory-mapped connections to or from global memory through the AI Engine-NoC master unit, which lets you monitor run-time traffic between DDR memory and the AI Engine. This section provides a methodology to profile the NoC master unit in terms of latency and throughput of the GMIO ports that read and write data between the AI Engine and DDR memory.
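
The following is a minimal sketch (not taken from this guide) of how GMIO ports are typically declared in an ADF graph and driven from host code so that data moves between DDR memory and the AI Engine through the AI Engine-NoC master unit. The graph, kernel, and size values (SimpleGraph, passthrough, the burst length, bandwidth, and buffer sizes) are hypothetical, and API details can vary between tool versions.

// Minimal sketch: ADF graph with GMIO ports plus host-side data movement.
#include <adf.h>
using namespace adf;

// Hypothetical kernel operating on buffer ports.
void passthrough(input_buffer<int32>& in, output_buffer<int32>& out);

class SimpleGraph : public graph {
public:
  input_gmio  gmio_in;   // DDR -> AI Engine
  output_gmio gmio_out;  // AI Engine -> DDR
  kernel k;

  SimpleGraph() {
    // Burst length (bytes) and required bandwidth (MB/s) are illustrative values.
    gmio_in  = input_gmio::create("gmio_in",  64, 1000);
    gmio_out = output_gmio::create("gmio_out", 64, 1000);

    k = kernel::create(passthrough);
    source(k) = "passthrough.cc";
    runtime<ratio>(k) = 0.9;
    dimensions(k.in[0])  = {1024};
    dimensions(k.out[0]) = {1024};

    connect(gmio_in.out[0], k.in[0]);
    connect(k.out[0], gmio_out.in[0]);
  }
};

SimpleGraph gr;

// Host/PS side (simulator-style control code).
int main() {
  constexpr size_t N = 1024;
  auto* in  = reinterpret_cast<int32*>(GMIO::malloc(N * sizeof(int32)));
  auto* out = reinterpret_cast<int32*>(GMIO::malloc(N * sizeof(int32)));

  gr.init();
  gr.gmio_in.gm2aie_nb(in, N * sizeof(int32));    // non-blocking DDR -> AI Engine
  gr.run(1);
  gr.gmio_out.aie2gm_nb(out, N * sizeof(int32));  // non-blocking AI Engine -> DDR
  gr.gmio_out.wait();                             // block until the read-back completes
  gr.end();

  GMIO::free(in);
  GMIO::free(out);
  return 0;
}

Each GMIO transaction issued through these ports traverses the AI Engine-NoC master unit, which is the traffic that the profiling methodology in this section measures.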

Performance of the NoC interconnect can be monitored with the Vperf utility. Vperf is a Vitis-based tool that profiles the NoC and the DDR memory controller (DDRMC) in applications built using the v++ flow. No additional compile-time options are required to use the utility.