The AI Engine array introduced in the AMD Versal™ architecture targets high-compute and complex DSP-intensive applications, such as 5G wireless and machine learning algorithms. An AI Engine is a high-performance VLIW vector (SIMD) processor with integrated memory and interconnects that communicate with other AI Engine cores connected in a two-dimensional array in the device.
The AI Engine page in PDM for Versal adaptive SoC is available for the AI Core Series family and some AI Edge Series and Premium Series devices. PDM estimates the power consumption of AI Engine blocks for a particular configuration. The following figure shows the AI Engine Power interface.
For an early power estimation, you should provide the configuration details of the AI Engine array such as clock frequency, number of cores, kernel type, and the Vector Load average percentage for the cores. The supported kernel types are Int8, Int16, and Floating Point.
The kernel type represents the datatype used in the vector processing in the kernel function. In scenarios where a kernel uses mixed datatypes, the recommendation is to select the lower-precision datatype, as that is the one with the greater impact on the power estimate.
The Data Memory and Interconnect Load fields are auto-populated based on the number of AI Engines used and can be overridden to match the application requirements. There are eight memory banks in an AI Engine tile (each bank is 4 KB in size, totaling 32 KB per tile). By default, PDM uses all of them; this can be overridden if the application requires fewer bank accesses.
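The bank arithmetic above can be sketched as a small helper. Only the sizes (eight banks of 4 KB each) come from the text; the function name and the override parameter are illustrative, not part of PDM.

```python
# Sketch of the AI Engine tile data-memory layout described above.
# Bank count and size are from the text; everything else is illustrative.
BANKS_PER_TILE = 8
BANK_SIZE_KB = 4

def tile_memory_kb(banks_used: int = BANKS_PER_TILE) -> int:
    """Data memory (KB) accessed in one AI Engine tile.

    PDM assumes all eight banks by default; pass a smaller value to
    model an application that accesses fewer banks.
    """
    if not 0 <= banks_used <= BANKS_PER_TILE:
        raise ValueError("an AI Engine tile has 0-8 data memory banks")
    return banks_used * BANK_SIZE_KB

print(tile_memory_kb())   # default: 8 banks * 4 KB = 32 KB
print(tile_memory_kb(4))  # override: 4 banks -> 16 KB
```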
Memory R/W rate is the average read/write memory access rate for each bank.
The AI Engine array interface allows access to the rest of the AMD Versal™ adaptive SoC. There are interface tiles for both the Programmable Logic (PL) and Network on Chip (NoC), and these interface tiles are represented as streams. You can override the PL/NoC streams based on the design application. The interconnect fields are read-only and calculated based on your input. The PL streams field shows the available streams in the first row of AI Engine tiles and lets you specify the number of 64-bit PL streams that are used. It is recommended to keep the default of 14 streams per 20 AI Engine tiles used; however, the value can be changed. A DRC is flagged (the cell turns yellow in the Utilization table) when the PL streams exceed the available streams within the total AI Engine array.
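The default-streams rule and the DRC condition above can be sketched as follows. The 14-streams-per-20-tiles ratio and the yellow-cell condition come from the text, but the proportional scaling, function names, and return conventions are assumptions for illustration only, not the PDM API.

```python
import math

# 14 streams per 20 AI Engine tiles is the recommended default from the text.
DEFAULT_STREAMS_PER_20_TILES = 14

def recommended_pl_streams(tiles_used: int) -> int:
    """Suggested 64-bit PL stream count for a given tile count.

    Assumes the 14-per-20 default scales proportionally with tile count
    (an illustrative assumption, rounded up to a whole stream).
    """
    return math.ceil(tiles_used / 20 * DEFAULT_STREAMS_PER_20_TILES)

def pl_stream_drc(streams_requested: int, streams_available: int) -> bool:
    """True when the request exceeds the array's available streams
    (the condition PDM flags by turning the Utilization cell yellow)."""
    return streams_requested > streams_available

print(recommended_pl_streams(20))        # 14
print(recommended_pl_streams(50))        # 35
print(pl_stream_drc(40, 35))             # True: DRC fires
```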
Interconnect load is averaged to a fixed value of 12% and has minimal impact on power; it can be overridden by the import flow described in the next section. The maximum clock speed depends on the speed grade of the device, for example, 1300 MHz for the –3H grade. For more information, see Versal Adaptive SoC AI Engine Architecture Manual (AM009).