AI Engines

AMD Design Conversion for Altera FPGAs and SoCs Methodology Guide (UG1192)

Document ID: UG1192
Release Date: 2025-07-15
Revision: 3.0.1 English
Figure 1. AI Engine Overview

The AMD AI Engines (AIEs) are an array of innovative very long instruction word (VLIW) and single instruction, multiple data (SIMD) processing engines and memories, all interconnected with hundreds of terabits per second of interconnect and memory bandwidth.

The AIE array is the top-level hierarchy of the AIE architecture. It integrates a two-dimensional array of AIE tiles, each of which contains a VLIW processor, integrated memory, and interconnects for streaming, configuration, and debug. The AIE array interface enables the array to communicate with the rest of the Versal device through the network on chip (NoC) or directly with the programmable logic (PL). The AIE array also interfaces to the processing system (PS) and platform management controller (PMC) through the NoC. Further details of the hardware can be found in the following AI Engine architecture manuals; a minimal programming sketch follows the list.

  • Versal Adaptive SoC AIE-ML Architecture Manual (AM020)
  • Versal Adaptive SoC AIE-ML v2 Architecture Manual (AM027)
  • Versal Adaptive SoC AI Engine Architecture Manual (AM009)
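To show how this architecture is exposed to application code, the following is a minimal sketch of an AI Engine data flow graph written with the ADF graph C++ API. The kernel simple_fir, its source file path, and the PLIO data files are illustrative assumptions, not taken from this guide, and the exact API usage should be checked against the AI Engine tools documentation for your tool version.

    // Minimal ADF graph sketch. The kernel simple_fir(), its source file, and
    // the PLIO data files are hypothetical placeholders.
    #include <adf.h>

    using namespace adf;

    // Hypothetical kernel declaration (normally provided in a shared header).
    void simple_fir(input_buffer<int32> &in, output_buffer<int32> &out);

    class SimpleGraph : public graph {
    public:
      input_plio  in;   // AIE array interface channel fed from the PL
      output_plio out;  // AIE array interface channel driving the PL
      kernel k;         // one kernel, mapped onto one AIE tile's VLIW processor

      SimpleGraph() {
        // PLIO objects bind graph ports to AIE array interface channels.
        in  = input_plio::create("DataIn",  plio_32_bits, "data/input.txt");
        out = output_plio::create("DataOut", plio_32_bits, "data/output.txt");

        k = kernel::create(simple_fir);
        source(k) = "kernels/simple_fir.cc";
        runtime<ratio>(k) = 0.9;  // fraction of the tile's cycle budget reserved

        // Connections are realized over tile-local memory and the stream interconnect.
        connect(in.out[0], k.in[0]);
        connect(k.out[0], out.in[0]);
      }
    };

    SimpleGraph my_graph;  // top-level graph instance compiled by the AIE compiler

The single kernel here occupies one tile; larger designs instantiate many kernels and let the AIE compiler place them across the two-dimensional tile array and route their connections over the interconnect described above.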