Application Mapping and Design Partitioning

Application mapping is the first step of system design planning using Versal devices. During application mapping, you map the various parts of the system onto the adaptive SoC hardware, depending on your performance, latency, and system cost requirements.

For example, embedded applications typically comprise an accelerator block that performs the compute-intensive processing, a DRAM for data storage, an interconnect switch for moving data from DRAM to the accelerator hardware, on-chip memory for faster intermediate access, and an embedded processor for controlling the data flow and performing less compute-intensive tasks. Data center applications typically comprise a PCIe® interface for transferring data from the host to card DRAM, an interconnect switch, an accelerator block to handle compute-intensive functions, on-chip memory for faster accelerator access, and an embedded processor core that controls the data movement in the Versal device, performs less compute-intensive functions, and manages communication with the host processor.
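As a concrete illustration of the data center flow, the following is a minimal host-side sketch using the XRT native C++ API. The xclbin file name (accel.xclbin), kernel name (accel), and buffer size are hypothetical placeholders; the sketch only shows the general pattern of moving data from the host over PCIe into card DRAM, running the accelerator, and reading results back.

#include <xrt/xrt_bo.h>
#include <xrt/xrt_device.h>
#include <xrt/xrt_kernel.h>
#include <vector>

int main() {
    constexpr size_t N = 1024;                        // hypothetical data size
    std::vector<int> host_in(N, 1), host_out(N, 0);

    xrt::device device(0);                            // open the Versal card over PCIe
    auto uuid   = device.load_xclbin("accel.xclbin"); // hypothetical xclbin
    auto kernel = xrt::kernel(device, uuid, "accel"); // hypothetical accelerator kernel

    // Buffers are allocated in card DRAM and bound to the kernel's memory groups.
    auto bo_in  = xrt::bo(device, N * sizeof(int), kernel.group_id(0));
    auto bo_out = xrt::bo(device, N * sizeof(int), kernel.group_id(1));

    // Host to card DRAM transfer over PCIe.
    bo_in.write(host_in.data());
    bo_in.sync(XCL_BO_SYNC_BO_TO_DEVICE);

    // The accelerator reads its input from card DRAM, computes, and writes results back.
    auto run = kernel(bo_in, bo_out, static_cast<int>(N));
    run.wait();

    // Card DRAM to host transfer of the results.
    bo_out.sync(XCL_BO_SYNC_BO_FROM_DEVICE);
    bo_out.read(host_out.data());
    return 0;
}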

When mapping your application, consider which of the following Versal device series is better suited to your system application:

Versal Prime Series Gen 2, Versal Prime, Premium, and HBM Series
The Versal Prime Series Gen 2, Versal Prime, Premium, and HBM Series include programmable logic (PL) (CLBs, RAMs, DSPs), network on chip (NoC), PCIe interfaces, and processing subsystems.
Versal AI Core, Versal AI Edge Series Gen 2, AI Edge, and Premium Series
The Versal AI Core, Versal AI Edge Series Gen 2, AI Edge Series, and Versal Premium VP2502 and VP2802 devices include adaptable AI Engines as well as PL (CLBs, RAMs, DSPs), NoC, PCIe interfaces, and processing subsystems.

For example, you can map a compute-intensive accelerator function to either the DSP Engines or the AI Engines, depending on compute, power, and latency requirements. If the function is compute heavy and requires a high degree of parallel processing, AI Engines are recommended. If the function has smaller compute requirements and latency is critical, DSP Engines are recommended.
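To make the DSP Engine option concrete, the following is a minimal Vitis HLS sketch; the kernel name, data widths, and vector length are hypothetical. A pipelined multiply-accumulate loop of this form is typically implemented in DSP blocks in the PL, whereas the same function targeted at AI Engines would instead be written as an AI Engine kernel that uses the AIE vector API to process many samples in parallel.

#include <ap_int.h>

constexpr int N = 64;  // hypothetical vector length

// Dot product of two 16-bit fixed-point vectors, accumulated at 48 bits.
void mac_dsp(const ap_int<16> a[N], const ap_int<16> b[N], ap_int<48> *result) {
    ap_int<48> acc = 0;
mac_loop:
    for (int i = 0; i < N; ++i) {
#pragma HLS PIPELINE II=1
        acc += a[i] * b[i];  // the multiply-accumulate maps onto DSP resources
    }
    *result = acc;
}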

To identify the best compute mapping for your design, see the following tutorials available from the GitHub repository: