AI Engine System Partitioning Planning - 2024.1 English

Versal Adaptive SoC System and Solution Planning Methodology Guide (UG1504)

System partitioning is the process of designing an embedded system for heterogeneous compute. It involves analyzing system requirements and weighing them against device capabilities. It is expected to be done up front, before project kickoff, with reasonable accuracy, and to be completed quickly, without implementing the full system.

The goal is to minimize risk in the solution and to identify a device suitable for the needs of the application.

Some of the steps involved in system partitioning include:

  • Partitioning the workload across available compute elements
  • Identifying a workable system “data flow”
  • Managing I/O bandwidth within interface limits
  • Provisioning data storage with practical costs
  • Minimizing compute to achieve throughput targets
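The budget checks implied by these steps can be sketched as a rough first-pass calculation. The following is a minimal illustration, not part of any AMD tool flow; all function names, sample rates, and device capacities are hypothetical placeholders, not datasheet values.

```python
# First-pass feasibility check: does the workload's compute and I/O
# demand fit within a derated device budget? (Illustrative sketch.)

def required_ops_per_sec(samples_per_sec, ops_per_sample):
    """Compute throughput needed to sustain the target sample rate."""
    return samples_per_sec * ops_per_sample

def required_bandwidth_bytes(samples_per_sec, bytes_per_sample):
    """I/O bandwidth needed to stream samples in (or out)."""
    return samples_per_sec * bytes_per_sample

def fits(requirement, capacity, margin=0.8):
    """Accept only if the requirement fits within a derated capacity;
    the 80% margin is a placeholder planning guard band."""
    return requirement <= capacity * margin

# Assumed workload: 1 GSa/s stream, 100 ops/sample, 4 bytes/sample.
ops = required_ops_per_sec(1e9, 100)    # 1e11 ops/s
bw = required_bandwidth_bytes(1e9, 4)   # 4e9 bytes/s

# Placeholder device capacities (NOT real Versal figures).
DEVICE_OPS_PER_SEC = 1e12
DEVICE_IO_BYTES_PER_SEC = 16e9

print(fits(ops, DEVICE_OPS_PER_SEC))       # compute budget check
print(fits(bw, DEVICE_IO_BYTES_PER_SEC))   # I/O budget check
```

A spreadsheet serves the same purpose in practice; the point is that each bullet above reduces to a requirement-versus-capacity comparison with an explicit margin.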

System partitioning is often a series of top-down analysis and bottom-up synthesis steps. The problem is first analyzed and broken down to generate data that characterizes its dimensions. From the data generated during the analysis step, a strawman solution can be synthesized that is expected to solve the problem. A strawman proposal is a draft proposal that attempts to solve the problem at hand. It is intended to ensure that everyone involved has a common understanding of the initial concept, to generate discussion of its pros and cons, and to spur the generation of new and better proposals. A follow-up analysis step should then confirm that all system requirements are met, identify risky areas, and quantify estimates and the error bars on those estimates.

The interaction between the three fundamental design dimensions (compute, storage, and bandwidth) must be understood. How will the data move between the different subsystems (PL, AI Engine, and PS)? Where should the compute happen? What are the storage requirements? What are the variables that influence the solution?
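The coupling between storage and bandwidth can be made concrete with a classic blocking estimate for a tiled matrix multiply: larger on-chip tiles cost quadratically more local memory but proportionally reduce external traffic. The formulas below are the standard textbook approximation, used here only to illustrate the trade-off; the matrix and tile sizes are arbitrary.

```python
# Illustrative storage-vs-bandwidth trade-off for an n x n matrix
# multiply computed with t x t on-chip tiles (classic blocking model).

def matmul_traffic_words(n, t):
    """Approximate off-chip words moved: each of the (n/t)^3
    tile-multiplies loads two t x t input tiles, and the n x n
    result is written out once. Equals 2*n^3/t + n^2."""
    tile_steps = (n // t) ** 3
    return tile_steps * 2 * t * t + n * n

def tile_storage_words(t):
    """Three t x t tiles resident on chip (A, B, and accumulator C)."""
    return 3 * t * t

# Doubling the tile size quadruples on-chip storage but roughly
# halves external traffic (for this assumed access pattern).
for t in (32, 64, 128):
    print(t, tile_storage_words(t), matmul_traffic_words(1024, t))
```

The same reasoning applies when deciding how much data to stage in AI Engine local memory or PL block RAM versus streaming it repeatedly from external memory: the partitioning choice moves cost between the storage and bandwidth dimensions.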