- To compile your graph, execute the following command (see Compiling an AI Engine Graph Application in AI Engine Tools and Flows User Guide (UG1076) for more details).
v++ -c --mode aie --target hw --config ./config.cfg project.cpp
Here, project.cpp is the input program. The AI Engine compiler reads the specified input graph, compiles it for the AI Engine-ML array, produces various reports, and generates output files in the Work directory.
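The config.cfg file passed with --config collects compiler options in the v++ configuration-file format. The following is a minimal hypothetical sketch only; the part number and the option names shown are illustrative assumptions, not values prescribed by this document:

```
# Hypothetical config.cfg sketch -- part number and options are
# illustrative assumptions; consult UG1076 for the supported options.
part=xcve2302-sfva784-1LP-e-S

[aie]
# AI Engine compiler options would go here, for example the
# working directory (the tool's default is ./Work).
```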
- After parsing the C++ input into a graphical intermediate form expressed in JavaScript Object Notation (JSON), the AI Engine compiler performs resource mapping and scheduling analysis: it maps kernel nodes in the graph to processing cores in the AI Engine-ML array and data buffers to memory banks, and the JSON representation is augmented with this mapping information. The input graph is first partitioned into groups of kernels to be mapped to the same core, and each AI Engine requires a schedule of all the kernels mapped onto it. The output of the mapper can also be viewed as a tabular report in the file project_mapping_analysis_report.txt, which reports the mapping of nodes to processing cores and of data buffers to memory banks. Inter-processor communication buffers are appropriately double-banked as ping-pong buffers.
- The AI Engine compiler allocates the necessary locks, memory buffers, DMA channels, and descriptors, and generates routing information for mapping the graph onto the AI Engine-ML array. It synthesizes a main program for each core that schedules all the kernels on that core and implements the necessary locking mechanism and data copying among buffers. The C/C++ program for each core is compiled to produce loadable ELF files. The AI Engine compiler also generates control APIs to control graph initialization, execution, and termination from the main application, as well as a simulator configuration script scsim_config.json. These are all stored within the Work directory under various sub-folders (see Compiling an AI Engine Graph Application in AI Engine Tools and Flows User Guide (UG1076) for more details).
- After compiling the AI Engine graph, the AI Engine compiler writes a summary of compilation results called <graph-file-name>.aiecompile_summary that can be viewed in the Vitis IDE. The summary contains a collection of reports and diagrams reflecting the state of the AI Engine-ML application implemented in the compiled build. The summary is written to the working directory of the AI Engine compiler, as specified by the --work_dir option, which defaults to ./Work. To open the AI Engine compiler summary, use the following command:
vitis -a ./Work/graph.aiecompile_summary
- To run the graph, execute the following command (see Simulating an AI Engine Graph Application in AI Engine Tools and Flows User Guide (UG1076) for more details).
aiesimulator --pkg-dir=./Work
This starts the SystemC-based simulator, with the main application acting as the control program. The graph APIs used in the control program configure the AI Engine-ML array, including setting up static routing, programming the DMAs, and loading the ELF files onto the individual cores, and then initiate AI Engine-ML array execution. At the end of the simulation, the output data is produced in the directory aiesimulator_output, and it should match the reference data.
The graph can be loaded at device boot time in hardware or through the host application. Details on deploying the graph in hardware and the associated flow are described in Building and Running the System in Embedded Design Development Using Vitis (UG1701).
Note: Only AI Engine kernels that have been modified are recompiled in subsequent compilations of the AI Engine graph; unmodified kernels are not recompiled.