Using Multiple DDR Banks - 2023.1 English

Vitis Tutorials: Hardware Acceleration (XD099)

Document ID: XD099
Release Date: 2023-08-02
Version: 2023.1 English

In the previous step, you overlapped the host data transfer with the FPGA compute: the documents sent to the FPGA were split into multiple buffers, and the flags computed by the FPGA were sent back to the host as soon as each buffer was processed. This overlapped the “Compute Profile Score” step on the CPU with the “Compute FPGA” step, further improving the application execution time.

You also observed memory contention because the host and kernel both accessed the same bank at the same time. In this section, you configure multiple DDR banks to improve the kernel performance.

AMD Alveo™ cards have multiple DDR banks, and you can use two of them in a ping-pong fashion to minimize this contention, as sketched in the host-code example after the following list:

  • The host writes words to DDR bank 1 and DDR bank 2 alternately.

  • When the host is writing words to DDR bank 1, the kernel is reading words from DDR bank 2.

  • When the host is writing words to DDR bank 2, the kernel is reading words from DDR bank 1.
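The following host-code fragment is a minimal sketch of this ping-pong scheme, not code from the tutorial sources. It assumes an out-of-order command queue (created with CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE) and uses the Xilinx OpenCL extension pointer (cl_mem_ext_ptr_t with CL_MEM_EXT_PTR_XILINX) to pin each buffer to a specific DDR bank. The function name, variable names (num_chunks, chunk_bytes, words), and the choice of banks 1 and 2 are illustrative assumptions; the XCL_MEM_DDR_BANK* macros must match the banks available on your platform.

```
// Sketch only: the host writes each chunk of words into one DDR bank while
// the kernel reads the previous chunk from the other bank.
#include <CL/cl.h>
#include <CL/cl_ext_xilinx.h>  // cl_mem_ext_ptr_t, CL_MEM_EXT_PTR_XILINX, XCL_MEM_DDR_BANK*
#include <vector>

void pingPongTransfer(cl_context context, cl_command_queue queue,
                      cl_kernel kernel, const char* words,
                      size_t num_chunks, size_t chunk_bytes) {
    cl_int err = CL_SUCCESS;
    cl_mem bufs[2];
    unsigned bank_flags[2] = {XCL_MEM_DDR_BANK1, XCL_MEM_DDR_BANK2};

    // Create one input buffer per DDR bank.
    for (int i = 0; i < 2; ++i) {
        cl_mem_ext_ptr_t ext = {};
        ext.flags = bank_flags[i];  // pin this buffer to one DDR bank
        bufs[i] = clCreateBuffer(context,
                                 CL_MEM_READ_ONLY | CL_MEM_EXT_PTR_XILINX,
                                 chunk_bytes, &ext, &err);
    }

    std::vector<cl_event> kernel_done(num_chunks, nullptr);
    for (size_t chunk = 0; chunk < num_chunks; ++chunk) {
        cl_mem buf = bufs[chunk % 2];  // alternate bank 1 / bank 2

        // Before reusing a bank, wait until the kernel run that last read
        // from it has finished; the first two chunks have nothing to wait on.
        cl_uint nwait = (chunk >= 2) ? 1 : 0;
        cl_event* wait = (chunk >= 2) ? &kernel_done[chunk - 2] : nullptr;

        cl_event write_done = nullptr;
        clEnqueueWriteBuffer(queue, buf, CL_FALSE, 0, chunk_bytes,
                             words + chunk * chunk_bytes,
                             nwait, wait, &write_done);

        // Other kernel arguments (sizes, output flags buffer) are omitted.
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
        clEnqueueTask(queue, kernel, 1, &write_done, &kernel_done[chunk]);
        clReleaseEvent(write_done);
    }
    clFinish(queue);

    for (cl_event e : kernel_done) clReleaseEvent(e);
    for (int i = 0; i < 2; ++i) clReleaseMemObject(bufs[i]);
}
```

Because the write into one bank only depends on a kernel run that finished two chunks earlier, each transfer overlaps with the kernel execution reading the other bank.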

The kernel reads from DDR bank 1 and DDR bank 2 alternately, so its m_axi port must be connected to both DDR banks. You must establish the connectivity of kernel arguments to DDR banks in the v++ --link command, as described in Mapping Kernel Ports to Memory. In this case, the $LAB_WORK_DIR/makefile/connectivity.cfg configuration file specifies the connectivity:

```
[connectivity]
sp=runOnfpga_1.input_words:DDR[1:2] 
```
  • The sp option in the [connectivity] section instructs the v++ linker that the input_words argument is connected to both DDR bank 1 and DDR bank 2. You must rebuild the kernel because the connectivity has changed.
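As a sketch, the rebuild with the updated connectivity might look like the following. The --link, -t, --platform, --config, and -o options are standard v++ options; the platform placeholder and the object/output file names (runOnfpga.xo, runOnfpga.xclbin) are assumptions, so substitute the names used by your build flow:

```
v++ --link -t hw --platform <platform> \
    --config $LAB_WORK_DIR/makefile/connectivity.cfg \
    -o runOnfpga.xclbin runOnfpga.xo
```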