The link phase is when the memory ports of the kernels are connected to memory resources, which include DDR, HBM, and PLRAM. By default, when the xclbin file is produced during the `v++` linking process, all kernel memory interfaces are connected to the same global memory bank (or `gmem`). As a result, only one kernel interface can transfer data to/from the memory bank at a time, limiting the performance of the application due to memory access contention.
While the Vitis compiler can automatically connect CUs to global memory resources, you can also manually specify which global memory bank each kernel argument (or interface) is connected to. Proper configuration of kernel-to-memory connectivity is important to maximize bandwidth, optimize data transfers, and improve overall performance. Even if there is only one compute unit in the device, mapping its input and output arguments to different global memory banks can improve performance by enabling simultaneous accesses to input and output data.
Use the `--connectivity.sp` option during `v++` linking to distribute connections across different memory banks. The following example is based on the Kernel Interfaces example code. Start by assigning the kernel arguments to separate bundles to increase the available interface ports, then assign the arguments to separate memory banks:
- In C/C++ kernels, assign arguments to separate bundles in the kernel code prior to compiling them:

```c++
void cnn( int *pixel,   // Input pixel
          int *weights, // Input Weight Matrix
          int *out,     // Output pixel
          ...           // Other input or output ports
#pragma HLS INTERFACE m_axi port=pixel offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi port=weights offset=slave bundle=gmem1
#pragma HLS INTERFACE m_axi port=out offset=slave bundle=gmem
```
Note that the memory interface inputs `pixel` and `weights` are assigned different bundle names in the example above, while `out` is bundled with `pixel`. This creates two separate interface ports. Important: You must specify `bundle=` names using all lowercase characters to be able to assign them to a specific memory bank using the `--connectivity.sp` option.
- Edit a config file to include the `--connectivity.sp` option, and specify it in the `v++` command line with the `--config` option, as described in Vitis Compiler Command (see the command sketch after the field descriptions below). For example, for the `cnn` kernel shown above, the `connectivity.sp` option in the config file would be as follows:

```
[connectivity]
#sp=<compute_unit_name>.<argument>:<bank name>
sp=cnn_1.pixel:DDR[0]
sp=cnn_1.weights:DDR[1]
sp=cnn_1.out:DDR[2]
```
Where:

- `<compute_unit_name>` is an instance name of the CU as determined by the `connectivity.nk` option, described in Creating Multiple Instances of a Kernel, or is simply `<kernel_name>_1` if multiple CUs are not specified.
- `<argument>` is the name of the kernel argument. Alternatively, you can specify the name of the kernel interface as defined by the HLS INTERFACE pragma for C/C++ kernels, including `m_axi_` and the `bundle` name. In the `cnn` kernel above, the ports would be `m_axi_gmem` and `m_axi_gmem1`. Tip: For RTL kernels, the interface is specified by the interface name defined in the kernel.xml file.
- `<bank_name>` is denoted as `DDR[0]`, `DDR[1]`, `DDR[2]`, and `DDR[3]` for a platform with four DDR banks. You can also specify the memory as a contiguous range of banks, such as `DDR[0:2]`, in which case XRT will assign the memory bank at run time. Some platforms also provide support for PLRAM, HBM, HP, or MIG memory, in which case you would use `PLRAM[0]`, `HBM[0]`, `HP[0]`, or `MIG[0]`. You can use the `platforminfo` utility to get information on the global memory banks available in a specified platform; refer to platforminfo Utility for more information. In platforms that include both DDR and HBM memory banks, kernels must use separate AXI interfaces to access the different memories. DDR and PLRAM access can be shared from a single port.
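The config file is then passed to the `v++` link step with `--config`. The following is a minimal sketch of that flow; the platform name, config file, and object file names (connectivity.cfg, cnn.xo, cnn.xclbin) are placeholders, and the `platforminfo` query is optional:

```
# List the global memory banks available on the target platform
platforminfo --platform xilinx_u250_gen3x16_xdma_4_1_202210_1

# Link the kernel object, applying the connectivity config file
v++ --link --target hw \
    --platform xilinx_u250_gen3x16_xdma_4_1_202210_1 \
    --config connectivity.cfg \
    -o cnn.xclbin cnn.xo
```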
- Important: Customized bank assignments might also need to be reflected in the host code in some cases, as described in Assigning DDR Bank in Host Code and sketched below.
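As one illustration, the legacy OpenCL flow uses the Xilinx `cl_mem_ext_ptr_t` extension to request a specific bank when creating a buffer. This is a minimal sketch, assuming a valid `cl_context` and a host-side `host_weights` vector; newer XRT releases can also infer the bank from the kernel arguments, so refer to Assigning DDR Bank in Host Code for the recommended flow:

```c++
#include <CL/cl_ext_xilinx.h> // Xilinx extensions: cl_mem_ext_ptr_t, XCL_MEM_DDR_BANK1
#include <vector>

// context: a cl_context created on the Xilinx device (setup omitted for brevity)
cl_mem create_weights_buffer(cl_context context, std::vector<int> &host_weights) {
    // Request allocation in DDR[1] to match sp=cnn_1.weights:DDR[1]
    cl_mem_ext_ptr_t ext;
    ext.flags = XCL_MEM_DDR_BANK1;   // target memory bank
    ext.obj   = host_weights.data(); // host pointer, used with CL_MEM_USE_HOST_PTR
    ext.param = 0;

    cl_int err;
    return clCreateBuffer(context,
        CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR | CL_MEM_EXT_PTR_XILINX,
        host_weights.size() * sizeof(int), &ext, &err);
}
```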
Connecting Directly to Host Memory
The PCIe® Slave-Bridge IP is provided on some data center platforms to let kernels access host memory directly. Configuring the device binary to connect to host memory requires changing the connectivity specified by the `--connectivity.sp` option, as shown below. It also requires changes to the accelerator card setup and to your host application, as described in Host-Memory Access in the XRT documentation.
```
[connectivity]
## Syntax
##sp=<cu_name>.<interface_name>:HOST[0]
sp=cnn_1.m_axi_gmem:HOST[0]
```
In the syntax above, the CU name and interface name are specified as in the prior examples, but the bank name is hard-coded to `HOST[0]`.
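On the host side, buffers that a kernel reaches through the slave-bridge connection must be allocated in host memory rather than device memory. The following is a minimal sketch using the XRT native C++ API, assuming the `cnn` kernel from above, device index 0, and that the card and host have already been configured for host memory as described in Host-Memory Access; the file name cnn.xclbin and the buffer size are placeholders:

```c++
#include <xrt/xrt_device.h>
#include <xrt/xrt_kernel.h>
#include <xrt/xrt_bo.h>

int main() {
    xrt::device device{0};                        // open the first device
    auto uuid = device.load_xclbin("cnn.xclbin"); // program the device binary
    auto cnn  = xrt::kernel(device, uuid, "cnn");

    const size_t size_bytes = 1024 * sizeof(int);

    // host_only places the buffer in host memory, matching the HOST[0] connection;
    // group_id(0) selects the memory group of the first kernel argument (pixel)
    auto pixel_bo = xrt::bo(device, size_bytes, xrt::bo::flags::host_only,
                            cnn.group_id(0));
    return 0;
}
```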