--connectivity Options - 2023.1 English

Vitis Unified Software Platform Documentation: Application Acceleration Development (UG1393)

Document ID: UG1393
Release Date: 2023-07-17
Version: 2023.1 English

As discussed in Linking the System, there are a number of --connectivity.XXX options that let you define the topology of the FPGA binary, specifying the number of CUs, assigning them to SLRs, connecting kernel ports to global memory, and establishing streaming port connections. These commands are an integral part of the build process, critical to the definition and construction of the application.

--connectivity.nk

--connectivity.nk <arg>

Where <arg> is specified as <kernel_name>:#:<cu_name1>,<cu_name2>,...<cu_name#>.

This instantiates the specified number of CUs (#) for the specified kernel (kernel_name) in the generated FPGA binary (.xclbin) file during the linking process. The cu_name is optional. If the cu_name is not specified, the instances of the kernel are simply numbered: kernel_name_1, kernel_name_2, and so forth. By default, the Vitis compiler instantiates one compute unit for each kernel.

For example:

v++ --link --connectivity.nk vadd:3:vadd_A,vadd_B,vadd_C
Tip: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
nk=vadd:3:vadd_A,vadd_B,vadd_C
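
Because the CU names are optional, the instance count can also be given alone. As a minimal sketch, the following instantiates three CUs of the vadd kernel, which are then named vadd_1, vadd_2, and vadd_3 by default:

[connectivity]
nk=vadd:3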

--connectivity.sc

--connectivity.sc <arg>

Create a streaming connection between two compute units through their AXI4-Stream interfaces. Use a separate --connectivity.sc option for each streaming interface connection. The order of connection must be from a streaming output port of the first kernel to a streaming input port of the second kernel. Valid values include:

<cu_name>.<streaming_output_port>:<cu_name>.<streaming_input_port>[:<fifo_depth>]

Where:

  • <cu_name> is the compute unit name specified in the --connectivity.nk option. Generally this is <kernel_name>_1 unless a different name was specified.
  • <streaming_output_port>/<streaming_input_port> is the function argument for the compute unit port that is declared as an AXI4-Stream.
  • [:<fifo_depth>] inserts a FIFO of the specified depth between the two streaming ports to prevent stalls. The value is specified as an integer.
Important: An error will occur if a --connectivity.sc connection causes a compute unit to drive itself.

For example, to connect the AXI4-Stream port s_out of the compute unit mem_read_1 to AXI4-Stream port s_in of the compute unit increment_1, use the following:

--connectivity.sc mem_read_1.s_out:increment_1.s_in
Tip: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
sc=mem_read_1.s_out:increment_1.s_in

The inclusion of the optional <fifo_depth> value lets the v++ linker add a FIFO between the two kernels to help prevent stalls. The FIFO consumes block RAM resources on the device, but eliminates the need to modify the HLS kernel to contain FIFOs. The tool also instantiates a clock converter (CDC) or data width converter (DWC) IP if the two ends of the connection run on different clocks or have different bus widths.
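
For example, to insert a FIFO on the connection shown above, append the depth to the connection string. The depth of 32 below is illustrative; size it for your design:

[connectivity]
sc=mem_read_1.s_out:increment_1.s_in:32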

--connectivity.slr

--connectivity.slr <arg>

Use this option to assign a CU to a specific SLR on the device. The option must be repeated for each kernel or CU being assigned to an SLR.

Important: If you use --connectivity.slr to assign the kernel placement, then you must also use --connectivity.sp to assign memory access for the kernel.

Valid values include:

<cu_name>:<SLR_NUM>

Where:

  • <cu_name> is the name of the compute unit as specified in the --connectivity.nk option. Generally this is <kernel_name>_1 unless a different name was specified.
  • <SLR_NUM> is the SLR to assign the CU to, for example SLR0 or SLR1.

For example, to assign CU vadd_2 to SLR2, and CU fft_1 to SLR1, use the following:

v++ --link --connectivity.slr vadd_2:SLR2 --connectivity.slr fft_1:SLR1
Tip: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
slr=vadd_2:SLR2
slr=fft_1:SLR1

--connectivity.sp

--connectivity.sp <arg>

Use this option to specify the assignment of kernel arguments to system ports within the platform. A primary use case for this option is to connect kernel arguments to specific memory resources. A separate --connectivity.sp option is required to map each argument of a kernel to a particular memory resource. Any argument not explicitly mapped to a memory resource through the --connectivity.sp option is automatically connected to an available memory resource during the build process.

Valid values include:

<cu_name>.<kernel_argument_name>:<sptag>[min:max]

Where:

  • <cu_name> is the name of the compute unit as specified in the --connectivity.nk option. Generally this is <kernel_name>_1 unless a different name was specified.
  • <kernel_argument_name> is the name of the function argument for the kernel, or the compute unit interface port.
  • <sptag> is a system port tag from the target platform, such as a memory controller interface name. Valid <sptag> names include DDR, PLRAM, and HBM.
  • [min:max] enables the use of a range of memory, such as DDR[0:2]. A single index is also supported: DDR[2].
Tip: The supported <sptag> and range of memory resources for a target platform can be obtained using the platforminfo command. Refer to platforminfo Utility for more information.

The following example maps the input argument (A) for the specified CU of the VADD kernel to DDR[0:3], input argument (B) to HBM[0:31], and writes the output argument (C) to PLRAM[2]:

v++ --link --connectivity.sp vadd_1.A:DDR[0:3] --connectivity.sp vadd_1.B:HBM[0:31] \
--connectivity.sp vadd_1.C:PLRAM[2]
Tip: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
sp=vadd_1.A:DDR[0:3]
sp=vadd_1.B:HBM[0:31]
sp=vadd_1.C:PLRAM[2]
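
As noted in the Important statement under --connectivity.slr, a CU that is explicitly assigned to an SLR should also have its memory access explicitly assigned. The following sketch combines the two options for a hypothetical vadd_1 CU placed in SLR1 with all of its arguments mapped to DDR[1] (the CU name, argument names, and memory assignments are illustrative):

[connectivity]
slr=vadd_1:SLR1
sp=vadd_1.A:DDR[1]
sp=vadd_1.B:DDR[1]
sp=vadd_1.C:DDR[1]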

--connectivity.noc.connect

--connectivity.noc.connect <arg>

Where <arg> is in the form <compute_unit_name>.<kernel_interface_name>:<noc_interface>, and specifies a connection between a PL kernel interface and the Versal NoC. Valid <noc_interface> values are internal memory controllers or master interfaces on the Versal NoC cell.

The Vitis compiler estimates kernel bandwidth requirements based on NoC connectivity and M_AXI properties (data width × clock frequency) across the dynamic region, and automatically configures the NoC read and write bandwidth, scaling the settings as needed to avoid exceeding the available bandwidth.

For example:
[connectivity]
noc.read_bw=mm2s.M_AXI:2000.16
noc.write_bw=mm2s.M_AXI:2010.16
noc.connect=mm2s.M_AXI:M00_INI

--connectivity.noc.read_bw

--connectivity.noc.read_bw <arg>

Where <arg> is in the form <compute_unit_name>.<kernel_interface_name>:<Bandwidth>.<Avg_burst_length>, and specifies the bandwidth (in MB/s) and the average burst length of the connection.

This option specifies expected read traffic characteristics on M_AXI interfaces to let you override the automatic Versal NoC configuration.

--connectivity.noc.write_bw

--connectivity.noc.write_bw <arg>

Where <arg> is in the form <compute_unit_name>.<kernel_interface_name>:<Bandwidth>.<Avg_burst_length>, and specifies the bandwidth (in MB/s) and the average burst length of the connection.

This option specifies expected write traffic characteristics on M_AXI interfaces to let you override the automatic Versal NoC configuration.
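
Both bandwidth options can also be passed on the v++ command line. The following sketch assumes a CU interface mm2s.M_AXI that is expected to sustain roughly 2000 MB/s of read traffic and 2010 MB/s of write traffic with an average burst length of 16; the values are illustrative and should reflect your estimated traffic:

v++ --link --connectivity.noc.read_bw mm2s.M_AXI:2000.16 \
--connectivity.noc.write_bw mm2s.M_AXI:2010.16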

--connectivity.connect

--connectivity.connect <X:Y>

This option can be used to make connections through the Vivado IP integrator, but v++ does not perform any error checking on the specified connections. Use this to specify general connections between kernels and non-AXI elements of the target platform, such as connections to GT ports.

The X and Y connections must be specified as arguments compatible with either the IP integrator connect_bd_net or connect_bd_intf_net commands. The specific format of <X:Y> is:
src/hierarchy_name/cell_name/pin_name:dst/hierarchy_name/cell_name/pin_name

This option cannot be used for connections between AXI4-Stream interfaces, which require --connectivity.sc, or for M_AXI interfaces, which require --connectivity.sp as described above.

Tip: This option can be specified in a configuration file under the [connectivity] section head using the following format:
[connectivity]
connect=<X:Y>
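
For example, a minimal sketch connecting a streaming pin on a hypothetical Ethernet kernel instance to a GT pin exposed by the platform; both the kernel instance name and the pin paths are illustrative and must match the actual block design hierarchy of your platform:

[connectivity]
connect=eth_krnl_1/gt_serial_port:gt_quad_base_0/GT_Serial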