Performance
QDMA performance and detailed analysis are available in AMD answer record AR 71453.

AMD provides two example designs for you to experiment with. The standard example design is intended for functional testing only. To generate an example design for performance analysis, use the following Tcl command:

```tcl
set_property CONFIG.performance_exdes {true} [get_ips qdma_0]
```
The following QDMA register settings are recommended by AMD for better performance. Performance numbers can vary with the system and OS used.
| Address | Name | Register Value |
|---|---|---|
| 0xB08 | PFCH CFG | 0x100_0100 |
| 0xA80 | PFCH_CFG_1 | 0x3C_003C |
| 0xA84 | PFCH_CFG_2 | 0x8040_03C8 |
| 0x147C | PFCH_CFG_3 | 0x8000 |
| 0x1484 | PFCH_CFG_4 | 0x80_0320 |
| 0x1400 | CRDT_COAL_CFG_1 | 0x4010 |
| 0x1404 | CRDT_COAL_CFG_2 | 0x38_0060 |
| 0x15C | GLBL_RRQ_PCIE_THROT | 0x604_5000 |
| 0x160 | GLBL_RRQ_AXIMM_THROT | 0 |
| 0x158 | GLBL_RRQ_BRG_THROT | 0x8604_5000 |
| 0xE24 | H2C_REQ_THROT_PCIE | 0x8604_6000 |
| 0xE2C | H2C_REQ_THROT_AXIMM | 0x8204_4000 |
| 0x12EC | H2C_MM_DATA_THROT | 0x1_5000 |
| 0x250 | QDMA_GLBL_DSC_CFG | 0x50_2015 |
| 0x4C | CONFIG_BLOCK_MISC_CONTROL | 0x1_0009 |
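As an illustration only, the recommended values in the table could be applied from a driver through the device's memory-mapped register BAR. This is a hypothetical sketch, not part of any AMD driver API: `qdma_apply_perf_regs` and the raw volatile-pointer BAR access are assumptions, and a real driver would use its platform's MMIO accessors.

```c
#include <stddef.h>
#include <stdint.h>

/* Recommended QDMA performance register values from the table above.
 * Offsets are byte offsets into the QDMA register space. */
struct reg_setting { uint32_t offset; uint32_t value; };

static const struct reg_setting perf_regs[] = {
    { 0x0B08, 0x01000100u },  /* PFCH CFG                  */
    { 0x0A80, 0x003C003Cu },  /* PFCH_CFG_1                */
    { 0x0A84, 0x804003C8u },  /* PFCH_CFG_2                */
    { 0x147C, 0x00008000u },  /* PFCH_CFG_3                */
    { 0x1484, 0x00800320u },  /* PFCH_CFG_4                */
    { 0x1400, 0x00004010u },  /* CRDT_COAL_CFG_1           */
    { 0x1404, 0x00380060u },  /* CRDT_COAL_CFG_2           */
    { 0x015C, 0x06045000u },  /* GLBL_RRQ_PCIE_THROT       */
    { 0x0160, 0x00000000u },  /* GLBL_RRQ_AXIMM_THROT      */
    { 0x0158, 0x86045000u },  /* GLBL_RRQ_BRG_THROT        */
    { 0x0E24, 0x86046000u },  /* H2C_REQ_THROT_PCIE        */
    { 0x0E2C, 0x82044000u },  /* H2C_REQ_THROT_AXIMM       */
    { 0x12EC, 0x00015000u },  /* H2C_MM_DATA_THROT         */
    { 0x0250, 0x00502015u },  /* QDMA_GLBL_DSC_CFG         */
    { 0x004C, 0x00010009u },  /* CONFIG_BLOCK_MISC_CONTROL */
};

/* Write each recommended value to its offset in the (hypothetical)
 * memory-mapped QDMA register BAR. */
static void qdma_apply_perf_regs(volatile uint8_t *qdma_bar)
{
    for (size_t i = 0; i < sizeof(perf_regs) / sizeof(perf_regs[0]); i++)
        *(volatile uint32_t *)(qdma_bar + perf_regs[i].offset) =
            perf_regs[i].value;
}
```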
- QDMA_C2H_INT_TIMER_TICK (0xB0C) set to 25, corresponding to 100 ns (1 tick = 4 ns for a 250 MHz user clock).
- C2H trigger mode set to user timer, with the counter set to 64 and the timer set to match the round-trip latency. The global timer register should have a value of 30 for 3 μs.
- TX/RX API burst size = 64, ring depth = 2048. The driver should update the TX/RX PIDX in batches of 64.
- PCIe MPS = 256 bytes, MRRS >= 512 bytes, Extended Tag enabled, Relaxed Ordering enabled.
- The driver should update the completion CIDX in batches of 64, before updating the C2H PIDX, to reduce the number of MMIO writes.
- The driver should update the H2C PIDX in batches of 64, and also update it for the last descriptor of the scatter-gather list.
- C2H context:
  - bypass = 0 (internal mode)
  - frcd_en = 1
  - qen = 1
  - wbk_en = 1
  - irq_en = irq_arm = int_aggr = 0
- C2H prefetch context:
  - pfch = 1
  - bypass = 0
  - valid = 1
- C2H CMPT context:
  - en_stat_desc = 1
  - en_int = 0 (poll mode)
  - int_aggr = 0 (poll mode)
  - trig_mode = 4
  - counter_idx = index corresponding to 64
  - timer_idx = index corresponding to 3 μs
  - valid = 1
- H2C context:
  - bypass = 0 (internal mode)
  - frcd_en = 0
  - fetch_max = 0
  - qen = 1
  - wbk_en = 1
  - wbi_chk = 1
  - wbi_intvl_en = 1
  - irq_en = 0 (poll mode)
  - irq_arm = 0 (poll mode)
  - int_aggr = 0 (poll mode)
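The batched doorbell updates recommended above (CIDX/PIDX written once per 64 entries rather than per entry) can be sketched as follows. This is a hypothetical illustration: `struct cmpt_queue` and `cmpt_consume` are invented names, and the `hw_cidx`/`mmio_writes` fields stand in for the real MMIO doorbell write.

```c
#include <stdint.h>

#define CIDX_BATCH 64u  /* update the completion CIDX once per 64 entries */

/* Hypothetical per-queue completion state. hw_cidx models the value
 * last written to the hardware CIDX doorbell; mmio_writes counts
 * doorbell writes so the amortization is visible. */
struct cmpt_queue {
    uint16_t cidx;         /* software consumer index               */
    uint16_t pending;      /* entries consumed since the last MMIO  */
    uint16_t hw_cidx;      /* last value written to the doorbell    */
    uint32_t mmio_writes;  /* number of doorbell writes issued      */
};

/* Consume n completion entries; issue one doorbell write per batch of
 * CIDX_BATCH entries instead of one write per entry. */
static void cmpt_consume(struct cmpt_queue *q, uint16_t n, uint16_t ring_depth)
{
    q->cidx = (uint16_t)((q->cidx + n) % ring_depth);
    q->pending = (uint16_t)(q->pending + n);
    if (q->pending >= CIDX_BATCH) {
        q->hw_cidx = q->cidx;  /* single MMIO write for the whole batch */
        q->mmio_writes++;
        q->pending = 0;
    }
}
```

With a ring depth of 2048, consuming 256 entries one at a time produces only 4 doorbell writes instead of 256.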
For optimal QDMA streaming performance, packet buffers of the descriptor ring should be aligned to at least 256 bytes.
Recommended:
AMD recommends limiting the total outstanding descriptor fetch on PCIe to less than 8 KB. For example, for a 16 B descriptor, limit the outstanding credits across all queues to 512.
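The credit limit in the example follows directly from the 8 KB fetch budget. A small sketch of the arithmetic (the helper name is an assumption, not an AMD API):

```c
#include <stdint.h>

/* Illustrative arithmetic for the guideline above: keep the total
 * outstanding descriptor fetch data on PCIe under 8 KB, so the credit
 * ceiling across all queues is the budget divided by descriptor size. */
static uint32_t max_outstanding_credits(uint32_t desc_size_bytes)
{
    const uint32_t pcie_fetch_budget = 8u * 1024u;  /* 8 KB guideline */
    return pcie_fetch_budget / desc_size_bytes;
}
```

For 16 B descriptors this yields the 512-credit limit quoted in the text; a 32 B descriptor format would halve it to 256.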
Performance in Descriptor Bypass Mode
When the design is configured in descriptor bypass mode, all of the above settings apply. The following information provides recommendations to improve performance in bypass mode.
- When the h2c_byp_in_st_sdi port is set, the QDMA IP generates a status writeback for every packet. AMD recommends asserting this port only once every 32 or 64 packets, and, if no more descriptors are left, asserting h2c_byp_in_st_sdi on the last descriptor. This requirement applies on a per-queue basis, and applies to AXI4 (H2C and C2H) bypass transfers and AXI4-Stream H2C transfers.
- For AXI4-Stream C2H simple bypass mode, the dsc_crdt_in_fence port should be set to 1 for performance reasons. This recommendation assumes the user design has already coalesced credits for each queue and sent them to the IP. In internal mode, set the fence bit in the QDMA_C2H_PFCH_CFG_2 (0xA84) register.
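The sdi cadence above can be sketched as a small per-queue decision helper. This is purely illustrative: `h2c_sdi_assert` and its arguments are assumptions, standing in for whatever per-queue packet accounting the user logic keeps when driving the bypass-in interface.

```c
#include <stdbool.h>
#include <stdint.h>

#define SDI_INTERVAL 32u  /* assert sdi once per 32 packets (32 or 64 recommended) */

/* Decide whether to assert h2c_byp_in_st_sdi on the current packet:
 * once every SDI_INTERVAL packets, and always on the last descriptor
 * when no more descriptors are queued for this queue. */
static bool h2c_sdi_assert(uint32_t pkts_since_last_sdi, bool no_more_descriptors)
{
    return no_more_descriptors || (pkts_since_last_sdi + 1u >= SDI_INTERVAL);
}
```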
Performance Optimization Based on Available Cache/Buffer Size
| Name | Entry/Depth | Description |
|---|---|---|
| C2H descriptor cache depth | 1024 | Total number of outstanding C2H stream descriptor fetches for cache bypass and internal modes. This cache depth is not relevant in simple bypass mode, where the user logic can maintain a deeper descriptor store. |
| Prefetch cache depth | 64 | C2H prefetch tags available. If more than 64 queues are active with packets < 512 B, performance can degrade depending on the data pattern. If you see performance degradation, you can implement simple bypass mode, where you maintain the descriptor flow yourself. |
| C2H payload FIFO depth | 512 | Units of 64 B. The amount of C2H data the C2H engine can buffer. This buffer can sustain a host read latency of up to 2 μs (512 × 4 ns). If the latency is more than 2 μs, performance can degrade. |
| Common reorder buffer depth | 512 | Units of 64 B for the soft IP. Shared buffer space that can be flexibly allocated between the read engines. The throttle CSRs can be used to limit the amount of outstanding read data each engine consumes in this common buffer space. |
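The payload FIFO figures in the table reduce to simple arithmetic: 512 entries of 64 B each, drained at one entry per 4 ns user-clock cycle. A back-of-the-envelope sketch (helper names are assumptions):

```c
#include <stdint.h>

/* Size of the C2H payload FIFO in bytes: depth entries of 64 B each. */
static uint32_t c2h_payload_fifo_bytes(uint32_t depth)
{
    return depth * 64u;
}

/* Host read latency the FIFO can absorb, assuming one 64 B entry is
 * drained per 4 ns cycle (250 MHz user clock). */
static uint32_t c2h_latency_budget_ns(uint32_t depth)
{
    return depth * 4u;
}
```

At the documented depth of 512 this gives a 32 KB buffer and the ~2 μs (2048 ns) latency budget cited in the table.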
Resource Utilization
For QDMA resource utilization, see the Resource Use web page.