Performance and Resource Use

RFSoC DFE Fast Fourier Transform LogiCORE IP Product Guide (PG390)

For full details about performance and resource use, visit the Performance and Resource Use web page.


The latency of the RFSoC DFE FFT core depends on the point size and the maximum point size configured for each transform. Latency is the time taken to process and output a full block of data. It is measured beginning at the cycle on which the first data sample is presented to the core and ending at the cycle after the last data sample is output for the corresponding block. The minimum latency is achieved when s_axis_din_tvalid and m_axis_dout_tready (if present) are driven continuously High throughout the operation.

The time to output a block is included in the latency value because the output samples are generated in bit-reversed address order, meaning that the data typically cannot be used until the whole block has been produced. To obtain the first-in to first-out latency, subtract the point size from the first-in to last-out latency obtained from the table below.
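Because the output block arrives in bit-reversed address order, a consumer typically buffers the whole block and then reorders it. A minimal sketch of that reordering, assuming the standard FFT convention that output sample i corresponds to natural-order index bit_reverse(i) (the helper names below are illustrative, not part of the core's interface):

```python
def bit_reverse(index: int, bits: int) -> int:
    """Reverse the low `bits` bits of `index`."""
    result = 0
    for _ in range(bits):
        result = (result << 1) | (index & 1)
        index >>= 1
    return result

def to_natural_order(block):
    """Reorder a bit-reversed FFT output block into natural order.

    The block length must be a power of two (the transform point size).
    """
    n = len(block)
    bits = n.bit_length() - 1
    natural = [None] * n
    for i, sample in enumerate(block):
        natural[bit_reverse(i, bits)] = sample
    return natural
```

For an 8-point block the bit-reversed index sequence is 0, 4, 2, 6, 1, 5, 3, 7, which is why no natural-order sample stream can be produced until the final output sample of the block has been received.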

The following table shows the minimum latency in clock cycles for all configurations when output back-pressure support is disabled (m_axis_dout_tready not present). When output back-pressure support is enabled and m_axis_dout_tready is driven continuously High, the latency is increased by two clock cycles.

Table 1. DFE FFT Latency (clock cycles)

Maximum      Point Size
Point Size   256    512    1024   2048   4096
256          535
512          795    1051
1024         1308   1564   2076
2048         2336   2592   3104   4128
4096         4385   4641   5153   6177   8225
Note: The core supports 100% throughput on the input and output interfaces. Blocks can be provided back to back without interruption, even when changing the point size or transform direction.
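The table and the adjustments described above can be captured in a small lookup. This is a sketch for budgeting purposes only; the function name dfe_fft_latency is a hypothetical helper, not part of the core's deliverables:

```python
# Minimum first-in to last-out latency in clock cycles, keyed by
# (maximum point size, point size), taken from Table 1.
LATENCY = {
    (256, 256): 535,
    (512, 256): 795,   (512, 512): 1051,
    (1024, 256): 1308, (1024, 512): 1564, (1024, 1024): 2076,
    (2048, 256): 2336, (2048, 512): 2592, (2048, 1024): 3104,
    (2048, 2048): 4128,
    (4096, 256): 4385, (4096, 512): 4641, (4096, 1024): 5153,
    (4096, 2048): 6177, (4096, 4096): 8225,
}

def dfe_fft_latency(max_point_size, point_size,
                    back_pressure=False, first_out=False):
    """Minimum latency for one configuration (hypothetical helper).

    back_pressure adds the two extra cycles incurred when output
    back-pressure support is enabled (m_axis_dout_tready present and
    held High). first_out converts the first-in to last-out figure
    into first-in to first-out latency by subtracting the point size.
    """
    cycles = LATENCY[(max_point_size, point_size)]
    if back_pressure:
        cycles += 2
    if first_out:
        cycles -= point_size
    return cycles
```

For example, a 1024-point transform on a core configured for a 4096-point maximum has a first-in to last-out latency of 5153 cycles, or 5155 cycles with back-pressure support enabled.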