Internals of Hash-Join-v4 and Hash-Build-Probe-v4 - 2023.2 English

Vitis Libraries

Release Date
2023.2 English

This document describes the structure and execution of Hash-Join-V4 and Hash-Build-Probe-V4, implemented as the hashJoinV4 and hashBuildProbeV4 functions.

Hash Join MPU Structure

The Hash-Join-v4 and Hash-Build-Probe-v4 are general primitives that accelerate the hash-join algorithm by exploiting the high memory bandwidth of Xilinx FPGAs. Compared with hashJoinV3 and hashBuildProbeV3, the v4 primitives add a Bloom filter to avoid redundant accesses to HBM, which would otherwise reduce the efficiency of the hash join. Hash-Join-v4 performs the hash join in single-call mode, meaning the small table and the big table are scanned one after another. Hash-Build-Probe-v4 provides separate build and probe operations for the hash join. In Hash-Build-Probe, incremental build is supported, so the workflow can be scheduled as build0 -> build1 -> probe0 -> build2 -> probe2…
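The role of the Bloom filter can be sketched in plain Python (this is an illustration of the idea, not the Vitis HLS implementation; the bit-vector size, hash functions, and names below are assumptions):

```python
# Illustrative sketch: a Bloom filter built from the small table is checked
# before each probe, so keys that cannot possibly match skip the costly
# HBM access entirely. All names and sizes here are hypothetical.
BITS = 1 << 16  # assumed bit-vector size

def bloom_positions(key, bits=BITS):
    # Two simple hash positions; the hardware uses its own hash functions.
    h1 = hash(key) % bits
    h2 = hash((key, 0x9E3779B9)) % bits
    return h1, h2

def bloom_build(small_keys, bits=BITS):
    bv = [False] * bits
    for k in small_keys:
        for p in bloom_positions(k, bits):
            bv[p] = True
    return bv

def bloom_may_match(bv, key):
    # False means "definitely absent": the HBM probe can be skipped.
    # True means "possibly present": proceed with the real probe.
    return all(bv[p] for p in bloom_positions(key, len(bv)))

small = [3, 17, 42]
bv = bloom_build(small)
assert all(bloom_may_match(bv, k) for k in small)  # no false negatives
```

A Bloom filter can produce false positives (a wasted probe) but never false negatives, so filtering only removes redundant HBM accesses and never drops a join match.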

Workload is distributed to Processing Units (PUs) based on the most significant bits (MSBs) of the hash value of the join key, so that each PU can work independently. The current design uses at most 8 PUs, served by 4 input channels, each of which can pass one key-payload pair per cycle. Overflow processing is also provided in this design.
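The dispatch scheme above can be sketched as follows (a minimal Python illustration; the hash width and the exact hash function are assumptions, only the "top bits select the PU" idea comes from the text):

```python
# Sketch: the top 3 bits of the join-key hash select one of 8 PUs,
# so each PU owns a disjoint slice of the hash space and can work
# independently of the others.
HASH_BITS = 16  # assumed hash width for this toy example
NUM_PU = 8      # 8 PUs, selected by the top 3 bits

def hash_key(key):
    return hash(key) & ((1 << HASH_BITS) - 1)

def pu_of(key):
    # MSBs of the hash value pick the PU.
    return hash_key(key) >> (HASH_BITS - 3)

# Each (key, payload) pair is routed to the bucket of its PU.
buckets = [[] for _ in range(NUM_PU)]
for key, payload in [(1, "a"), (2, "b"), (9, "c")]:
    buckets[pu_of(key)].append((key, payload))

assert all(0 <= pu_of(k) < NUM_PU for k in range(100))
```

Because the PU index depends only on the key's hash, matching rows of the small and big tables always land on the same PU.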

The number of PUs is set to 8, as each PU requires two dedicated banks to temporarily store rows of the small table (one for base rows and another for overflow rows). Given DDR/HBM memory access latency, 4 channels can feed these PUs with enough data. Each PU performs the hash join in 3 phases.

  1. Build: with the small table as input, the number of keys falling into each hash value is counted. The hash counters are stored in a bit vector in URAM. Every hash value has a fixed depth of storage in HBM for rows of the small table; if the counter of a hash value exceeds this fixed depth, overflow has occurred. Another bit vector counts the overflow rows. In the v4 primitive, the overflow hash counters are stored in HBM to save URAM for the Bloom filter bit vector. The overflow rows of the small table are also stored in another area of HBM.
  2. Merge: the overflow hash counters are accumulated to obtain the offset of each hash value. The overflow rows are then read from one HBM and written out to another, with the output address of each overflow row determined by the offset of its hash value. Merge thus places the overflow rows in their correct positions, ready for Probe. To sustain high throughput in this operation, two dedicated HBM ports are required for each PU, providing read and write access at the same time.
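The Build and Merge bookkeeping described above can be sketched in Python (a simplified software model, not the HLS code; the fixed depth, hash-space size, and function names are assumptions):

```python
# Sketch of Build/Merge: per-hash counters detect overflow when a bucket
# exceeds its fixed depth; Merge prefix-sums the overflow counts to get
# each hash value's offset in the compacted overflow region.
DEPTH = 2      # assumed fixed per-hash storage depth in the base region
NUM_HASH = 8   # toy hash-space size

def build(rows, hv):
    # hv maps a row to its hash value. Rows beyond DEPTH for a given
    # hash value go to an unordered overflow area (one HBM region).
    base, overflow = [], []
    counts, ov_count = [0] * NUM_HASH, [0] * NUM_HASH
    for row in rows:
        h = hv(row)
        counts[h] += 1
        if counts[h] <= DEPTH:
            base.append((h, row))
        else:
            ov_count[h] += 1
            overflow.append((h, row))
    return base, overflow, ov_count

def merge(overflow, ov_count):
    # Exclusive prefix sum of overflow counts -> offset per hash value.
    offsets, total = [0] * NUM_HASH, 0
    for h in range(NUM_HASH):
        offsets[h] = total
        total += ov_count[h]
    # Rewrite each overflow row to the position given by its hash's offset,
    # modeling the read-from-one-HBM / write-to-another pass.
    merged, cursor = [None] * total, list(offsets)
    for h, row in overflow:
        merged[cursor[h]] = row
        cursor[h] += 1
    return offsets, merged

# Usage: four rows all hashing to bucket 0; two fit in the base region,
# two overflow and are compacted contiguously by Merge.
base, overflow, ov_count = build(["a", "b", "c", "d"], hv=lambda r: 0)
offsets, merged = merge(overflow, ov_count)
assert merged == ["c", "d"]
```

After Merge, all overflow rows of a given hash value sit contiguously at a known offset, so the Probe phase can fetch them with sequential reads.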