Internals of Hash-Join-v4 and Hash-Build-Probe-v4 - 2024.1 English

Vitis Libraries
Release Date: 2024-08-06
Version: 2024.1 English

This document describes the structure and execution of Hash-Join-v4 and Hash-Build-Probe-v4, implemented as the hashJoinV4 and hashBuildProbeV4 functions.

Hash Join MPU Structure

Hash-Join-v4 and Hash-Build-Probe-v4 are general primitives that accelerate the Hash-Join algorithm by exploiting the high memory bandwidth of AMD FPGAs. Compared with hashJoinV3 and hashBuildProbeV3, the v4 primitives add a Bloom filter that screens out keys with no possible match, avoiding the redundant HBM accesses that would otherwise reduce the efficiency of the Hash-Join algorithm. Hash-Join-v4 performs the join in single-call mode, meaning the small table and the big table are scanned one after the other. Hash-Build-Probe-v4 instead separates the build and probe phases. In Hash-Build-Probe, incremental build is supported, so the workflow can be scheduled as build0 -> build1 -> probe0 -> build2 -> probe2…
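To illustrate the role of the Bloom filter, the following is a minimal software sketch, not the library's actual implementation; the two-hash construction, the bit-vector size, and all names are assumptions for illustration. It shows how a bit vector built from the small table lets the probe side skip HBM lookups for keys that cannot possibly match:

```cpp
// Conceptual sketch only (assumed names and construction, not library code):
// a Bloom-filter bit vector that screens probe keys before any HBM access.
#include <cstdint>
#include <vector>

struct BloomFilter {
    std::vector<uint64_t> bits; // models the bit vector kept in on-chip URAM

    explicit BloomFilter(size_t nbits) : bits((nbits + 63) / 64, 0) {}

    // Two cheap hash positions derived from one hash value (a common
    // construction; the real primitive may differ).
    void insert(uint64_t h) {
        set(h % (bits.size() * 64));
        set((h >> 17) % (bits.size() * 64));
    }
    bool maybeContains(uint64_t h) const {
        return test(h % (bits.size() * 64)) &&
               test((h >> 17) % (bits.size() * 64));
    }

private:
    void set(size_t i) { bits[i / 64] |= 1ULL << (i % 64); }
    bool test(size_t i) const { return (bits[i / 64] >> (i % 64)) & 1; }
};
```

Because a Bloom filter never reports a false negative, skipping HBM on a filter miss cannot drop a join result; the only cost is the occasional false positive that still reads HBM.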

The workload is distributed to the processing units (PUs) based on the most significant bits (MSBs) of the hash value of the join key, so that each PU can work independently. The current design uses at most eight PUs, served by four input channels, each of which can pass one key-payload pair per cycle. Overflow processing is also provided in this design.
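As a sketch of this dispatch, assuming a 32-bit hash and eight PUs (both widths are illustrative, not the library's fixed configuration):

```cpp
// Minimal sketch of MSB-based PU dispatch; widths are assumptions.
#include <cstdint>

constexpr int PU_NUM_LOG = 3;  // 2^3 = 8 PUs
constexpr int HASH_BITS  = 32; // assumed width of the join-key hash

inline uint32_t puIndex(uint32_t hash) {
    // The top PU_NUM_LOG bits select the PU; the remaining LSBs address
    // the hash table inside that PU, so the PUs never share state.
    return hash >> (HASH_BITS - PU_NUM_LOG);
}
```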

The number of PUs is set to eight, as each PU requires two dedicated banks to temporarily store rows of the small table (one for base rows and another for overflow rows). Because of the DDR/HBM memory access latency, four channels can supply enough data to keep these PUs busy. Each PU performs the Hash-Join in three phases.

  1. Build: With the small table as input, the number of keys falling into each hash value is counted. The values of the hash counters are stored in a bit vector in URAM. Every hash value has a fixed depth of storage in HBM to hold the rows of the small table; if the counter of a hash value exceeds this fixed depth, overflow has occurred. Another bit vector is used to count overflow rows. In the v4 primitive, the overflow hash counter is stored in HBM to save URAM storage for the Bloom-filter bit vector. The overflow rows of the small table are likewise stored in a separate area of HBM. (A software sketch of this counting logic follows the list.)
Figure: Build
  2. Merge: Accumulate the overflow hash counters to obtain the offset of each hash value. The overflow rows are then read from one HBM and written out to another; the output address of each overflow row is determined by the offset of its hash value. The Merge operation thus puts the overflow rows into their correct positions, ready for Probe. To provide high throughput, two dedicated HBM ports are required for each PU, allowing read and write access at the same time. (A software sketch of the offset computation also follows the list.)
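The counting logic of the Build phase can be modeled in software as follows. FIXED_DEPTH, the container names, and the insert interface are assumptions for illustration, not the primitive's actual parameters:

```cpp
// Build-phase sketch (software model under assumed names, not HLS code):
// count small-table keys per hash value; rows beyond the fixed per-hash
// depth in HBM are overflow and counted separately.
#include <cstdint>
#include <vector>

constexpr uint32_t FIXED_DEPTH = 8; // assumed HBM rows reserved per hash value

struct BuildState {
    std::vector<uint32_t> baseCnt;     // per-hash counters, bit vector in URAM
    std::vector<uint32_t> overflowCnt; // overflow counters, kept in HBM in v4

    explicit BuildState(size_t nHash) : baseCnt(nHash, 0), overflowCnt(nHash, 0) {}

    // Returns true if the row goes to the base area of HBM, false if it
    // goes to the separate overflow area (to be regrouped by Merge).
    bool insert(uint32_t hash) {
        if (baseCnt[hash] < FIXED_DEPTH) {
            ++baseCnt[hash];
            return true; // base address is hash * FIXED_DEPTH + counter
        }
        ++overflowCnt[hash];
        return false;    // appended to the overflow staging area
    }
};
```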
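Similarly, the Merge phase amounts to an exclusive prefix sum over the overflow counters followed by a scatter from one HBM to the other. The row layout and names below are assumed for illustration:

```cpp
// Merge-phase sketch (software model, assumed names): an exclusive prefix
// sum over the overflow counters yields each hash value's offset in the
// merged overflow region; rows are then copied from the staging HBM to
// that offset in the second HBM.
#include <cstdint>
#include <utility>
#include <vector>

using Row = std::pair<uint32_t, uint64_t>; // (hash, packed key+payload), assumed

std::vector<uint32_t> exclusivePrefixSum(const std::vector<uint32_t>& cnt) {
    std::vector<uint32_t> off(cnt.size(), 0);
    uint32_t acc = 0;
    for (size_t i = 0; i < cnt.size(); ++i) { off[i] = acc; acc += cnt[i]; }
    return off;
}

// srcHbm models the unsorted overflow area written during Build; dstHbm
// (pre-sized to hold all overflow rows) models the second HBM port that
// receives rows grouped by hash value. Offsets advance as rows land.
void merge(const std::vector<Row>& srcHbm, std::vector<Row>& dstHbm,
           std::vector<uint32_t> offset) {
    for (const Row& r : srcHbm) {
        dstHbm[offset[r.first]++] = r; // read one port, write the other
    }
}
```

Reading the staging area through one HBM port while writing the grouped rows through the other is what lets Merge sustain one row per cycle per PU.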