Quantizing RNN Models

Vitis AI RNN User Guide (UG1563)

Document ID: UG1563
Release Date: 2021-12-03
Version: 1.4.1 English

XRNN is a customized accelerator built on an FPGA to accelerate recurrent neural networks. It supports different RNN network types, including vanilla RNN, GRU, LSTM, and bidirectional LSTM. The quantizer performs fixed-point int16 quantization of the model parameters and activations. The compiler generates instructions based on the target network structure and the XRNN hardware architecture.
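The fixed-point int16 scheme can be illustrated with a minimal sketch. This is not the Vitis AI RNN quantizer API; the function names below are hypothetical, and the power-of-two scale selection is an assumption used only to show how floating-point parameters and activations map onto 16-bit integers.

```python
# Minimal sketch of symmetric int16 fixed-point quantization.
# NOT the Vitis AI RNN quantizer API; names and the power-of-two
# scale choice are illustrative assumptions.
import numpy as np

def quantize_int16(x: np.ndarray):
    """Quantize a float tensor to int16 with a power-of-two fixed-point scale."""
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros_like(x, dtype=np.int16), 0
    # Choose the number of fraction bits so the largest magnitude fits in int16.
    frac_bits = int(np.floor(np.log2(32767.0 / max_abs)))
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(x * scale), -32768, 32767).astype(np.int16)
    return q, frac_bits

def dequantize_int16(q: np.ndarray, frac_bits: int) -> np.ndarray:
    """Recover an approximate float tensor from the int16 values."""
    return q.astype(np.float32) / (2.0 ** frac_bits)

# Example: quantize an LSTM-sized weight matrix and check the error introduced.
weights = np.random.randn(128, 512).astype(np.float32)
q, fb = quantize_int16(weights)
max_err = np.max(np.abs(dequantize_int16(q, fb) - weights))
print(f"fraction bits: {fb}, max abs error: {max_err:.6f}")
```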

Note: Currently, the RNN toolchain supports only LSTM and OpenIE models, and the generated instructions can be deployed only on Alveo™ U25 and U50 cards.