XRNN is a customized FPGA-based accelerator for RNN inference. It supports several RNN variants, including vanilla RNN, GRU, LSTM, and bi-directional LSTM. The quantizer performs fixed-point int16 quantization of model parameters and activations, and the compiler generates instructions based on the target network structure and the XRNN hardware architecture.
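The fixed-point int16 quantization mentioned above can be sketched as follows. This is a minimal illustration under the common symmetric scheme with a power-of-two scale; the actual XRNN quantizer's scale selection and rounding mode are not specified here, so the function names and the choice of fractional bits are assumptions for illustration only.

```python
def quantize_int16(values, frac_bits):
    """Quantize floats to int16 with `frac_bits` fractional bits,
    saturating at the int16 range (assumed behavior, for illustration)."""
    scale = 1 << frac_bits
    lo, hi = -32768, 32767
    return [max(lo, min(hi, round(v * scale))) for v in values]

def dequantize_int16(q_values, frac_bits):
    """Recover approximate float values from int16 codes."""
    scale = 1 << frac_bits
    return [q / scale for q in q_values]

# Example: 12 fractional bits gives a scale of 4096
weights = [0.5, -1.25, 0.0078125]
q = quantize_int16(weights, frac_bits=12)
print(q)                              # [2048, -5120, 32]
print(dequantize_int16(q, 12))        # [0.5, -1.25, 0.0078125]
```

Values outside the representable range (about ±8 at 12 fractional bits) saturate to the int16 limits, which is why the number of fractional bits must be chosen per tensor to balance range against precision.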
Note: Currently, the RNN toolchain supports only LSTM and OpenIE models, and the generated instructions can only be deployed on Alveo™ U25 and U50 cards.