To make a PyTorch model quantizable, modify the model definition so that it meets the following conditions. An example is available in the Vitis AI GitHub repository.
- The model to be quantized should contain only the forward method. All other functions, which usually perform pre-processing or post-processing, should be moved outside the model or into a derived class. If they are not moved, the API removes them from the quantized module, which causes unexpected behavior when calling the quantized module's forward pass. A minimal sketch is shown after this list.
- The float model should pass the jit trace test: set the float module to evaluation mode, then use the torch.jit.trace function to test the float model (see the example after this list).
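As an illustration of the first condition, here is a minimal sketch of a quantizable module with pre-/post-processing kept outside the class. The names `MyModel` and `postprocess` are hypothetical and not part of the Vitis AI API:

```python
import torch
from torch import nn

class MyModel(nn.Module):
    """A quantizable module: only forward() is defined on the class."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(x))

# Post-processing lives outside the module so the quantizer
# does not strip it from the quantized graph.
def postprocess(logits):
    return torch.softmax(logits, dim=1)
```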
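And a sketch of the jit trace test for the second condition, assuming the `MyModel` class above and an input shape of 1x3x224x224:

```python
model = MyModel()
model.eval()  # tracing requires evaluation mode

dummy_input = torch.randn(1, 3, 224, 224)
# torch.jit.trace raises an error if the model is not traceable,
# so a successful call here means the model passes the test.
traced = torch.jit.trace(model, dummy_input)
```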