Currently, vai_q_pytorch is available only as a GPU version. vai_q_pytorch can be obtained in two ways:
Docker container
Vitis AI provides a docker container for the quantization tools, including vai_q_pytorch. After starting a GPU container, activate the conda environment vitis-ai-pytorch. All requirements are pre-installed there, so the vai_q_pytorch APIs can be called directly. Because vai_q_pytorch currently has only a GPU version, the vitis-ai-pytorch environment exists only in the GPU container.
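For example, a GPU container can be launched with the run script from the Vitis AI repository before activating the environment; the script name and image tag below follow the public Vitis AI releases and may need to be adjusted for your setup.
./docker_run.sh xilinx/vitis-ai-gpu:latest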
conda activate vitis-ai-pytorch
Install from Source Code
vai_q_pytorch is designed to work as a PyTorch plugin and is itself a Python package. It is open source in Vitis_AI_Quantizer. It is recommended to install vai_q_pytorch in a conda environment, then follow the steps below.
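For example, a dedicated environment can be created and activated first; the environment name and Python version here are only illustrative.
conda create -n vai_q_pytorch python=3.6
conda activate vai_q_pytorch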
- Set the CUDA_HOME environment variable in .bashrc. If the CUDA library is installed in /usr/local/cuda, add the following line to .bashrc. If CUDA is installed in another directory, change the line accordingly.
export CUDA_HOME=/usr/local/cuda
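To confirm the variable is picked up, reload the shell configuration and print it, for example:
source ~/.bashrc
echo $CUDA_HOME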
- Install PyTorch (1.1-1.4) and torchvision
The following takes PyTorch 1.1 and torchvision 0.3.0 as an example; detailed instructions for other versions are available on the PyTorch website.
pip install torch==1.1.0 torchvision==0.3.0
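To confirm that the expected version is installed and that CUDA is visible to PyTorch, a quick check can be run (this assumes the NVIDIA driver is already set up):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"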
- Install other dependencies
pip install -r requirements.txt
- Install vai_q_pytorch
cd ./pytorch_binding
python setup.py install  # for user
python setup.py develop  # for developer
- Verify installation
python -c "import pytorch_nndct"
To create a deployed model for the VAI compiler, the XIR library needs to be installed. The XIR library is not publicly available right now, so use the docker environment to generate the deployed model if necessary.