Runtime Options

Vitis AI Library User Guide (UG1354)
Version 3.5 English, Release Date 2023-06-29

Vitis AI ONNX Runtime integrates a compiler that compiles the model graph and weights into a micro-coded executable, which is deployed on the target accelerator.

The model is compiled when the ONNX Runtime session is started, and compilation must complete before the first inference pass. Compilation time varies but can take several minutes. Once the model has been compiled, the executable is cached, and subsequent inference runs can optionally reuse the cached executable (see below).

Several runtime variables can be set to configure the inference session, as listed in the table below. The config_file variable is required and must point to the location of the configuration file. The cacheDir and cacheKey variables are optional.

Table 1. Runtime Variables

Runtime Variable | Default Value | Details
config_file | "" | Required. Path to the configuration file. The configuration file vaip_config.json is included in vitis_ai_2023.1-r3.5.0.tar.gz.
cacheDir | /tmp/{user}/vaip/.cache/ | Optional. Cache directory.
cacheKey | {onnx_model_md5} | Optional. Cache key used to distinguish between different models.
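
The following is a minimal sketch of how these variables can be passed as provider options when creating an ONNX Runtime inference session with the Vitis AI Execution Provider. The model file name, cache settings, and configuration file path shown here are placeholders, not values from this guide:

import onnxruntime as ort

# Minimal sketch: create an inference session that targets the Vitis AI
# Execution Provider. "model.onnx" and the option values are placeholders;
# adjust them to your model and to the location of vaip_config.json.
session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{
        "config_file": "/path/to/vaip_config.json",  # required
        "cacheDir": "/tmp/myuser/vaip/.cache/",       # optional; default as in Table 1
        "cacheKey": "resnet50_int8",                  # optional; defaults to the model MD5
    }],
)

# The session is then queried and run like any other ONNX Runtime session.
input_name = session.get_inputs()[0].name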

The final cache directory is {cacheDir}/{cacheKey}. In addition, environment variables can be set to customize the Vitis AI Execution Provider.

Table 2. Environment Variables

Environment Variable | Default Value | Details
XLNX_ENABLE_CACHE | 1 | Enables or disables use of the cache. If set to 0, the cached executable is ignored and the model is recompiled.
XLNX_CACHE_DIR | /tmp/$USER/vaip/.cache/{onnx_model_md5} | Optional. Overrides the default cache path.
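
As a sketch of how these variables are used, the cache can be disabled (forcing recompilation) or redirected from within the application before the session is created. The variable names are those in Table 2; the model, path, and key values are placeholders:

import os
import onnxruntime as ort

# Ignore any cached executable so the model is recompiled (see Table 2).
os.environ["XLNX_ENABLE_CACHE"] = "0"
# Optionally point the cache at a custom location (placeholder path).
os.environ["XLNX_CACHE_DIR"] = "/tmp/myuser/vaip/.cache/resnet50_int8"

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "/path/to/vaip_config.json"}],
)

The same variables can instead be exported in the shell before launching the application.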