Using Unified APIs - 1.1 English

Vitis AI User Guide (UG1414)

Vitis AI provides a C++ DpuRunner class with the following interfaces:
    std::pair<uint32_t, int> execute_async(
        const std::vector<TensorBuffer*>& input,
        const std::vector<TensorBuffer*>& output);
  1. Submit input tensors for execution, and output tensors to store results. The host pointers are passed via the TensorBuffer objects. This function returns a job ID and the status of the function call.
    int wait(int jobid, int timeout);

    The job ID returned by execute_async is passed to wait() to block until the job is complete and the results are ready.

    TensorFormat get_tensor_format();
  2. Query the DpuRunner for the tensor format it expects.

    Returns DpuRunner::TensorFormat::NCHW or DpuRunner::TensorFormat::NHWC.

    std::vector<Tensor*> get_input_tensors()
  3. Query the DpuRunner for the shape and name of the input tensors it expects for its loaded AI model.
    std::vector<Tensor*> get_output_tensors()
  4. Query the DpuRunner for the shape and name of the output tensors it expects for its loaded AI model.
  5. To create a DpuRunner object, call the following:
    DpuRunner::create_dpu_runner(const std::string& model_directory);

    which returns:

    std::vector<std::unique_ptr<vitis::ai::DpuRunner>>

The input to create_dpu_runner is a model runtime directory generated by the AI compiler. The directory contains a meta.json file that identifies the directory's Runner, along with the files the Runner needs at runtime.
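The interfaces above can be combined into a minimal end-to-end sketch. The header path, the use of `-1` as an "unbounded" timeout, and the TensorBuffer setup are assumptions not confirmed by this section; only the calls documented above (create_dpu_runner, get_tensor_format, get_input_tensors, get_output_tensors, execute_async, wait) are taken from the text, and this cannot run without the Vitis AI runtime and a DPU target:

```cpp
// Hedged sketch of the DpuRunner call sequence described in this section.
#include <vitis/ai/dpu_runner.hpp>  // assumed header location

#include <iostream>
#include <string>
#include <vector>

int main(int argc, char* argv[]) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <model_runtime_dir>\n";
    return 1;
  }

  // The model runtime directory is produced by the AI compiler and
  // contains meta.json plus the files the Runner needs.
  auto runners =
      vitis::ai::DpuRunner::create_dpu_runner(std::string(argv[1]));
  auto& runner = runners[0];

  // Check whether the runner expects NCHW or NHWC data layout.
  bool is_nchw = (runner->get_tensor_format() ==
                  vitis::ai::DpuRunner::TensorFormat::NCHW);
  std::cout << "layout: " << (is_nchw ? "NCHW" : "NHWC") << "\n";

  // Query the shapes and names of the tensors the loaded model expects.
  auto input_tensors = runner->get_input_tensors();
  auto output_tensors = runner->get_output_tensors();

  // TensorBuffers wrap the host pointers for each tensor; allocation and
  // data filling are omitted here because they depend on the model.
  std::vector<vitis::ai::TensorBuffer*> inputs;   // filled elsewhere
  std::vector<vitis::ai::TensorBuffer*> outputs;  // filled elsewhere

  // Submit the job, then block until results are ready. execute_async
  // returns {job ID, status}; the job ID is handed to wait().
  auto job = runner->execute_async(inputs, outputs);
  runner->wait(job.first, -1);  // -1 as "no timeout" is an assumption

  return 0;
}
```

The asynchronous split between execute_async and wait lets a host thread queue several jobs before blocking, which is the main reason the job ID is returned rather than the results themselves.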