DRP-AI TVM Application Development

From Renesas.info
Revision as of 23:25, 22 March 2023 by Zkmike (talk | contribs) (Added Section for Running the TVM Code)

General Information

This is a summary of how to run a TVM model on the RZV board. The code snippets are based on the TVM Tutorial Application that can be found here. The TVM Runtime Inference API is part of the EdgeCortix MERA software. The API for the EdgeCortix MERA software is defined in the header file MeraDrpRuntimeWrapper.h. The DRP-AI pre-processing is defined in the appendix of the TVM Application Document here.


TVM applications must be deployed with the following directory layout.

/
├── usr
│   └── lib64
│       └── libtvm_runtime.so
└── home
    └── root
        └── tvm
            ├── preprocess_tvm_v2ma
            │   ├── drp_param.bin
            │   ...
            │   └── preprocess_tvm_v2ma_weight.dat
            ├── resnet18_onnx
            │   ├── deploy.json
            │   ├── deploy.params
            │   └── deploy.so
            ├── sample.yuv
            ├── synset_words_imagenet.txt
            └── tutorial_app
  • libtvm_runtime.so
    • This is the TVM Runtime Library.
    • This is required to run any TVM AI model.
    • This file is located in ${TVM_ROOT}/obj/build_runtime (RZV2L, RZV2M, RZV2MA)
  • preprocess_tvm_v2xx
    • These are the pre-compiled DRP pre-processing files. They are required to run the pre-processing code on the DRP-AI hardware.
    • The name of the directory depends on the RZV MPU used ( _v2xx = l, m, or ma )
    • These directories are located in: ${TVM_ROOT}/apps/exe
    • Only the directory for your MPU needs to be copied. (i.e. the example above only copies the v2ma directory)
  • xxxxxxx_onnx directory
    • This directory contains the output files of the TVM translator (deploy.json, deploy.params, deploy.so).
    • The name of the directory depends on the model that was translated (the example above uses resnet18_onnx).
  • tutorial_app
    • This is the compiled TVM application executable.
  • Optional files: These files depend on the Model used
    • sample.yuv
      • This is the input YUV image used for the TVM tutorial application demo.
    • synset_words_imagenet.txt
      • This text file lists the names of the ResNet classifications. It is used during post-processing to display the class name.
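The layout above can be staged locally and then copied onto the board in one step. The snippet below is a minimal sketch, not part of the official flow: the staging directory, the use of ${TVM_ROOT} as the DRP-AI TVM checkout, and the board address are all placeholders to adjust for your setup.

```shell
# Recreate the required layout in a local staging directory, then copy it
# to the board's root filesystem.
STAGE=$(mktemp -d)
mkdir -p "$STAGE/usr/lib64" "$STAGE/home/root/tvm"

# Runtime library, pre-processing objects, translated model, application --
# uncomment and adjust once TVM_ROOT points at your checkout:
# cp "$TVM_ROOT/obj/build_runtime/libtvm_runtime.so" "$STAGE/usr/lib64/"
# cp -r "$TVM_ROOT/apps/exe/preprocess_tvm_v2ma"     "$STAGE/home/root/tvm/"
# cp -r resnet18_onnx sample.yuv synset_words_imagenet.txt tutorial_app \
#       "$STAGE/home/root/tvm/"

# Copy the staged tree onto the board (the address is a placeholder):
# scp -r "$STAGE/usr" "$STAGE/home" root@192.168.1.10:/
ls "$STAGE"
```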

Initialization

Step 1) Load the pre_dir object to the DRP-AI. The pre-processing object directory pre_dir contains the Renesas pre-compiled files located in ${TVM_ROOT}/apps/exe/preprocess_tvm_<v2xx> (v2l, v2m, v2ma).

  • preprocess_tvm_v2l
  • preprocess_tvm_v2m
  • preprocess_tvm_v2ma
preruntime.Load(pre_dir);

Step 2) Load the TVM-translated inference model and its weights from model_dir into the runtime object.

runtime.LoadModel(model_dir);

Step 3) Allocate contiguous memory for the camera capture buffer. The Pre-processing Runtime requires the input buffer to be allocated in a contiguous memory area. This application uses the imagebuf (u-dma-buf) contiguous memory area. Refer to this page about the Linux Contiguous Memory Area here.

Run TVM Inference

This section describes the steps to run the TVM AI model. It is recommended that this inference step and the post-processing step run in their own thread. In addition, the camera capture should run in a separate thread.