We host the TensorRT backend for ONNX so that it can be run on our online workstations, either with Wine or directly.
A quick description of the TensorRT backend for ONNX:
Parses ONNX models for execution with TensorRT. Development on the main branch targets the latest version of TensorRT (8.4.1.5) with full-dimensions and dynamic-shape support; for previous versions of TensorRT, refer to their respective branches. Building INetwork objects in full-dimensions mode with dynamic-shape support requires calling the C++ or Python API. The currently supported ONNX operators are listed in the operator support matrix. For building within Docker, we recommend setting up the Docker containers as instructed in the main TensorRT repository. Note that this project has a dependency on CUDA: by default, the build looks in /usr/local/cuda for the CUDA toolkit installation; if your CUDA path is different, override the default path. ONNX models can be converted to serialized TensorRT engines using the onnx2trt executable.
Features:
- ONNX models can be converted to human-readable text
- ONNX models can be converted to serialized TensorRT engines
- ONNX models can be optimized by ONNX's optimization libraries
- Python Modules
- TensorRT 8.4.1.5 supports ONNX release 1.8.0
- The TensorRT backend for ONNX can be used in Python
Programming Language: C++.
©2024. Winfy. All Rights Reserved.
By OD Group OU. Registry code: 1609791, VAT number: EE102345621.