Monocular Depth Estimation Model to TensorRT

Project Overview

This project aims to optimize the inference performance of various monocular depth estimation models using NVIDIA's TensorRT. It provides a pipeline to convert pre-trained PyTorch models into ONNX format and then into TensorRT engines, allowing for a comparative analysis of inference speeds.

  • Key Features:
    • Introduction to various monocular depth estimation models and a TensorRT conversion pipeline.
    • Performance comparison (FPS, inference time) between the original PyTorch models and the TensorRT-optimized models.
    • Generation of 3D depth information and point clouds from 2D images.
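The point-cloud generation mentioned above amounts to back-projecting each depth pixel through a pinhole camera model. The sketch below is illustrative only; `depth_to_pointcloud` and the intrinsics values are my own, not taken from the repo.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W) into an (H*W, 3) point cloud
    using pinhole intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat 4x4 depth map, 1 m everywhere.
depth = np.ones((4, 4), dtype=np.float32)
pts = depth_to_pointcloud(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

In practice `depth` would be the model's output and the intrinsics would come from the camera's calibration.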

1. Development Environment

  • Hardware: NVIDIA GeForce RTX 3060 (laptop GPU)
  • OS: Windows Subsystem for Linux (WSL)
  • Linux Distribution: Ubuntu 22.04.5 LTS
  • CUDA Version: 12.8
```bash
# Create and activate a Conda virtual environment
conda create -n trte python=3.11 --yes
conda activate trte

# Install the required libraries
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip install tensorrt-cu12 onnx opencv-python matplotlib
```
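After installation, a quick sanity check (my suggestion, not part of the repo) confirms the key packages can be located before running any conversion scripts:

```python
import importlib.util

# Packages installed above; `cv2` is the import name for opencv-python.
packages = ["torch", "tensorrt", "onnx", "cv2", "matplotlib"]
status = {name: importlib.util.find_spec(name) is not None for name in packages}
for name, ok in status.items():
    print(f"{name}: {'found' if ok else 'MISSING'}")
```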

2. Supported Models

Each model directory contains a README.md file with detailed instructions.

| Model Name         | Link to TensorRT Conversion | Main Outputs |
|--------------------|-----------------------------|--------------|
| Depth Anything V2  | TensorRT Conversion         | Depth        |
| Distill Any Depth  | TensorRT Conversion         | Depth        |
| Depth Anything AC  | TensorRT Conversion         | Depth        |
| Depth Pro          | TensorRT Conversion         | Depth        |
| Uni Depth V2       | TensorRT Conversion         | Depth        |
| Metric3D V2        | TensorRT Conversion         | Depth        |
| UniK3D             | TensorRT Conversion         | Depth        |
| MoGe-2             | TensorRT Conversion         | Depth        |
| VGGT               | TensorRT Conversion         | Depth        |
| StreamVGGT         | TensorRT Conversion         | Depth        |
| Depth Anything V3  | TensorRT Conversion         | Depth        |
| Metric Anything    | TensorRT Conversion         | Depth        |
