Monocular Depth Estimation Demo

Demo Background

Depth is a valuable sensing modality for removing backgrounds, finding moving objects, and understanding the location of obstacles in 3D space. Depth is useful in fields like robotics, ADAS, surveillance, video conferencing, and more. Depth data can be collected with dedicated sensors like LIDAR or indirect time-of-flight (iToF) sensors. Radar and ultrasonic sensors provide similar data, but at lower resolutions. Cameras are rich sources of information, but accurate depth requires multiple cameras and careful calibration. However, applications with looser accuracy requirements (e.g., a mobile robot that simply needs to know whether an object is close enough to run into within a few seconds) can use mono-camera techniques.

This demo shows a dense depth map produced by a deep learning network. Images are collected from a single camera, and the neural network runs with hardware acceleration on the AM6xA SoC to produce a relative depth map. The depth map is visualized as a heat map in which red pixels correspond to shorter distances and blue pixels to longer distances.
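
To illustrate the visualization step, the short Python sketch below normalizes a relative depth map and applies a heat-map palette with OpenCV. The array name `depth`, the use of `COLORMAP_JET`, and the file output are assumptions for illustration, not taken from the demo source; the actual demo may post-process its output differently.

```python
import cv2
import numpy as np

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    """Map a relative depth array to a BGR heat map.

    Assumes larger values mean closer objects, as is common for
    relative-depth networks; with COLORMAP_JET, large values render
    red (near) and small values render blue (far).
    """
    # Normalize to the 0-255 range expected by applyColorMap.
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-6)
    d8 = (d * 255).astype(np.uint8)
    return cv2.applyColorMap(d8, cv2.COLORMAP_JET)

# Example: colorize a dummy depth gradient and save it to disk.
if __name__ == "__main__":
    demo = np.tile(np.linspace(0, 1, 640, dtype=np.float32), (480, 1))
    cv2.imwrite("depth_heatmap.png", colorize_depth(demo))
```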

Source code is available on the Texas Instruments GitHub under the edgeai-demo-monodepth-estimation repository.

How to get started

The following steps reproduce the demo:

  1. Get an AM62A Starter Kit EVM and a USB camera
  2. Download the Edge AI Linux SDK
  3. Load the Edge AI Linux SDK via an SD card using the quick start guide
  4. Log into the EVM through a network connection
  5. Clone the git repo for this demo onto the EVM (or copy all files to the SD card)
  6. Run the demo using the 'run_demo.sh' script (see the capture sketch after this list). This assumes a 1920x1080 USB camera and a 1920x1080 monitor are plugged in, and that the ~/.profile script has run so that the camera interface is renamed to /dev/video-usb-cam0.
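
Before launching the full demo, a minimal capture loop like the following can confirm that the renamed camera device from step 6 is visible. This sketch is illustrative and assumes OpenCV is available on the EVM's Edge AI Linux image; it is not part of the demo itself.

```python
import cv2

# Open the renamed USB camera device set up by ~/.profile.
cap = cv2.VideoCapture("/dev/video-usb-cam0")
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

ok, frame = cap.read()
if ok:
    print("Camera OK, frame shape:", frame.shape)
else:
    print("Could not read a frame; check the device path and ~/.profile")
cap.release()
```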


See the README on the GitHub repository for more information and directions.

These steps were validated on the 9.0 Edge AI Linux SDK for AM62A. Newer SDK versions may require recompiling the model. See the README within the source code repository for help recompiling the model for a different SoC or SDK version; this requires an x86 PC running Ubuntu 22.04.
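
For orientation, model compilation with TI's edgeai-tidl-tools generally follows a pattern like the sketch below: TI's fork of ONNX Runtime is run once on the x86 PC with a compilation provider to generate the offload artifacts that the EVM later loads. The model filename, option keys, and paths shown are assumptions based on the general edgeai-tidl-tools flow; consult the repository README for the exact, version-matched procedure.

```python
import onnxruntime as rt

# Options for TIDL compilation; keys and values are illustrative and
# must match the SDK/edgeai-tidl-tools version in use (assumption).
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",  # from edgeai-tidl-tools setup
    "artifacts_folder": "./model-artifacts",   # output directory for artifacts
}

# Creating a session with the compilation provider, then running a few
# calibration inferences with representative frames, quantizes the model
# and writes the offload artifacts to artifacts_folder.
sess = rt.InferenceSession(
    "depth_model.onnx",  # hypothetical model filename
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
)
```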

General Edge AI and AM62A resources

| Purpose | Link |
| --- | --- |
| Edge AI Studio: Model Analyzer and Model Composer | https://dev.ti.com/edgeaistudio/ |
| Top-level GitHub page for Edge AI | https://github.com/TexasInstruments/edgeai |
| AM62A Datasheet (superset device) | https://www.ti.com/product/AM62A7 |
| AM62A Academy (basic Linux training/bring-up) | https://dev.ti.com/tirex/explore/node?node=A__AB.GCF6kV.FoXARl2aj.wg__AM62A-ACADEMY__WeZ9SsL__LATEST |
| Support Forums (see Processors -> AM62A7) | https://e2e.ti.com |