TI Edge AI Demos

Add embedded intelligence to your design using TI Edge AI and robotics software demos created with AM6xA processors.

Featured Demos

See our newest and most interesting demos!

Texas Instruments

Multi Camera AI on AM62A

This demo implements a multi-camera AI application on AM62A processors, based on the V3Link Camera Solution Kit and four IMX219 cameras. It shows how the AM62A processes the video streams and runs object detection models at 120 FPS with bandwidth for expansion.

Texas Instruments

People Tracking on AM62A

This demo offers real-time, vision-based people tracking with statistical insights such as total visitors, current occupancy, and visit duration. It also generates a heatmap highlighting frequently visited areas. This demo has applications in areas such as retail, building automation, and security.

Texas Instruments

Monocular Depth Estimation

This demo shows single-camera depth estimation using a deep learning model. The MiDaS deep learning CNN produces relative depth information that distinguishes people and objects from each other and from the background.

Smart cameras & AI boxes

Explore demos for smart cameras and edge AI boxes created by Texas Instruments and third-party hardware and software vendors in the TI Edge AI ecosystem.

Texas Instruments

Image classification

Evaluate the performance of AI accelerators on TDA4x processors using different pre-compiled TensorFlow Lite, ONNX, or TVM models to classify images from USB and CSI camera inputs, as well as H.264 compressed video and JPEG image series.
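Whichever runtime produces the scores, the classification step ends with ranking the model's outputs. As a minimal, library-free Python sketch of that post-processing (the labels and logit values below are made up for illustration, not taken from the demo):

```python
import math

def top_k(logits, labels, k=3):
    """Softmax over raw logits, then return the k most likely labels with scores."""
    m = max(logits)                                  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    ranked = sorted(zip(labels, (e / total for e in exps)),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Toy 4-class example with hypothetical labels
labels = ["cat", "dog", "bottle", "chair"]
best_label, best_score = top_k([2.0, 0.5, 0.1, -1.0], labels, k=2)[0]
print(best_label)  # "cat" - the largest logit wins after softmax
```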

Texas Instruments

Object detection

Evaluate the performance of AI accelerators on TDA4x processors using different pre-compiled TensorFlow Lite, ONNX, or TVM models to detect objects. Input sources include USB and CSI camera inputs, as well as H.264 compressed video and JPEG image series.
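Regardless of the model format, detection post-processing typically ends with non-maximum suppression to drop duplicate boxes. A plain-Python sketch of IoU-based NMS (the corner-coordinate box format and the 0.5 threshold are illustrative assumptions, not the demo's actual code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep only the highest-scoring box out of each overlapping cluster."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] - the near-duplicate box 1 is suppressed
```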

Texas Instruments

Semantic segmentation

Evaluate the performance of AI accelerators on TDA4x processors using different pre-compiled TensorFlow Lite, ONNX, or TVM semantic segmentation models. Input sources include USB and CSI camera inputs, as well as H.264 compressed video and JPEG image series.
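A segmentation model emits a class index per pixel; for display, the indices are mapped to colours. A minimal sketch of that mapping with a hypothetical three-class palette (real demos use the model's own class list and an image buffer rather than nested lists):

```python
# Hypothetical 3-class palette: background, road, person
PALETTE = {0: (0, 0, 0), 1: (128, 64, 128), 2: (220, 20, 60)}

def colorize(mask):
    """Turn a 2-D mask of class indices into a 2-D image of RGB tuples."""
    return [[PALETTE.get(c, (255, 255, 255)) for c in row] for row in mask]

mask = [[0, 1],
        [2, 0]]
print(colorize(mask)[0][1])  # (128, 64, 128) - class 1 maps to the "road" colour
```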

Texas Instruments

Multiple input, multi-inference

This demo explains the data flow for two input sources, each passed through two inference operations drawn from image classification, object detection, and semantic segmentation. The outputs are overlaid on the input images individually and displayed using a mosaic layout. The single output image can be displayed on a screen or saved to H.264 compressed video or a JPEG image series.
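The mosaic composition itself is simple tiling: two inputs times two inference operations gives four overlaid frames to place in a 2x2 grid. A minimal Python sketch using nested lists as stand-in frames (a real pipeline would composite with hardware-accelerated scaling instead):

```python
def mosaic_2x2(tl, tr, bl, br):
    """Tile four equally sized frames (lists of pixel rows) into one 2x2 image."""
    top = [l + r for l, r in zip(tl, tr)]    # join rows side by side
    bottom = [l + r for l, r in zip(bl, br)]
    return top + bottom                      # stack the two halves

# Four 2x2 "frames" of single-character pixels
a = [["a", "a"], ["a", "a"]]
b = [["b", "b"], ["b", "b"]]
c = [["c", "c"], ["c", "c"]]
d = [["d", "d"], ["d", "d"]]
m = mosaic_2x2(a, b, c, d)
print(m[0])  # ['a', 'a', 'b', 'b'] - top row spans the two upper frames
```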

Texas Instruments

6D pose estimation

Object pose estimation aims to estimate the 3D orientation and 3D translation of objects in a given environment. It is useful in a wide range of applications like robotic manipulation for bin-picking, motion planning, and human-robot interaction tasks such as learning from demonstration. Supported on TDA4VM, AM68A, and AM69A.

Texas Instruments

Human pose estimation

Multi-person 2D human pose estimation is the task of understanding humans in an image: given an input image, the goal is to detect each person and localize their body joints. Supported on TDA4VM, AM68A, and AM69A.

Texas Instruments

Multi Camera AI on AM62A

This demo implements a multi-camera AI application on AM62A processors, based on the V3Link Camera Solution Kit and four IMX219 cameras. It shows how the AM62A processes the video streams and runs object detection models at 120 FPS with bandwidth for expansion.

Texas Instruments

People Tracking on AM62A

This demo offers real-time, vision-based people tracking with statistical insights such as total visitors, current occupancy, and visit duration. It also generates a heatmap highlighting frequently visited areas. This demo has applications in areas such as retail, building automation, and security.
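The statistical layer on top of the tracker can be sketched with a small bookkeeping class. The enter/leave API below is hypothetical; it assumes the tracker reports stable per-person track IDs together with timestamps in seconds:

```python
class OccupancyStats:
    """Visitor statistics built on top of a people tracker (sketch).

    Assumes the tracker emits stable track IDs with entry/exit timestamps.
    """

    def __init__(self):
        self.entered = {}       # track_id -> entry timestamp
        self.durations = []     # completed visit lengths in seconds
        self.total_visitors = 0

    def enter(self, track_id, t):
        if track_id not in self.entered:
            self.entered[track_id] = t
            self.total_visitors += 1

    def leave(self, track_id, t):
        t0 = self.entered.pop(track_id, None)
        if t0 is not None:
            self.durations.append(t - t0)

    @property
    def occupancy(self):
        return len(self.entered)

stats = OccupancyStats()
stats.enter("person-1", t=0)
stats.enter("person-2", t=5)
stats.leave("person-1", t=60)
print(stats.total_visitors, stats.occupancy, stats.durations)  # 2 1 [60]
```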

Texas Instruments

License plate recognition

License plate recognition aims to recognize the characters of a license plate number. It is useful in a wide range of applications, including the automotive domain. This demo is built on the yolox-s-lite model, which is part of the model zoo. Supported on TDA4VM.

Amazon Web Services

ML Edge deployment and inference with AWS IoT Greengrass

This workshop provides step-by-step instructions on using AWS IoT and AWS IoT Greengrass to orchestrate the deployment of a pre-trained, optimized object-counting ML model to the edge device, run inference, and send inference results to AWS IoT Core.

Plumerai

People Detection and Face Identification

Plumerai provides AI software for people detection, face identification, and more. This software is optimized to run fast (>15 fps) on Arm Cortex-A cores within processors like the AM62, without requiring AI acceleration. Target applications include security cameras, video doorbells, video conferencing, elder care, and building automation.

Building and Factory Automation

AI can be used in commercial and factory building automation to enhance security, manufacturing, and human-machine interaction (HMI). Explore demos created by Texas Instruments and third-party hardware and software vendors in the TI Edge AI ecosystem.

Texas Instruments

Smart retail scanner

This demo showcases the use of vision AI on the AM62A processor for codeless food scanning, enabling automatic recognition of products and checkout of the user’s order. The demo employs an IMX219 camera and utilizes the built-in ISP and DL engine to perform real-time object detection inference.

Texas Instruments

Barcode imager with deep learning

This demo shows a barcode reader application for 1-D and 2-D codes. Using deep learning and open source decoder software, AM62A can be the main processor for high performance barcode readers and imagers.

Texas Instruments

Defect detection

Defect detection in machine vision and factory automation finds incorrectly produced parts and materials so they can be removed quickly and automatically. Edge AI Studio simplifies model training and deployment for new types of parts and materials.

Texas Instruments

Conferencing Systems

This demo showcases the use of vision AI and audio keyword spotting on AM6xA devices in the context of audio-visual conferencing systems. Audio commands control the area that the camera focuses on.

Texas Instruments

Gesture controlled HMI

This design shows the integration of mmWave radar and a camera sensor with AM62/AM62A to control a building-access HMI, using face detection and gestures to unlock a PIN-controlled entry.

Texas Instruments

Smart camera with Vision-Radar Fusion

Vision and radar fusion covers weaknesses in the individual sensors, such as poor lighting or low resolution. Radar sensors detect movement in 3D space, allowing better proximity detection, while vision on AM62/AM62A recognizes objects like vehicles, people, or animals.

Ignitarium

Personal protective equipment detector

This application is an AI-based object detection solution for detecting specific types of personal protective equipment (PPE), such as jackets, helmets, gloves and goggles. The solution supports in-field trainable mode, allowing new types of PPE to be trained on the same inference hardware.

D3 Engineering

Sensor fusion data acquisition system

The DesignCore® Sensor Fusion Data Acquisition System is an exemplary data-mining platform for edge applications. This solution records data in high fidelity without sacrificing storage space, and its rule-based automated recording capability greatly reduces human error.

Multicoreware

Face Recognition on Arm cores

HuBe.ai from Multicoreware provides insights into human behavior and other people-centric AI technologies like face recognition. For applications with lower frame-rate requirements, Arm cores are sufficient to run AI models on processors without accelerators.

Autonomous machines & robots

Explore demos for autonomous machines and robots created by Texas Instruments and third-party hardware and software vendors in the TI Edge AI ecosystem.

Texas Instruments

Stereo depth estimation

This Robot Operating System (ROS) application demonstrates hardware-accelerated stereo vision processing on a live stereo camera or a ROS bag file on the TDA4VM processor. Computation-intensive tasks such as image rectification, scaling, and stereo disparity estimation are processed on vision hardware accelerators.
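Once disparity is estimated, converting it to metric depth is the standard pinhole-stereo relation Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). A sketch with made-up calibration numbers, not the demo's actual parameters:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Metric depth from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 1000 px focal length, 10 cm baseline
print(depth_from_disparity(50, focal_px=1000, baseline_m=0.10))  # 2.0 metres
```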

Texas Instruments

Environmental awareness

This Robot Operating System (ROS) application demonstrates a hardware-accelerated semantic segmentation task on live camera or ROS bag data using the TDA4VM processor. Computation-intensive image pre-processing tasks such as image rectification and scaling happen on the VPAC vision hardware accelerator, while the AI processing runs on the deep learning accelerator.

Texas Instruments

3D obstacle detection

This ROS-based application demonstrates hardware-accelerated 3D obstacle detection on a stereo vision input. The hybrid computer vision and AI demo utilizes vision hardware accelerators on TDA4VM for image rectification, stereo disparity mapping, and scaling, while AI-based semantic segmentation runs on the deep learning accelerator.

Texas Instruments

Visual localization

This application demonstrates hardware-accelerated ego-vehicle localization, estimating a 6-degrees-of-freedom pose. It uses a deep neural network trained in a supervised manner to learn a hand-crafted feature descriptor like KAZE. The AI- and CV-based demo utilizes the deep learning hardware accelerator and TI DSP on TDA4VM to run the task efficiently, while leaving the Arm® cores completely free for other tasks.

Texas Instruments

Monocular Depth Estimation

This demo shows single-camera depth estimation using a deep learning model. The MiDaS deep learning CNN produces relative depth information that distinguishes people and objects from each other and from the background.
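Because MiDaS-style models output relative rather than metric depth, a typical display step is min-max normalization to an 8-bit range. A minimal sketch using nested lists in place of a real image buffer:

```python
def normalize_depth(depth):
    """Min-max scale a relative depth map to the 0-255 display range."""
    lo = min(min(row) for row in depth)
    hi = max(max(row) for row in depth)
    span = (hi - lo) or 1.0          # avoid division by zero on flat maps
    return [[int(255 * (v - lo) / span) for v in row] for row in depth]

relative = [[0.0, 1.0],
            [0.5, 1.0]]
print(normalize_depth(relative))  # [[0, 255], [127, 255]]
```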

Ignitarium

Autonomous navigation with 2D LIDAR

This project demonstrates autonomous navigation of a Turtlebot 2 on a predefined map built using the Gmapping SLAM package. The robot uses the AMCL algorithm to localize itself on the map and navigates between two endpoints along a path generated by the global planner, avoiding obstacles with the local path planner.

Ignitarium

Autonomous navigation with 3D LIDAR and AI

This project demonstrates 3D lidar SLAM with loop closure, a 3D object detection model on point-cloud data, and an object-collision algorithm running on a safety MCU to engage an emergency safety stop when an object crosses into the robot's C-space.
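The safety-MCU collision check reduces to testing whether any detected obstacle point falls inside the robot's configuration-space boundary. A simplified 2-D sketch (the circular C-space and robot-frame coordinates are illustrative assumptions, not the project's actual geometry):

```python
import math

def should_estop(obstacle_points, c_space_radius_m):
    """True if any obstacle point (x, y) in the robot frame lies inside C-space."""
    return any(math.hypot(x, y) < c_space_radius_m for x, y in obstacle_points)

# An obstacle 0.3 m ahead crosses into a 0.5 m C-space -> engage the stop
print(should_estop([(2.0, 1.0), (0.3, 0.0)], c_space_radius_m=0.5))  # True
print(should_estop([(2.0, 1.0)], c_space_radius_m=0.5))              # False
```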