Human Non-human Classification User Guide

Table of Contents

⚠️ Device Compatibility

This demo is currently only compatible with the IWRL6432.

Overview

This example demonstrates the use of TI mmWave sensors to classify dynamic objects using the IWRL6432 sensor module in the wall-mount position. Detection, tracking, and classification algorithms run onboard the IWRL6432 mmWave sensor to localize objects, track their movement, and finally classify them as Human or Non-Human.

With the classification software running on the IWRL6432 chip, the mmWave sensor module outputs a number of data streams, including a list of tracked objects and their associated labels, which can be visualized using the PC-based Industrial Visualizer included in the toolbox.

This user guide covers the procedures to flash, run, and customize the IWRL6432 Human vs. Non-Human Classification Demo. For details on the software, algorithms, and implementation running on the IWRL6432 device in this demo, please refer to the Motion and Presence Detection Demo Documentation in the MMWAVE-L-SDK (i.e. MMWAVE SDK5) at the following location: <MMWAVE_SDK5_INSTALL_DIR>/docs/MotionPresenceDetectionDemo_documentation.pdf


Quickstart

Prerequisites

Prerequisite 1 - Run Presence and Motion Detection Demo

Before continuing with this demo, users should run the Presence and Motion Detection Demo for the IWRL6432 to gain familiarity with the sensor’s capabilities and the tools used in the Radar Toolbox.


Prerequisite 2 - XDS Firmware

If using an IWRL6432BOOST EVM, the latest XDS110 firmware is required. The XDS110 Firmware runs on the microcontroller onboard the IWRL6432BOOST, which provides the JTAG Emulation and serial port communication over the XDS110 USB port. We have observed packet loss on the serial port (COM ports) with some versions of the XDS110 firmware which can cause the demo to fail.

The latest XDS110 firmware is installed with the latest version of Code Composer Studio. CCS version 12.1 or later is required.


Prerequisite 3 - PC CPU and GPU Resources

The Industrial Visualizer requires sufficient GPU and CPU resources to process the incoming UART data every frame and run the demo smoothly. Users with insufficient PC resources may notice occasional visualizer lag or missed frames in the log file. The demo should still run on most PCs regardless of hardware resources, but performance might vary.


Hardware Requirements

| Item | Details |
|------|---------|
| IWRL6432BOOST EVM | IWRL6432BOOST |
| Mounting Hardware | The EVM needs to be mounted at a height of ~1.5-2.5 m with a slight downtilt. An adjustable clamp-style smartphone adapter mount for tripods and a 60-75” tripod can be used to clamp and elevate the EVM. This is only an example mounting solution; other methods can be used so long as the setup specifications are met. |
| Computer | PC with Windows 7 or 10. If a laptop is used, please use the ‘High Performance’ power plan in Windows. |
| Micro USB Cable | Due to the high mounting height of the EVM, an 8 ft+ cable or USB extension cable is recommended. |

Software Requirements

| Tool | Version | Download Link |
|------|---------|---------------|
| TI MMWAVE-L-SDK | 5.2.0.0 | Link to Latest mmWave L SDK. To access a previous version of the mmWave SDK, scroll to the bottom of the table and click the link under “MMWAVE-SDK previous release”. Repeat to continue stepping back to previous versions. |
| Uniflash | Latest | Uniflash tool is used for flashing TI mmWave Radar devices. Download the offline tool or use the Cloud version |
| Code Composer Studio | Latest | Code Composer Studio |

1. Configure the EVM for Flashing Mode

Place the IWRL6432BOOST EVM in flashing mode as shown in the operational modes page.

2. Flash the EVM using Uniflash

Flash the IWRL6432BOOST EVM with the Presence and Motion Detection image found at <MMWAVE_SDK5_INSTALL_DIR>\examples\mmw_demo\motion_and_presence_detection\prebuilt_binaries\motion_and_presence_detection_demo.release.appimage using the latest version of Uniflash. Follow the instructions in Using Uniflash with mmWave

3. Configure the EVM for Functional Mode

Place the IWRL6432BOOST EVM in functional mode as shown in the operational modes page.

4. Scene Setup

Point the IWRL6432BOOST EVM forward at a height of approximately 1.9 m in the air.

5. Open Visualizer

Navigate to <RADAR_TOOLBOX_INSTALL_DIR>\tools\visualizers\Industrial Visualizer. The visualizer may be run from Python 3.7.3 or as a standalone executable. In general, running the visualizer from the Python source is faster and results in fewer errors over long periods of time. If running from source, first run the setUpEnvironment.bat script to ensure all the correct packages and versions are installed, then start the visualizer with python gui_main.py. Alternatively, run the executable mmWaveIndustrialVisualizer.exe.

6. Reset the IWRL6432BOOST EVM by pressing the RESET_SW button

7. Select the arrowed options as shown in the below graphic.

  1. Set the Device to IWRL6432 as indicated by arrow 1

  2. Set CLI COM to the appropriate COM port as indicated by arrow 2. If the IWRL6432BOOST EVM is plugged in when the visualizer is opened, the COM ports are filled in automatically. If they do not appear, check the Device Manager on Windows for the XDS110 Class Application/User UART port, and use this port number in the visualizer.

  3. Select the IWRL6432 Out of Box Demo as indicated by arrow 3.

  4. Press the “Connect” button as indicated by arrow 4

  5. Press the “Select Configuration” button as indicated by arrow 5. Navigate to <RADAR_TOOLBOX_INSTALL_DIR>\radar_toolbox\source\ti\examples\Classification\human_non-human_classification\chirp_configs and select one of the configuration files present.

  6. Press the “Start and Send Configuration” button as indicated by arrow 6. At this point you should see points and tracks appear as you move in front of the EVM. After the track has been present for some time, a label should also become visible, indicating whether the track is human or non-human.
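If the CLI COM port does not auto-populate, it can also be located programmatically. The sketch below is a hypothetical helper, not part of the toolbox: it filters a list of (device, description) pairs for the XDS110 Application/User UART. With the third-party pyserial package installed, `serial.tools.list_ports.comports()` supplies objects whose `.device` and `.description` fields form such pairs.

```python
def pick_xds110_uart(ports):
    """Return the first port whose description marks the XDS110
    Application/User UART, or None if no such port is present.

    `ports` is an iterable of (device, description) pairs, e.g. built
    from pyserial's serial.tools.list_ports.comports().
    """
    for device, description in ports:
        # The demo CLI uses the Application/User UART, not the
        # Auxiliary Data Port, so skip the latter.
        if "XDS110" in description and "UART" in description \
                and "Auxiliary" not in description:
            return device
    return None

# Example with mocked port descriptions as they appear in Device Manager:
ports = [
    ("COM3", "XDS110 Class Auxiliary Data Port (COM3)"),
    ("COM4", "XDS110 Class Application/User UART (COM4)"),
]
print(pick_xds110_uart(ports))  # COM4
```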

8. Optimal Physical Setup

  1. For best results, the EVM should be positioned high enough to be above the top of tracked objects, with a slight down tilt. The aim is to position the EVM so that the antenna beam encompasses the area of interest. If the down tilt is too severe, noise from ground clutter increases and the effective sensing area decreases. If there is no down tilt, tracking performance suffers in cases where one person is in line with and shielded by another person.

⚠️ Warning
Whatever the height and tilt the radar is placed at, ensure that the sensorPosition line in the .cfg file reflects the position and angle of the radar.

📝 NOTE: More Information
For a more detailed discussion of the optimal placement and angle of the radar device, refer to TI’s Application Brief on the topic - Best Practices for Placement and Angle of mmWave Radar Devices

Setup Requirements:

Setup using suggested tripod and smartphone clamp mount:

  1. Screw the clamp mount onto the tripod
  2. Clamp the EVM across its width, below the power barrel jack
  3. Adjust the tripod head for a ~10 degree down tilt (Tip: bubble-level smartphone apps can be used to measure the down tilt)
  4. Plug the micro USB cable and power supply into the EVM
  5. Extend the tripod so that the EVM is elevated 1.5-2.5 m from the ground
  6. Position the EVM and tripod assembly in the desired location in the room. The EVM should be positioned so that the 120 degree FOV of the EVM antenna encompasses the area of interest and points to the region in which people are expected to enter the space.
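As a quick sanity check on the mounting numbers above, a little trigonometry estimates where the beam center (boresight) meets the floor. This is an illustrative sketch only; the 1.9 m height and 10 degree down tilt are the example values from this guide, not requirements.

```python
import math

def beam_center_ground_range(height_m, downtilt_deg):
    """Horizontal distance from the sensor to the point where the
    beam center intersects the floor, for a wall-mounted sensor at
    height `height_m` tilted down by `downtilt_deg` degrees."""
    return height_m / math.tan(math.radians(downtilt_deg))

# Example: 1.9 m mounting height with a 10 degree down tilt places the
# boresight-floor intersection roughly 10.8 m out from the wall.
print(round(beam_center_ground_range(1.9, 10.0), 2))  # 10.78
```

A steeper tilt pulls this intersection closer to the sensor, which matches the guidance above: more tilt shrinks the effective sensing area while raising ground-clutter returns.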

Customization

Modifying the Configuration File for the Environment

The configuration files include a set of commands that specify the scene boundaries (i.e. the area of interest) relative to the sensor position and may need to be modified accordingly. These commands and their respective parameters are listed below.

Modifying the Configuration File for Performance

The configuration file contains a number of commands that can be modified to optimize device performance. For a comprehensive description of the configuration parameters and recommendations related to tuning the CLI parameters, please refer to the following documents available in the MMWAVE-L-SDK.

| Document | Location | Contents |
|----------|----------|----------|
| Motion and Presence Detection Demo Tuning Guide | <MMWAVE_SDK5_INSTALL_DIR>/docs/MotionPresenceDetectionDemo_TuningGuide.pdf | Comprehensive list of the tunable detection-layer CLI parameters; common problems and solutions for effective target detection |
| Motion and Presence Detection Demo Group Tracker Tuning Guide | <MMWAVE_SDK5_INSTALL_DIR>/docs/MotionPresenceDetectionDemo_GroupTracker_TuningGuide.pdf | Comprehensive list of the tunable tracker-layer CLI parameters; common problems and solutions for effective target tracking |

📝 NOTE - Typical Configuration Challenges
Typical problems and solutions for first-time users working with the mmWave devices are documented in Section 4 of the Motion and Presence Detection Demo Tuning Guide and in Section 4 of the Motion and Presence Detection Demo Group Tracker Tuning Guide. These scenarios include tuning the range and velocity, reducing noise, discerning true tracks from ghost tracks and more.

UART Output Data Format

The demo outputs the point cloud and tracking information using a TLV (type-length-value) encoding scheme with little-endian byte order. For every frame, a packet is sent consisting of a fixed-size Frame Header followed by a variable number of TLVs, depending on what was detected in that scene. The TLVs can be of types representing the 3D point cloud, the target list object, and the associated points.

When running the motion_and_presence_detection_demo.release.appimage binary, which includes both the Motion and Presence Detection demo and the Classification demo on the IWRL6432, the guiMonitor command lets users select which TLVs are output. For a comprehensive list of the TLVs that may be output, see Section 13 of the Motion and Presence Detection Demo Documentation. The TLVs that pertain to classification are listed below.


Frame Header

Length: 40 Bytes
A Frame Header is sent at the start of each packet. Use the Magic Word to find the start of each packet.

| Value | Type | Bytes | Comments |
|-------|------|-------|----------|
| Magic Word | uint64_t | 8 | syncPattern in hex is: ‘02 01 04 03 06 05 08 07’ |
| Version | uint32_t | 4 | Software version |
| Total Packet Length | uint32_t | 4 | In bytes, including header |
| Platform | uint32_t | 4 | A6843 |
| Frame Number | uint32_t | 4 | Frame number |
| Time [in CPU Cycles] | uint32_t | 4 | Message creation time in cycles |
| Num Detected Obj | uint32_t | 4 | Number of detected points in this frame |
| Num TLVs | uint32_t | 4 | Number of TLVs in this frame |
| Subframe Number | uint32_t | 4 | Sub-frame number |
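The 40-byte frame header above can be unpacked with Python's struct module, using the magic word to locate the packet start as described. This is an illustrative sketch of the layout in the table, not toolbox code, and the field names here are informal.

```python
import struct

# Magic word bytes as they appear on the wire (little-endian syncPattern)
MAGIC_WORD = bytes([0x02, 0x01, 0x04, 0x03, 0x06, 0x05, 0x08, 0x07])
FRAME_HEADER_FMT = "<Q8I"  # little-endian: uint64 magic + 8 x uint32
FRAME_HEADER_SIZE = struct.calcsize(FRAME_HEADER_FMT)  # 40 bytes

def parse_frame_header(data):
    """Locate the magic word in `data` and unpack the frame header that
    starts there. Returns (fields_dict, offset_past_header), or None if
    no complete header is present."""
    start = data.find(MAGIC_WORD)
    if start < 0 or len(data) - start < FRAME_HEADER_SIZE:
        return None
    fields = struct.unpack_from(FRAME_HEADER_FMT, data, start)
    names = ("magic", "version", "totalPacketLen", "platform",
             "frameNumber", "timeCpuCycles", "numDetectedObj",
             "numTlvs", "subFrameNumber")
    return dict(zip(names, fields)), start + FRAME_HEADER_SIZE
```

In practice the bytes would come from the UART stream; accumulating data in a buffer and re-synchronizing on the magic word makes the parser robust to dropped bytes.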

TLV Header

Length: 8 Bytes
A TLV Header is sent at the start of each TLV. Following the header is the TLV-type specific payload.

| Value | Type | Bytes | Comments |
|-------|------|-------|----------|
| Type | uint32_t | 4 | TLV type |
| Length [number of bytes] | uint32_t | 4 | In bytes |
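The TLVs in a packet can be walked using the 8-byte header above: read the type and length, take `length` bytes of payload, and advance. A minimal sketch (informal names, not toolbox code) that assumes the Length field counts the payload only, excluding the 8-byte TLV header itself:

```python
import struct

def iter_tlvs(payload, num_tlvs):
    """Yield (tlv_type, value_bytes) for each TLV in `payload`.
    The Length field is assumed here to count payload bytes only,
    excluding the 8-byte TLV header."""
    offset = 0
    for _ in range(num_tlvs):
        tlv_type, tlv_len = struct.unpack_from("<2I", payload, offset)
        offset += 8  # skip the TLV header
        yield tlv_type, payload[offset:offset + tlv_len]
        offset += tlv_len
```

`payload` here is the packet body that follows the 40-byte frame header, and `num_tlvs` comes from the Num TLVs field of that header.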

TLVs

Target List TLV

Size: sizeof(tlvHeaderStruct) + sizeof(trackerProc_Target) x numberOfTargets

The Target List TLV consists of an array of targets. Each target object is defined as given below.

| Value | Type | Bytes | Comments |
|-------|------|-------|----------|
| tid | uint32 | 4 | Track ID |
| posX | float | 4 | Target position in X dimension, meters |
| posY | float | 4 | Target position in Y dimension, meters |
| posZ | float | 4 | Target position in Z dimension, meters |
| velX | float | 4 | Target velocity in X dimension, meters/s |
| velY | float | 4 | Target velocity in Y dimension, meters/s |
| velZ | float | 4 | Target velocity in Z dimension, meters/s |
| accX | float | 4 | Target acceleration in X dimension, meters/s^2 |
| accY | float | 4 | Target acceleration in Y dimension, meters/s^2 |
| accZ | float | 4 | Target acceleration in Z dimension, meters/s^2 |
| ec[16] | float | 16x4 | Tracking error covariance matrix, [4x4] in range/azimuth/elevation/doppler coordinates |
| g | float | 4 | Gating function gain |
| confidenceLevel | float | 4 | Confidence level |
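Per the table above, each target record is 112 bytes: one uint32 followed by 27 floats (9 kinematic values, 16 covariance terms, the gating gain, and the confidence level). A sketch of unpacking one Target List TLV payload (informal names, not toolbox code):

```python
import struct

TARGET_FMT = "<I27f"  # tid + pos/vel/acc (9 floats) + ec[16] + g + confidence
TARGET_SIZE = struct.calcsize(TARGET_FMT)  # 112 bytes per target

def parse_target_list(value_bytes):
    """Unpack a Target List TLV payload into a list of target dicts."""
    targets = []
    for off in range(0, len(value_bytes), TARGET_SIZE):
        fields = struct.unpack_from(TARGET_FMT, value_bytes, off)
        targets.append({
            "tid": fields[0],
            "pos": fields[1:4],    # posX, posY, posZ in meters
            "vel": fields[4:7],    # meters/s
            "acc": fields[7:10],   # meters/s^2
            "ec": fields[10:26],   # 4x4 error covariance, flattened
            "g": fields[26],
            "confidenceLevel": fields[27],
        })
    return targets
```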

Classifier Output TLV

Size: sizeof(tlvHeaderStruct) + numberOfTargets * CLASSIFIER_NUM_CLASSES * sizeof(uint8_t)

The Classifier Output TLV consists of an array of classifier outcomes in Q7 format, packed as classOutcome[targetIndex][classIndex].
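Q7 fixed-point values convert to floating point by dividing by 2^7 = 128. The sketch below decodes the packed classOutcome array under the assumption that each byte is a signed 8-bit Q7 value; the function name and dict-free output shape are informal, not toolbox code.

```python
import struct

def parse_classifier_output(value_bytes, num_targets, num_classes):
    """Decode a Classifier Output TLV payload into a per-target list of
    per-class scores. Each byte is assumed to be a signed 8-bit Q7
    value, so score = raw / 128."""
    raw = struct.unpack("<%db" % (num_targets * num_classes), value_bytes)
    return [[raw[t * num_classes + c] / 128.0 for c in range(num_classes)]
            for t in range(num_targets)]

# Example: one target, two classes, raw Q7 bytes 64 and -64
print(parse_classifier_output(struct.pack("<2b", 64, -64), 1, 2))
# [[0.5, -0.5]]
```

The `num_targets` count should match the number of entries in the Target List TLV of the same frame, so the two TLVs can be joined by index to label each track.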


Need More Help?