AWRL6432 Life Presence Detection Capon2D Users Guide

Table of Contents

Overview

This example project demonstrates the use of TI AWRL6432 mmWave sensors for vehicle 2-row Child/Life Presence Detection (LPD). This lab outputs a 3D point cloud over UART, which is then used by an Occupancy Detection State Machine in the accompanying MATLAB visualizer. This lab features a 2D Capon processing chain for detection within the vehicle.

Requirements

🛑 Before Continuing! xWRL6432 - ES2.0 Only
For xWRL6432 devices, this demo is only compatible with ES2.0. To determine the ES version of your device, review the Determine Silicon Revision Guide.

Hardware Requirements

| Item | Details |
|------|---------|
| AWRL6432 Evaluation Board | AWRL6432 ES2.0 Evaluation Board |
| Computer | PC with Windows 10. If a laptop is used, please use the ‘High Performance’ power plan in Windows. |
| Micro USB Cable | Due to the high mounting height of the EVM, a 15ft+ cable or USB extension cable is recommended. |

Additional Software Requirements

| Tool | Version | Download Link |
|------|---------|---------------|
| MATLAB Runtime | 2022b (9.13). Exact version required. | https://www.mathworks.com/products/compiler/matlab-runtime.html |
| MMWAVE-L-SDK | 05.04.00.01 | https://www.ti.com/tool/MMWAVE-L-SDK |

Quickstart

1. Follow xwrL64xx MCU+ SDK

Follow the steps from the SDK, which can be found here

Flash the desired binary using UniFlash or the SDK visualizer <MMWAVE_L_SDK_05_04_00_01\tools\visualizer\visualizer.exe>

2. Mount the EVM and Create Test Environment

Mount the sensor at the desired location securely so that the sensor is stable during runtime. Update the desired configuration file so that it reflects the device’s location. More information on mounting can be found in the section below.

3. Launch the Visualizer

🛑 MATLAB Runtime Version R2022b (9.13) Required
Exact version R2022b (9.13) required.

  1. Navigate to the visualizer directory <PACKAGE_LOCATION>\tools\visualizers\InCabin_LifePresenceDetection_GUI\
  2. Double-click AWRL6432_LifePresenceDetection_visualizer.exe to launch it.
  3. A black console window will appear. After 30-60 seconds, a configuration window will appear as shown below.

4. Config Visualizer

  1. Select the EVM Control and Data COM port using the respective dropdowns.

  2. The chirp configuration file should be selected automatically in the Configuration dialog. If the selected file is not the correct one, browse to a file provided in the config_file directory: <PACKAGE_LOCATION>\tools\visualizers\InCabin_LifePresenceDetection_GUI\config_file

  3. Select Real Demo mode and press Go! to start the Visualizer.

  4. Select the recording file index if you plan to record the point cloud for future playback.

  5. The Visualizer window will appear as shown below. After several seconds, the visualizer should start showing the point cloud and occupancy decisions in their respective panels:

5. Understand the Visualizer

The Visualizer window is divided into five panels as shown in the annotated picture above:

  1. Statistics: This section prints real-time information from the target: the frame number (both processed and target) and a Zone table that displays the occupancy state, the number of detections in the zone, and the average SNR of those detections. When the Count button is clicked, the Points column becomes the “Frame” count (the total number of frames that have been counted), and the SNR column becomes “Count”, the number of frames with a positive occupancy state.

  2. Chirp Configuration: This panel shows the static chirp configuration loaded at startup time.

  3. Control: This panel provides an Exit button and a Record button (which saves the frame-by-frame point cloud and occupancy data). In addition, there are two sub-panels that independently allow either the Occupancy display to be paused or a display in the Point Cloud panel to be selected/paused.

  4. Occupancy: This panel pictorially displays the output of presence detection. The interior of the car is declared occupied if any of the defined zones is occupied.

  5. Point Cloud: This panel can show one of several displays: “2D Point Cloud” shows an overhead view of the detected points, along with the zone boundaries when that zone's decision is positive.

6. Re-playing previously saved output

As described in the previous section, the AWRL6432_LifePresenceDetection_visualizer can save the EVM UART output (point cloud information) in files named fHistRT_xxxx.mat, where each file can contain up to 1000 frames of information. These saved fHist files can be re-played offline by running AWRL6432_LifePresenceDetection_visualizer.exe as shown below; a minimal script sketch for inspecting the files outside the visualizer is given after the steps.

  1. Navigate to <PACKAGE_LOCATION>\tools\visualizers\InCabin_LifePresenceDetection_GUI\

  2. Copy the previously saved fHistRT_xxxx.mat files to the directory above

  3. Launch the same visualizer AWRL6432_LifePresenceDetection_visualizer.exe, select the Playback mode instead of Real Demo, select the preferred file index for the recorded file, and press Go! to start the playback as shown in the figure below. In playback mode, the port numbers are no longer relevant, but a matching chirp configuration is still needed.
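
Outside the visualizer, the recorded files can also be inspected with a short script. The following is a minimal sketch, assuming the .mat files are stored in a format that scipy.io.loadmat can read (v7 or earlier); since the variable names stored inside the fHistRT files are not documented here, the sketch only lists the file contents rather than assuming any field names.

```python
# Minimal sketch: list the contents of a recorded fHistRT_xxxx.mat file.
# The internal variable names are not documented here, so no field names are assumed.
from scipy.io import loadmat

data = loadmat("fHistRT_0001.mat")      # file name/index is a placeholder
for name, value in data.items():
    if name.startswith("__"):           # skip MATLAB file metadata entries
        continue
    print(f"{name}: type={type(value).__name__}, shape={getattr(value, 'shape', None)}")
```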

Algorithm Overview

This demo can be viewed as two pieces: the device running the radar code and the host running the visualizer.

Point Cloud Generation

This demo is an enhancement of the original xwrL64xx/14xx MCU+ SDK Motion and Presence Demo. Block diagrams of the different demos are provided below to highlight the differences.

Method 1 Original SDK Motion and Presence Demo

A block diagram is provided below

To improve detection of nearly-static objects, such as a person or baby sitting still in the vehicle, this lab uses minor mode for detection, i.e., multiple frames, rather than a single frame, are used to generate the range-azimuth-elevation heatmap. By using a sliding window of multiple frames, there is finer granularity of detection with respect to time.
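
As a rough illustration of this sliding-window idea (not the demo's actual implementation), the sketch below pools the most recent N frames before forming a heatmap, so a new heatmap is still produced every frame; the window length and the combining step are placeholder assumptions.

```python
# Minimal sketch of sliding-window frame pooling for heatmap generation.
from collections import deque
import numpy as np

WINDOW_FRAMES = 4                      # assumed window length, for illustration only

def make_heatmap(frames):
    """Placeholder combining step: average the pooled frames into one heatmap."""
    return np.mean(np.stack(frames), axis=0)

window = deque(maxlen=WINDOW_FRAMES)   # sliding window of the most recent frames

def on_new_frame(frame_data):
    """Called once per frame; returns a heatmap built from the last N frames."""
    window.append(frame_data)
    return make_heatmap(list(window))

# Usage: feed dummy per-frame data and get one heatmap per frame.
for i in range(6):
    heatmap = on_new_frame(np.full((8, 8), float(i)))
print(heatmap.mean())                  # average of frames 2..5 -> 3.5
```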

Method 2 Enhanced Chain with Capon

A block diagram is provided below

The major differences are listed below:

  1. The range-azimuth-elevation “3D” heatmap is calculated (using the Capon method) and stored.
  2. Doppler binning is added as an option to improve SNR: only part of the Doppler FFT output bins (instead of all Doppler bins) are selected for angle heatmap generation (see the sketch after this list).
  3. More azimuth bins and more elevation bins can be supported in this chain: numSelDoppBinsPerFrame * numAzimFFTSize * numElevFFTSize <= 8K.
  4. CFAR now runs on the range-azimuth heatmap for every elevation bin. A second-pass CFAR option (in the azimuth domain) is added, and a dynamic CFAR threshold is added as an option.
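
The sketch below illustrates items 2 and 3 under assumed sizes: a band of Doppler bins around zero is selected (with negative indices wrapping to the upper half of the Doppler FFT output, which is a common convention and an assumption here), and the stored heatmap size is checked against the 8K-bin budget (taken here as 8192).

```python
# Minimal sketch: Doppler bin selection and heatmap size check (assumed sizes).
num_dopp_bins = 16                        # assumed Doppler FFT size
dopp_sel_min, dopp_sel_max = -3, 3        # doppSelMinBin / doppSelMaxBin
num_azim_bins, num_elev_bins = 32, 16     # assumed azimuth/elevation heatmap sizes

# Negative Doppler indices wrap to the upper half of the FFT output (assumed convention).
sel_bins = [b % num_dopp_bins for b in range(dopp_sel_min, dopp_sel_max + 1)]
num_sel_dopp_bins_per_frame = len(sel_bins)

heatmap_bins = num_sel_dopp_bins_per_frame * num_azim_bins * num_elev_bins
assert heatmap_bins <= 8 * 1024, "heatmap exceeds the 8K-bin budget"
print(sel_bins)        # [13, 14, 15, 0, 1, 2, 3]
print(heatmap_bins)    # 3584
```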

After receiving the point cloud data from the radar device through UART, the visualizer maps the point cloud to the different zones. Occupancy detection logic is then run for each zone to determine whether the zone is occupied. Finally, presence is declared if any zone is occupied.

Zone Mapping

After the point cloud is sent to the host through UART, the visualizer first transforms the point cloud into car coordinates and then maps it to different zones. Each zone is defined through a sequence of cuboids using the cuboidDef CLI command, as sketched below. The details can be found in the CPD w/Classification demo user’s guide.
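
The following is a minimal sketch of the zone-mapping step: each zone is treated as a list of axis-aligned cuboids in car coordinates, and a detected point belongs to a zone if it falls inside any of that zone's cuboids. The cuboid bounds below are illustrative placeholders, not values from an actual cuboidDef configuration.

```python
# Minimal sketch: map a point (in car coordinates) to the first zone containing it.
def point_in_cuboid(p, cuboid):
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = cuboid
    x, y, z = p
    return xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax

def map_point_to_zone(p, zones):
    """Return the index of the first zone containing point p, or None."""
    for zone_idx, cuboids in enumerate(zones):
        if any(point_in_cuboid(p, c) for c in cuboids):
            return zone_idx
    return None

# Example: one zone made of two stacked cuboids (seat region + footwell region).
zones = [
    [((-0.5, 0.5), (0.0, 0.8), (0.0, 0.9)),
     ((-0.5, 0.5), (0.0, 0.8), (-0.4, 0.0))],
]
print(map_point_to_zone((0.1, 0.4, 0.3), zones))   # -> 0
```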

This zone mapping idea has been reused for all of the in-cabin sensing demos, including localization, intruder detection, and presence detection.

For different applications, users may need to define different zones. For in-cabin localization, a common practice is to define each seat as one zone. For the presence detection demo, users can define the entire interior of the car (including the footwell region) as one zone. However, with overhead mounting we observe some false detections right under the sensor, caused by the strong floor reflection, which we call static leakage. This type of false detection can be controlled through CFAR threshold adjustment based on angle and distance, which has been added to this demo code.

Occupancy Detection Logic

After zone mapping, the Occupancy Detection State Machine runs to determine the occupancy status of each zone using a number of factors that can be configured by the occStateMach CLI command. The general idea is that a zone can be declared occupied only if the total number and quality (average SNR) of the detected points within the zone are higher than the pre-determined thresholds, as sketched below. The details can be found in the CPD w/Classification demo user’s guide.
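
The following is a minimal sketch of that per-frame test: a zone is flagged only when both the point count and the average SNR of its points exceed configured thresholds. The threshold values are illustrative assumptions; the real state machine is configured through the occStateMach CLI command and also tracks state across frames.

```python
# Minimal sketch: per-frame zone occupancy test based on point count and average SNR.
def zone_occupied(point_snrs, min_points=5, min_avg_snr=10.0):
    """point_snrs: SNR values of the detected points mapped to this zone."""
    if len(point_snrs) < min_points:
        return False
    return sum(point_snrs) / len(point_snrs) >= min_avg_snr

print(zone_occupied([12.0, 9.5, 14.2, 11.1, 10.3]))   # True: enough strong points
print(zone_occupied([6.0, 7.5]))                       # False: too few points
```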

EVM Mounting and Coordinate Transforms

The detected point cloud is relative to the sensor. In order to make an occupancy decision for each seat in the car, we need to transform the point cloud from sensor coordinates to car coordinates. Different positions and mounting angles can be considered for mounting the device. To give full flexibility, the sensorPosition CLI command is used to indicate the mounting offset in (x, y, z) and the mounting rotation angles in the y-z, x-y, and x-z planes. Based on these offsets and tilt angles, the visualizer transforms the point cloud from sensor coordinates to car coordinates (a rough sketch of this transform is given at the end of this section). The car coordinate system is plotted in the left figure below. Here we provide one example:

Users can find more mounting examples in the setup guide
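
The following is a minimal sketch of the sensor-to-car transform: rotate each detected point by the mounting angles, then translate by the mounting offset. The rotation order and sign conventions here are assumptions for illustration; the demo's actual convention is set by the sensorPosition CLI command and the visualizer code.

```python
# Minimal sketch: transform points from sensor coordinates to car coordinates.
import numpy as np

def rot_x(a):  # rotation in the y-z plane
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # rotation in the x-z plane
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # rotation in the x-y plane
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def sensor_to_car(points, offset_xyz, tilt_yz, tilt_xy, tilt_xz):
    """points: (N, 3) array in sensor coordinates; angles in radians."""
    R = rot_z(tilt_xy) @ rot_y(tilt_xz) @ rot_x(tilt_yz)   # assumed rotation order
    return points @ R.T + np.asarray(offset_xyz)

# Example: sensor mounted 1.2 m above the car origin, tilted 30 degrees downward.
pts_car = sensor_to_car(np.array([[0.0, 1.0, 0.0]]),
                        offset_xyz=(0.0, 0.0, 1.2),
                        tilt_yz=np.deg2rad(-30), tilt_xy=0.0, tilt_xz=0.0)
print(pts_car)
```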

Demo Specific Configuration File Parameters

The CLI commands consist of three parts:

1 The standard xwrL64xx MCU+ SDK CLI commands

Please refer to the SDK-L Motion and Presence Demo Tuning Guide for these commands, located at <MMWAVE_SDK_L_INSTALL_DIR>\docs\MotionPresenceDetectionDemo_TuningGuide.pdf.

2 The commands that are specific to the demo visualizer

Please refer to the Life Presence Detection visualizer for the following commands:

| Command | Description |
|---------|-------------|
| sensorPosition | Sensor mounting position |
| numZones | Number of zones to define |
| totNumRows | Number of rows in the car for occupancy detection |
| occStateMach | Thresholds used for the occupancy state machine |
| interiorBounds | Interior boundary for display |
| cuboidDef | Zone definition |
| zoneNeighDef | Zone type and zone neighborhood definition |

3 The commands that are specific to this demo binary

Angle Heatmap Generation Configuration

angleHeatmapGenCfg 1 1 -3 3
| Parameter | Type | Value | Description |
|-----------|------|-------|-------------|
| caponEnabled | bool | 1 | Set to 1 for the Capon chain. The FFT option (0) is not supported yet and will be integrated later. |
| doppBinningEnabled | bool | 1 | Set to 1 to enable Doppler binning; set to 0 to use all chirps (all Doppler bins). |
| doppSelMinBin | int | -3 | The minimum Doppler bin to be included for angle spectrum generation; suggested value -4 or -3. |
| doppSelMaxBin | int | 3 | The maximum Doppler bin to be included for angle spectrum generation; suggested value 4 or 3. |

CFAR Configuration

cfarCfg 2 8 4 3 0 10.0 0 1 4 6 2 10.0 0.8 0 0 0 1 1 1 1 0.5 1.5 0.15
| Parameter | Type | Value | Description |
|-----------|------|-------|-------------|
| averageMode | int | 2 | Averaging mode selection: 0: CFAR-CA; 1: CFAR-CAGO; 2: CFAR-CASO. Suggested setting: 2. |
| winLen[0] | int | 8 | One-sided noise averaging window length (in samples) of range-CFAR. Suggested to be a power of 2. |
| guardLen[0] | int | 4 | One-sided guard length (in samples) of range-CFAR. |
| noiseDiv[0] | int | 3 | Cumulative noise sum divisor expressed as a shift. The sum of noise samples is divided by 2^noiseDiv. Should be set to log2(winLen). |
| cyclicMode | int | 0 | Cyclic (wrap-around) mode: 0: Disabled; 1: Enabled. |
| thresholdScale[0] | float | 10.0 | Threshold factor of range-CFAR in dB scale (20log10). |
| peakGroupingEn | int | 0 | Peak grouping (in range domain) enable/disable flag: 0: Disabled; 1: Enabled. |
| secondPassEn | bool | 1 | Second-pass CFAR in the azimuth domain enable/disable flag: 0: Disabled; 1: Enabled. In second-pass CFAR, only CFAR-CASO is supported; the signal is wrapped around in the angle dimension for noise calculation. |
| winLen[1] | int | 8 | One-sided noise averaging window length (in samples) for second-pass CFAR (azimuth dimension). Suggested to be a power of 2. |
| guardLen[1] | int | 4 | One-sided guard length (in samples) for second-pass CFAR (azimuth dimension). |
| noiseDiv[1] | int | 3 | Cumulative noise sum divisor expressed as a shift for second-pass CFAR (azimuth dimension). The sum of noise samples is divided by 2^noiseDiv. Should be set to log2(winLen). |
| thresholdScale[1] | float | 10.0 | Threshold factor for second-pass CFAR (azimuth dimension) in dB scale (20log10). |
| sideLobeThreshold | float | 0.8 | Sidelobe threshold (in linear scale) in the azimuth domain to declare a local peak as a valid detection. |
| enableLocalMaxRange | int | 0 | Extract only local maxima in range: 0: Disabled; 1: Enabled. If a detected point is not a local maximum in the range domain, it is excluded from the detections. |
| enableLocalMaxAzimuth | int | 0 | Extract only local maxima in azimuth: 0: Disabled; 1: Enabled. If a detected point is not a local maximum in the azimuth angle domain, it is excluded from the detections. |
| enableLocalMaxElevation | int | 0 | Extract only local maxima in elevation: 0: Disabled; 1: Enabled. If a detected point is not a local maximum in the elevation angle domain, it is excluded from the detections. |
| interpolateRange | int | 1 | Interpolation in range enable/disable flag: 0: Disabled; 1: Enabled. This peak interpolation only applies when enableLocalMaxRange is enabled. |
| interpolateAzimuth | int | 1 | Interpolation in azimuth enable/disable flag: 0: Disabled; 1: Enabled. This peak interpolation only applies when enableLocalMaxAzimuth is enabled. |
| interpolateElevation | int | 1 | Interpolation in elevation enable/disable flag: 0: Disabled; 1: Enabled. This peak interpolation only applies when enableLocalMaxElevation is enabled. |
| DynCFAREn | bool | 1 | Dynamic CFAR threshold enable/disable flag. With the dynamic CFAR threshold enabled, the threshold over range is adjusted by dynScale_range(rangeIdx) * dynScale_azim(azimuthIdx) * dynScale_elev(elevationIdx). A sketch of this scaling follows the table. |
| maxSNR_range | float | 0.5 | Ranges lower than this distance are flattened to the maximum threshold; this value is used to generate dynScale_range. |
| flatSNR_range | float | 1.5 | Ranges higher than this distance are flattened to the minimum threshold; this value is used to generate dynScale_range. |
| polyQuadCoeff | float | 0.15 | The quadratic polynomial f(x) = 1 - polyQuadCoeff * x^2 is used to generate the dynamic threshold over the angle domain, i.e., dynScale_azim and dynScale_elev, where x is the angle in radians. |
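
The following is a minimal sketch of how the dynamic CFAR scale factors described above could be built. The linear ramp between maxSNR_range and flatSNR_range and the maximum/minimum scale values are assumptions for illustration; only the quadratic angle profile f(x) = 1 - polyQuadCoeff * x^2 is stated explicitly in this guide.

```python
# Minimal sketch: dynamic CFAR threshold scale factors over range and angle.
import numpy as np

def dyn_scale_range(ranges_m, max_snr_range=0.5, flat_snr_range=1.5,
                    scale_max=1.0, scale_min=0.5):
    """Flat at scale_max below max_snr_range, flat at scale_min above
    flat_snr_range, assumed linear in between."""
    r = np.clip((ranges_m - max_snr_range) / (flat_snr_range - max_snr_range), 0.0, 1.0)
    return scale_max - r * (scale_max - scale_min)

def dyn_scale_angle(angles_rad, poly_quad_coeff=0.15):
    """Quadratic roll-off over angle (radians), applied to azimuth and elevation."""
    return 1.0 - poly_quad_coeff * angles_rad ** 2

ranges = np.array([0.3, 1.0, 2.0])              # example range bins (meters)
azims = np.deg2rad([0.0, 30.0, 60.0])           # example azimuth bins (radians)
threshold_scale = dyn_scale_range(ranges)[:, None] * dyn_scale_angle(azims)[None, :]
print(threshold_scale)                           # per range/azimuth scale factors
```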

Demo Limitation

The following limitations apply to the demo chain.

Performance Tuning

The key items to focus on for tuning are as follows:

Need More Help?