AWRL6432 Life Presence Detection Demo Users Guide
Overview
This lab demonstrates the use of TI AWRL6432 mmWave sensors for 2-row vehicle Child/Life Presence Detection (CPD). This lab uses the default OOB demo (motion and presence detection demo) provided in the MMWAVE-L-SDK release and outputs a 3D point cloud over UART, which is then processed by an Occupancy Detection Logic in the accompanying MATLAB visualizer.
Requirements
Hardware Requirements
| Item | Details | 
|---|---|
| AWRL6432 Evaluation Board | AWRL6432 ES1.0 Evaluation Board | 
| Computer | PC with Windows 10. If a laptop is used, please use the ‘High Performance’ power plan in Windows. | 
| Micro USB Cable | Due to the high mounting height of the EVM, a 15ft+ cable or USB extension cable is recommended. | 
Additional Software Requirements
| Tool | Version | Download Link | 
|---|---|---|
| MATLAB Runtime | 2022b (9.13) | Exact version required. https://www.mathworks.com/products/compiler/matlab-runtime.html | 
| MMWAVE-L-SDK | 5.3.0.2 | https://www.ti.com/tool/MMWAVE-L-SDK | 
Quickstart
1. Follow the xwrL64xx MCU+ SDK Setup
Follow the steps from the SDK documentation:
- Run ATE calibration
- Configure the EVM for flashing mode.
- Flash the desired binary using UniFlash or the SDK visualizer <MMWAVE_L_SDK_05_03_00_02\tools\visualizer\Low_power_visualizer_5.3.0.0>
2. Mount the EVM and Create Test Environment
Mount the sensor at the desired location securely so that the sensor is stable during runtime. Update the desired configuration file so that it reflects the device’s location. More information on mounting can be found in the section below.
3. Launch the Visualizer
🛑 MATLAB Runtime Version R2022b (9.13) Required
Exact version R2022b (9.13) required.
- Launch the visualizer executable located at <PACKAGE_LOCATION>\tools\visualizers\InCabin_LifePresenceDetection_GUI\
- Double click to launch AWRL6432_LifePresenceDetection_visualizer.exe.
- A black console window will appear. After 30-60 seconds, a configuration window will appear as shown below.
4. Configure the Visualizer
- Select the EVM Control and Data COM port using the respective dropdowns. 
- Since the same visualizer supports multiple demos, users will need to browse to and select the correct configuration file. An example configuration file is provided below: - <PACKAGE_LOCATION>\tools\visualizers\InCabin_LifePresenceDetection_GUI\config_file\minor_motion_presence_detection.cfg
- Select Real Demo mode and press Go! to start the Visualizer. 
- Select an index for the recording file if users plan to record the point cloud for future playback 
- The Visualizer window will appear as shown below and after several seconds, the visualizer should start showing the point cloud and occupancy decisions in their respective panels: 
5. Understand the Visualizer
The Visualizer window is divided into five panels as shown in the annotated picture above:
- Statistics: This section prints real-time information from the target: the frame number (both the processed frame count and the target's frame count), and a Zone table that displays the occupancy state, the number of detections in the zone, and the average SNR of the detections. When the Count button is clicked, the Points column becomes the “Frame” count (total number of frames that have been counted), and the SNR column becomes “Count”, the number of frames with a positive occupancy state. 
- Chirp Configuration: This panel shows the static chirp configuration loaded at startup. 
- Control: This panel provides an Exit button and a Record button (saves frame-by-frame point cloud and occupancy data). In addition, there are two sub-panels that independently allow either the Occupancy display to be paused, or the selection/pausing of a display in the Point Cloud panel. 
- Occupancy: This panel pictorially displays the output of presence detection. The interior of the car is declared occupied if any of the defined zones is occupied. 
- Point Cloud: This panel can show one of several displays: “2D Point Cloud” shows an overhead view of the detected points, along with the zone boundaries when that zone's decision is positive. 
6. Re-playing previously saved output
As described in the previous section, the AWRL6432_LifePresenceDetection_visualizer can save the EVM UART output (point cloud information) in files named fHistRT_xxxx.mat where each file can contain up to 1000 frames of information. These saved fHist files can be re-played offline by running AWRL6432_LifePresenceDetection_visualizer.exe as shown below:
- Navigate to - <PACKAGE_LOCATION>\tools\visualizers\InCabin_LifePresenceDetection_GUI\
- Copy the previously saved fHistRT_xxxx.mat files to the directory above 
- Launch the same visualizer AWRL6432_LifePresenceDetection_visualizer.exe, select Playback mode instead of Real Demo, select the preferred file index for the recorded file, and press Go! to start the playback as shown in the figure below. In playback mode, the port numbers are no longer relevant, but a matching chirp configuration is still needed. 
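For a quick offline look at a recording outside the visualizer, the saved .mat files can also be opened directly. The snippet below is a minimal sketch assuming Python with SciPy installed; the variable names inside the file are not documented here, so the script simply lists the file's contents.

```python
# Minimal sketch: list the variables stored in a recorded fHistRT file.
# Assumes Python 3 with SciPy installed; the internal variable names are
# not documented here, so this only enumerates what the file holds.
# Note: if the .mat file was saved in MATLAB v7.3 format, h5py would be
# needed instead of scipy.io.loadmat.
from scipy.io import loadmat

data = loadmat("fHistRT_0001.mat")  # illustrative file name/index
for name, value in data.items():
    if not name.startswith("__"):   # skip MATLAB metadata entries
        print(name, getattr(value, "shape", type(value)))
```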
Algorithm Overview
This demo can be viewed as two pieces: the device running the radar code and the host running the visualizer. The device code is the same as the xwrL64xx/14xx MCU+ SDK OOB Demo, and information related to that part of the algorithm can be found in the SDK OOB documentation.
To improve detection of nearly-static objects, such as a person or baby sitting still in the vehicle, this lab uses minor mode for detection, i.e., multiple frames are used to generate the range-azimuth heatmap rather than a single frame. By using a sliding window of multiple frames, a finer granularity of detection with respect to time is obtained, as illustrated in the sketch below.
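The sketch below illustrates the sliding-window idea only; it is not the device implementation (which runs on-chip as part of the SDK OOB demo) and simply averages the most recent few per-frame range-azimuth heatmaps.

```python
# Illustration of a sliding window over per-frame range-azimuth heatmaps.
# This is NOT the device implementation; it only shows the windowing idea.
from collections import deque
import numpy as np

WINDOW_LEN = 8                      # frames in the window (illustrative)
window = deque(maxlen=WINDOW_LEN)   # keeps only the most recent heatmaps

def windowed_heatmap(frame_heatmap: np.ndarray) -> np.ndarray:
    """Add the latest per-frame heatmap and return the windowed average."""
    window.append(frame_heatmap)
    return np.mean(window, axis=0)
```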
After receiving the point cloud data from the radar device through UART, the visualizer maps the point cloud to different zones. Occupancy detection logic is then run for each zone to determine whether the zone is occupied. Finally, presence is declared if any zone is occupied.
Zone Mapping
After the point cloud is sent to the host through UART, the visualizer first transforms the point cloud into car coordinates and then maps it to different zones. Each zone is defined as a sequence of cuboids using the cuboidDef CLI command. The details can be found in the CPD w/Classification demo user’s guide.
This zone mapping approach is reused across all the in-cabin sensing demos, including localization, intruder detection, and presence detection.
Different applications may require different zone definitions. For in-cabin localization, a common practice is to define each seat as one zone. For the presence detection demo, users can define the entire interior of the car (including the footwell region) as one zone. However, with overhead mounting we observe some false detections directly under the sensor, caused by the strong floor reflection, which we call static leakage. This type of false detection can be controlled through CFAR threshold adjustment based on angle and distance, but that is not yet supported in the current target code. Therefore, in our example we define multiple zones and apply different detection thresholds in the state machine to mitigate this problem. A zone-mapping sketch is shown below.
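As a rough illustration of the mapping step, the sketch below (with purely illustrative zone extents) treats each zone as a union of axis-aligned cuboids, mirroring the cuboidDef parameters described later, and assigns each car-coordinate point to the first zone that contains it.

```python
# Minimal sketch of mapping car-coordinate points into zones defined as
# unions of axis-aligned cuboids (xmin, xmax, ymin, ymax, zmin, zmax),
# mirroring cuboidDef. Zone extents below are purely illustrative.
from typing import Dict, List, Tuple

Cuboid = Tuple[float, float, float, float, float, float]

zones: Dict[int, List[Cuboid]] = {
    1: [(-0.6, 0.0, 0.2, 1.0, 0.0, 1.2)],   # illustrative zone 1
    2: [(0.0, 0.6, 0.2, 1.0, 0.0, 1.2)],    # illustrative zone 2
}

def point_zone(x: float, y: float, z: float) -> int:
    """Return the first zone whose cuboids contain the point, or 0 if none."""
    for zone_id, cuboids in zones.items():
        for (xmin, xmax, ymin, ymax, zmin, zmax) in cuboids:
            if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax:
                return zone_id
    return 0
```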
Occupancy Detection Logic
After zone mapping, the Occupancy Detection State Machine runs to determine the occupancy status of each zone using a number of factors that can be configured with the occStateMach CLI command. The general idea is that a zone is declared occupied only if the number and quality (average SNR) of the detected points within the zone are higher than pre-determined thresholds. The details can be found in the CPD w/Classification demo user’s guide. A simplified sketch of this logic is shown below.
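The sketch below illustrates only the headline enter/stay idea described above, using point count and average SNR with made-up thresholds; the actual state machine also includes a second enter condition, entry/forget frame counters, and an overload check, all configured through occStateMach.

```python
# Illustrative enter/stay occupancy decision for one zone using only point
# count and average SNR; the real state machine (occStateMach) has further
# enter/forget/overload conditions. All thresholds below are made up.
ENTER_POINTS, ENTER_SNR = 10, 15.0   # illustrative enter thresholds
STAY_POINTS,  STAY_SNR  = 5,  8.0    # illustrative (looser) stay thresholds

def next_state(occupied: bool, num_points: int, avg_snr: float) -> bool:
    """Return the new occupancy state of one zone for one frame."""
    if not occupied:
        return num_points >= ENTER_POINTS and avg_snr >= ENTER_SNR
    return num_points >= STAY_POINTS and avg_snr >= STAY_SNR
```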
EVM Mounting and Coordinate Transforms
The detected point cloud is relative to the sensor. In order to make an occupancy decision for each seat in the car, we need to transform the point cloud from sensor coordinates to car coordinates. Different positions and mounting angles can be considered for mounting the device. To give full flexibility, the “sensorPosition” CLI command is used to indicate the mounting offset in (x, y, z) and the mounting rotation angles in the y-z, x-y, and x-z planes. Based on these offsets and tilt angles, the visualizer transforms the point cloud from sensor coordinates to car coordinates. The car coordinate system is plotted in the left figure below. Here we provide one example:
- When the sensor is mounted around the center of the roof facing downward, the sensor position should be programmed as: - sensorPosition 0 1.2 1.2 90 -90 0 - This indicates the sensor is mounted at (x = 0, y = 1.2 m, z = 1.2 m) and rotated 90 degrees clockwise in the y-z plane to face the floor. In addition, a 90 degree anti-clockwise rotation in the x-y plane is applied to use the better azimuth FOV to cover the depth of the car. Please refer to the figure below for an illustration. 
Users can find more mounting examples in the setup guide.
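As an illustration of what the visualizer does with these parameters, the sketch below applies the three plane rotations and the (x, y, z) offset to a sensor-coordinate point. The rotation order and sign conventions used here are assumptions for illustration only and should be checked against the visualizer source.

```python
# Minimal sketch of transforming a point from sensor to car coordinates
# using the sensorPosition parameters (offsets x, y, z and rotations in
# the y-z, x-y and x-z planes). Rotation order and sign convention are
# assumptions and should be verified against the visualizer source.
import numpy as np

def rot_x(deg):  # rotation in the y-z plane (about the x axis)
    a = np.radians(deg)
    return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])

def rot_y(deg):  # rotation in the x-z plane (about the y axis)
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

def rot_z(deg):  # rotation in the x-y plane (about the z axis)
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

def sensor_to_car(p, offset, ang_yz, ang_xy, ang_xz):
    """Rotate a sensor-coordinate point and add the mounting offset."""
    R = rot_x(ang_yz) @ rot_z(ang_xy) @ rot_y(ang_xz)
    return R @ np.asarray(p) + np.asarray(offset)

# Example using the overhead-mount command above: sensorPosition 0 1.2 1.2 90 -90 0
print(sensor_to_car([0.0, 1.0, 0.0], offset=[0.0, 1.2, 1.2], ang_yz=90, ang_xy=-90, ang_xz=0))
```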
Demo Specific Configuration File Parameters
In addition to the standard xwrL64xx MCU+ SDK CLI commands (please refer to the SDK’s user guide for these commands), there are some additional commands that are specific to this demo visualizer:
| Command | Parameters (in command line order) | 
|---|---|
| sensorPosition | offset in x direction, in meters | 
| | offset in y direction, in meters | 
| | offset in z direction, in meters | 
| | clockwise rotation angle in the y-z plane, in degrees | 
| | clockwise rotation angle in the x-y plane, in degrees | 
| | clockwise rotation angle in the x-z plane, in degrees | 
| numZones | number of zones to define, min = 5, max = 8. The first 5 zones must correspond to the zone numbers shown in the Occupancy Display. If you only want 2 operational zones, define the others as NULL zones (see cuboidDef below). | 
| totNumRows | number of rows in the car for occupancy detection. | 
| occStateMach | zone type: the state machine now allows different thresholds for different zones; use the zone type to separate them. In our example, we use a different set of thresholds for the middle seat. | 
| | enter-condition1: number of detected points in a zone to enter the occupied state | 
| | enter-condition1: average SNR of detected points in a zone to enter the occupied state | 
| | enter-condition2: number of detected points in a zone to enter the occupied state | 
| | enter-condition2: average SNR of detected points in a zone to enter the occupied state | 
| | numEntryThreshold: number of consecutive frames in which the zone must pass the occupancy condition before entering the occupied state. | 
| | stay-condition: number of detected points in a zone to remain in the occupied state | 
| | stay-condition: average SNR of detected points in a zone to remain in the occupied state | 
| | forget-condition: number of frames with fewer than the ceiling points (see next parameter) to hold the occupied state before dropping | 
| | forget-condition: ceiling points (max points) in a zone during “hold” frames. | 
| | overload-condition: average SNR in a zone to declare the overload condition (excessive movement). Causes the state machine to freeze all zones. | 
| interiorBounds | min X (azimuth); -X towards the passenger side, +X towards the driver side (zone 1) | 
| | max X (azimuth) | 
| | min Y (depth); 0 is the brake pedal, +Y towards the rear | 
| | max Y (depth) | 
| cuboidDef | parent zone number (1-based) | 
| | cuboid number within the zone (also 1-based); min = 1, max = 3 per zone | 
| | min X (azimuth) | 
| | max X (azimuth) | 
| | min Y (depth) | 
| | max Y (depth) | 
| | min Z (height); 0.0 is the car floor | 
| | max Z (height); should not be higher than the car roof. | 
| | (Note: to define a NULL zone, define it with a single cuboidDef command setting all X, Y, Z values to 0.0) | 
| zoneNeighDef | zone ID | 
| | zone type: defined to allow a different set of thresholds in occStateMach. In our example, we use a different zone type for the middle seat than for the rest of the seats. | 
| | numNeigh: number of neighbors. When adding a neighbor to a zone, enter-condition2 for the zone is enhanced. Specifically, the average SNR for enter-condition2 must be larger than the average SNR of all the zone's defined neighbors. We have found this method useful for reducing false detections in the middle seat. | 
| | neighbors: zone ID of each neighbor. | 
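Putting these commands together, a hypothetical fragment of the demo-specific portion of a configuration file could look like the example below. Every value is illustrative only and is not taken from the shipped minor_motion_presence_detection.cfg; refer to that file and the parameter table above for the real settings.

```
% Hypothetical fragment built from the parameter table above.
% All values are illustrative only.
sensorPosition 0 1.2 1.2 90 -90 0
numZones 5
totNumRows 2
interiorBounds -0.8 0.8 0.0 2.2
cuboidDef 1 1 -0.8 0.0 0.0 1.1 0.0 1.2
cuboidDef 2 1 0.0 0.8 0.0 1.1 0.0 1.2
% zone 3 defined as a NULL zone (all X, Y, Z set to 0.0)
cuboidDef 3 1 0.0 0.0 0.0 0.0 0.0 0.0
% zones 4 and 5 would be defined similarly
occStateMach 1 10 15.0 20 10.0 3 5 8.0 10 2 150.0
zoneNeighDef 2 2 2 1 3
```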
Performance Tuning
The key items to focus on for tuning are as follows:
- Proper mounting - Ensure that all FOV and resolution constraints from the device are accounted for.
- Point Cloud Tuning - Users can adjust CFAR detection threshold in cfarCfg CLI to trade off between number of detected points vs false detection ratio. Lower “K0” results in richer point cloud but higher false detection ratio.
 cfarCfg 2 8 2 3 09.00 0.9 0 1 1 1
- Accurate zone definition - Ensuring the zones align with the physical space of the vehicle. This is described in further detail in the section zone mapping.
- occStateMach - All the parameters in occStateMach affect how the occupancy state machine moves between states. They may need to be adjusted depending on the number of points and the SNR observed in the specific environment. These parameters are further defined in the parameter list.
Need More Help?
- Search for your issue or post a new question on the mmWave E2E forums for xWRLx432
- See the SDK for more documentation on various algorithms used in this demo. Start at <MMWAVE_SDK_L_INSTALL_DIR>\docs\api_guide_xwrL64xx\index.html