AWRL1432 KTO Users Guide

Device Compatibility

⚠️ This lab is currently only compatible with the AWRL1432.

Overview

This lab demonstrates the use of TI AWRL1432 mmWave sensors for a kick-to-open system to allow for hands-free access to the trunk of a vehicle. As low power consumption is a key pain point for this use case, this lab implements two modes: “Low Power Mode” and “Gesture Mode.” While in “Low Power Mode,” the sensor consumes minimal power while searching for a presence to trigger a switch to “Gesture Mode.” In “Gesture Mode” the device uses range, velocity, and angle data to enable detection and classification of a kick gesture. This lab outputs what mode the sensor is in, the device’s power consumption, and whether or not a kick has been detected.

Quickstart

Prerequisites

Prerequisite 1 - Run mmWave Lab

Before continuing with this lab, users should run the mmWave demo for the AWRL1432 to gain familiarity with the sensor's capabilities and the tools used in the Radar Toolbox.


Prerequisite 2 - XDS Firmware

If using an AWRL1432BOOST EVM, the latest XDS110 firmware is required. The XDS110 Firmware runs on the microcontroller onboard the AWRL1432BOOST, which provides the JTAG Emulation and serial port communication over the XDS110 USB port. Packet loss has been observed on the serial port (COM ports) with some versions of the XDS110 firmware which can cause the demo to fail.

The latest XDS110 firmware is installed with the latest version of Code Composer Studio. CCS version 12.2 or later is required.


Prerequisite 3 - PC CPU and GPU Resources

The Kick-to-Open Visualizer requires sufficient GPU and CPU resources to process the incoming UART data every frame and run the demo smoothly. Users with insufficient PC resources may notice occasional visualizer lag or missed frames in the log file. The demo should still run on most PCs regardless of hardware resources, but performance might vary.


Requirements

Hardware Requirements

| Item | Details |
|---|---|
| AWRL1432 Evaluation Board | AWRL1432 Evaluation Board |
| Computer | PC with Windows 10. If a laptop is used, please use the 'High Performance' power plan in Windows. |
| Micro USB Cable | Due to the high mounting height of the EVM, a 15ft+ cable or USB extension cable is recommended. |

Software Requirements

| Tool | Version | Download Link |
|---|---|---|
| TI mmWave L SDK | 5.3.0.2 | TI mmWave SDK 5.3.0.2 |
| Uniflash | Latest | Uniflash tool is used for flashing TI mmWave Radar devices. Download the offline tool or use the Cloud version. |
| Code Composer Studio | CCS 12.2 or later | Code Composer Studio |

1. Configure the EVM for Flashing Mode

Follow the instructions for Hardware Setup of Flashing Mode

2. Flash the EVM using Uniflash

Flash the binary listed below using UniFlash. Follow the instructions for using UniFlash.

The flashable binary can be found at: <RADAR_TOOLBOX_INSTALL_DIR>\source\ti\examples\AWRL1432_KTO\prebuilt_binaries\

3. Configure the EVM for Functional Mode

Follow the instructions for Hardware Setup of Functional Mode

4. Run the Lab

1. Open Visualizer

Navigate to <RADAR_TOOLBOX_INSTALL_DIR>\tools\visualizers\Kick_To_Open_Visualizer. The visualizer may be run from Python 3.7.3 or as a standalone executable. In general, running the visualizer from the Python source runs faster and results in fewer errors over long periods of time. If running directly from the Python source, run the setUpEnvironment.bat script first to ensure all the correct packages and versions are installed. Then run the visualizer either from source with python gui_main.py, or as an executable by running mmWave_Kick_To_Open_Visualizer.exe.

2. Reset the AWRL1432BOOST EVM by pressing the RESET_SW button

3. Select the arrowed options as shown in the below graphic.

  1. Set the Device to AWRL1432 as indicated by arrow 1

  2. Set CLI COM to the appropriate COM port as indicated by arrow 2. If the AWRL1432BOOST EVM is plugged in when the visualizer is opened, the COM ports are filled in automatically. If they do not appear, check the Device Manager on Windows for the XDS110 Class Application/User UART port, and use this port number in the visualizer.

  3. Select the AWRL1432 Kick-to-Open demo as indicated by arrow 3.

  4. Press the “Connect” button as indicated by arrow 4

  5. Press the “Select Configuration” button as indicated by arrow 5. Navigate to <RADAR_TOOLBOX_INSTALL_DIR>\radar_toolbox\source\ti\examples\AWRL1432_KTO\chirp_configs and select gesture_recognition.cfg.

  6. Press the “Start and Send Configuration” button as indicated by arrow 6.

4. Running the Demo

This lab utilizes two "modes": a low power presence detection mode and a gesture recognition mode. While in presence detection mode, the device operates on minimal power while searching for a person within roughly 2m. Once a person is detected, the device switches to gesture recognition mode, in which kick gestures can be detected.

Perform a kick gesture while standing at a distance of 1m from the radar sensor. Detected gestures will be displayed in the 'Gesture Status' box of the visualizer.

Once a person enters the presence detection range, the device will switch from "Low Power Mode" to "Gesture Recognition Mode," in which the frame rate is much higher. The device is then ready to recognize kick gestures. In the visualizer, the "Presence Threshold" plot will disappear and be replaced by the "Doppler Average" plot, which shows the extracted feature being streamed from the device.

After a person exits and there is no presence found for about 10 seconds, the device will switch back to “Low Power Mode,” where it will once again wait for a presence to be detected.
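
The mode-switching behavior above amounts to a simple state machine. The sketch below is purely illustrative host-side pseudologic, not the device firmware; the class and constant names are assumptions, with the roughly 10 second timeout taken from the description above.

```python
import time

PRESENCE_TIMEOUT_S = 10.0  # ~10 s with no presence before returning to Low Power Mode

class KtoModeTracker:
    """Illustrative two-mode state machine mirroring the behavior described above."""

    def __init__(self):
        self.mode = "LOW_POWER"
        self.last_presence_time = None

    def update(self, presence_detected: bool) -> str:
        now = time.monotonic()
        if presence_detected:
            # Person in range: switch to (or stay in) the high-frame-rate gesture mode
            self.last_presence_time = now
            self.mode = "GESTURE"
        elif self.mode == "GESTURE":
            # No presence for ~10 s: drop back to the low power presence search
            if now - self.last_presence_time > PRESENCE_TIMEOUT_S:
                self.mode = "LOW_POWER"
        return self.mode
```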

This concludes the Quickstart section.

Developer’s Guide

Import Lab Project to CCS

To import the source code into your CCS workspace, a CCS project is provided in the lab at the path given below.

🛑 Error during Import to IDE
If an error occurs, check that the software dependencies listed above have been installed. Errors will occur if necessary files are not installed in the correct location for importing.

Build the Lab

Selecting Rebuild instead of Build ensures that the project is always re-compiled. This is especially important in case the previous build failed with errors.

🛑 Build Fails with Errors
If the build fails with errors, please ensure that all the software requirements are installed as listed above and in the mmWave SDK release notes.

Execute the Lab

There are two ways to execute the compiled code on the EVM: run it through the CCS debugger, or flash the compiled binary to the device with UniFlash as described in the Quickstart.

UART Output Data Format

This demo outputs data using a TLV (type-length-value) encoding scheme with little endian byte order. For every frame, a packet is sent consisting of a fixed sized Frame Header and then a variable number of TLVs depending on what was detected in that scene. The TLVs for this demo include the extracted features used for the neural network inference as well as the detected gesture for that frame.

Frame Header

Length: 40 Bytes
A Frame Header is sent at the start of each packet. Use the Magic Word to find the start of each packet.

| Value | Type | Bytes | Comments |
|---|---|---|---|
| Magic Word | uint64_t | 8 | syncPattern in hex is: '02 01 04 03 06 05 08 07' |
| Version | uint32_t | 4 | Software version |
| Total Packet Length | uint32_t | 4 | In bytes, including header |
| Platform | uint32_t | 4 | A6843 |
| Frame Number | uint32_t | 4 | Frame number |
| Time [in CPU Cycles] | uint32_t | 4 | Message create time in CPU cycles |
| Num Detected Obj | uint32_t | 4 | Number of detected points in this frame |
| Num TLVs | uint32_t | 4 | Number of TLVs in this frame |
| Subframe Number | uint32_t | 4 | Sub-frame number |
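
For reference, the sketch below shows one way a host-side Python script might locate the magic word and unpack the 40-byte frame header with the standard struct module. The function name is an assumption; the field order and sizes come from the table above.

```python
import struct

# Sync pattern from the Frame Header table: '02 01 04 03 06 05 08 07'
MAGIC_WORD = bytes.fromhex("0201040306050807")
FRAME_HEADER_FMT = "<Q8I"  # little endian: uint64 magic word + 8 x uint32
FRAME_HEADER_LEN = struct.calcsize(FRAME_HEADER_FMT)  # 40 bytes

def parse_frame_header(buf: bytes):
    """Locate the magic word and unpack the fixed 40-byte frame header."""
    start = buf.find(MAGIC_WORD)
    if start < 0 or len(buf) - start < FRAME_HEADER_LEN:
        return None  # no complete header in the buffer yet
    fields = struct.unpack_from(FRAME_HEADER_FMT, buf, start)
    keys = ("magicWord", "version", "totalPacketLen", "platform",
            "frameNumber", "timeCpuCycles", "numDetectedObj",
            "numTlvs", "subFrameNumber")
    return start, dict(zip(keys, fields))
```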

TLV Header

Length: 8 Bytes
A TLV Header is sent at the start of each TLV. Following the header is the TLV-type specific payload.

| Value | Type | Bytes | Comments |
|---|---|---|---|
| Type | uint32_t | 4 | TLV Type |
| Length [number of bytes] | uint32_t | 4 | In bytes |
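
Continuing the sketch above, the TLVs in a packet can then be walked using this 8-byte header. This assumes the Length field counts only the payload bytes (not the TLV header itself); adjust if parsing shows otherwise.

```python
import struct

TLV_HEADER_FMT = "<2I"  # type and length, both little-endian uint32
TLV_HEADER_LEN = struct.calcsize(TLV_HEADER_FMT)  # 8 bytes

def iter_tlvs(payload: bytes, num_tlvs: int):
    """Yield (tlv_type, tlv_value) pairs from the bytes after the frame header."""
    offset = 0
    for _ in range(num_tlvs):
        tlv_type, tlv_len = struct.unpack_from(TLV_HEADER_FMT, payload, offset)
        offset += TLV_HEADER_LEN
        yield tlv_type, payload[offset:offset + tlv_len]
        offset += tlv_len  # assumes Length excludes the 8-byte TLV header
```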

TLVs

Gesture Features TLV

Size: sizeof (tlvHeaderStruct) + 56 Bytes
The Gesture Features TLV consists of an array of extracted features, some of which are used as input to the neural network. Each extracted feature is defined below.

| Value | Type | Bytes | Comment |
|---|---|---|---|
| rangeAvg | float | 4 | range average feature from RD heatmap |
| dopplerAvg | float | 4 | doppler average feature from RD heatmap |
| dopplerAvgPos | float | 4 | doppler positive average feature from RD heatmap |
| dopplerAvgNeg | float | 4 | doppler negative average feature from RD heatmap |
| numPoints | float | 4 | number of detected points above a threshold in RD heatmap |
| azimWtMean | float | 4 | azimuth weighted mean from elevation-azimuth heatmap |
| elevWtMean | float | 4 | elevation weighted mean from elevation-azimuth heatmap |
| azimWtDisp | float | 4 | azimuth weighted displacement from elevation-azimuth heatmap |
| elevWtDisp | float | 4 | elevation weighted displacement from elevation-azimuth heatmap |
| dopAzimCorr | float | 4 | doppler azimuth correlation across n frames |
| dopElevCorr | float | 4 | doppler elevation correlation across n frames |
| dopPosNegCorr | float | 4 | doppler positive negative correlation across n frames |
| dopPosElevCorr | float | 4 | doppler positive elevation correlation across n frames |
| dopPosAzimCorr | float | 4 | doppler positive azimuth correlation across n frames |
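
Since the 14 features above are consecutive little-endian float32 values (56 bytes total), the payload can be decoded with a single struct.unpack, as sketched below; the helper name is an assumption.

```python
import struct

FEATURE_NAMES = (
    "rangeAvg", "dopplerAvg", "dopplerAvgPos", "dopplerAvgNeg", "numPoints",
    "azimWtMean", "elevWtMean", "azimWtDisp", "elevWtDisp",
    "dopAzimCorr", "dopElevCorr", "dopPosNegCorr",
    "dopPosElevCorr", "dopPosAzimCorr",
)

def parse_gesture_features(value: bytes) -> dict:
    """Unpack the 14 little-endian float32 features in the order tabulated above."""
    return dict(zip(FEATURE_NAMES, struct.unpack("<14f", value)))
```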

Gesture Classifier Output TLV

Size: sizeof (tlvHeaderStruct) + sizeof(uint8_t)
The Gesture Classifier Output TLV consists of a single uint8_t indicating the detected gesture.

| Gesture | Identifier |
|---|---|
| No Gesture | 0 |
| Kick | 1 |

Presence Detection Output TLV

Type: MMWDEMO_OUTPUT_EXT_PRESENCE_OUTPUT = 352
Size: sizeof (tlvHeaderStruct) + sizeof(uint8_t)
The Presence Detection Output TLV consists of a single uint8_t indicating whether a presence was detected. This TLV is not sent out in gesture-only mode.

| Status | Value |
|---|---|
| No Presence Detected | 0 |
| Presence Detected | 1 |

Presence Detection Threshold TLV

Type: MMWDEMO_OUTPUT_EXT_PRESENCE_DETECT_THRESHOLD = 353
Size: sizeof (tlvHeaderStruct) + sizeof(uint32_t)
This TLV contains the sum of magnitudes across the 2nd Doppler bin from range bin 'x' to range bin 'y' while in presence mode. This TLV is not sent out in gesture-only mode.

Stats TLV

Type: MMWDEMO_OUTPUT_EXT_MSG_STATS = 354
Size: sizeof (tlvHeaderStruct) + sizeof(MmwDemo_output_message_stats)

| Value | Type | Bytes | Comment |
|---|---|---|---|
| interFrameProcessingTime | uint32_t | 4 | Interframe processing time in usec |
| transmitOutputTime | uint32_t | 4 | Time to transmit the output data in usec |
| powerMeasured[4] | uint16_t[4] | 8 | Power at the 1.8V, 3.3V, 1.2V, and 1.2V RF rails (1 LSB = 100 uW) |
| tempReading[4] | int16_t[4] | 8 | Temperature at Rx, Tx, PM, and DIG (°C, 1 LSB = 1°C) |
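
Given the units in the table, a hedged decode of this TLV, converting the power readings to milliwatts, might look like the following; the function name is an assumption.

```python
import struct

STATS_FMT = "<2I4H4h"  # two uint32, four uint16, four int16 (24 bytes)

def parse_stats(value: bytes) -> dict:
    """Unpack the Stats TLV; 1 LSB = 100 uW for power, 1 LSB = 1 degC for temperature."""
    fields = struct.unpack(STATS_FMT, value)
    return {
        "interFrameProcessingTime_us": fields[0],
        "transmitOutputTime_us": fields[1],
        # Rail order from the table: 1.8V, 3.3V, 1.2V, 1.2V RF (100 uW -> mW)
        "railPower_mW": [lsb * 0.1 for lsb in fields[2:6]],
        # Sensor order from the table: Rx, Tx, PM, DIG
        "temperature_C": list(fields[6:10]),
    }
```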

Neural Network

This demo utilizes a neural network model that runs on the ARM core, making predictions based on the extracted features. A sliding window of features, extracted from the data across a number of frames, is used as input to the neural network. The model outputs a set of probabilities, each of which represents the probability that the input data belongs to a specific gesture.
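
As an illustration of the sliding-window inference described above, the sketch below buffers per-frame feature vectors and runs a classifier once the window fills. The window length and model interface are assumptions, not the values used by the shipped model.

```python
from collections import deque
import numpy as np

WINDOW_FRAMES = 30  # assumed window length; the real value is model-specific

class SlidingWindowClassifier:
    """Buffer per-frame feature vectors and classify over the full window."""

    def __init__(self, model):
        self.model = model  # any callable mapping a window to gesture probabilities
        self.window = deque(maxlen=WINDOW_FRAMES)

    def push_frame(self, features):
        self.window.append(features)  # newest frame's extracted features
        if len(self.window) < WINDOW_FRAMES:
            return None  # not enough history to run inference yet
        x = np.asarray(list(self.window), dtype=np.float32)[np.newaxis, ...]
        probs = self.model(x)  # one probability per supported gesture
        return int(np.argmax(probs))
```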

Features

The model is trained on the following extracted features.

| Extracted Feature | Description |
|---|---|
| Doppler Average | Weighted average of the Doppler across the heatmap |
| Elevation Weighted Mean | Select the N cells of the heatmap with the highest magnitude and compute the elevation bin of each (via angle-FFT). The elevation weighted mean is the weighted average of these elevation bin indices. |
| Azimuth Weighted Mean | Select the N cells of the heatmap with the highest magnitude and compute the azimuth bin of each (via angle-FFT). The azimuth weighted mean is the weighted average of these azimuth bin indices. |
| Number of Detected Points | Number of cells in the heatmap with a magnitude above a certain threshold |
| Range Average | Weighted average of the range across the heatmap |

Retraining

Users may wish to retrain the neural network model for various reasons, such as adding or removing gestures. The full process for retraining and deploying a model is not provided in this guide; however, the extracted features output over UART can be saved and used as training data in the model building process.
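
One hedged way to capture that training data is to append decoded feature vectors (e.g. from a parser like parse_gesture_features above, whose FEATURE_NAMES tuple is reused here) to a CSV file as frames arrive; the function name is an assumption.

```python
import csv

def log_features(csv_path: str, feature_dicts):
    """Append per-frame feature dicts to a CSV file for later model training."""
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FEATURE_NAMES)
        if f.tell() == 0:
            writer.writeheader()  # write the header row only for a new/empty file
        for row in feature_dicts:
            writer.writerow(row)
```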