TI Utilities API
The Log module provides APIs to instrument source code. More...
Data Structures
  struct Log_Module

Macros
  #define Log_MODULE_DEFINE(...)
  #define Log_MODULE_USE(...)
  #define Log_EVENT_DEFINE(name, fmt)
  #define Log_EVENT_USE(name, fmt)
  #define Log_printf(module, level, ...)
  #define Log_event(module, level, ...)
  #define Log_buf(module, level, ...)
  #define _Log_DEFINE_LOG_VERSION(module, version)

Typedefs
  typedef enum Log_Level Log_Level
  typedef const struct Log_Module Log_Module
  typedef void (*Log_printf_fxn)(const Log_Module *handle, uint32_t header, uint32_t index, uint32_t numArgs, ...)
  typedef void (*Log_buf_fxn)(const Log_Module *handle, uint32_t header, uint32_t index, uint8_t *data, size_t size)

Enumerations
  enum Log_Level { Log_DEBUG = 1, Log_VERBOSE = 4, Log_INFO = 16, Log_WARNING = 64, Log_ERROR = 256, Log_ALL = 1 + 4 + 16 + 64 + 256, Log_ENABLED = 512 }
The Log module provides APIs to instrument source code.
To access the LOG APIs, the application should include its header file as follows:
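Assuming the SDK's usual header location for this module:

```c
#include <ti/log/Log.h>
```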
The logging ecosystem is to be considered beta quality. It is not recommended by TI for use in production code. APIs and behavior will change in future releases. Please report issues or feedback to E2E.
The following terms are used throughout the log documentation.
Term | Definition |
---|---|
LogModule | A parameter passed to Log APIs to indicate which software module the log statement originated from. Modules also control the routing of logs to sinks. |
LogLevel | The severity or importance of a given log statement. |
Sink | Also simply called a logger. This is a transport specific logger implementation. The Logging framework is flexible such that multiple sinks may exist in a single firmware image. |
CallSite | A specific invocation of a Log API in a given file or program. |
Record | The binary representation of a log when it is stored or transported by a given sink. The log record format varies slightly with each sink depending on its implementation and needs. However, all records convey the same information. |
Link Time Optimization (LTO) | A feature of some toolchains that can significantly reduce the code overhead of the log statements through a process called dead code elimination. In order to maximize the benefits of this, all static libraries and application files should have LTO enabled. |
The following sections describe the usage of the TI logging system implementation. This document focuses on the target side (i.e. the code that runs on the embedded device). For the associated PC tooling, please see the README in the tools/log/tiutils/ folder.
Design Philosophy:
At the core of the logging implementation is heavy use of the C preprocessor. When reading an application, the Log APIs may look like function calls, but the preprocessor expands them heavily.
There are several ways in which the preprocessor is used.
If the symbol ti_log_Log_ENABLE is not defined, all Log statements are removed by the preprocessor. This does not rely on LTO or any other optimization; it removes all traces of logs from the program. A simplified pseudo-C implementation of what Log_printf(LogModule_App1, Log_DEBUG, "Hello World!"); would expand to is shown below. It will not compile and is not exhaustive; it is for illustration only.
From here, the logger has transferred control over to the sink implementation, which varies based on the transport (e.g. a circular buffer in memory or UART).
When adding log statements to the target software, it is recommended to create a logging module for each software component in the image. Modules enable the reader to understand where the log record originated from. Some log visualizers may allow the reader to filter or sort log statements by module. It is also recommended to namespace modules.
For example, a good module name for the UART driver that exists in source/ti/drivers could be ti_drivers_UART.
Modules also control the routing of log records to a sink. Routing is controlled via the LogModule panel in SysConfig, but can also be changed in plain C code by using the Log_MODULE_DEFINE macro and passing the sink-specific Log_MODULE_INIT_ macro to the init parameter within Log_MODULE_DEFINE. An example for the LogBuf sink is below; it will do the following:

- Create a module called LogModule_App1.
- Route its log records to the sink instance CONFIG_ti_log_LogSinkBuf_0.
- Store only logs of the Log_ERROR level. Other logs will not be stored.

TI-created libraries will never use Log_MODULE_DEFINE. This leaves the choice of routing logs to their sinks to the end application writer. Deferring the final logging decisions to link time in this way is recommended when creating any static library.
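A minimal sketch of such a definition. The initializer macro name and parameter order shown here are assumptions; the exact sink-specific Log_MODULE_INIT_ macro is defined by the sink's header, so consult it before use:

```c
#include <ti/log/Log.h>

/* Hypothetical: create LogModule_App1, route it to the
 * CONFIG_ti_log_LogSinkBuf_0 sink instance, and store only
 * Log_ERROR records. */
Log_MODULE_DEFINE(LogModule_App1,
                  Log_MODULE_INIT_SINK_BUF(LogModule_App1, Log_ERROR,
                                           CONFIG_ti_log_LogSinkBuf_0));
```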
Each new module will instantiate a Log_Module structure with a levels
bitmap and pointers to the selected sink implementation and sink configuration. See the Log_Module structure for more information.
Log levels are a way to indicate the severity or importance of the contents of a particular log call site. Each call site takes an argument that allows the user to specify the level. As with modules, log visualization tools allow the user to sort or filter on a given level. This can help the reader to find important or relevant log statements in visualization.
Log levels are also used to control the emission of logs. Each call site will check that the level is enabled before calling the underlying log API.
Depending on optimization, the check at each log statement for whether the given level is enabled or not may end up being optimized away, and the entire log statement may be optimized away if the log level is not enabled.
The -flto optimization flag, for both the TICLANG toolchain and GCC, will typically be able to optimize away such a disabled log statement entirely.
Each time a Log API is invoked, a metadata string is placed in the .out file. This string contains information about the API type, file, line, module, level, and other information associated with the log call site. Each call site emits a string to a specific memory section called .log_data. In addition, a pointer to the string in .log_data is stored in another section called .log_ptr. Because the .log_ptr section is always in the same location and each entry is the same size, an indexing scheme can be used to refer to each log string: entry 0 in .log_ptr points to the first string, entry 1 points to the second string, and so on. This is necessary on some devices where transmitting an entire 32-bit address as a reference to the string is not possible; instead, an 8-bit index can be transmitted by the Log sink implementation.

In order to use logging, this section must be added to the linker command file. By default, the section points to a nonloadable region of memory, meaning that the metadata will not be loaded onto the target device. Instead, the various log visualization tools, such as Wireshark and TI ROV2, read the metadata from this section and decode the log statements. The benefit of this approach is that very little memory is consumed on target, and the log transport only needs to store or send pointers into this metadata section when a Log API is called. This minimizes both the memory consumed on device and the bytes sent over the transport. The section can still be loaded on target if desired, for example when creating a custom logger; the design does not preclude this.
In order to use the logging framework, the log section must be added to the linker command file. Here is a sample for the TI linker. Other examples can be found in the TI provided linker files for each toolchain.
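A sketch of such a linker command file fragment for the TI-ARM linker. The origin and length values are placeholders; the actual addresses come from the device's TI-provided linker file. The type = COPY directive keeps these sections out of the loaded image:

```
MEMORY
{
    /* Off-target region: holds log metadata that is never loaded. */
    LOG_DATA (R) : origin = 0x90000000, length = 0x40000
    LOG_PTR  (R) : origin = 0x94000008, length = 0x40000
}

SECTIONS
{
    .log_data : > LOG_DATA, type = COPY
    .log_ptr  : { *(.log_ptr*) } > LOG_PTR align 4, type = COPY
}
```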
Sinks are responsible for storing or transporting the log record. In general there are two categories of sinks:
Sinks may vary in their implementation based on the nature of the storage or transport that they support, but they all have the following in common:
In the sink naming conventions, <SinkName> is the name of the sink. In addition, some sinks require initialization; this is listed in the documentation for each sink implementation. Sinks are closely tied to their associated host-side tooling. Since log statements are not parsed at all by the target code, parsing must be delegated to a program running on a PC. While the binary format of log records may vary across sink implementations, it is suggested that each log record contain:
This is the minimum amount of information needed to decode a log statement.
This section provides a basic usage summary and a set of examples in the form of commented code fragments. Detailed descriptions of the LOG APIs are provided in subsequent sections.
The following example demonstrates how to create a log event object and use it in the code. There are two steps to using a log event: 1. instantiation and 2. call site(s). Instantiation creates the event and the necessary metadata, and call site is where the event is actually recorded by the logger framework.
Later on in the application, the count event is consumed. Note that the log module must match between event creation and the call site. In the code below, a LogEvent record is created for serialization or storage by the Log sink.
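A sketch of the two steps with illustrative names (LogEvent_count, onCount, count are not from the SDK); the argument order for Log_event follows the macro summary above and should be checked against the header:

```c
/* Step 1: instantiation - creates the event and its metadata (file scope). */
Log_EVENT_DEFINE(LogEvent_count, "Count reached: %d");

/* Step 2: call site - the event is recorded through the module's sink. */
void onCount(uint32_t count)
{
    Log_event(LogModule_App1, Log_DEBUG, LogEvent_count, count);
}
```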
The following example demonstrates use of the Log_printf API in code. Log_printf will embed the format string in the call site and will take its arguments as variadic arguments.
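A sketch of a typical call, with illustrative module and variable names (sensorId and value are assumptions for this example):

```c
/* The format string becomes metadata in .log_data; only its index and
 * the argument values travel through the sink at runtime. */
Log_printf(LogModule_App1, Log_INFO, "Sensor %d reads %d", sensorId, value);
```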
The following example demonstrates use of the Log_buf API in code. Log_buf will embed the format string in the call site and will take the buffer as a pointer and a length. Buffers are treated as arrays of bytes. The buffer API should only be used when it is necessary to log data that is only available at runtime. It will actually send or store the entire contents of the buffer, so this API should be used sparingly, as it is costly in terms of runtime and memory overhead.
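A sketch of a typical call, assuming the argument order from the Log_buf_fxn typedef above (format string, then data pointer, then size); rxBuffer is an illustrative name:

```c
uint8_t rxBuffer[16];
/* Transports all 16 bytes of rxBuffer through the sink; costly, use sparingly. */
Log_buf(LogModule_App1, Log_VERBOSE, "Received packet:", rxBuffer, sizeof(rxBuffer));
```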
For a uniform experience with the logging tools, users are recommended to follow certain guidelines regarding the Log API. Typical use cases for each API call are described below.
Log_printf should be the default mechanism for emitting a log statement within an application. Along with the Log-levels, Log_printf should be used to communicate debug information as a formatted string, which accepts variadic arguments. In this case, a pointer to the string and the arguments themselves are transported by the Log sink.
Log_event is meant to represent more generic debug information, typically something that can occur from anywhere in the application, as opposed to being localized in a single library. Events can also be defined once and referenced from anywhere in the application, so the same event can be used by multiple libraries. A generic example would be an event such as "Entering critical section".
When the debug information to be emitted is a large amount of dynamic data that is not suitable as an argument to printf, Log_buf should be used. Log_buf can transport the contents of large dynamic buffers, and as a consequence has a larger overhead and should be used sparingly.
#define Log_MODULE_DEFINE(...)

#define Log_MODULE_USE(...)

#define Log_EVENT_DEFINE(name, fmt)

#define Log_EVENT_USE(name, fmt)

#define Log_printf(module, level, ...)

#define Log_event(module, level, ...)

#define Log_buf(module, level, ...)

#define _Log_DEFINE_LOG_VERSION(module, version)
typedef const struct Log_Module Log_Module |
typedef void (*Log_printf_fxn)(const Log_Module *handle, uint32_t header, uint32_t index, uint32_t numArgs, ...)
typedef void (*Log_buf_fxn)(const Log_Module *handle, uint32_t header, uint32_t index, uint8_t *data, size_t size)
enum Log_Level |