How to Develop with RPMsg IPC#

There are many ways to implement Inter-Processor Communication (IPC) between Linux and remote cores. TI provides drivers and sample code for a specific IPC implementation called RPMsg. RPMsg allows Linux to send and receive messages of up to 496 bytes, carried in 512-byte packets, to and from remote cores. RPMsg may or may not be the right IPC implementation for your design; it is up to the designer to evaluate which software implementations are appropriate for their system.
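The 496-byte payload limit follows from the buffer layout: each message travels in a 512-byte buffer, and the in-band rpmsg header (source address, destination address, a reserved field, length, and flags) consumes 16 of those bytes. A quick sanity check of the arithmetic:

```shell
# Each RPMsg buffer is 512 bytes; the 16-byte rpmsg header
# (src, dst, reserved, len, flags) is carried in-band,
# leaving 496 bytes for the payload.
BUFFER_SIZE=512
HEADER_SIZE=16
PAYLOAD_SIZE=$((BUFFER_SIZE - HEADER_SIZE))
echo "max payload per message: ${PAYLOAD_SIZE} bytes"
# prints: max payload per message: 496 bytes
```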

This section serves as a central hub of information about developing with RPMsg IPC:

  • RPMsg IPC Resources

  • When would an MCU+ use case need Linux RPMsg?

  • Getting started with MCU to MCU IPC examples

  • Getting started with IPC Linux examples

    • How to build and run the RPMsg Linux userspace example application

    • How to build and run the RPMsg Linux kernel space example application

  • RPMsg IPC Advanced Topics

    • How to pass large amounts of data between cores

    • How to add multiple RPMsg endpoints to a remote core running RTOS

    • Graceful shutdown

RPMsg IPC Resources#

RPMsg Linux Drivers & Linux Userspace Libraries

For more information about the RPMsg Linux drivers & Linux userspace libraries, reference the section IPC for AM62x in the Processor SDK Documentation.

RPMsg MCU+ Drivers

For more information about RPMsg on remote cores, reference the MCU+ SDK Documentation.

DMA Memory Carveouts

For more information about the memory carveouts that Linux creates for IPC, and how to adjust those memory carveouts, reference section How to allocate memory.

Software Dependencies

Prerequisites

Note

You should use the same version of the MCU+ Processor SDK and the Linux Processor SDK. In the past, the Linux SDK has sometimes been released before the MCU+ SDK; so far, the new Linux SDK has continued to work with the previous version of the MCU+ SDK. However, we cannot guarantee that different versions of the Linux and MCU+ SDKs will always work together.

When would an MCU+ use case need Linux RPMsg?#

For general purpose communication between cores, RPMsg is one potential option. RPMsg can be used to send data 496 bytes at a time, or as a notification mechanism when a “shared memory” approach is used to transmit large amounts of data between cores. Customers can also implement other forms of IPC that TI does not develop or support.

Some Linux remoteproc features are only available if Linux RPMsg is enabled in the remote core. For cores that support graceful shutdown during Linux runtime, graceful shutdown will only work if Linux RPMsg is enabled in the remote core firmware.

For more information about graceful shutdown, reference section Graceful shutdown.

For more information about the MCU+ code involved in graceful shutdown, reference the MCU+ SDK docs Graceful shutdown of remote cores from Linux.

Getting started with MCU to MCU IPC examples#

For more information about building and running the RPMsg IPC example between two MCU+ cores, reference the MCU academy section mcu_rpmsg_ipc.

Getting Started with IPC Linux Examples#

For more information about how to use the Linux RemoteProc driver to load firmware into remote cores, reference the Linux academy section Booting Remote Cores.

For more information about running the RPMsg firmware that comes prebuilt on the default Linux filesystem image, reference the Linux academy section IPC Example.

Building the remote core RPMsg firmware#

The remote core RPMsg firmware is in the MCU+ SDK at examples/drivers/ipc/ipc_rpmsg_echo_linux. For steps to build it, reference the MCU+ SDK docs section Build a Hello World example.

For more information about the RPMsg example, reference the MCU+ SDK page IPC RP Message Linux Echo.

Building the Linux userspace RPMsg example#

Note

These steps were tested on Ubuntu 18.04. Later versions of Ubuntu may need different steps.

Access the source code in the ti-rpmsg-char git repo. rproc_id is defined at include/rproc_id.h.

Build the Linux Userspace example for Linux RPMsg by following the steps in the top-level README:

  1. Download the git repo

  2. Install GNU autoconf, GNU automake, GNU libtool, and the cross-compilation toolchain as per the README

  3. Perform the Build Steps as per the README
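The steps above can be sketched as the following shell session. This is a hedged sketch of the flow described in the top-level README; the repo URL and cross-compile triplet are assumptions, so substitute the values for your SDK toolchain:

```shell
# Hedged sketch of the ti-rpmsg-char build flow (per the README).
# REPO and HOST below are assumptions; adjust to your environment.
REPO=https://git.ti.com/git/rpmsg/ti-rpmsg-char.git
HOST=aarch64-none-linux-gnu      # A53 cross-compile triplet from the Linux SDK toolchain

git clone "$REPO"
cd ti-rpmsg-char
autoreconf -i                    # generate the configure script
./configure --host="$HOST"       # cross-compile for the target
make                             # builds the library and examples/rpmsg_char_simple
```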

Running the RPMsg userspace example#

This section assumes the reader has already gone through the Linux academy section IPC Example.

Linux RPMsg can be tested with prebuilt binaries that are packaged in the “tisdk-default-image” filesystem:

  1. Copy the Linux RPMsg Userspace application from <ti-rpmsg-char_repo>/examples/rpmsg_char_simple into the board’s Linux filesystem.

  2. Ensure that the remote core symbolic link points to the desired binary file in /lib/firmware/ti-ipc/<processor>/. Update the symbolic link if needed. Reference Linux academy section Booting Remote Cores from the Linux Console or User Space for more information.

  3. Run the example on the board as detailed at Linux academy section IPC Example.

Building the RPMsg kernel space example#

The kernel space example is in the Linux Processor SDK under samples/rpmsg/rpmsg_client_sample.c.

Build the kernel module rpmsg_client_sample:

  • Set up the kernel config to build the rpmsg client sample. Use menuconfig to verify that Kernel hacking > Sample kernel code > Build rpmsg client sample is set to M:

$ export PATH=<sdk path>/linux-devkit/sysroots/x86_64-arago-linux/usr/bin:$PATH
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- distclean
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- tisdk_am64xx-evm_defconfig
$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- menuconfig
  • Make the kernel and modules. Running make with X parallel jobs (-jX) can speed up the build:

$ make ARCH=arm64 CROSS_COMPILE=aarch64-none-linux-gnu- -j8

Running the RPMsg kernel space example#

Linux RPMsg can be tested with prebuilt binaries that are packaged in the “tisdk-default-image” filesystem:

  • Copy the Linux RPMsg kernel driver from <Linux_SDK>/board-support/<linux>/samples/rpmsg/rpmsg_client_sample.ko into the board’s Linux filesystem.

  • Ensure that the remote core symbolic link points to the desired binary file in /lib/firmware/ti-ipc/<processor>/. Update the symbolic link if needed. Reference Linux academy section Booting Remote Cores from the Linux Console or User Space for more information.

  • Run the example on the board:

root@am64xx-evm:~# modprobe rpmsg_client_sample count=10
[  192.754123] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[  192.762614] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[  192.767945] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[  192.778102] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[  192.787125] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[  192.793103] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[  192.799752] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: new channel: 0x400 -> 0xd!
[  192.809324] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[  192.823064] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[  192.833132] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[  192.843179] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[  192.853170] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[  192.863228] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[  192.873335] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[  192.883392] rpmsg_client_sample virtio0.ti.ipc4.ping-pong.-1.13: goodbye!
[  192.891964] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[  192.902022] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[  192.912136] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[  192.922181] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[  192.932270] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[  192.942319] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[  192.952403] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[  192.962433] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[  192.972538] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[  192.982616] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[  192.992836] rpmsg_client_sample virtio1.ti.ipc4.ping-pong.-1.13: goodbye!
[  193.001472] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[  193.011614] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[  193.020184] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[  193.028628] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[  193.037089] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[  193.045484] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[  193.053874] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[  193.062261] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[  193.070614] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[  193.079000] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[  193.087397] rpmsg_client_sample virtio2.ti.ipc4.ping-pong.-1.13: goodbye!
[  193.094355] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 1 (src: 0xd)
[  193.102729] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 2 (src: 0xd)
[  193.111134] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 3 (src: 0xd)
[  193.119512] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 4 (src: 0xd)
[  193.127928] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 5 (src: 0xd)
[  193.136292] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 6 (src: 0xd)
[  193.144761] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 7 (src: 0xd)
[  193.153207] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 8 (src: 0xd)
[  193.161691] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 9 (src: 0xd)
[  193.170119] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: incoming msg 10 (src: 0xd)
[  193.178632] rpmsg_client_sample virtio3.ti.ipc4.ping-pong.-1.13: goodbye!

RPMsg IPC Advanced Topics#

How to pass large amounts of data between cores?#

The rpmsg_char_zerocopy example shows how to define a shared memory region to pass data between Linux and a remote core, using RPMsg as a signaling mechanism to notify the other core when the shared memory region is ready to read:

https://git.ti.com/cgit/rpmsg/rpmsg_char_zerocopy/

How to add multiple RPMsg endpoints to a remote core running RTOS?#

Question#

I am using RPMsg to communicate from Linux with a remote core running RTOS. I want to define multiple RPMsg endpoints in my remote core software. How do I do that?

Answer#

This FAQ applies to AM62x. The provided sample code was written for AM64x R5F, and tested on Processor SDK 8.6. The concepts will apply to all processors that use RPMsg to communicate between a Linux core and an RTOS remote core, but some of the details may change between cores and devices.

This git patch to the MCU+ SDK adds two additional RPMsg endpoints that can be used to communicate with Linux userspace:

Linux_RPMsg_Echo-add-additional-endpoints.patch

Key details#

  • Each additional endpoint uses the same RPMsg service name that was defined for Linux userspace, “rpmsg_chrdev”.

    The expected service name for RPMsg endpoints that will communicate with Linux userspace is defined in the Linux rpmsg_char driver.

  • Each endpoint needs to have a separate RPMessage_Object with a unique semaphore in the RTOS-side code.

    TI tested giving each endpoint a separate RPMessage_Object by giving each endpoint a separate task in the RTOS-side code.

Other information#

For steps to apply a git patch, reference section How to apply a git patch.

What should the output look like?#

// code is running on AM64x R5F0_0, so use RPROC_ID 2

root@am64xx-evm:~# rpmsg_char_simple -r 2 -n 1 -d rpmsg_chrdev -p 14
Created endpt device rpmsg-char-2-986, fd = 3 port = 1024
Exchanging 1 messages with rpmsg device ti.ipc4.ping-pong on rproc id 2 ...

Sending message #0: hello there 0!
Receiving message #0: hello there 0!

Communicated 1 messages successfully on rpmsg-char-2-986

TEST STATUS: PASSED
root@am64xx-evm:~# rpmsg_char_simple -r 2 -n 1 -d rpmsg_chrdev -p 15
Created endpt device rpmsg-char-2-988, fd = 3 port = 1024
Exchanging 1 messages with rpmsg device ti.ipc4.ping-pong on rproc id 2 ...

Sending message #0: hello there 0!
Receiving message #0: hello there 0!

Communicated 1 messages successfully on rpmsg-char-2-988

TEST STATUS: PASSED
root@am64xx-evm:~# rpmsg_char_simple -r 2 -n 1 -d rpmsg_chrdev -p 16
Created endpt device rpmsg-char-2-990, fd = 3 port = 1024
Exchanging 1 messages with rpmsg device ti.ipc4.ping-pong on rproc id 2 ...

Sending message #0: hello there 0!
Receiving message #0: hello there 0!

Communicated 1 messages successfully on rpmsg-char-2-990

TEST STATUS: PASSED

// now let's see what happens if you talk to an endpoint that does not exist

root@am64xx-evm:~# rpmsg_char_simple -r 2 -n 1 -d rpmsg_chrdev -p 17
_rpmsg_char_find_ctrldev: could not find the matching rpmsg_ctrl device for virtio1.rpmsg_chrdev.-1.17
Can't create an endpoint device: Success
TEST STATUS: FAILED

Graceful shutdown#

For more information about the MCU+ code involved in graceful shutdown, reference the MCU+ SDK docs Graceful shutdown of remote cores from Linux.

What is graceful shutdown?#

During a “graceful shutdown”, the Linux remoteproc driver does not simply turn off a remote core. Instead, remoteproc uses RPMsg to request that the remote core release its resources and enter a known good state. After the remote core sends an RPMsg back confirming that it is in a good shutdown state, Linux shuts the remote core down. Thus, if Linux RPMsg is not enabled in a remote core’s firmware, the Linux remoteproc driver cannot gracefully shut down that remote core.

Why does graceful shutdown matter?#

Graceful shutdown allows a remote core to be shut down, and then restarted, during Linux runtime. That allows for faster debugging during development (there is no need to reboot the entire processor just to load a different firmware binary). It also allows new firmware to be loaded into the remote cores during Linux runtime in the final product (e.g., if a firmware binary must be updated, but the entire system cannot be rebooted).
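As a hedged sketch, stopping and restarting a remote core from the Linux console goes through the remoteproc sysfs interface. The remoteproc0 instance name below is an assumption; check /sys/class/remoteproc/remoteproc*/name to find the instance that maps to your core, and use a firmware path relative to /lib/firmware:

```shell
# Stop the remote core gracefully (only works if the firmware
# has Linux RPMsg enabled and the core can still respond).
echo stop > /sys/class/remoteproc/remoteproc0/state

# The core should now report "offline".
cat /sys/class/remoteproc/remoteproc0/state

# Optionally point the core at a different firmware image
# (path relative to /lib/firmware), then restart it.
echo <firmware-file> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
```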

Note

Graceful shutdown only works if the remote core is able to respond. If the remote core has crashed or entered a bad state, the Linux driver will throw a timeout error instead of forcing the remote core off. This preserves the remote core state in case debugging is required. In a multicore system, a bad state can come from outside the remote core (e.g., the remote core may be waiting for data from another core), so turning the remote core off and restarting it may not actually address the source of the issue. Throwing a timeout error instead of blindly shutting off the core allows customers to handle the error and take whatever action is appropriate for their specific use case.

The AM62x is a K3 architecture device. K3 devices are multicore devices. However, each peripheral is only designed to be controlled by 1 core at a time. That means that there must be some way of coordinating which core is controlling which peripheral.

K3 devices use a separate core to manage which peripheral is “owned” by which processor. This core is called the “device manager” core, or the DM core. Whenever a core boots up and decides to use a peripheral, it requests ownership of that peripheral from the DM core. If another core has already been granted ownership of that peripheral, the DM core refuses the request.

Graceful shutdown matters because, during a graceful shutdown, the remote core is given time to message the DM core and release all of the peripherals it was using. Linux can then shut down the remote core and restart it. When the remote core reboots, it requests ownership of its peripherals from the DM core. Since no other core is currently using those peripherals, the DM core grants ownership back to the remote core.

If the remote core is powered off during Linux runtime without warning, it is not able to tell the DM core to release its peripherals. When the remote core requests its peripherals after being rebooted, the DM core will refuse the request, because the DM core thinks the peripherals are already in use. At that point, the remote core typically stalls, and the entire processor needs to be rebooted.