Rust is a fairly new programming language (born in 2010), but it is showing great potential for developing embedded firmware. It is first and foremost designed to be a systems programming language, which makes it particularly suitable for microcontrollers. Its aim of improving on some of C/C++'s biggest shortcomings by implementing a robust ownership model (which removes entire classes of errors) is also very much applicable to firmware.
As of 2022, the C and C++ programming languages still remain the de-facto standard for embedded firmware. However, Rust’s role in firmware is looking bright. Rather than firmware being an afterthought, it feels as if Rust has first-tier embedded support. There is an official Rust Embedded Devices Working Group and The Embedded Rust Book.
This page aims to be an exploration into running Rust on microcontrollers (what you would consider to be low-level embedded firmware, and not running on a hosted environment such as Linux), covering things such as:
- Language Features
- Architecture Support
- MCU Family Support
- IDEs, Programming and the Debugging Experience
- Rust Disadvantages
Let's jump straight in!
Rust Language Features
Let’s explore some of Rust’s language features and how they are applicable to embedded firmware.
One of the core differences between Rust and C/C++ is that Rust builds a robust ownership model into the language. This prevents many memory-related bugs that can occur in C/C++ (think memory leaks, dangling pointers, etc.). These benefits are just as applicable to embedded firmware as they are to desktop software.
For anything but basic primitive data types that live on the stack (primitive types such as `i32`, `f64`, etc.), Rust will move data when the assignment operator is used, rather than perform a copy. The following example shows how the compiler enforces that only one variable may own a piece of data at once.
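A minimal sketch in the spirit of the book's classic `String` example (the function name here is my own):

```rust
// `String` owns heap data, so assignment MOVES it rather than copying it.
fn take_ownership_demo() -> String {
    let s1 = String::from("hello");
    let s2 = s1; // s1 is moved into s2; s1 is no longer valid from here on
    // println!("{}", s1); // compile error: borrow of moved value: `s1`
    s2
}
```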
If you did actually want to perform a copy, `s2 = s1.clone()` is the correct way to do it. These ownership rules also apply to passing variables into functions. A comprehensive walk-through can be found in The Rust Programming Language - Ch. 04 - Understanding Ownership.
Rather than transfer ownership, Rust also lets you “borrow” data via references. You are only allowed to:
- Borrow as many immutable references as you like and no mutable references, OR
- Borrow one mutable reference and no immutable references.
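A small sketch of both rules (function names are my own):

```rust
// Any number of immutable borrows may coexist.
fn sum_twice(v: &[i32]) -> i32 {
    let r1 = v; // first immutable borrow
    let r2 = v; // second immutable borrow: perfectly fine
    r1.iter().sum::<i32>() + r2.iter().sum::<i32>()
}

// Exactly one mutable borrow is allowed at a time.
fn push_one(v: &mut Vec<i32>) {
    v.push(1);
    // Holding an immutable borrow (e.g. `let first = &v[0];`) across this
    // mutable use would be rejected by the compiler.
}
```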
A big part of writing firmware is interacting with peripherals (GPIO, UART, USB, DMA, etc.). Most peripherals are memory mapped – i.e. you read/write to “magic” memory addresses to control the peripheral. The standard way of accessing peripherals in Rust is to use a Peripheral Access Crate or PAC. More likely than not, there will already be a PAC for the particular microcontroller you are using.
For example, the `cortex_m` crate provides access to the peripherals that are shared across all Cortex-M devices (e.g. the NVIC interrupts, SysTick). You can “claim” the peripherals by calling `cortex_m::Peripherals::take()`. Note that a singleton pattern is employed – you can only call `take()` once and have it return `Some(peripherals)`. The next time it will return `None`, and then calling `unwrap()` will cause a panic. You would typically `take()` at the start of your high-level entry function (e.g. `main()`).
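The singleton machinery can be sketched host-side; the `Peripherals` struct below is a hypothetical stand-in for the real PAC type, but the `take()` behaviour is the same:

```rust
use core::sync::atomic::{AtomicBool, Ordering};

// Hypothetical stand-in for a PAC's `Peripherals` struct.
pub struct Peripherals { /* register blocks would live here */ }

static TAKEN: AtomicBool = AtomicBool::new(false);

impl Peripherals {
    // Returns Some(Peripherals) the first time, None ever after.
    pub fn take() -> Option<Peripherals> {
        if TAKEN.swap(true, Ordering::SeqCst) {
            None // already taken once before
        } else {
            Some(Peripherals {})
        }
    }
}
```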
The other peripherals that aren't part of the Cortex-M architecture (such as UARTs, timers, PWMs, etc.) are normally found in a different crate for your particular microcontroller. For example, if I was using a STM32F30x microcontroller, I would add the appropriate PAC crate and then be able to write:
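A hypothetical sketch of what that could look like – the register and field names below are illustrative of the svd2rust style of API, and are not checked against the real `stm32f30x` crate:

```rust
fn configure_gpio() {
    // Claim the device peripherals (singleton pattern, as above).
    let dp = stm32f30x::Peripherals::take().unwrap();
    dp.RCC.ahbenr.modify(|_, w| w.iopaen().set_bit()); // enable the GPIOA clock
    dp.GPIOA.odr.modify(|_, w| w.odr9().set_bit());    // drive PA9 high
}
```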
Another thing Rust can do is provide compile-time checks that hardware has been configured properly based on how the code uses it. The Embedded Rust Book has this to say:
When applied to embedded programs these static checks can be used, for example, to enforce that configuration of I/O interfaces is done properly. For instance, one can design an API where it is only possible to initialize a serial interface by first configuring the pins that will be used by the interface.
One can also statically check that operations, like setting a pin low, can only be performed on correctly configured peripherals. For example, trying to change the output state of a pin configured in floating input mode would raise a compile error. – The Embedded Rust Book - Static Guarantees3
Let's explain this with an example. We'll follow the Embedded Rust Book guide and use a GPIO pin (a very basic form of MCU peripheral) as an example, with `into_...()` named functions to convert between the different pin types.
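The “type state” pattern can be demonstrated with plain Rust; the pin types below are my own simplified sketch, not a real HAL:

```rust
use core::marker::PhantomData;

// Zero-sized marker types for the pin's configured mode.
pub struct FloatingInput;
pub struct PushPullOutput;

// A hypothetical pin whose mode is tracked in the type system.
pub struct Pin<MODE> {
    driven_low: bool,
    _mode: PhantomData<MODE>,
}

impl Pin<FloatingInput> {
    pub fn new() -> Self {
        Pin { driven_low: false, _mode: PhantomData }
    }
    // Consumes the input pin and returns an output pin.
    pub fn into_push_pull_output(self) -> Pin<PushPullOutput> {
        // (a real HAL would write to the mode registers here)
        Pin { driven_low: false, _mode: PhantomData }
    }
}

// set_low() only exists on correctly configured pins, so calling it
// on a Pin<FloatingInput> is a COMPILE error, not a runtime bug.
impl Pin<PushPullOutput> {
    pub fn set_low(&mut self) { self.driven_low = true; }
    pub fn is_set_low(&self) -> bool { self.driven_low }
}
```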
`svd2rust` is a command-line tool that ingests SVD files (a.k.a. CMSIS-SVD – files that define register names, addresses and uses; you can think of them as a computer-readable version of the microcontroller's datasheet) and creates Rust PAC crates that expose the peripherals in a type-safe Rust API4. It currently supports Cortex-M, MSP430, RISC-V and Xtensa LX6 microcontrollers4.
Rust supports ad-hoc polymorphism via its concept of traits. As a really basic example, both float and integer types implement the `Add` trait because they can be added. The embedded-hal project leverages this by defining traits for things such as GPIO pins (input and output), UART, I2C, SPI, ADC, etc. These generic interfaces can be used by the application code, while underneath, vendor-specific drivers implement the correct functionality for each particular microcontroller. This is very similar to how virtual interface classes in C++ are used to create a portable HAL. More on this in the cargo and Package Structure section.
For example, the serial `Write` trait is defined as5:
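Paraphrasing the embedded-hal 0.2 documentation cited above (not compilable standalone, since it depends on the `nb` crate):

```rust
/// A non-blocking serial write half.
pub trait Write<Word> {
    /// The type of error that can occur when writing.
    type Error;
    /// Writes a single word to the serial interface.
    fn write(&mut self, word: Word) -> nb::Result<(), Self::Error>;
    /// Ensures that none of the previously written words are still buffered.
    fn flush(&mut self) -> nb::Result<(), Self::Error>;
}
```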
Safer Array Use
In Rust, if you index into an array, it is automatically bounds checked. This can prevent a ton of subtle undefined-behaviour bugs (and security vulnerabilities!) that you get when you try the same thing in C/C++. Of course, bounds checking does incur a small amount of runtime overhead (which is likely to be negligible in 99% of use cases).
If an array reference is passed into a function, then Rust can't work out at compile time whether the indexing is out-of-bounds. In this case it will do bounds checking at runtime and panic if the check fails:
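A small sketch of that situation (the function is my own example):

```rust
// The compiler can't know `i` here, so the index is checked at runtime.
fn get_elem(data: &[i32], i: usize) -> i32 {
    data[i] // panics with "index out of bounds" if i >= data.len()
}
```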
If the runtime overhead of bounds checking is a concern for your application, you can get rid of it by using an array iterator instead of indexing (or by using the unsafe `get_unchecked()`). Iteration is in fact the recommended way of accessing an array unless you really have to use an index (some situations do still need random access).
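For example, a sketch of iterating instead of indexing:

```rust
// Iterating visits every element in turn, so no per-access bounds
// check is needed (the iterator can never step outside the slice).
fn sum_all(data: &[u32]) -> u32 {
    let mut total = 0;
    for value in data.iter() {
        total += value;
    }
    total
}
```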
You might have also noticed that arrays don't decay to pointers (i.e. lose their dimension information – `sizeof` now gives you the size of the pointer) as easily as they do in C/C++ (especially when passing into functions). In Rust you can pass a reference to an array of any size into a function and still find its length by calling `.len()`, something you cannot do in C/C++ (you can pass arrays in C/C++ without the variable decaying to a pointer, but you have to hardcode the function to a specific array size, because the size information is not saved in the array's memory layout).
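A quick sketch: one function, slices of any length, and the length travels with the reference (a Rust slice reference is a pointer plus a length):

```rust
// Accepts a slice of ANY length; no size needs hardcoding.
fn first_and_len(data: &[u8]) -> (u8, usize) {
    (data[0], data.len())
}
```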
Concurrency is something you have to concern yourself with in embedded firmware when it comes to interrupts and multiple threads/cores (e.g. running a RTOS). One of the first times you'll encounter concurrency concerns is when updating variables from within an interrupt. In C/C++, the use of `volatile` and critical sections is generally how the problem is solved. When using multiple threads, RTOS primitives such as mutexes and queues are used to prevent data corruption.
In Rust, you can also use critical sections to prevent data races in interrupts.
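A common sketch of this pattern on Cortex-M, using `cortex_m::interrupt::free()` and the `Mutex` type from the `cortex-m` crate (the shared counter itself is a hypothetical example, and this won't build outside an embedded target):

```rust
use core::cell::RefCell;
use cortex_m::interrupt::{self, Mutex};

// A counter shared between main-line code and an interrupt handler.
static COUNTER: Mutex<RefCell<u32>> = Mutex::new(RefCell::new(0));

fn increment() {
    // Interrupts are disabled for the duration of the closure,
    // so the update cannot be torn by an interrupt.
    interrupt::free(|cs| {
        *COUNTER.borrow(cs).borrow_mut() += 1;
    });
}
```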
The `nb` crate (its docs are here) takes an interesting approach to the problem of deciding whether or not an API call should block (and how to block!). It allows people writing APIs to implement the core functionality, and then lets the caller decide on the blocking behaviour. The API returns a type of `nb::Result<T, Error>` where `T` is the standard return type for the function. If the caller does want to block waiting for the function to complete, they can wrap the call in the `block!` macro. The `nb` crate has some potential to be used with HAL peripherals such as UART `read()`/`write()` functions (which typically block until data is sent/received).
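To illustrate the idea without pulling in the real crate, here is a toy imitation of `nb`'s approach (the `Uart` type and its readiness counter are entirely made up):

```rust
// The API reports WouldBlock and the CALLER decides how to wait.
#[derive(Debug, PartialEq)]
#[allow(dead_code)]
enum Error<E> {
    WouldBlock,
    Other(E),
}

// Pretend hardware: "ready" only after a few polls.
struct Uart { polls_until_ready: u32 }

impl Uart {
    fn write(&mut self, _byte: u8) -> Result<(), Error<()>> {
        if self.polls_until_ready > 0 {
            self.polls_until_ready -= 1;
            Err(Error::WouldBlock) // not ready yet; caller may retry
        } else {
            Ok(())
        }
    }
}

// Roughly what nb's block! macro expands to: spin until not-WouldBlock.
fn write_blocking(uart: &mut Uart, byte: u8) -> Result<(), ()> {
    loop {
        match uart.write(byte) {
            Err(Error::WouldBlock) => continue, // busy-wait
            Err(Error::Other(e)) => return Err(e),
            Ok(()) => return Ok(()),
        }
    }
}
```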
In most languages, there are two common ways of handling errors. Either:
- Return an error code
- Throw an exception
In embedded firmware, sometimes exceptions are banned from use, due to either the unpredictable execution time (although contrary to popular belief, exceptions can actually improve runtime in the non-exceptional case) or the fact that they are another complexity every developer has to be aware of. Returning error codes is the de-facto error handling method for many embedded projects, but you have to remember to check for errors and propagate them up the call stack appropriately. Rust can vastly improve the error handling experience thanks to its `Result` type.
For example, let's invent a pretend `uart_write_bytes()` function which writes an array of bytes over a UART. Our UART has some peculiar behaviour and can't write more than 10 bytes at a time. If the user provides more than 10 bytes, we want to return an error condition. If they provide 10 bytes or less, we want to write them out the UART and then return the number of bytes written. So let's write this function:
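A sketch of that pretend function (the error type is my own invention):

```rust
// Pretend UART that can only write 10 bytes at a time.
#[derive(Debug, PartialEq)]
enum UartError {
    TooManyBytes,
}

fn uart_write_bytes(bytes: &[u8]) -> Result<usize, UartError> {
    if bytes.len() > 10 {
        return Err(UartError::TooManyBytes);
    }
    // ...the bytes would be written to the UART hardware here...
    Ok(bytes.len())
}
```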
Rust will produce an “unused `Result`” warning (the `unused_must_use` lint) if we use this function and forget to check the returned result.
How do we deal with this returned `Result` object properly? One way is to call `unwrap()`: it returns the value if there was no error, and panics if there was one. You would use `unwrap()` in cases where the error is non-recoverable, and in embedded situations you get to define what panic does (think of it as similar to a C/C++ `assert`).
There is also `expect()`, which is like `unwrap()` except it also allows you to provide a custom error message.
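A quick sketch of both (the `parse_id()` helper is my own example):

```rust
fn parse_id(s: &str) -> Result<u32, core::num::ParseIntError> {
    s.parse::<u32>()
}

fn demo() -> u32 {
    // unwrap(): take the value, or panic with a generic message.
    let a = parse_id("42").unwrap();
    // expect(): same, but with our own panic message.
    let b = parse_id("43").expect("config value must be a number");
    a + b
}
```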
If the error is recoverable and/or expected, you can use a `match` statement on the `Result` object to handle the error condition appropriately.
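For example (repeating the pretend `uart_write_bytes()` so this snippet stands alone):

```rust
#[derive(Debug, PartialEq)]
enum UartError { TooManyBytes }

fn uart_write_bytes(bytes: &[u8]) -> Result<usize, UartError> {
    if bytes.len() > 10 { Err(UartError::TooManyBytes) } else { Ok(bytes.len()) }
}

fn send(bytes: &[u8]) -> usize {
    match uart_write_bytes(bytes) {
        Ok(count) => count,                // happy path
        Err(UartError::TooManyBytes) => 0, // recover: report nothing written
    }
}
```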
Another option is to use the question mark operator `?` – this is shorthand for doing a `match` on the result and returning early if there is an `Err`, essentially propagating the error condition up the stack. This design style is common practice (it's very similar to how an exception works), so it makes sense that Rust has introduced a shorthand for it.
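A sketch, reusing the pretend `uart_write_bytes()` from above with an extra function added to show the propagation (the desugared `match` equivalent is shown in the comments):

```rust
#[derive(Debug, PartialEq)]
enum UartError { TooManyBytes }

fn uart_write_bytes(bytes: &[u8]) -> Result<usize, UartError> {
    if bytes.len() > 10 { Err(UartError::TooManyBytes) } else { Ok(bytes.len()) }
}

// `?` unwraps the Ok, or returns the Err to OUR caller.
fn send_packet(payload: &[u8]) -> Result<usize, UartError> {
    let count = uart_write_bytes(payload)?;
    // The line above is roughly equivalent to:
    //     let count = match uart_write_bytes(payload) {
    //         Ok(v) => v,
    //         Err(e) => return Err(e),
    //     };
    Ok(count)
}
```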
As a bonus, `Option` (and `Result`) can be free in terms of memory. When one variant carries no data (e.g. `None`) and the other contains data that can't possibly be `0` (e.g. a reference), Rust will collapse the two into one value and use the `0` bit pattern to represent `None`. This is how `Option<&T>` ends up the same size as a plain pointer.
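This “niche optimization” is easy to verify:

```rust
use core::mem::size_of;

// Because a reference can never be null, Option<&u8> needs no extra
// tag byte: None is stored as the otherwise-impossible all-zeros pattern.
fn option_ref_is_free() -> bool {
    size_of::<Option<&u8>>() == size_of::<&u8>()
}
```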
One of the reasons Rust feels like it has first-tier embedded support is the standardized `#![no_std]` crate-level attribute. This attribute indicates the crate will link against the `core` crate instead of the `std` crate. The `core` crate is a subset of the `std` crate which does not contain any APIs that assume/require the use of an operating system. This makes `no_std` a perfect fit for bare-metal or custom RTOS environments. It provides basic features such as primitives (floats, strings, slices, etc.) and generic processor features such as atomic operations and SIMD instructions. However, it does not provide any API for things such as threads, file-system access, or the ability to make system calls.
Another thing you don't get with `no_std` is the initialization code which sets up stack overflow protection and spawns a thread to call `main()`. So in embedded `no_std` development you define which function you want as your “main”.
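A minimal skeleton, assuming the widely used `cortex-m-rt` and `panic-halt` crates (a sketch only – it won't build without an embedded target installed):

```rust
#![no_std]
#![no_main]

use cortex_m_rt::entry; // provides the #[entry] attribute
use panic_halt as _;    // a panic handler must be linked in

// cortex-m-rt's reset handler calls this: it is our "main".
#[entry]
fn main() -> ! {
    loop {
        // application code
    }
}
```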
`no_std` also means that, by default, you don't get any way to dynamically allocate memory on the heap. This might seem strange at first, because in embedded C development you typically have `malloc()`/`free()` and friends, and in C++ you have `new`/`delete`. No dynamic memory allocation means that you can't use any objects that rely on it (dynamic arrays or strings); in Rust these are called collections (`Vec`, `String`, `BTreeMap`, etc.). In some cases having no dynamic memory allocation in firmware is fine (and in fact preferable or required… e.g. MISRA). However, there are some good use cases for dynamic memory allocation (my general rule for non-life-threatening applications is to allow it, but only during initialization). Luckily you can enable it as long as there is a suitable allocator for your microcontroller architecture. For example, the alloc-cortex-m crate provides a custom allocator for the Cortex-M architecture. You can then also use the standard Rust collections (but be careful with them!).
`no_std` also means you have to define what `panic` does. In embedded development, you are not allowed to return from the panic function, as it is forced to have the signature `fn(&PanicInfo) -> !`. There are a few 3rd party crates you can include which are useful for panicking in embedded firmware:
- panic-halt: Causes the current thread to halt by entering an infinite loop.
- panic-itm: The panicking message is logged to the host over ITM, a Cortex-M specific debugging peripheral (faster than semihosting).
- panic-semihosting: The panicking message is logged to the host over semihosting.
Once you have added one of these crates as a dependency, all you need to do is tell the Rust compiler you want to link against it, as you won’t be calling anything from it directly. To do this, use:
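Assuming `panic-halt` was the crate chosen, that line looks like:

```rust
use panic_halt as _;
```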
The `as _` is important, as it tells the compiler you want to link to the crate but not call anything from it. If you didn't have this, the compiler would issue an unused import warning.
cargo and Package Structure
One sorely lacking feature of C/C++ is a standardized package manager for managing your dependencies and build process. Luckily (like most modern languages) Rust comes with the `cargo` package manager. `cargo` translates well to embedded development; you can use it to easily include 3rd party libraries (what Rust calls crates), or create your own libraries to make your code more modular and re-usable.
One thing I really liked about `cargo` is that it allows extensions to be installed, which can add features to the `cargo` command. There is a cargo-flash crate which adds microcontroller flashing support to `cargo`. You can install it with:
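```shell
cargo install cargo-flash
```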
This adds the sub-command `cargo flash`. Then you can type the following to program a microcontroller with your Rust executable:
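For example (the `--chip` value must match your attached target; `STM32F303VCTx` here is just an illustration):

```shell
cargo flash --release --chip STM32F303VCTx
```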
Another nice thing about cargo and embedded is that the community seems to have settled onto a structured way of organizing various firmware-related libraries. This includes:
- Peripheral Access Crates (PACs): Contain the minimal, register-level naming of the memory-mapped registers that control microcontroller peripherals.
- Architecture Support Crate: Contains APIs to control the CPU and peripherals that are shared across a CPU architecture (e.g. API for controlling interrupts, system ticks).
- Hardware Abstraction Layers (HALs): Wrap the PAC registers into easy-to-use peripheral APIs such as `adc.read_value()`, etc. Although not unique to Rust, we might expect better standardization of HALs in Rust because of `embedded-hal`'s effort to keep them consistent across MCU families. In C/C++, the API of the HAL is usually unique to either the MCU family (STM32, SAMD, etc.) or the framework (Arduino, mbed, etc.).
- Board Support Crate: This crate is built for a specific PCB project that contains the microcontroller. The Board Support Crate uses the HAL and creates appropriately named instances of HAL objects according to how the MCU is connected to the physical world. It's an optional extra crate that is a good idea if you are designing a board to be used by many people for many different purposes. For a single-use project, the extra overhead of creating a board support crate might not be worth it, and instead you can bundle this code inside the Application.
- Real-time Operating System (RTOS): Just like in C/C++ firmware development, you can also get RTOSes for Rust. Some of these are ports/wrappers of C/C++ RTOSes like FreeRTOS, and others are RTOSes developed from scratch in Rust. Using a RTOS is completely optional, and usually makes sense for larger, complex firmware applications.
- Application: The final layer of any firmware project, this contains the high-level business logic. The application layer generally reaches down and makes calls to the RTOS (if present) and HAL layer.
This structure is shown in the image below:
It is common in embedded firmware to want to include/exclude blocks of code based on conditionals (e.g. `ENABLE_LARGE_LUT_ARRAY`), for reasons such as freeing up memory by removing debug strings in production builds, or including architecture-specific code depending on the compile target (so you can target more than one microcontroller with the same code base). In C/C++ land, this is commonly implemented using preprocessor directives (`#ifdef`, etc.). However, there is no preprocessor in Rust. The idiomatic way to solve this problem in Rust is to use Cargo features.
All Cargo features have to be defined under `[features]` in the `Cargo.toml`. For example:
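A sketch of what that might look like (the feature names here are hypothetical, echoing the `ENABLE_LARGE_LUT_ARRAY` example above):

```toml
[features]
# Nothing enabled unless the user opts in (or `default` lists it).
default = []
enable_large_lut_array = []
debug_strings = []
```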
Then, inside a `.rs` source code file, you can conditionally include code blocks:
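A sketch using the hypothetical `enable_large_lut_array` feature from the example above:

```rust
// Compiled in only when built with `--features enable_large_lut_array`.
#[cfg(feature = "enable_large_lut_array")]
static LARGE_LUT: [u16; 4096] = [0; 4096];

fn lookup(i: usize) -> u16 {
    #[cfg(feature = "enable_large_lut_array")]
    {
        return LARGE_LUT[i]; // fast path: table lookup
    }
    #[cfg(not(feature = "enable_large_lut_array"))]
    {
        i as u16 // slow path: compute the value instead
    }
}
```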
By default all features are disabled, unless a `default` feature is specified in the `Cargo.toml`.
Another use of the C/C++ preprocessor is for performance reasons. It may be desirable to avoid function calls by creating preprocessor macros which perform direct text substitution. This is less of an issue in modern C/C++, as the compilers have gotten very good at knowing when to automatically inline functions anyway. But nevertheless, you can still perform similar tricks using Rust's macro system. In many respects it is much more powerful and smarter than the C/C++ preprocessor (which does basic text substitution). There are, however, tricks you can do with the C/C++ preprocessor that you can't do in Rust, such as partial variable name replacement. This is very unlikely to be a deal breaker though!
One common pattern in embedded C/C++ firmware is to use the preprocessor to make an `assert()` macro that not only checks the provided expression is true, but also grabs the current file, line number and the provided expression as a string.
This is made possible by the special `#exp` syntax (where the `#` stringifies `exp`), and the fact that the macro contents get plonked into the source code everywhere `assert()` is called. Luckily, you can do the same thing in Rust by utilizing the `file!()`, `line!()` and `stringify!()` macros (which are compiler built-ins)6.
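A sketch of a Rust analogue (`my_assert!` is my own name, to avoid clashing with the standard `assert!`):

```rust
// stringify! captures the expression's source text;
// file!/line! capture the call site's location.
macro_rules! my_assert {
    ($exp:expr) => {
        if !$exp {
            panic!(
                "{}:{}: assertion failed: {}",
                file!(),
                line!(),
                stringify!($exp)
            );
        }
    };
}

fn check(x: i32) -> bool {
    my_assert!(x > 0); // panics with e.g. "src/main.rs:19: assertion failed: x > 0"
    true
}
```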
Most embedded developers will be familiar with the `volatile` keyword in C/C++. It tells the compiler that the value of a variable may change at any time, which is true for pointers to memory-mapped peripheral registers that are updated by hardware. This is important so that the compiler does not perform incorrect optimizations (for more info on the C/C++ volatile keyword, see the Embedded Systems And The Volatile Keyword page).
Rust provides two methods, `core::ptr::write_volatile()` and `core::ptr::read_volatile()`, to tell the compiler the same thing. `write_volatile()` takes a variable of type `*mut T`, and `read_volatile()` takes a variable of type `*const T`.
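A runnable sketch of the API (on real hardware the pointer would be a memory-mapped register address rather than a local variable):

```rust
use core::ptr::{read_volatile, write_volatile};

fn volatile_roundtrip(value: u32) -> u32 {
    let mut reg: u32 = 0;
    // Volatile accesses are unsafe because the compiler can't verify
    // the pointer, and they are never optimized away or reordered.
    unsafe {
        write_volatile(&mut reg as *mut u32, value); // takes *mut T
        read_volatile(&reg as *const u32)            // takes *const T
    }
}
```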
Rust Architecture Support
When considering Rust for an embedded project, you’ll be wondering “Is the microcontroller I used supported in Rust?”. As there are so many manufacturers and MCU families out there (and a few different architectures), it all depends on exactly what you are using. We’ll cover the level of Rust support of some of the popular architectures and MCU families below.
Support for a particular architecture is usually added by installing the cross-compilation target with `rustup` (by default `rustup` only installs the standard library for your host platform7):
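For example, to add the Cortex-M4F/M7F target:

```shell
rustup target add thumbv7em-none-eabihf
```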
This sets up the build environment for cross-compiling to your chosen architecture. For a complete list of supported platforms see The rustc Book: Platform Support.
Let’s go into more detail about the Rust support of major architectures used in the embedded space today.
The ARM Cortex-M CPU architecture is well supported by Rust, so many of the MCU families that use the Cortex-M naturally have good support too. The rust-embedded/cortex-m repo provides the minimal start-up code and runtime (including semihosting) for the Cortex-M family.
| Architecture | rustup Target |
|---|---|
| ARMv6-M (Cortex-M0, M0+, M1) | `thumbv6m-none-eabi` |
| Armv7E-M (Cortex-M4, M7 – no floating-point support) | `thumbv7em-none-eabi` |
| Armv7E-M (Cortex-M4F, M7F – floating-point support) | `thumbv7em-none-eabihf` |
You can quickly start a new project for a Cortex-M CPU with the command:
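One common approach is to use the `cargo-generate` tool with the official quickstart template:

```shell
cargo generate --git https://github.com/rust-embedded/cortex-m-quickstart
```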
The crate alloc-cortex-m provides a heap allocator for Cortex-M based microcontrollers. The below code example shows how this allocator can be set up as the global allocator (so that standard collections such as `Vec` work) within a firmware application11:
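A sketch based on the alloc-cortex-m documentation (it won't build outside an embedded target, and the 1 KiB heap size is an arbitrary example):

```rust
#![no_std]
extern crate alloc;

use alloc::vec::Vec;
use alloc_cortex_m::CortexMHeap;

// Register the heap allocator as Rust's global allocator.
#[global_allocator]
static ALLOCATOR: CortexMHeap = CortexMHeap::empty();

fn init_heap() {
    // Hand the allocator a region of RAM to manage (1 KiB here).
    unsafe { ALLOCATOR.init(cortex_m_rt::heap_start() as usize, 1024) }
}

fn use_heap() {
    let mut xs: Vec<u32> = Vec::new();
    xs.push(1); // heap allocation now works
}
```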
RISC-V architecture support in Rust is quite good, with a number of bare-metal `riscv32`/`riscv64` targets (e.g. `riscv32imac-unknown-none-elf`) available through rustup.
The riscv_rt crate provides the basic start-up/runtime for RISC-V CPUs.
The Xtensa architecture is predominantly found in the ESP32 range of MCUs, so we'll cover it below in the ESP32 (Espressif Systems) section.
Rust MCU Family Support
So we’ve covered the CPU architecture (which defines the instruction set), but what about support for all the peripherals that surround it and make up a MCU? Let’s cover the amount of Rust support of some popular manufacturers and their MCU families.
STM32 (ST Microelectronics)
The STM32 family of microcontrollers has some of the richest Rust support out of any microcontroller. The stm32-rs/stm32-rs repo contains Rust PAC crates for a large variety of STM32 microcontrollers. As of Nov 2022 it is actively maintained and has 824 stars.
The GitHub repo https://github.com/Rahix/avr-hal is a popular 3rd party Rust HAL layer for AVR microcontrollers (incl. Arduino boards, ATmega and ATtiny parts).
ravedude is a useful cargo application which adds support to `cargo run` for programming Arduino boards and then connecting to the serial port to display any print messages. There is a great tutorial on getting Rust working on the Arduino Uno (which uses the ATmega328P microcontroller) at https://blog.logrocket.com/complete-guide-running-rust-arduino/. Getting a blinky Rust project built and programmed takes about 5 minutes using that tutorial when developing in the WSL (passing through the Arduino USB device with `usbip`).
The atsamd-rs/atsamd GitHub repo provides various crates for working with Atmel SAMD/SAME (e.g. samd21, same5x) based devices using Rust12. This repo provides PACs (Peripheral Access Crates) and higher-level HALs (Hardware Abstraction Layers). The HALs implement traits specified by the embedded-hal project.
Also included in this repo are BSPs (Board Support Packages) for a number of development boards. They distinguish between Tier 1 and Tier 2 BSPs; Tier 1 BSPs are kept up-to-date with the latest version of `atsamd-hal`, whilst Tier 2 BSPs are not (they might be locked to some past version).
The repo looks to be active, with 705 commits and 421 stars as of November 2022.
MSP430 (Texas Instruments)
There is a not-very-well-maintained version of RTFM (Real-Time For the Masses, the old name for RTIC) available for MSP430 MCUs at the GitHub repo japaric/msp430-rtfm.
ESP32 (Espressif Systems)
There is a fork of the Rust compiler (esp-rs/rust) which adds support for the Xtensa instruction set (e.g. for the ESP32-S3). If support for the Xtensa architecture gets upstreamed into LLVM, this fork may not be needed in the future. The same esp-rs organization also provides a `no_std` HAL on GitHub at esp-rs/esp-hal, and one with `std` support at esp-rs/esp-idf-hal.
esp-rs/esp-idf-svc provides Rust wrappers for various ESP-IDF services such as WiFi, networking, and logging. It defines implementations of the traits defined in esp-rs/embedded-svc (think of this as an extension to the basic embedded-hal traits).
The esp-rs organization also provides its own installer, espup, which is “rustup for esp-rs”: a tool (alternative to `rustup`) for installing and maintaining the toolchain(s) required for developing with Rust on Espressif devices.
The default Embedded Rust tutorial now uses the micro:bit v2 (it used to use the STM32F303 Discovery Kit), which happens to have an nRF52 MCU onboard.
The rustup target `riscv32imac-unknown-none-elf` is available to cross-compile for the Freedom E310 (e.g. the HiFive1). I couldn't find any support for the HiFive1 Rev B bootloader, so a dedicated programmer was required to program the board.
The RP2040 is just a single chip rather than a “family”, but there are a number of boards you can buy based on this IC. There is a good quality RP2040 HAL provided by the rp-rs/rp-hal repo. This repo is organized as a Cargo Workspace which also includes a number of Board Support Crates for development boards that use this chip, including the Raspberry Pi Pico, Adafruit Feather RP2040, Adafruit ItsyBitsy RP2040, Pimoroni Pico Explorer, SolderParty RP2040 Stamp, Sparkfun Pro Micro RP2040, Sparkfun Thing Plus RP2040, and Seeeduino XIAO RP2040.
- PSoC: The psoc-rs GitHub organization has a PAC and HAL repo for the PSoC 6, but they don't look well maintained or used.
- PIC: The GitHub repo kiffie/pic32-rs contains a HAL for the PIC32. It looks somewhat maintained.
Rust IDEs, Programming and the Debugging Experience
One must-have for embedded development is a smooth write code → build → program → debug workflow. Ideally this is without vendor lock-in (i.e. being forced to use the vendor's specific IDE) and with step-by-step debugging in your code editor rather than just on the command-line. Luckily Rust can provide all of this! I focused on using VS Code, since it is the most popular non-vendor-specific IDE these days and has very good support for Rust and embedded development. cortex-debug and rust-analyzer are two VS Code extensions that you are definitely going to want to install.
I had access to a STM32F303 Discovery Kit (development board with a STM32F303 MCU on it), so I did a bit of searching and found the repo rubberduck203/stm32f3-discovery. This contained pre-made VS Code launch configurations, so I should be able to debug the Rust code straight from within VS Code. With a few tweaks (including adding `"cortex-debug.gdbPath": "gdb-multiarch"` to `settings.json`) I was able to get this workflow up and running!
All I needed to do was hit F5 – this built the code, flashed it to the STM32F303 dev. kit and then jumped into a debug session. Below is an image from when I was stepping through the “Blinky” example. I was using VS Code connected to Ubuntu via the WSL (using `usbip` to pass through the STM32F303 USB device).
Knurling is a collection of projects by Ferrous Systems (two of their popular tools are `probe-run` and `defmt`).
Semihosting is provided for Cortex-M MCUs via the `cortex-m` crate ecosystem. Semihosting allows you to log debug messages to the host via the attached debugger, with no extra cables (e.g. USB-to-UART devices). The disadvantage is that it's slow – a single message can take many milliseconds, depending on the attached debugger you're using. The `panic-semihosting` crate can also be used to provide useful panic messages on the host. ITM is a faster option than semihosting, but is only available on Cortex-M3s and higher. RTT could be an even better option (being available on most targets/programmers like semihosting, but being fast like ITM)15. It is mostly platform agnostic, and just depends on the debug probe supporting background target memory access. Once enabled, you can use the `rprintln!()` macro. I have not used this yet though, so can't comment much!
No language can claim to be suitable for embedded programming without a selection of RTOSes to choose from. Luckily, Rust has some, from Rust wrappers of existing C/C++ RTOSes (such as FreeRTOS and RIOT) to RTOSes built from scratch in Rust (such as RTIC, Embassy and Tock). Let's review some of the popular options available to Rust developers.
- hashmismatch/freertos.rs: Unfortunately this repo does not look like it has had any active development for a number of years.
- lobaro/FreeRTOS-rust: Actively seeks to improve on hashmismatch/freertos.rs and simplify the usage of FreeRTOS from Rust.
RTIC (Real-Time Interrupt-driven Concurrency) is an interesting approach to an RTOS that seems to have a decent amount of actively development and community support. All tasks share a single call stack, and deadlock-free execution is guaranteed at compile time.
- Scheduling: interrupt-based preemptive, with priority
- Repo stars: 1k
- Repo commits: 1.1k
Embassy primarily supports co-operative multitasking over preemptive scheduling. However, it does let you create multiple executors with different priorities, so you can get preemption if you need it. It leverages the use of Rust’s async/await. The scheduler runs all tasks on a single stack. It also provides a suite of libraries such as embassy-net for IP networking, embassy-lora for LoRa networking, embassy-usb for USB devices and embassy-boot for a bootloader.
- Repo stars: 1.2k
- Repo commits: 3.4k
Tock is an embedded operating system designed for running multiple concurrent, mutually distrustful applications on Cortex-M and RISC-V based embedded platforms – GitHub README.
- Repo stars: 4k
- Repo commits: 11k
Some of Tock is not completely baked into Rust; for example, you have to break out of the Rust ecosystem and call `make` to program the kernel onto your board. Once the kernel is programmed, you can then use their own `tockloader` program to flash the application code.
Drone is an interrupt-based pre-emptive RTOS written in Rust for embedded devices.
- Scheduling: interrupt-based pre-emptive, with priority
- Repo stars: 361 (drone-core)
- Repo commits: 251 (drone-core)
Rust Speed and Memory Usage
How does the speed and memory usage of Rust-built applications compare to C/C++? First off it’s worth saying that most of Rust’s unique ownership/borrow checking is purely a compile-time construct, and incurs zero runtime overhead, both in terms of speed and memory usage.
As mentioned in the Rust Language Features section, Rust automatically does bounds checking when accessing arrays. It will do its best to do this at compile time, but there are some cases where this cannot be done (e.g. passing a reference to an array into a function), and it has to resort to doing the check at runtime. The overhead is minimal and likely to be well worth it in 99% of use cases. If you do want to avoid bounds checking you can either:
- Use iterators (if applicable)
- Use the unsafe `get_unchecked()` family of functions
When compiled without the `--release` option, Rust will also perform overflow checking when doing mathematical operations such as add and multiply. We can see this if we use the Godbolt Compiler Explorer and compare the assembly output of a simple `square()` function in both C++ and Rust:
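The function under comparison is just this (in a debug build, the multiply is followed by the overflow check described below):

```rust
// In a debug build, `x * x` compiles to an imul followed by an
// overflow check that panics on wrap-around.
fn square(x: i32) -> i32 {
    x * x
}
```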
You can see in the Rust pane that it has a few extra instructions, including `seto` to read the overflow flag, and then a `test` and a jump (`jne`) to a `panic!` in the case an overflow occurred. This will slow down mathematical operations, but in most cases it is a worthwhile trade-off to catch overflow bugs. And remember – the overhead disappears if you build with the `--release` option. If you still want to catch overflows in a release build, you can use the `checked_xxx` functions such as `checked_add()`, which return `None` if the value overflowed.
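For example:

```rust
// checked_add() never panics: it reports overflow via an Option instead.
fn safe_increment(x: u8) -> Option<u8> {
    x.checked_add(1)
}
```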
As bad as overflow can be, in many use cases (especially in embedded programming) you want (and even rely) on overflow wrapping. A classic example is getting the current system tick value and subtracting saved previous system tick values to calculate durations. Your system tick might be stored in a 32-bit unsigned integer and count the number of milliseconds since start-up. This would wrap back to `0` after a little over 1193 hours of continuous operation. However, due to the nature of how integer mathematics is implemented, durations relying on subtraction will still work fine when the current system tick wraps back to 0, as long as no single duration spans more than half the total system tick period (approx. 597 hours). In Rust, you can safely do deliberately-wrapping arithmetic with the `wrapping_xxx` functions such as `wrapping_sub()`.
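The system-tick example above can be sketched as:

```rust
// The duration still comes out right even after the tick counter wraps,
// as long as the span is less than half the counter's period.
fn ticks_elapsed(now: u32, then: u32) -> u32 {
    now.wrapping_sub(then)
}
```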
gccrs is a project to incorporate a Rust “front-end” into GCC. As of December 2022 this is still a WIP (work-in-progress). The end goal is to make GCC able to compile Rust code. The main benefits of this are:
- We can then benefit from GCC's really good optimization passes (which are distinct from LLVM's)
- We have another Rust compiler to choose from (which is normally a good thing!)
As this is a front-end project, the compiler will gain full access to all of GCC’s internal middle-end optimization passes which are distinct from LLVM. – GCC Front-End For Rust17.
There are some interesting benchmarks of Rust against C++ at https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/rust-gpp.html. Rust is clearly faster in 4 of the benchmarks, C++ is clearly faster in 3 of them, and for the remaining 3 they are basically identical.
The Disadvantages of Using Rust
No review would be fair without mentioning the negatives. What are the disadvantages of using Rust for embedded firmware?
Not as well supported as C/C++: C/C++ is definitely better supported by the many microcontroller vendors and IDEs, and there are far more embedded libraries for C/C++ than there are for Rust (library maturity). But as shown above, Rust support for many of the top tier microcontroller families is quite good, and hopefully will continue to get better as the language matures.
It’s going to be harder finding Rust developers: Again, because of Rust’s relatively immature nature compared to other languages it’s generally going to be harder to find competent developers if you are running big teams.
Not as well optimized as C/C++ code: Nevertheless, compiled Rust code is going to be fast, and likely fast enough in 99% of use cases. C/C++ code will likely beat Rust in some specific use cases. Rust's speed will likely get better over time, and projects like the GCC Front-End For Rust will help this process.
Be sure to check out the Matrix ‘Rust Embedded’ chat room.
The GitHub repo rust-embedded/awesome-embedded-rust is a huge list of embedded Rust resources maintained by the Rust Resources team. It includes tools, RTOSes, peripheral access crates (PACs), hardware abstraction layers (HALs), board support crates (BSPs), blogs, books and other training materials.
You can have a play around with Rust using an online editor/compiler such as Replit. Or if you prefer running something locally, install
cargo and then initialize a new project with
cargo new hello_world --bin (this will be for running on your computer, not on a microcontroller).
Wikipedia (2022, Nov 11). Rust (programming language). Retrieved 2022-11-19, from https://en.wikipedia.org/wiki/Rust_(programming_language). ↩︎
Embedded HAL. Module embedded_hal::serial (documentation). Retrieved 2022-12-05, from https://docs.rs/embedded-hal/latest/embedded_hal/serial/. ↩︎
rust-lang. The rustup book: Cross-compilation. Retrieved 2022-11-14, from https://rust-lang.github.io/rustup/cross-compilation.html. ↩︎
rust-lang. The rustc book: Platform Support. Retrieved 2022-11-15, from https://doc.rust-lang.org/nightly/rustc/platform-support.html. ↩︎ ↩︎
Tock (2022, Jul 27). Tock Design (Markdown documentation). Retrieved 2022-11-23, from https://github.com/tock/tock/blob/master/doc/Design.md. ↩︎
This work is licensed under a Creative Commons Attribution 4.0 International License .