
Bringing Edge AI to Rust: Introducing the Edge Impulse Rust Library

By Fernando Jiménez Moreno

We’re thrilled to announce the release of our new Edge Impulse Rust library, enabling developers to bring powerful edge AI capabilities directly into their Rust projects. With this library, you can run Edge Impulse models on Linux and macOS systems while taking full advantage of Rust’s renowned safety guarantees and performance benefits.

Edge Impulse Rust library API docs

Why Rust for Edge AI?

Rust has emerged as a go-to language for embedded and systems programming due to its distinctive mix of:

Memory safety without garbage collection

Thread safety with minimal runtime overhead

Zero-cost abstractions

Modern, ergonomic syntax

A vibrant ecosystem of high-quality crates

These qualities make Rust an ideal choice for edge AI, where speed, reliability, and resource efficiency are paramount.

Features of the Edge Impulse Rust Library

Our new library provides a safe, ergonomic interface for working with Edge Impulse models. Key features include:

Model Inference: Run Edge Impulse models (.eim files) seamlessly on Linux and macOS

Multiple Model Types: Support for classification, object detection, and visual anomaly detection

Sensor Integration: Built-in support for camera and microphone inputs

Data Ingestion: Easily upload data to Edge Impulse using the Ingestion API

Continuous Classification: Keep models running in continuous inference mode

Installation and Resources

The library is published on crates.io under the name edge-impulse-runner.

edge-impulse-runner at crates.io

You can also explore the source code and contribute on our GitHub repository.

Available Examples

In the repository, you’ll find a variety of example applications that demonstrate everything from basic classification and image processing to video-based object detection and data upload. Each example is straightforward to run with Cargo, comes with a --debug flag for troubleshooting, and includes detailed comments to guide you through setup and usage. See our Edge Impulse Rust library documentation for more detail.

Video inference example with an image classification model
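
Running one of these examples typically boils down to a single Cargo command. The invocation below is illustrative only: the example name and the --model argument are placeholders, so check the repository README for the exact names and arguments each example expects.

# Illustrative invocation; substitute a real example name from the repository
cargo run --example <example_name> -- --model path/to/model.eim --debug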

Getting Started

Adding the library to your own Rust project is straightforward. In your Cargo.toml, include:

[dependencies]
edge-impulse-runner = "1.0.0"
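
If you are on a recent toolchain (Cargo 1.62 or later), you can equivalently add the dependency from the command line:

cargo add edge-impulse-runner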

Below is a streamlined example of how you might quickly set up a classification task using the library. This is taken from the “Quick Start” section in our repository:

use edge_impulse_runner::{EimModel, InferenceResult, SensorType};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Create a new model instance
    let mut model = EimModel::new("path/to/model.eim")?;

    // 2. Retrieve model information
    let params = model.parameters()?;
    println!("Model type: {}", params.model_type);

    // 3. Check sensor type
    match model.sensor_type()? {
        SensorType::Camera => println!("Camera model"),
        SensorType::Microphone => println!("Audio model"),
        SensorType::Accelerometer => println!("Motion model"),
        SensorType::Positional => println!("Position model"),
        SensorType::Other => println!("Other sensor type"),
    }

    // 4. Run inference with normalized features
    let raw_features = vec![128, 128, 128];  // Example raw values
    let features: Vec<f32> = raw_features.into_iter().map(|x| x as f32 / 255.0).collect();
    let result = model.infer(features, None)?;

    // 5. Handle the inference results (classification, object detection, etc.)
    match result.result {
        InferenceResult::Classification { classification } => {
            for (label, probability) in classification {
                println!("{}: {:.2}", label, probability);
            }
        }
        // Handle other model types as needed...
        _ => ()
    }

    Ok(())
}

Once you have the basics working, you can explore more advanced features such as object detection, continuous classification, and visual anomaly detection.
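
For image models, the main extra work is turning pixels into the feature vector that infer expects. The sketch below uses the third-party image crate (an extra dependency, not something the library itself requires) and follows the same per-channel /255.0 normalization as the quick-start snippet; the 96x96 RGB input size and the channel layout are assumptions made for illustration, so read the real input dimensions and feature format from model.parameters() for your own model.

use edge_impulse_runner::{EimModel, InferenceResult};
use image::imageops::FilterType;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut model = EimModel::new("path/to/model.eim")?;

    // Load the image and resize it to the model's expected input size.
    // NOTE: 96x96 RGB is an assumption; query model.parameters() for the real values.
    let img = image::open("path/to/image.jpg")?
        .resize_exact(96, 96, FilterType::Triangle)
        .to_rgb8();

    // Flatten the raw RGB bytes into normalized f32 features, matching the
    // /255.0 normalization used in the quick-start example above.
    let features: Vec<f32> = img.as_raw().iter().map(|&v| v as f32 / 255.0).collect();

    // Run inference and print classification results, if any.
    match model.infer(features, None)?.result {
        InferenceResult::Classification { classification } => {
            for (label, probability) in classification {
                println!("{}: {:.2}", label, probability);
            }
        }
        _ => println!("Non-classification result; see the object detection examples"),
    }

    Ok(())
}

Once the features line up with what your model expects, the rest of the flow is identical to the basic example above.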

Data Ingestion

Beyond inference, you can upload training data to Edge Impulse straight from your Rust application. Here’s a quick look at how you’d send an image file:

use edge_impulse_runner::ingestion::{Category, Ingestion, UploadOptions};

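// Note: this async example relies on the tokio runtime; add the tokio crate
// (with the "macros" and "rt-multi-thread" features) to your dependencies.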
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize with your API key
    let ingestion = Ingestion::new("your-api-key").with_debug(); // Optional: enable debug

    // Upload a file to Edge Impulse
    let result = ingestion
        .upload_file(
            "path/to/file.jpg",
            Category::Training,
            Some("my_label".to_string()),
            Some(UploadOptions {
                disallow_duplicates: true,
                add_date_id: true,
            }),
        )
        .await?;

    println!("Upload successful: {}", result);
    Ok(())
}

Use this feature to seamlessly gather new data and refine your models on Edge Impulse without leaving your Rust environment.
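
In practice you will probably want to keep the API key out of your source code. A minimal variation reads it from an environment variable instead; the EI_API_KEY name below is just an illustrative choice, swapped in for the hard-coded key in the example above:

// Read the API key from the environment instead of hard-coding it.
// EI_API_KEY is an arbitrary variable name chosen for this sketch.
let api_key = std::env::var("EI_API_KEY").expect("EI_API_KEY is not set");
let ingestion = Ingestion::new(api_key.as_str());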

Join the Community

We’re excited to see the creative ways you’ll use the Edge Impulse Rust library! The project is open source, and we welcome community involvement. Here’s how you can get started:

  1. Visit our GitHub repository – File issues, read the examples, and contribute code.
  2. Check out the documentation – Explore the API details and usage notes.
  3. Try the examples – Clone the repo and run them locally to see the library in action.
  4. Share your work – Show off your edge AI projects and inspire others!