Accelerate Edge AI Application Development with the MING Stack + Edge Impulse

In this blog post we describe how the MING stack, extended with Edge Impulse, can accelerate the development of IoT and Edge AI applications. By combining open-source building blocks with on-device machine learning, developers can start prototyping without reinventing the same infrastructure over and over again. Here's how it came about, and how to implement it yourself.

A few weeks ago, while working on a computer vision application, I found myself facing an engineering dilemma. I had a camera attached to an Arduino UNO Q (embedded Linux device), an Edge Impulse model ready to run inference, and a clear idea of the outcome I wanted: detect objects at the edge, store the results, visualize trends, and trigger events. My engineer’s instinct was to start wiring everything together manually: a custom API, a database schema, a frontend dashboard, and some glue code to hold it all together.

Instead, I took a step back and followed the same philosophy that originally inspired the MING stack (MQTT, InfluxDB, Node-RED, and Grafana). Rather than building everything from scratch, why not reuse this set of well-known, open-source tools that already solve most of these problems?

I still remember the late '90s, when the first wave of web developers was learning to build websites and the LAMP stack came to the rescue of newcomers who wanted to get started in web development.

The success of the LAMP stack (Linux, Apache, MySQL, and PHP or Python) came from standardization. Developers knew that if they learned those four components, they could build almost any web application. At the time, the LAMP stack reduced the friction of deploying a basic web server on your local computer, so you could start building your own web application without reinventing the wheel.

How does MING help IoT and Edge AI developers?

Most IoT and Edge AI applications share the same core requirements:

  - A lightweight messaging layer to move data between devices and services
  - A time-series database to store measurements and inference results
  - A tool to wire application logic together without heavy custom code
  - A dashboard to visualize what is happening

By standardizing these layers with MQTT, InfluxDB, Node-RED, and Grafana, the MING stack removes friction and allows teams to iterate quickly. Adding Edge Impulse to this stack extends the concept to enable edge AI.

In this architecture, each service runs in its own Docker container and communicates over a shared network. The stack can run on an industrial PC, or an edge device such as an embedded Linux device.
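A minimal Compose sketch of this architecture might look like the following. Service names, images, and most ports here are illustrative defaults, not taken from the project; the actual configuration lives in the repository referenced below:

```yaml
services:
  mosquitto:            # MQTT broker for inference messages
    image: eclipse-mosquitto
    ports: ["1883:1883"]
  influxdb:             # time-series storage
    image: influxdb:2
    ports: ["8086:8086"]
  nodered:              # application logic and integration flows
    image: nodered/node-red
    ports: ["1880:1880"]
  grafana:              # dashboards
    image: grafana/grafana
    ports: ["3000:3000"]
  edgeimpulse:          # inference container, built from the Dockerfile template
    build: ./edgeimpulse
```

Compose puts all of these services on a shared default network, which is what lets them reach each other by service name.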

The core components are:

  - An MQTT broker for messaging between services
  - InfluxDB for time-series storage
  - Node-RED for application logic and integration
  - Grafana for dashboards and visualization
  - An Edge Impulse inference container for on-device machine learning

Deploying the MING + Edge Impulse Stack

The MING stack can be deployed on any Linux machine using Docker.

Prerequisites

Hardware

  - An embedded Linux device, such as the Arduino UNO Q, or an industrial PC
  - A camera (for the computer vision example used here)

Software

  - Docker and Docker Compose
  - An Edge Impulse account

Deploy the MING Stack with Edge Impulse

I used the Arduino UNO Q for this project. SSH into the Linux environment of the Arduino UNO Q; you can find more technical information here.

Next, clone this repository containing the Docker Compose configuration of the MING stack:

git clone https://github.com/mpous/ming-edge-impulse 
cd ming-edge-impulse

Train your Machine Learning model in Edge Impulse

Go to the Edge Impulse Studio, train your object detection model (or another type) and deploy it as a Docker container.

(If you are a beginner, learn how to train a machine learning model with Edge Impulse.)

For this project, feel free to use this public object detection project, which detects rubber ducks.

Setting up your machine learning model with Edge Impulse

Copy the arguments and container information. Go to the edgeimpulse folder and modify the Dockerfile template, pasting in the API key and container image from the Deployment section of Edge Impulse Studio.

Once the Dockerfile template in the edgeimpulse folder has been updated, you are ready to start all the services:

docker compose up -d

Once running, the services will be available at:

You should be able to access them from your computer using the local IP address of the embedded Linux device, as long as you are connected to the same network. Internally, containers can also reach each other using their service names.
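To make the two addressing modes concrete, here is a small sketch. The IP address and the `influxdb` service name are assumptions for illustration; InfluxDB's port 8086 is the only port stated in this guide:

```python
# External vs. internal routing in the MING stack.
DEVICE_IP = "192.168.1.50"  # replace with your device's LAN address

# From your computer on the same network, use the device IP:
external_url = f"http://{DEVICE_IP}:8086"

# From another container on the shared Docker network, use the service name:
internal_url = "http://influxdb:8086"  # assumes the service is named "influxdb"
```

The same pattern applies to the other services: device IP from outside, service name from inside.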

Configuring Node-RED

Open a browser on a computer connected to the same network as the Arduino UNO Q and navigate to:

http://<arduino-device-ip>

In the Node-RED UI, import the provided flows from the file `flow.json` in the `node-red` folder. You should see a flow similar to this:

Edge AI application built on Node-RED using the MING stack and Edge Impulse

Setup your InfluxDB database

Now it’s the moment to set up the InfluxDB database.

InfluxDB is the time-series database used in the MING stack to store the inference results and metadata produced by the Edge AI pipeline. In this project we use InfluxDB v2.

Create the initial InfluxDB account

Once the MING stack is running, open a browser on a computer connected to the same network as the Arduino UNO Q and navigate to:

http://<arduino-device-ip>:8086

On first access, InfluxDB will prompt you to complete the initial setup:

  1. Create an admin user by providing a username and password
  2. Define an organization name (for example: edge-impulse)
  3. Create an initial bucket (for example: edge-impulse-detections)

After completing these steps, InfluxDB will initialize the database and redirect you to the InfluxDB UI.

Create an API token

InfluxDB v2 uses API tokens instead of username/password authentication for clients.

To create a token:

  1. In the InfluxDB UI, go to Load Data → API Tokens
  2. Click Generate API Token and choose All Access API Token (suitable for development and testing only)
  3. Copy and store the token

This token will be required by Node-RED to write inference data into InfluxDB.
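The Node-RED InfluxDB node handles the formatting for you, but it helps to know what a stored point looks like. Here is a minimal sketch using InfluxDB's line protocol; the measurement and tag names (`detections`, `label`) are illustrative, not taken from the provided flow:

```python
import time

def detection_point(label, confidence, ts_ns=None):
    """Format one detection as an InfluxDB line-protocol record:
    measurement,tag_set field_set timestamp (nanoseconds)."""
    if ts_ns is None:
        ts_ns = time.time_ns()
    return f"detections,label={label} confidence={confidence} {ts_ns}"

print(detection_point("rubber_duck", 0.91, 1700000000000000000))
# detections,label=rubber_duck confidence=0.91 1700000000000000000
```

Each detection becomes one point in the bucket, tagged by label, with the confidence stored as a field so it can be queried and aggregated later.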

Edit the influxdb node

Configure the InfluxDB Out node in Node-RED and paste the token.

Configuring the InfluxDB Out Node in Node-RED

Once configured, the InfluxDB Out node is ready to receive structured data points from Node-RED.

The MING + Edge Impulse workflow

With this workflow, you will be able to:

  - Capture images from the camera
  - Run object detection inference with the Edge Impulse model
  - Publish the inference results over MQTT
  - Store detections in InfluxDB
  - Trigger events based on what is detected

Once the workflow is running, go to the InfluxDB UI to check that the data is being stored successfully.

InfluxDB storing data coming from the Edge AI application

Why do we use MQTT?

MQTT plays a central role in the MING stack because it allows us to decouple edge AI inference from data storage or any other process that needs to act on the inference data.

Inference results are published to an MQTT broker instead of being written directly to a database. This ensures that the inference pipeline remains lightweight, modular, and independent of any specific storage or analytics technology.

By using MQTT as the integration layer, multiple consumers can subscribe to the same inference stream and handle the data according to their needs. One subscriber might store detections in InfluxDB, another could trigger alerts, while a third could forward results to a cloud service or an MES or SCADA industrial system. None of these consumers need to be known or configured at inference time.
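This decoupling works because the payload itself is broker-agnostic. A minimal sketch of what such a message could look like, assuming a hypothetical topic name and JSON structure (the actual format is defined in the Node-RED flow):

```python
import json

# Hypothetical topic; pick whatever naming scheme fits your deployment.
TOPIC = "ming/detections"

def build_message(detections):
    """Serialize inference results into a JSON payload that any
    subscriber (InfluxDB writer, alerting flow, cloud forwarder)
    can decode without knowing about the others."""
    return json.dumps({"detections": detections})

def decode_message(payload):
    """Recover the list of detections from a received payload."""
    return json.loads(payload)["detections"]

msg = build_message([{"label": "rubber_duck", "confidence": 0.91}])
```

Because every consumer only depends on the topic and the payload schema, you can add or remove subscribers without touching the inference container.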

Visualizing Results in Grafana

Grafana connects directly to InfluxDB as a data source. Using Flux queries, dashboards can display the stored detections and how they trend over time.

Grafana dashboard generated from the images taken and data stored in InfluxDB
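As an example, a panel counting detections per minute over the last hour could use a Flux query like the one below. The bucket name comes from the InfluxDB setup above; the measurement and field names (`detections`, `confidence`) are assumptions that must match whatever your Node-RED flow writes:

```flux
from(bucket: "edge-impulse-detections")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "detections")
  |> filter(fn: (r) => r._field == "confidence")
  |> aggregateWindow(every: 1m, fn: count)
```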

I added the Grafana dashboard configuration as a JSON file in the GitHub repository, inside the node-red folder. Feel free to import it to get a simple initial dashboard.

What can you do with the MING Stack and Edge Impulse?

This architecture enables developers to build a wide range of edge AI prototypes quickly. Here are some examples of applications already running in production on the MING stack:

Because the stack is modular, components can be replaced or extended without redesigning the entire system.

If you are interested in Edge AI, IoT, and open architectures, the MING + Edge Impulse stack provides a solid foundation to start building today. Sign up for a free Edge Impulse account to test it out.
