Detecting Speed Limit Signs with Edge Impulse and OpenMV

Forgetting the speed limit on a stretch of road is a common, and slightly worrying, experience while driving. That problem inspired me to build a speed limit sign detection system. Using Edge Impulse and an OpenMV camera, I built a small, efficient module that identifies speed limit signs and shows them on an LCD screen. This is an experimental tutorial project built on limited data, but it demonstrates the key features of the Edge Impulse platform, from labelling datasets to deploying optimised models on edge devices.

Let’s dive into the steps to build this exciting project.

Link to public project: studio.edgeimpulse.com/public/563884/live
Link to repository: github.com/moe-sani/ei-speed-sign-detection


Step 1: Setting Up the Project in Edge Impulse

  1. Create a New Project: Log into Edge Impulse Studio, create a project (e.g., “Speed Sign Detection”), and set the target device to OpenMV Camera H7.
Selecting a target device
  2. Dataset Selection: Use a public dataset such as the Kaggle Road Sign Detection Dataset, which includes traffic lights, stop signs, and speed limits. You can upload this dataset directly by selecting the whole folder; the platform auto-detects the formatting. (A scripted upload alternative is sketched just after this list.)
Uploading the dataset to Edge Impulse platform
  3. Clean the Dataset: Delete all the irrelevant classes (e.g., traffic lights) from the dataset.
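
If you prefer to script the upload instead of dragging the folder into Studio, the Edge Impulse ingestion API can be called from a few lines of Python. The sketch below is only illustrative: the endpoint, header names, the EI_API_KEY variable, and the filename-based label are my assumptions, so check the current ingestion API documentation before relying on it.

    # Hypothetical upload script -- endpoint and headers assumed from the
    # Edge Impulse ingestion API; verify against the current docs before use.
    import os
    import requests

    API_KEY = os.environ["EI_API_KEY"]      # project API key from the Studio dashboard
    DATASET_DIR = "road_signs/train"        # assumed local folder of labelled images

    for filename in os.listdir(DATASET_DIR):
        if not filename.lower().endswith((".jpg", ".png")):
            continue
        label = filename.split("_")[0]      # assumes the label is encoded in the filename
        with open(os.path.join(DATASET_DIR, filename), "rb") as f:
            res = requests.post(
                "https://ingestion.edgeimpulse.com/api/training/files",
                headers={"x-api-key": API_KEY, "x-label": label},
                files={"data": (filename, f, "image/jpeg")},
            )
        print(filename, res.status_code)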

Step 2: Streamlining Data Labelling with AI

The Kaggle dataset lacks granularity (e.g., it labels “speed limit” but not specific values like “30” or “40”). To refine the labels:

  1. Use GPT-4 Vision: In Edge Impulse, navigate to AI Labelling and prompt the model with: “What speed limit sign do you see in this image? Respond only with a number (e.g., 30) or ‘unsure’.” This auto-labels the speed values, greatly reducing manual effort. (A standalone sketch of this prompt-based labelling appears after this list.)
AI labelling tool to detect the speed limits
  2. Balance the Dataset:
    • Remove under-represented classes (e.g., rare speed values) so training is not biased towards a few labels.
    • Include an "unsure" class for images without clear signs, helping the model learn background contexts.
    • Change your project type to “classification” from the project dashboard.
    • Override the class labels so each whole image carries a single label (this process can be seen in the video). This needs some manual modification and is not ideal, but it can be avoided by using a more appropriate dataset.
Balancing the dataset using Edge Impulse platform filters
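
Edge Impulse's AI Labelling block handles this inside Studio, but the same prompt pattern can be reproduced standalone if you want to pre-label images yourself. The sketch below is only an illustration of that idea using the OpenAI Python client; the model name, file path, and label parsing are my own assumptions and are not part of the Edge Impulse workflow.

    # Illustrative only: prompt-based pre-labelling outside Edge Impulse.
    # Assumes the OpenAI Python client (v1.x) and OPENAI_API_KEY in the environment.
    import base64
    from openai import OpenAI

    client = OpenAI()
    PROMPT = ("What speed limit sign do you see in this image? "
              "Respond only with a number (e.g., 30) or 'unsure'.")

    def label_image(path):
        with open(path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": PROMPT},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content.strip()  # e.g. "30" or "unsure"

    print(label_image("road_signs/road123.png"))  # hypothetical file name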

Step 3: Designing and Training the Model

  1. Impulse Design:
    • Define an image size of 160x160 pixels for pre-processing.
    • Use grayscale image input to simplify computations.
    • Use Edge Impulse’s Transfer Learning (Images) learning block.
Designing an Impulse in Edge Impulse platform
  2. Model Training:
    • Use Edge Impulse’s transfer learning with a MobileNetV2 backbone. Train for 60 cycles with data augmentation to improve robustness. (A rough Keras equivalent of this setup is sketched after this list.)
    • The model achieved 78% accuracy on the test data, with an inference time of roughly 350 ms on the OpenMV camera, fast enough for real-time use.
Accuracy result after Transfer Learning model has been trained
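
To make the Transfer Learning block less of a black box, here is a rough Keras sketch of a comparable setup: a frozen MobileNetV2 backbone at 160x160 input, light augmentation, and a small classification head. This is not the Edge Impulse training code; the number of classes, how grayscale input is fed into MobileNetV2, the augmentation choices, and the learning rate are all assumptions.

    # Conceptual sketch only -- not the Edge Impulse training pipeline.
    import tensorflow as tf

    NUM_CLASSES = 5          # assumed: speed-limit classes plus "unsure"
    IMG_SIZE = (160, 160)

    augment = tf.keras.Sequential([
        tf.keras.layers.RandomRotation(0.05),
        tf.keras.layers.RandomZoom(0.1),
        tf.keras.layers.RandomTranslation(0.1, 0.1),
    ])

    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False   # transfer learning: freeze the backbone

    inputs = tf.keras.Input(shape=IMG_SIZE + (1,))                  # grayscale input
    x = tf.keras.layers.Concatenate()([inputs, inputs, inputs])     # replicate to 3 channels (assumption)
    x = augment(x)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)      # expects pixel values in [0, 255]
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.1)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy",                  # assumes one-hot labels
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=60)        # "60 training cycles"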

Step 4: Deploying the Model

  1. Generate Deployment Files:
    • Export the model from Edge Impulse Studio, choosing the OpenMV deployment option so the files are compatible with the camera.
Deployment options
  2. Load on OpenMV:
    • Copy the deployment files to the OpenMV camera.
    • Modify the following code to refine predictions, display results on the LCD, and handle low-confidence outputs gracefully (a fuller loop sketch follows the source link below).
    if max_p > 0.3:                # only act on predictions above a 30% confidence threshold
        print(speed + ".jpg")
        lcd.write(speed + ".jpg")  # show the matching sign image (e.g., 30.jpg) on the LCD

Source code: github.com/moe-sani/ei-speed-sign-detection/blob/main/ei_image_classification.py 
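
For context, the script in the repository follows the standard Edge Impulse OpenMV classification example. The sketch below shows roughly what that loop looks like on older OpenMV firmware using the tf module (newer firmware replaces it with the ml module); the threshold, label handling, and the omitted LCD logic are simplified assumptions, so treat the repository script as the reference.

    # Simplified sketch of the classification loop -- see the repository script
    # for the real version, including the LCD display logic.
    import sensor, time, tf

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)   # grayscale input, matching the impulse design
    sensor.set_framesize(sensor.QVGA)
    sensor.set_windowing((240, 240))
    sensor.skip_frames(time=2000)

    net = "trained.tflite"                   # from the Edge Impulse OpenMV deployment
    labels = [line.rstrip('\n') for line in open("labels.txt")]

    clock = time.clock()
    while True:
        clock.tick()
        img = sensor.snapshot()
        for obj in tf.classify(net, img):
            scores = obj.output()
            max_p = max(scores)
            speed = labels[scores.index(max_p)]
            if max_p > 0.3 and speed != "unsure":   # assumed confidence threshold
                print(speed, max_p)                 # repo script shows speed + ".jpg" on the LCD here
        print(clock.fps(), "fps")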

OpenMV IDE to test the model
  3. Testing the Model:
    • Point the camera at printed speed sign images to test detection accuracy.
    • Experiment with viewing angles and lighting, bearing in mind the limitations of the small dataset and the low image resolution.

Challenges and Improvements

The main limitations of this build come from the data: the Kaggle dataset is small, its labels are coarse (which forced the AI-assisted relabelling and manual label overrides described above), and the 160x160 grayscale input restricts how reliably distant signs can be recognised.

Future improvements could include:

  • Training on a larger, more granular dataset with per-speed-value labels, removing the need for manual label overrides.
  • Increasing the input resolution, or moving to an object detection model so signs can be located within the frame.
  • Collecting additional images at varied angles and lighting conditions to improve robustness on the road.

Conclusion

This project demonstrates the potential of Edge Impulse and OpenMV in creating practical machine learning applications. Despite challenges, the speed sign detection module offers a glimpse into how AI can enhance everyday tasks. 
