How to Use USD Assets for Any Edge AI Use Case with NVIDIA Omniverse

In the last couple of years, synthetic data has become a viable tool for augmenting AI training datasets. One development that has made generating synthetic environments more efficient is the availability of 3D models in the USD file format.

The Universal Scene Description (USD) file format provides a standardized, open source framework for managing complex 3D data and building complex 3D scenes. Thanks to advances in graphics processing compute in recent years, synthetic data generation (SDG) has been on the rise, allowing teams to create large, high-quality, labeled datasets quickly and cost-effectively, bypassing the limitations of real-world data collection while improving model accuracy and efficiency. That makes SDG especially valuable for edge AI, which brings AI capabilities closer to the data sources to enable real-time processing in applications like industrial warehouse asset tracking, conveyor belt manufacturing anomaly detection, and smart city occupancy detection.

This blog post will guide you through using USD assets to generate synthetic datasets for edge AI applications, streamlining AI development and enhancing on-device performance.

Why use USD Assets for Edge AI?

Training AI models requires high-quality, diverse, and well-labeled datasets to achieve accurate performance. Often, data is limited or unavailable, and collecting and labeling it can be time-consuming and costly. This slows the development of physical AI models and delays solution deployment in the field. [1] A major bonus of the USD file format is that it is an open standard, so an extensive library of both free and paid 3D assets in the USD/USDA format is available on the internet. With this, you can easily find pre-made 3D assets for virtually any edge AI synthetic dataset generation project. For a discussion of the challenges in the SDG world, read “Closing the Simulation to Real Gap” by NVIDIA.

For this tutorial, we will show you how to use the USD assets you've acquired with NVIDIA Omniverse, a powerful platform for 3D simulation and design, to programmatically create and simulate realistic synthetic datasets for edge AI use cases such as:

- Industrial warehouse asset tracking
- Conveyor belt manufacturing anomaly and defect detection
- Smart city occupancy detection

Read more about predictive maintenance with SDG in this NVIDIA blog post from Edge Impulse's Jenny Plunkett: How to Train an Object Detection Model for Visual Inspection with Synthetic Data

How to use NVIDIA Omniverse for Synthetic Dataset Generation

Step 1: Finding your USD assets and scenes

You can use the asset browser tabs built into the Omniverse Code environment (the “NVIDIA Assets” and “Asset Stores (beta)” tabs) for a robust library of freely available, high-quality USD assets, as well as paid USD scenes and assets from sites like TurboSquid, CGTrader, and Sketchfab.

This tutorial uses USD assets from both the Omniverse asset tabs and CGTrader (a conveyor belt model, fruit models, and Pepsi cans) for an industrial manufacturing and production line defect detection use case. A vast number of 3D models for nearly any use case is available on the internet, so you can test out a couple of models and easily swap them out over time as your use case becomes more defined. Another advantage of the OpenUSD file format is the ease of swapping out the textures and materials of 3D models, as sketched below, allowing you to very closely mirror the real-life environment where your computer vision model will be deployed.

Source: Conveyor belt for factory (CGTrader)
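To give a concrete picture of that material swap, below is a minimal sketch using the pxr (OpenUSD) Python API. The scene file, prim path, and material path are hypothetical placeholders for whatever exists in your own scene:

```python
# Minimal sketch: rebind a mesh to a different material with the OpenUSD
# (pxr) Python API. All paths below are hypothetical placeholders.
from pxr import Usd, UsdShade

stage = Usd.Stage.Open("scene.usda")

# The mesh we want to retexture, and a replacement material already
# defined in the scene's Looks scope
mesh = stage.GetPrimAtPath("/World/Conveyor/BeltMesh")
material = UsdShade.Material.Get(stage, "/World/Looks/WornRubber")

# Apply the binding API schema to the mesh, then bind the new material
UsdShade.MaterialBindingAPI.Apply(mesh).Bind(material)
stage.Save()
```

Because a material binding is just another composable USD opinion, swapping it back, or trying several materials while generating data, is non-destructive.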

Step 2: Setting up the development environment

Once you have collected your USD assets, you will need to set up your development environment. To run NVIDIA Omniverse, you will need a Windows desktop with an NVIDIA graphics card, or an AWS instance with an NVIDIA GPU attached. Follow NVIDIA Omniverse's installation guide for your Windows workstation or for your AWS instance to set up your development environment.

Step 3: Importing and managing USD assets

Follow NVIDIA’s guide for importing USD files into Omniverse. In some cases, such as USDZ archives, you may need to update the materials and textures for the objects in Blender (a tutorial for this can be found in the Blender documentation), then import the asset into Omniverse via the GUI or via Omniverse Replicator. Or, for the quickest route, use the assets found in Omniverse Code’s “NVIDIA Assets” and “Asset Stores (beta)” tabs to drag and drop 3D models directly into your USD scene.

Once you have imported your USD assets into the scene in Omniverse Code, you can save the entire workspace as a USD file itself, which includes all the USD assets you found on the internet or within the Omniverse asset browser. This makes sharing your scenes with colleagues (or the world) much easier, so your synthetic data generation pipeline can be standardized across many users.
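That sharing story works because a saved USD stage composes references to external asset files rather than baking in copies. Here is a minimal sketch with the pxr Python API; the file names are placeholder stand-ins for the assets you gathered in Step 1:

```python
# Minimal sketch: build a shareable scene that references external USD
# assets. File names are placeholders for your own Step 1 assets.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("production_line.usda")
UsdGeom.Xform.Define(stage, "/World")

# Each prim pulls in an asset by reference, so the scene file itself
# stays a lightweight composition rather than a giant copy of the data
conveyor = stage.DefinePrim("/World/Conveyor")
conveyor.GetReferences().AddReference("./assets/conveyor_belt.usd")

can = stage.DefinePrim("/World/PepsiCan")
can.GetReferences().AddReference("./assets/pepsi_can.usd")

stage.SetDefaultPrim(stage.GetPrimAtPath("/World"))
stage.Save()
```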

NVIDIA Omniverse “NVIDIA Assets” tab and imported conveyor belt and CGTrader Pepsi cans

Step 4: Generating synthetic datasets

To simulate our 3D environment for synthetic dataset generation, NVIDIA Omniverse includes the scripting extension Omniverse Replicator, which lets you programmatically build 3D scenes from USD assets and procedurally vary virtual camera positions and lighting to create robust datasets for any real-world conditions. Omniverse Replicator is a framework for developing custom synthetic data generation pipelines and services; with it, developers can generate physically accurate 3D synthetic data that serves as a valuable way to enhance the training and performance of AI perception networks. [2] To learn more about Omniverse Replicator, check out NVIDIA’s Replicator extension guide and the getting started documentation.
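To make that concrete, here is a minimal sketch of a Replicator script in the style of NVIDIA's getting-started examples, run from the Script Editor inside Omniverse. The asset paths and output directory are placeholders, and argument names (for example, num_frames) have shifted between Replicator releases, so check the documentation for your installed version:

```python
# Minimal sketch of a Replicator randomization script (run from Omniverse's
# Script Editor). Asset paths and the output directory are placeholders.
import omni.replicator.core as rep

with rep.new_layer():
    # Reference the USD assets gathered in Step 1 (hypothetical paths)
    conveyor = rep.create.from_usd("omniverse://localhost/Projects/conveyor_belt.usd")
    can = rep.create.from_usd("omniverse://localhost/Projects/pepsi_can.usd")

    # A dome light whose intensity we will randomize each frame
    dome_light = rep.create.light(light_type="Dome")

    # A virtual camera and a render product that captures its view
    camera = rep.create.camera(position=(0, 250, 500), look_at=can)
    render_product = rep.create.render_product(camera, (1024, 1024))

    # Randomize the can's pose and the lighting for each generated frame
    # (newer Replicator releases name this argument max_execs)
    with rep.trigger.on_frame(num_frames=200):
        with can:
            rep.modify.pose(
                position=rep.distribution.uniform((-50, 20, -50), (50, 20, 50)),
                rotation=rep.distribution.uniform((0, -180, 0), (0, 180, 0)),
            )
        with dome_light:
            rep.modify.attribute("intensity", rep.distribution.uniform(500, 2000))

    # Write RGB frames plus 2D bounding-box annotations to disk
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="C:/replicator_out", rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([render_product])

rep.orchestrator.run()
```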

The workflow for generating your synthetic data changes depending on your use case. To get started quickly and get a feel for NVIDIA Omniverse and Omniverse Replicator, follow Edge Impulse expert Adam Milton-Barker's GitHub tutorial: NVIDIA Omniverse Synthetic Data Generation For Edge Impulse Projects.

Once you are done collecting your dataset in Omniverse, Edge Impulse also provides an Omniverse extension that lets you rapidly upload your synthetic dataset to your Edge Impulse project via your project’s API key, so you can quickly train your computer vision model and deploy it back into Omniverse for performance and accuracy validation. Get started with the Edge Impulse Omniverse extension in our documentation.
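If you prefer to script the upload yourself, alongside or instead of the extension, the same result can be achieved with Edge Impulse's ingestion API. Below is a minimal sketch assuming a folder of PNG frames and a single classification-style label; note that object detection datasets are labeled with an accompanying bounding_boxes.labels file rather than the x-label header:

```python
# Minimal sketch: upload a folder of generated frames to an Edge Impulse
# project via the ingestion API. API key, folder, and label are placeholders.
import os
import requests

API_KEY = "ei_..."              # project API key (Dashboard > Keys)
DATA_DIR = "C:/replicator_out"  # wherever Replicator wrote the frames
LABEL = "pepsi_can"             # hypothetical class label

for fname in sorted(os.listdir(DATA_DIR)):
    if not fname.lower().endswith(".png"):
        continue
    with open(os.path.join(DATA_DIR, fname), "rb") as f:
        res = requests.post(
            "https://ingestion.edgeimpulse.com/api/training/files",
            headers={"x-api-key": API_KEY, "x-label": LABEL},
            files={"data": (fname, f, "image/png")},
        )
    res.raise_for_status()
    print(f"Uploaded {fname} ({res.status_code})")
```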

NVIDIA Omniverse with Edge Impulse real-time synthetic dataset collector and classification extension

Step 5: Training your edge AI model

Once you’re ready to train your edge AI model, using either only synthetically generated data or real-world datasets augmented and supplemented with synthetic data, upload your training and testing datasets to your Edge Impulse project. Sign up for a free Enterprise trial!

Edge Impulse provides several tutorials for computer vision projects. The Object Detection Tutorial provides a step-by-step guide for building a complete object detection model. For deploying an object detection model on tiny, resource-constrained devices like microcontrollers, or for detecting small objects in the camera viewport, the FOMO Tutorial focuses on Faster Objects, More Objects (FOMO) detection. Additionally, the FOMO-AD Documentation offers insights into visual anomaly detection using FOMO techniques. Lastly, for leveraging NVIDIA's powerful tools, the NVIDIA TAO Documentation for Edge Impulse Enterprise users explains how to integrate NVIDIA TAO foundational models into your project for advanced object detection and robust model training backed by cutting-edge AI research.

Step 6: Optimizing and validating your edge AI model

The Edge Impulse EON Tuner helps you find and select the best embedded machine learning model for your application within the constraints of your target device. The EON Tuner analyzes your input data, potential signal processing blocks, and neural network architectures — and gives you an overview of possible model architectures that will fit your chosen device's latency and memory requirements. [3]

The EON Tuner performs end-to-end optimizations, from the digital signal processing (DSP) algorithm to the machine learning model, helping you find the ideal trade-off between these two blocks to achieve optimal performance on your target hardware. [3] 

Once you’ve optimized and re-trained your model and are ready to validate its performance and accuracy, use the Edge Impulse Omniverse extension to deploy your computer vision model directly into the Omniverse Code GUI environment and see classification results from your Edge Impulse model within the simulated environment in real time.

Step 7: Deploying your edge AI model

After training and validating your model, you can now deploy it to any edge device, or run it locally on your computer. The model then runs without an internet connection, with minimal latency and minimal power consumption. [4] Check out the Edge Impulse documentation for more information and tutorials on how to deploy your edge AI model to any edge device, or to any location of your choosing with a custom deployment block.

Source: AdamMiltonBarker/omniverse-replicator-edge-impulse
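As one example of what local deployment can look like, here is a minimal sketch using the edge_impulse_linux Python SDK (pip install edge_impulse_linux) to run a downloaded .eim model file against a single image; the model and image paths are placeholders:

```python
# Minimal sketch: run a downloaded .eim model locally with the
# edge_impulse_linux Python SDK.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # e.g. fetched with `edge-impulse-linux-runner --download modelfile.eim`
IMAGE_PATH = "frame.png"      # a frame captured at the edge (placeholder)

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    print("Loaded model:", model_info["project"]["name"])

    # The SDK expects an RGB image; OpenCV loads BGR, so convert first
    img = cv2.cvtColor(cv2.imread(IMAGE_PATH), cv2.COLOR_BGR2RGB)
    features, cropped = runner.get_features_from_image(img)
    res = runner.classify(features)

    # Object detection models return bounding boxes; classifiers return scores
    result = res["result"]
    for bb in result.get("bounding_boxes", []):
        print(f'{bb["label"]} ({bb["value"]:.2f}) at x={bb["x"]}, y={bb["y"]}')
    for label, score in result.get("classification", {}).items():
        print(f"{label}: {score:.2f}")
```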

Congratulations! You've now used USD assets to collect a synthetically generated dataset and to design and train a computer vision model for edge devices.

Conclusion

USD assets play a vital role in enhancing edge AI applications by enabling efficient synthetic dataset generation with NVIDIA Omniverse and model generation with Edge Impulse. By integrating these synthetically generated datasets into your Edge Impulse and edge AI projects, you can significantly improve AI model accuracy and performance and drastically decrease the time to production model deployment in the field. We encourage you to explore further resources and start experimenting with these powerful tools. Join the Edge Impulse community (and sign up for an Enterprise trial!) to share your experiences, and don't forget to follow and tag us on social media to show off your synthetically generated datasets and edge AI projects.

Sources

[1] “Synthetic Data for AI & 3D Simulation Workflows | Use Case.” NVIDIA, https://www.nvidia.com/en-us/use-cases/synthetic-data/.

[2] “Omniverse Replicator.” NVIDIA, https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator.html.

[3] “EON Tuner.” Edge Impulse, https://docs.edgeimpulse.com/docs/edge-impulse-studio/eon-tuner.

[4] “Deployment.” Edge Impulse, https://docs.edgeimpulse.com/docs/edge-impulse-studio/deployment.
