Speed Up Your Model Building Process With Our New AI Labeling Feature

Managing large datasets, especially when dealing with labels for images, bounding boxes, or audio samples, is a common challenge in ML engineering. Our new AI labeling functionality steps in as a powerful tool to save time by automating a substantial portion of the labeling work. Let's explore the essentials of this feature and how it can be used to optimize your workflow.

Understanding AI labeling

AI labeling is a feature that empowers you to use previously trained AI models — whether they are expansive foundation models or your own tailored solutions — to automatically label data samples. It is particularly useful for labeling images, identifying objects within an image, or designating specific parts of an audio file. You can select pre-built AI labeling blocks, or develop your own, each using a different model; for each block you can specify custom prompts, adjust parameters, and define filters to meet your specific labeling needs. And because some complex datasets require multiple steps to generate accurate labels, you can also chain different AI labeling blocks together in your labeling pipeline.
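To make the chaining idea concrete, here is a minimal, purely illustrative sketch of composable labeling blocks in Python. None of these names (Sample, detect_objects, refine_labels) come from the Edge Impulse API; they are stand-ins showing how each block takes labeled samples in and hands updated samples to the next block.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch: each "labeling block" is a function that takes a
# list of samples and returns the list with updated labels.

@dataclass
class Sample:
    data: str                              # stand-in for an image/audio payload
    labels: List[str] = field(default_factory=list)

LabelingBlock = Callable[[List[Sample]], List[Sample]]

def chain(*blocks: LabelingBlock) -> LabelingBlock:
    """Compose labeling blocks into one pipeline, applied left to right."""
    def pipeline(samples: List[Sample]) -> List[Sample]:
        for block in blocks:
            samples = block(samples)
        return samples
    return pipeline

# Two toy blocks: a coarse detection pass and a refinement pass.
def detect_objects(samples: List[Sample]) -> List[Sample]:
    for s in samples:
        if "capsule" in s.data:
            s.labels.append("object")
    return samples

def refine_labels(samples: List[Sample]) -> List[Sample]:
    for s in samples:
        s.labels = ["capsule" if label == "object" else label for label in s.labels]
    return samples

pipeline = chain(detect_objects, refine_labels)
labeled = pipeline([Sample("red capsule photo"), Sample("empty tray photo")])
```

The real blocks wrap full models rather than string checks, but the control flow is the same: each stage only sees the labels the previous stage produced.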

Tools and models at your disposal

We've pre-packaged several AI labeling blocks to make the process seamless. Noteworthy models include the OWL-ViT model for object detection, a series of GPT-4o blocks specifically designed for image labeling and bounding box relabeling, and an Audio Spectrogram Transformer for labeling sections of audio samples. These resources are openly hosted on GitHub, ensuring transparency and fostering community engagement. Feel free to explore these models, and you'll find them readily accessible for both use and customization tailored to your projects.

AI labeling is available now for all users, and can be accessed in the Data Acquisition tab. See the AI labeling feature documentation for more information.

Examples

Detect Baby Cries with Audio Spectrogram Transformer

This example shows that we can automatically detect and label portions of audio samples using an audio spectrogram transformer model trained on the AudioSet dataset. In this case we asked it to detect "baby cry, infant cry" sections of the audio sample and label them "crying."
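A rough sketch of how this kind of audio labeling can be done with an Audio Spectrogram Transformer fine-tuned on AudioSet, using the Hugging Face transformers audio-classification pipeline. The relabeling rule below (mapping the AudioSet class "Baby cry, infant cry" to "crying"), the threshold, and the file name are our own assumptions for illustration, not Edge Impulse's internal implementation.

```python
# Assumption: we post-process AudioSet predictions ourselves, mapping the
# class "Baby cry, infant cry" to the custom label "crying".

def relabel_crying(predictions, threshold=0.5):
    """Keep 'Baby cry, infant cry' hits above `threshold`, relabeled 'crying'."""
    out = []
    for p in predictions:
        if p["label"].lower().startswith("baby cry") and p["score"] >= threshold:
            out.append({"label": "crying", "score": p["score"]})
    return out

if __name__ == "__main__":
    # Requires: pip install transformers torch
    from transformers import pipeline
    classifier = pipeline(
        "audio-classification",
        model="MIT/ast-finetuned-audioset-10-10-0.4593",  # AST trained on AudioSet
    )
    preds = classifier("baby_audio.wav")  # hypothetical audio file
    print(relabel_crying(preds))
```

Running the model over fixed-length windows of a longer recording would give you the labeled sections described above, one window at a time.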

A two-step pipeline to label capsules with OWL-ViT and GPT-4o

A two-step labeling workflow: first, a zero-shot object detection model (OWL-ViT) identifies capsules in the images; then an LLM (GPT-4o) examines each detection to identify the capsule's color, updates the bounding box label accordingly, and deletes any bounding boxes that are not around a capsule.
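The second step of this workflow can be sketched as plain post-processing over the detector's boxes. Everything here is hypothetical: the box dictionary format, the llm_describe helper (standing in for a GPT-4o call), and the fake answers are invented for the example.

```python
# Hypothetical sketch of step two: relabel capsule boxes with their color
# and delete boxes the LLM says are not capsules. The box schema and the
# `llm_describe` callable are illustrative, not a real API.

def update_boxes(boxes, llm_describe):
    """Relabel capsule boxes with their color; drop non-capsule boxes."""
    updated = []
    for box in boxes:
        answer = llm_describe(box)   # e.g. {"is_capsule": True, "color": "red"}
        if not answer["is_capsule"]:
            continue                 # delete bounding boxes not around a capsule
        updated.append({**box, "label": f"{answer['color']} capsule"})
    return updated

# Toy stand-in for the GPT-4o call, keyed on box id for this example.
def fake_llm(box):
    answers = {
        1: {"is_capsule": True, "color": "red"},
        2: {"is_capsule": False, "color": None},
    }
    return answers[box["id"]]

boxes = [
    {"id": 1, "label": "capsule", "xyxy": (10, 10, 40, 40)},
    {"id": 2, "label": "capsule", "xyxy": (50, 50, 80, 80)},
]
result = update_boxes(boxes, fake_llm)
# result keeps only box 1, relabeled "red capsule"
```

Chaining the detection block and this relabeling block is exactly the multi-step pipeline described earlier: OWL-ViT proposes boxes, and the LLM block refines or rejects them.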

By integrating AI labeling into your data management processes, you can focus more on innovation and less on tedious labeling tasks. As engineers, our role is to solve problems effectively, and AI labeling can play an integral part in that mission.

If you would like to see other models integrated into AI labeling blocks, let us know on the Edge Impulse forum.

Happy discovery!
