Most traditional trade show demos can be reliably built long before an event and trusted to run correctly on the show floor. But when a demo includes an AI component, the change of environment can make building in one place and running in another a lot more challenging.
At Embedded World 2025, the Foundries.io team (part of Qualcomm Technologies) set up a computer vision edge AI people-detection application that had originally been trained on stock images. The different lighting at the event, along with the optical characteristics of the Qualcomm Dragonboard RB3 Gen 2's onboard camera, initially produced poor people-detection results, and a rebuild was deemed necessary. Thankfully, the team was working with Edge Impulse (now also part of Qualcomm Technologies), which made rebuilding fast and easy. The fix was so effective that it essentially became a demo itself; they even made a video about it:
In the video, Foundries.io founder and Qualcomm engineering VP Tyler Baker demonstrates how, live on the show floor, the team rebuilt a dataset using the RB3 Gen 2's cameras (with help from our automatic LLM-based labeling feature) and retrained the model with Edge Impulse, then containerized it and deployed it back to the RB3 Gen 2 hardware with FoundriesFactory. It worked like a charm: the resulting model performed far better with more appropriate data, showcasing the power of the hardware alongside the ease of use of both software platforms.
“This is just a quick demo to show how powerful the integration between Foundries and Edge Impulse can be — where we can quickly come to a location with different environmental conditions, use the technology provided by Edge Impulse to sample that data, quickly retrain a model, and then use the Foundries system to deploy that securely to the edge.”
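If you're curious what the on-floor resampling step might look like in code, here is a minimal sketch that captures frames from an attached camera and pushes them to Edge Impulse's data ingestion service. The endpoint and headers follow Edge Impulse's documented ingestion API, but everything else (the camera index, file names, sample count, and the EI_API_KEY environment variable) is an illustrative assumption, not the team's actual demo code.

```python
"""Sketch: sample frames from an on-device camera and upload them to
Edge Impulse as training data. Assumes OpenCV can open the camera and
that EI_API_KEY holds a project API key."""
import os
import time

import cv2       # pip install opencv-python
import requests  # pip install requests

# Edge Impulse's documented ingestion endpoint for file uploads.
EI_INGESTION_URL = "https://ingestion.edgeimpulse.com/api/training/files"
EI_API_KEY = os.environ["EI_API_KEY"]  # assumption: project API key set in the environment


def capture_and_upload(num_frames: int = 20, delay_s: float = 1.0) -> None:
    cap = cv2.VideoCapture(0)  # first attached camera; index may differ on the RB3 Gen 2
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    try:
        for i in range(num_frames):
            ok, frame = cap.read()
            if not ok:
                continue
            # Encode the frame as JPEG in memory and upload it unlabeled;
            # labeling can then happen in Edge Impulse Studio (e.g. with
            # AI-assisted labeling) rather than on-device.
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            resp = requests.post(
                EI_INGESTION_URL,
                headers={"x-api-key": EI_API_KEY},
                files={"data": (f"floor_sample_{i}.jpg", jpeg.tobytes(), "image/jpeg")},
                timeout=30,
            )
            resp.raise_for_status()
            time.sleep(delay_s)  # space out samples so the scene varies between frames
    finally:
        cap.release()


if __name__ == "__main__":
    capture_and_upload()
```

From there, the workflow shown in the video takes over: label the new samples in Edge Impulse, retrain the model, and ship the updated container back to the board through FoundriesFactory.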
Want to give Edge Impulse a whirl to see how quickly you can ingest data and spin up your own AI-powered model, then deploy it to your hardware? Sign up today for a free developer account.