Imagine 2022 was a smash hit — and we’ll be back to do it again in 2023. Stay tuned.
Meanwhile, you can re-watch all the talks and demos here.
Hear from embedded ML industry leaders, visionaries, and researchers, and participate in live discussions.
Gain firsthand experience from technical workshops to build the next generation of devices that can hear, feel, and see.
Be at the center of the data-driven revolution and connect with like-minded developers and engineers.
Hear from industry leaders and pioneers on the future of embedded machine learning
Talks and interviews with industry leaders and pioneers
Sandeep Bandil, VP, IoT Edge Devices & Solutions, Brambles
Mark Benson, Head of Samsung SmartThings
Stephanie O'Donnell, Founder and Community Manager, Wildlabs
Tim van Dam, Founder, Smart Parks
Sam Kelly, Project Lead, Sentinel AI, Conservation X Labs
Kate Kallot, Co-Founder and CIO, Mara
Fran Baker, Social Impact and Innovation Lead, Arm
Tom Quigley, Managing Director, Superorganism
Ben Gibbs, CEO and Founder, Ready Robotics
Mallik P. Moturi, Chief Business Officer, Syntiant
Deep learning is among the most effective artificial intelligence methods for interpreting and classifying real-world patterns. However, its computational complexity usually relegates it to energy-hungry GPUs and CPUs, making it difficult to realize on edge devices. Syntiant’s Core 2 Neural Decision Processors (NDPs) are optimally designed for deploying deep learning models at the edge, where power and area are often constrained.
In this talk, we will briefly touch on some of the challenges of the edge and how Syntiant’s NDP architecture can best be utilized to address them.
Mitsuo Baba, Senior Director, Renesas
AI adoption in embedded computing is accelerating rapidly across all markets. As AI technology evolves, three key requirements stand out for deploying AI functions: flexibility, power efficiency, and real-time operation. In this session, Renesas will explain the importance of these requirements and introduce ready-to-use solutions from the RZ family.
Omar Oreifej, Director, Computer Vision and Machine Learning, Synaptics
In this talk, we will share our experience developing computer vision models for low-power AI at the edge. We will illustrate techniques that greatly reduced the effort of producing representative annotated training data.
Henrik Flodell, Marketing Director, Alif
Alif Semiconductor recently introduced the Ensemble family of microcontrollers and fusion processors. These devices are the first general-purpose controllers in the market to feature Arm’s Cortex-M55 MCU core as well as the Ethos-U55 microNPU, which is designed specifically for accelerating machine learning operations on deeply embedded devices. In this session, we will introduce the Ensemble family, and showcase the uplift it brings to voice and vision use cases in particular, in terms of raw performance and power savings.
Armaghan Ebrahimi, Partner Solutions Engineer, Sony
Introducing a high-performance microcontroller board with hi-res audio, camera, integrated GPS, and edge AI support.
Isaac Sanchez, Sr. Partner Program Manager, Silicon Labs
Silicon Labs is a leading pure-play IoT semiconductor manufacturer with the widest range of wireless protocols available on TinyML devices. They recently introduced the EFR32MG24, which includes an integrated AI/ML accelerator that not only enables more complex applications but also lets TinyML devices operate more efficiently. This is a huge benefit for IoT devices at the very edge looking to implement ML on top of their application and wireless stacks.
Rob Telson, Vice President, Ecosystem and Partnerships, BrainChip
Join BrainChip as they discuss the convergence of AI with the proliferation of sensors — bringing scalable and effective AI to the sensor and beyond.
Scott Castle, Director, Innovation & Emerging Technologies, Lexmark
For enterprises, building a good model is only the start of operationalization. Stakeholders across an organization, from IT to finance to field service, will ask challenging questions about security, return on investment, and ongoing support, not only for the model but for the accompanying hardware, firmware, application management, and device management, especially for geographically distributed instances. Lexmark's Optra platform is a commercialized offering that solves these problems. Combined with Edge Impulse, it brings the realization of AI's potential to enhance operations effectiveness closer than ever.
Seann Gardiner, VP, Business Development, Weights & Biases
Karan Nisar, Machine Learning Engineer, Weights & Biases
Workshop registrations will be reviewed and confirmed.
Nikunj Kotecha and Chris Anastasi
Learn how to leverage BrainChip MetaTF to easily convert traditional CNNs to spiking neural networks (SNNs) and apply them to real-world use cases.
Ability level: Beginner to advanced ML engineers
Prerequisites: Python, TensorFlow, and ML knowledge a plus
Hardware: Software simulation provided
TJ VanToll and Paige Niedringhaus
If you’ve ever tried to run your ML applications outside of comfortable Wi-Fi range, you know how difficult IoT connectivity still is in 2022. The Blues Wireless Notecard solves this problem elegantly, by offering subscription-free cellular connectivity, on a secure system-on-module with a simple JSON API. The Notecard is the perfect companion to your ML applications, letting you focus on polishing your models, while allowing the Notecard to handle your connectivity needs.
In this workshop you’ll see how it all works. You’ll start by going through a hands-on tutorial to learn about the Notecard, and then see a real-world example of how to do ML inferencing on the edge while sending your results to a cloud dashboard. Plus, we might have an exciting new feature to demo for you live.
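To give a feel for the Notecard's "simple JSON API" mentioned above, here is a minimal sketch of how a host program builds Notecard-style requests. The product identifier, queue file name, and inference fields below are made-up illustrations, not values from the workshop; in practice the JSON strings would be written to the Notecard over serial or I2C.

```python
import json

def make_request(req, **fields):
    """Build a Notecard-style JSON request string."""
    obj = {"req": req}
    obj.update(fields)
    return json.dumps(obj)

# Associate the Notecard with a (hypothetical) Notehub project.
setup = make_request("hub.set", product="com.example.ml:demo")

# Queue an ML inference result for sync to the cloud; "note.add"
# appends a note to the named outbound queue file.
result = make_request("note.add", file="inferences.qo",
                      body={"label": "anomaly", "confidence": 0.93})

print(setup)
print(result)
```

Because every interaction is a small JSON object, the same request shape works regardless of which microcontroller or language hosts the model.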
Ability level: Intermediate
Hardware: No physical hardware required, workshop will use a simulated Notecard. If you would like to follow along with real hardware, you will need:
- Feather Starter Kit for Swan
- Adafruit LIS3DH accelerometer
- Qwiic cable
Marc Pous and Alan Boris
In this workshop, balena's Marc Pous, Developer Advocate, and Alan Boris, Hardware Hacker in Residence, will showcase machine learning fleet management using the Nvidia Jetson Nano on the balenaCloud device management platform. The Jetson Nano will run AI models on both the CPU and the GPU, and we'll also show how to re-train and update the model across the fleet of devices. If you have a Jetson Nano, bring it to the virtual workshop to participate in the live-build of the fleet.
Prerequisites: Knowledge of Docker is a plus
Hardware: Nvidia Jetson Nano, camera, SD card, laptop adapter for SD card (if no internal port)
Learn how to run sensor models on Syntiant's TinyML Board.
Prerequisites: Successful completion of go/stop keyword tutorial on the TinyML Board
Hardware: TinyML Board required
Danny Watson and Nick Sharp
The Infineon PSoC 6 is a highly capable dual-core device that has been launched into the Edge Impulse ecosystem. In this session we will walk through an acoustic classification use case: firmware compilation with ModusToolbox, on-device data collection, segmentation, labeling, training, and deployment for inferencing. By the end of the session you will have a model running on the PSoC 6, with prizes given for the most innovative and reliable deployments.
Ability Level: None
Hardware/Software: In advance of the workshop, please:
• Purchase a PSoC 6 Wi-Fi BT Prototyping Kit
• Install ModusToolbox
An artificial intelligence (AI) camera is an enhanced camera powered by a built-in edge machine learning algorithm, applying computational photography to perform enhanced object detection in real time. AI cameras are widely used in smartphones for face recognition, in edge devices for wildlife detection, and in other edge intelligence applications. Featuring a Vision AI camera, Seeed Studio released its latest sensor prototype kit, highlighted by the Grove - Vision AI module, to bring edge computing to IoT sensors. The module is a compact AI camera that supports simple ML model training and deployment thanks to support from Edge Impulse. In this workshop, Seeed Studio will provide step-by-step guidance on training your own AI model for your specific application with Edge Impulse, then deploying it easily to the Grove - Vision AI module to create your own Vision AI sensor.
Ability Level: Any skill level
• SenseCAP K1100 - The Sensor Prototype Kit (Note: This kit is discounted and has free global shipping until September 30)
Robin M Saltnes
The Nordic Thingy:53 is the first fully self-contained prototyping solution for embedded ML with complete wireless Edge Impulse integration through Bluetooth Low Energy. It takes the simplicity of getting started with embedded machine learning to a whole new level.
In this tech talk, Robin from Nordic Semiconductor will walk you through how easy it is to set up the Thingy:53 to communicate with the Edge Impulse cloud and to use it both for uploading new training data and for deploying ML models in the field. He will also take a closer look at the core components of the Thingy:53, including all of its built-in sensor hardware and the key features of the nRF5340 dual-core wireless SoC that enable both the ML capabilities and wireless connectivity.
ML at the edge is, and will increasingly be, prevalent in ambient intelligence. In this talk, attendees will hear from Panasonic about the fundamental shifts underway in the smart building industry. Using the example of the Panasonic Grid-Eye infrared matrix sensor, we will show how simple, low-cost sensors can be improved using ML techniques, giving rise to deep insights into the utilization of space.
Machine learning inference has thoroughly penetrated embedded applications, especially those with visual and spatial data such as images and radar/lidar. Texas Instruments has a growing, scalable processor portfolio for vision inference at the edge: Arm-only execution on the Sitara AM62 and accelerated execution on the Jacinto TDA4VM provide a range from 0.5 to 8 TOPS of performance in a Linux or RTOS environment. With a large collection of pre-optimized AI models, no-cost and low-cost development tools, and a hardware-agnostic software programming environment, TI helps you bring your idea to realization on an embedded device in no time. With Edge Impulse, we are further democratizing edge AI application development, bringing embedded inference closer to the community by simplifying AI model development for TI processors. Please join our tech talk to learn how TI and Edge Impulse can accelerate your vision for machine learning at the edge.
Silicon Labs recently announced the EFR32MG24, the leading wireless SoC for multi-protocol applications. It also has an integrated AI/ML accelerator that offloads machine learning operations for more power-efficient applications. Edge Impulse Studio supports this accelerator, enabling more complex use cases on this tiny edge device. In this talk, we review applications enabled by the EFR32MG24's AI/ML accelerator and demonstrate object recognition using Edge Impulse's FOMO block running simultaneously with wireless connectivity.
Nathan Verrill and Mike Peacock
Collect multi-sensor data at scale, then train once and deploy everywhere: using Kafka, the Edge Impulse Ingestion API, and custom deployment blocks, run EI models in Kafka Streams, in the cloud, and on devices, online and offline, all from one Edge Impulse project.
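As a rough sketch of the collection step described above, a bridge process could wrap each incoming sensor record in Edge Impulse's data-acquisition payload before posting it to the Ingestion API. The field names follow Edge Impulse's published format, but the device name, sensor, and values here are invented for illustration, and the actual HTTP call is shown only as a comment.

```python
import json

# Hypothetical sensor record as it might arrive from a Kafka topic.
record = {"device": "kafka-bridge-01", "accX": [0.1, 0.2, 0.3]}

# Wrap it in the Edge Impulse data-acquisition format.
sample = {
    "protected": {"ver": "v1", "alg": "none"},  # unsigned payload
    "signature": "0" * 64,                      # placeholder signature
    "payload": {
        "device_name": record["device"],
        "device_type": "CUSTOM",
        "interval_ms": 10,                      # 100 Hz sampling assumed
        "sensors": [{"name": "accX", "units": "m/s2"}],
        "values": [[v] for v in record["accX"]],
    },
}

body = json.dumps(sample)
# A consumer would then forward it, e.g. (not executed here):
# requests.post("https://ingestion.edgeimpulse.com/api/training/data",
#               headers={"x-api-key": API_KEY,
#                        "x-file-name": "sample.json",
#                        "x-label": "idle",
#                        "Content-Type": "application/json"},
#               data=body)
```

Because the payload is plain JSON, the same wrapper works whether the producer is a device, a cloud service, or a stream processor.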
The AWS Marketplace makes it easy for developers to discover and procure digital solutions so you can build your next best product.
Discussions, panels, and project deep-dives. Developer sessions with community innovators. Awesome prizes and giveaways including the Seeed Studio Jetson Xavier reComputer, K1100 LoRaWAN kit, Wio Terminal, Arduino Portenta H7, Sony Spresense, and more.
David Tischler, Development Program Manager, Edge Impulse
Jan Jongboom and Arun Rajasekaran, Edge Impulse
Constantin Craciun, Zalmotek
Raffaello Bonghi, Nvidia, co-founder, Pizza Robotics
Giovanni Di Dio Bruno, Università degli Studi di Napoli Federico II, co-founder, Pizza Robotics
Eric Pan, founder and CEO, Seeed Studio
Avi Brown, Edge Impulse Expert
Jim Bennett, Regional Cloud Advocate, Microsoft
Mithun Das, Edge Impulse Expert
Angus Thomson, CEO and co-founder, Canairy AI
Walt Jacob, CTO and co-founder, Canairy AI
Paul Ruiz, Developer Relations Engineer, Machine Learning, Google
David Tischler, Development Program Manager, Edge Impulse
Jessica Tangeman, CEO, Hackster
Alex Glow, Lead Hardware Nerd, Hackster
Jinger Zeng, Contest Manager, Hackster