Imagine
Innovators

Amsterdam | 2026

Join us for the premier edge AI event for the real world, with presentations and demonstrations from top technology leaders and innovators.

Register now

Why attend?

Connect

with business

Hear from embedded ML industry leaders, visionaries, and researchers, and participate in live discussions.

Engage

with developers

Be at the center of the data-driven revolution and connect with like-minded developers and engineers.

Agenda

Join us in person at Capital C in Amsterdam, the Netherlands.

Register now
09:00 - 10:30

Opening Keynote

The Top Dog

As AI moves beyond centralized clouds, real-time intelligence at the edge is becoming essential. This keynote introduces a scalable EdgeAI platform that enables low-latency inference, secure model deployment, and seamless cloud integration across distributed environments.

We will explore architectural patterns, lifecycle management, and real-world applications that turn edge devices into autonomous decision engines. Discover how to operationalize AI closer to the data source while maintaining performance, governance, and scalability.

Keynote
Beginner
Intermediate
Advanced
10:30 - 11:15

Ocean Water Quality Classification for Bacteria Contamination Using TinyML and Sensor Fusion

The Upperclub

This session presents a technical overview and live demo of a TinyML project that won the Edge Impulse Hackathon 2025. The project focuses on classifying water bacteria using lightweight machine learning models deployed on resource-constrained microcontrollers.

During the session, we will walk through the project architecture, dataset, model design, and deployment pipeline using Edge Impulse and TinyML tools. Attendees will learn how the solution was built, optimized, and evaluated, as well as key lessons learned during the hackathon. The demo will show real-time inference on embedded hardware and discuss practical considerations for environmental monitoring and edge AI applications.

Breakout
Beginner
Intermediate
Advanced
10:30 - 11:15

Quantitative Analysis of Axon-NPU Advantages in a Low‑Power Always‑On Wake Word/Keyword Spotting System

The Beyond

This work presents a quantitative evaluation of the Axon NPU in an always‑on keyword spotting (KWS) pipeline targeting battery‑powered embedded systems. Our focus is audio KWS (not Visual Wake Word), and we assess how offloading compute from the CPU to the Axon NPU impacts energy consumption, inference latency, and available CPU headroom under realistic real‑time constraints.

Breakout
Intermediate
11:00 - 11:30

Edge AI is Transforming Robotics—or Is It the Other Way Around?

The Top Dog

Edge AI and robotics are in a dynamic dance of innovation, each shaping the future of the other. As robots interact with the physical world—through sensors, actuators, and real-time decision-making—Edge AI is meeting the challenge by bringing intelligence closer to the sensors, where it matters most. This convergence is giving rise to Physical AI, a new field where intelligent systems don’t just process data but actively perceive, adapt, and act in real-world environments.

In this talk, we’ll cut through the noise and focus on what really matters: the tools, techniques, and trade-offs that make Physical AI possible. We’ll explore how small, optimized multimodal models—combining vision, audio, and sensor data—are essential for robots and edge devices to operate efficiently in complex, real-world settings. But to build robust systems, we also need real-world data: diverse, representative, and reflective of the environments where these models will be deployed.

Over the last year, I’ve been investigating robotic applications and recently came across the Reachy Mini robot from Hugging Face, along with its easy-to-use developer tools. I’ll show a quick example of how Edge AI techniques can be applied to such platforms, highlighting the importance of optimized models and developer-friendly tools. Whether you’re a developer, researcher, or enthusiast, this session will give you the practical insights you need to turn ideas into action.

Keynote
Beginner
Intermediate
11:30 - 12:00

Will it run Doom? Exploring Edge Impulse synthetic data and platform extensibility

The Above

This session will be a fun presentation and live demo of using the classic computer game Doom to generate a synthetic dataset and train an object detection model to detect enemies. Learn about the extensibility of Edge Impulse, as well as how to integrate models into a final application.

Breakout
Beginner
Intermediate
11:30 - 12:15

AI Camera Explores Industrial-AI Future

The Upperclub

An overview of the advantages and applications of JMO's AI Camera.

Breakout
11:30 - 12:15

From Idea to Application in Minutes: Building Edge AI applications with AI Agents

The Skyscape Room

In this hands-on workshop, participants will experience end-to-end Edge AI development accelerated by Claude Code with an Edge Impulse Skill.

We will start with a simple computer vision project idea and then use Claude Code + the Edge Impulse Skill to generate an application where we will be able to deploy ML models and run live inference on the board (Arduino UNO Q or similar).

This session highlights how developer tooling innovations (AI agents + domain-specific Skills) bridge imagination and implementation, making Edge AI approachable for developers, educators, and professionals.

Workshop
Beginner
Intermediate
Advanced
11:45 - 12:30

Fast-Track Edge AI for Solution Builders

The Top Dog

Rapidly build, optimize, and deploy edge AI on Advantech platforms using Edge Impulse, simplifying workflows and accelerating the path from PoC to production.

Breakout
Beginner
12:30 - 13:30

Lunch

The Capital Kitchen
13:30 - 14:15

Beyond TOPS: System-Level Compute Efficiency with the Axon NPU and DSP Integration for Low-Cost Edge AI

The Upperclub

This talk makes the case for choosing AI accelerators based on end‑to‑end efficiency rather than peak TOPS, quantifying how the Axon NPU, tightly integrated with a lean DSP, lowers energy, latency, and system cost for on‑device inference.

Breakout
Intermediate
13:30 - 14:15

Bringing Edge Impulse to ROS: Building an Edge-AI-Powered Inspection Robot

The Top Dog

Robotics applications increasingly require reliable, low-latency intelligence deployed directly at the edge. As ROS continues to be the foundation for modern robotic systems, tighter integration between ROS workflows and Edge AI development platforms is essential for accelerating real-world deployments.

In this talk, we present Bringing Edge Impulse to ROS, a new ROS-based integration that enables developers to run Edge Impulse models natively within ROS nodes. This work extends the Edge Impulse ecosystem into robotics, allowing teams to deploy, manage, and optimise edge machine learning models as first-class components of ROS-based systems.

We demonstrate this integration through an autonomous pipe inspection robot performing real-time anomaly detection on industrial pipelines. Building on a corrosion detection use case previously showcased at GITEX, the system uses an Edge Impulse–trained anomaly detection model to identify corrosion and surface defects on oil and gas pipelines. The robot automates this inspection process, running fully on-device with low latency and offline operation.

The session covers system architecture, ROS integration patterns, model optimisation for embedded hardware, and practical lessons learned from deploying Edge AI in a robotic inspection workflow. Attendees will gain concrete guidance on combining Edge Impulse and ROS to build scalable, intelligent robotic systems for industrial and autonomous applications.

Keynote
Intermediate
Advanced
13:30 - 14:15

Designing Robust Sensor Fusion Architectures for Real-World Edge AI

The Beyond

Sensor fusion can dramatically improve accuracy and reliability in Edge AI systems, but it also multiplies computational complexity. How do we combine multiple sensor modalities without exceeding power, memory, and latency constraints?

This breakout session presents a systematic approach to building efficient multimodal models for microcontrollers and embedded processors. We explore architectural patterns for fusing time-series, acoustic, and environmental data streams while minimizing memory usage and compute overhead. The session includes comparisons between concatenated feature vectors, lightweight ensemble fusion, and attention-inspired adaptive weighting techniques optimized for quantized deployment.

Attendees will walk away with a clear framework for designing sensor fusion systems that are both robust and efficient, enabling smarter devices without sacrificing battery life or responsiveness.
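To make the simplest of those patterns concrete, here is a minimal illustrative sketch of concatenated-feature fusion across three modalities. This is not the speaker's implementation; the function names, feature choices, and window sizes are all hypothetical, chosen only to show the shape of the approach.

```python
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """Cheap statistical features for one sensor stream (mean, std, peak)."""
    return np.array([window.mean(), window.std(), np.abs(window).max()])

def fuse_concat(streams: dict) -> np.ndarray:
    """Concatenated-feature fusion: one flat vector across all modalities,
    ready to feed a small classifier head."""
    return np.concatenate([extract_features(w) for w in streams.values()])

# Example: one-second windows from three hypothetical modalities.
rng = np.random.default_rng(0)
streams = {
    "accel": rng.normal(size=100),       # time-series (e.g. one IMU axis)
    "audio": rng.normal(size=16000),     # acoustic
    "temp":  rng.normal(25.0, 0.5, 60),  # environmental
}
fused = fuse_concat(streams)
print(fused.shape)  # 3 modalities x 3 features each -> (9,)
```

The appeal of this pattern on microcontrollers is that memory cost grows only with the feature count, not the raw sample rate; the trade-off, as the session notes, is that it weights all modalities statically, which is where ensemble and adaptive-weighting variants come in.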

Breakout
Intermediate
14:30 - 15:15

Lazy Learning for the Factory Floor

The Beyond

From Zero-Shot to Production (Without the Data Collection Nightmare)

Training object detection models for industrial inspection typically means collecting thousands of images, spending weeks annotating, and praying your factory doesn't introduce a new component next month. We needed to quickly train a system to inspect new components in a product where each component could be one of thousands of options, so pre-training a library was not an option.

At Consult Red, we took a "lazier" approach. By leveraging the invariant nature of fixed inspection jigs and controlled lighting, we've moved from asking "Is there a terminal block here?" to verifying "Is X1 specifically where it is supposed to be?" We take advantage of the constraints in the system to learn only what is needed, speeding up training and allowing rapid inference across the many components in a new product. Using a two-stage approach (an open-vocabulary model for generalised zero-shot identification, followed by a fine-tuned YOLO Pro for specific component identification) and treating the natural stages of assembly as training checkpoints, we make model training a quick and painless part of the process.

Breakout
Intermediate
14:30 - 15:15

Trust the Data, Then the Model

The Upperclub

Machine learning performance is bounded by the quality of its data. Many failures can be traced back to issues such as noisy labels, unrepresentative test data, and biased sampling rather than the choice of model. This talk will detail some of the tools already in Edge Impulse, and others we are working on, that help users find and fix problems in their data to optimize performance, including a deep dive into how some of our data quality algorithms work.

Breakout
Intermediate
Advanced
14:30 - 15:15

VIS

The Top Dog
Breakout
Beginner
15:30 - 16:15

Building the Platform for Physical Agentic AI

The Top Dog

After taking over the Internet, AI agents are jumping to the physical world. This brings huge opportunities—and massive challenges, too.

Physical Agentic AI is a brand new field of engineering, where the fuzzy reasoning of language models meets the hard reality of sensory perception and physical constraints. It lets us place a spark of human insight into every object we make. But how do we frame problems, measure performance, and prove that we’re building effective physical AI?

In this talk from the founding engineer at Edge Impulse and author of TinyML and AI at the Edge, we'll explore this new frontier and learn how we'll need to rethink our developer tools to help AI agents solve problems in physical space.

Keynote
Beginner
Intermediate
Advanced
15:30 - 16:15

Deep Dive into Edge Impulse Inferencing SDK

The Above

Step through the main stages of the inferencing pipeline for common models, exposing the main components of the SDK and explaining how they fit together.
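As background for the stages such a pipeline typically covers, here is a generic sketch of an edge inferencing flow: a DSP front end, int8 quantization, and a model step with argmax post-processing. This is illustrative only; the function names and constants are hypothetical stand-ins, not the Edge Impulse SDK API.

```python
import numpy as np

def dsp_features(window: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Stage 1: DSP front end -- reduce a raw sample window to a small
    feature vector (here, a normalized coarse magnitude spectrum)."""
    spectrum = np.abs(np.fft.rfft(window))[:n_bins]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def quantize(x: np.ndarray, scale: float = 0.02, zero_point: int = 0) -> np.ndarray:
    """Stage 2: quantize float features to int8, as a quantized model expects."""
    return np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

def classify(q: np.ndarray, weights: np.ndarray) -> int:
    """Stage 3: run the (stand-in) model, then post-process to a label index."""
    logits = weights @ q.astype(np.int32)  # integer matmul, like a tiny dense layer
    return int(np.argmax(logits))          # argmax as trivial post-processing

rng = np.random.default_rng(42)
window = rng.normal(size=256)               # one raw sensor window
weights = rng.integers(-4, 4, size=(3, 8))  # stand-in for trained weights, 3 classes
label = classify(quantize(dsp_features(window)), weights)
print(label)  # predicted class index in 0..2
```

On real targets the same three stages run in fixed preallocated buffers, which is why the split between DSP, inference, and post-processing matters for memory planning.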

Breakout
Intermediate
Advanced
15:30 - 16:15

Quectel Pi + Edge Impulse Capabilities

The Upperclub

Quectel Pi enables rapid development and deployment of intelligent edge AI solutions by combining high-performance Smart IoT hardware with a developer-friendly ecosystem. When integrated with Edge Impulse, Quectel Pi allows engineers to easily collect data, train machine learning models, and deploy optimized inference directly at the edge. This approach reduces latency, enhances data privacy, and lowers cloud dependency. With support for computer vision, audio, and sensor-based AI use cases, Quectel Pi accelerates prototyping and streamlines the transition from proof of concept to production, empowering scalable, energy-efficient, and real-time IoT intelligence.

Breakout
Intermediate
Advanced
16:30 - 17:15

Charting the Course: Navigating Maritime VHF Communications Using LLMs and Edge Impulse

The Beyond
Breakout
Intermediate
Advanced
16:30 - 17:15

Dendritic Optimization - Smarter, Smaller Neural Networks for the Edge

The Above

The original artificial neuron was proposed in 1943, drawing on neuroscience research dating back to the 1860s. Since then, backpropagation was introduced, and there have been significant advances in hardware, optimizers, data curation, and architectures, while the core building block has remained fundamentally the same. Interestingly, for 70 of the last 80 years, neuroscience continued to support this original design. However, modern neuroscience now understands that the perceptron misses a critical piece of biological intelligence: the decision-making performed by a neuron's dendrites. Dendritic optimization leverages these ideas to augment artificial neurons with dendrite nodes, enabling ML practitioners to achieve smarter, smaller, and cheaper models on the same datasets. Experiments frequently show 10-20% reduced error rates after dendritic optimization as well as the ability to compress models by up to 90% without loss in accuracy.

This session will review the biology behind the artificial neuron and what modern neuroscience has learned about the role of dendritic branches in decision making. Next, we will look at the artificial dendrite and how these ideas can be brought into AI models. Finally, we will discuss our hackathon project and experimental results, and share the new dendritic impulse block and how to use it.

Breakout
Advanced
16:30 - 17:15

Vision is Overrated: The Case for Sensor Fusion in Robotics

The Upperclub

Robotic systems increasingly default to vision-based AI, even for tasks that are dominated by physical interaction and environmental context rather than appearance. This talk argues that many robotic intelligence problems are better addressed through sensor fusion at the edge, by combining multiple simple, low-cost sensors instead of relying on cameras alone.

Using ROS and Edge Impulse, the presentation introduces a practical workflow for collecting, training, and deploying sensor fusion models directly on robotic systems. By fusing signals such as IMU data, motor feedback, acoustic cues, and other lightweight sensors, robots can infer state, interaction, and intent in ways that are often more robust and efficient than vision-based approaches.

The presentation focuses on a practical ROS-to-edge-AI workflow built around Edge Impulse, showing how ROS topics can be directly leveraged for data collection, model training, and deployment of lightweight machine learning models. Rather than treating non-visual signals as secondary inputs, this approach elevates them to first-class citizens for learning and inference at the edge.

Breakout
Intermediate
Advanced
17:30 - 18:00

Qualcomm, Arduino, Edge Impulse & Foundies.io - Fireside Chat

The Top Dog
Panel
Beginner
18:00 - 21:00

Drinks Reception

Celebrate an amazing day!
