Edge Impulse’s Imagine event has become an incredible September tradition over the past three years, and our just-wrapped 2023 event may have been the best yet, with a phenomenal lineup of speakers, engaged in-person and virtual audiences, hands-on demos, and a lot of excitement overall for the direction that edge AI, and Edge Impulse itself, are headed. Let’s recap the day.
Held again at the beautiful Computer History Museum in Mountain View, California, the day began with Edge Impulse CEO and co-founder Zach Shelby taking the stage to talk about the big-picture problems that groups are using edge AI to solve, including a continent-sized undertaking by his guest Kate Kallot, CEO of Amini. Amini is launching the first satellite constellation covering Africa exclusively, to help address data scarcity for local groups working in agriculture, conservation, and climate, and is leveraging edge AI to help process the information it is collecting and distributing.
Zach announced the latest integration between Edge Impulse and NVIDIA, allowing users to access NVIDIA's advanced TAO algorithms and deploy them to any type of processor — MCUs, NPUs, and GPUs — from within the easy-to-use Edge Impulse platform.
He also announced the new Edge Impulse and IAR partnership, launching native IAR Workbench support in Edge Impulse. IAR is the leading supplier of professional embedded development tools, used by tens of thousands of device companies globally and supporting over 8,700 Arm targets. This joint offering will enable embedded teams to achieve the world’s best ML efficiency by combining EON Compiler and IAR Compiler technologies.
In his keynote, Zach also discussed two primary areas of focus for Edge Impulse: AI for Health and AI for Industry. His announcements for health include:
- Clinical trials at scale: Edge Impulse is enabling rapid time to market (TTM) with distributed clinical study capabilities integrated with AI algorithm development
- SOC 2 Type 2 compliance is now complete
- HIPAA compliance is underway
- HR/HRV: Edge Impulse has just launched its new, best-in-class heart rate (HR) and heart rate variability (HRV) algorithm
He also welcomed Neurable CEO Ramses Alcaide to the stage, who gave a live demonstration of his company's EEG-capable headphones, which use Edge Impulse to help users gauge their levels of fatigue and focus for worker safety and productivity.
On the Industry side, Zach discussed another NVIDIA integration, allowing Edge Impulse to utilize synthetic data generated by NVIDIA Omniverse to easily train and produce viable models, a big development for the industry, especially for applications where generating sufficient data is challenging or dangerous.
Edge Impulse announced new industrial developments with logistics-based users, in warehouse applications (including pallet cell automation via Ready Robotics, who have modeled their system in Omniverse), in elevators through global elevator leader TKE, in gateway applications through Lexmark, and in IoT with Particle.
Zach also announced our new partner program, enabling more success opportunities for strategic, solution, and technology partners. We’re now working with Nordic to bring edge AI capabilities to their next generation of chips for enterprise developers, and we have also partnered with Capgemini to help our customers build a slew of advanced solutions, from smart animal trackers to safety systems powered by computer vision, and more.
Decoding the Future: Fresh Industry Insights in AI and Machine Learning — Amit Goel, NVIDIA
In the next presentation, NVIDIA director of product management Amit Goel discussed how edge AI will transform all industries, with a possible economic impact of over $13 trillion by 2030. He explained the need for edge AI, looking at the specific costs for different data features, from object detection or counting to face blurring, with bandwidth and latency considerations. He also dove in deeper on synthetic data and the TAO toolkit, and how generative AI is enabling edge AI.
Unleashing the Power of Practical Sensors — Pete Warden, Useful Sensors
Pete Warden is something of a godfather in the world of edge AI, having helped create and lead the early development of TensorFlow at Google. He’s now the CEO of his own startup, Useful Sensors, and in his presentation he ran through a fascinating history of sensors and talked about the movement toward standardizing capabilities in smart sensors so they can be used more freely and effectively.
A Deep Dive into the Art of Data Collection — Steve Kent, Chief Product Officer, Know Labs
Know Labs is working on a massive endeavor: building an entirely new type of sensor that uses radio frequencies to peer through matter and deduce the composition of what is inside it. Their first effort is to use this to non-invasively determine the blood sugar of people with diabetes. In his Imagine presentation, Know Labs’ Steve Kent talked about the importance of accessing good data, and lots of it, to help build the datasets needed to determine glucose for a global population of up to 2 billion people with diabetes. Their approach, and his comments, have the potential to help the many other groups that need to build their own robust datasets for AI endeavors.
Key Learnings from Integrating AI for Multi-Industry — Anupam Sahai, Lexmark
With its new Optra Edge gateway hardware, Lexmark Ventures is applying its years of experience with distributed device management to new realms, including management of assembly lines, logistics and transportation tracking, patient monitoring, and retail inventory control. Optra Edge uses edge AI for on-device processing, but also allows for connected cloud automation. Anupam Sahai presented on the different applications and use cases his team is working on, and discussed some of the bigger opportunities and challenges this segment presents.
AI Observability in the Era of Edge Intelligence — Alessya Visnjic, CEO, WhyLabs
In her presentation, Alessya Visnjic from WhyLabs explained that an effective and responsible AI solution lifecycle doesn’t just include data collection, model training, deployment, and performance monitoring, but that it requires a fifth step — AI application observability and improvement. This helps users identify their AI limitations, biases, and risks. She broke down how and where to incorporate observability into a system, and what details a group would want to track to ensure observability success. She also remarked that this was the first event she’s been at where a speaker before her also mentioned AI observability.
Panel: Getting it Right and Keeping it Right: Top Challenges with AI at the Edge
Moderator: Paul Chen, Head of Global Electronic Design, Mattel
Panelists: Ramses Alcaide, Chief Executive Officer, Neurable; John Robins, Director, Connected Products Business, Synapse; Ryan Ramanathan, Head of Software, Aigen
In this group discussion, Mattel’s Paul Chen, along with Ramses Alcaide, John Robins, and Ryan Ramanathan, explored the complexities and hurdles faced by various technology segments in executing edge AI solutions.
Panel: Responsible AI: Crafting Tech for a Better Tomorrow
Moderator: Jacob Ward, Correspondent, NBC News
Panelists: Kate Kallot, CEO, Amini; Pete Warden, CEO, Useful Sensors; Alessya Visnjic, CEO, WhyLabs
In our second panel, NBC News’ Jacob Ward dove deep into the ways that his panelists are ensuring that their AI tools are used for benefit, how they see themselves as startups operating together with the largest technology companies in the world, and what they recommend others do to help grow the industry responsibly.
Tech Keynote — Jan Jongboom, CTO and co-founder of Edge Impulse
In the closing presentation of Imagine, Jan Jongboom announced the latest tech innovations from Edge Impulse, including a deeper look into our HR/HRV processing block, as well as our new data campaign dashboard.
He also gave a walkthrough of our Omniverse integration for synthetic data generation, and how a user might employ it for anomaly detection with FOMO-AD. Jan also showed more details on the NVIDIA TAO integration (side note: sign up for our NVIDIA TAO webinar, happening on November 9) and the IAR Workbench integration.
Jan closed his presentation with an overview of Data Explorer V2 and a look at EON Compiler V3, with a final “one more thing”: the new RAM-optimized mode of EON Compiler V3, which nearly halves peak RAM usage compared to TensorFlow Lite Micro and trims 79 KB off the flash usage for the same models. This reduces the processing hardware users need, which can save significant money for a group deploying a new hardware device. Pretty cool.
Edge Impulse and Partner Demos
In our lunch/networking/demo room, we had some great demonstrations from our team and partners. Videos of these to come soon:
- Edge Impulse overview, from ingestion to insight — Edge Impulse software demo
- Streamlining PPG to HR Signal Conversion: Heart rate data captured and shown with Jetson Nano (Edge Impulse)
- Smart cities/moving vehicle detection using ultra-low-power computer vision (Edge Impulse)
- Multiple object detection on Akida (BrainChip)
- High performance YOLO-based object detection on MCU (Alif)
- Detecting motion utilizing low power Wi-Fi and the Thingy:53 (Nordic)
- High efficiency embedded computer vision with Himax WiseEye2 (Himax)
- AC power monitoring and inference with the Particle Photon 2 (Particle)
- AI powered smart ball: Adding AI and an accelerometer to a ball to classify its movement (Infineon)
- Utilizing multiple microphones for predictive maintenance (Sony)
- Automotive safety & security: Detection of safety and security events in a vehicle (Global Sense and Syntiant)
- The everyday brain-computer (Neurable)
- Model choice, compilation, performance simulation (fps and latency), and deployment on MemryX MX3 AI accelerator for AI inference (MemryX)
- Multi video stream decode with YOLO object detection model, and displaying fps inference on tiled images (MemryX)
- In search of the perfect cup of coffee: Roast recognition with computer vision (Texas Instruments)
At the end of the day, we saw nothing but smiles, hugs, and handshakes from the attendees of the event. We hope that everyone who was there in person or watching online found it as invigorating as we did, and we invite you to reach out if you would like any additional details about what we or the other teams at Imagine are working on, and how it might help you with your own endeavors.