BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//cfp.embedded-recipes.org//er2026//speaker//QFDZ9N
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-er2026-USZT33@cfp.embedded-recipes.org
DTSTART;TZID=CET:20260527T161500
DTEND;TZID=CET:20260527T165500
DESCRIPTION:Modern motorsport telemetry systems generate massive amounts of
  data including GPS traces\, IMU measurements\, CAN signals\, and vehicle 
 dynamics. In most cases\, analysis happens after the session\, often in th
 e cloud. By the time insights are available\, the opportunity to correct d
 riving behavior in real time is already gone. For deterministic feedback d
 uring a session\, cloud-dependent approaches are too slow\, too fragile\, 
 and sometimes simply unavailable.\n\nIn this talk\, we walk through the en
 gineering journey of building a real-time telemetry analysis system that r
 uns entirely at the edge on embedded Linux. The objective was straightforw
 ard: detect driving patterns and performance anomalies during a session wi
 thout relying on connectivity. Achieving that goal required solving a set 
 of practical system-level challenges that extend far beyond data acquisiti
 on and model training.\n\nWe begin with the development pipeline: training
  a model offline\, exporting to ONNX or TFLite\, quantizing for constraine
 d hardware\, and deploying to embedded System-on-Modules. We compare CPU-o
 nly execution against NPU acceleration\, highlighting latency\, memory foo
 tprint\, and sustained-load behavior. Real benchmark results demonstrate w
 here hardware acceleration delivers measurable gains and where it introduc
 es additional constraints.\n\nRunning inference once is not the hard part.
  Shipping a complete embedded systems product is.\n\nThe talk then focuses
  on the integration and production aspects of edge AI systems. We examine 
 kernel driver and user-space runtime alignment\, accelerator operator supp
 ort limitations\, memory pressure under sustained workloads\, and thermal 
 behavior during continuous inference. We discuss containerized deployment 
 on embedded Linux\, using Torizon OS as a reference implementation\, inclu
 ding hardware access from containers\, separation of sensor ingestion and 
 inference pipelines\, reproducible builds\, and safe over-the-air model up
 dates without reflashing the device.\n\nBy the end of the session\, attend
 ees will have a practical blueprint for taking an AI model from experiment
 ation to a production-ready embedded deployment. More importantly\, they w
 ill gain an honest understanding of what breaks\, what scales\, and what m
 ust be designed early when building real-time intelligence on embedded Lin
 ux systems.\n\nThis is not a showcase of AI capabilities\, but a systems e
 ngineering story about building\, benchmarking\, integrating\, and maintai
 ning edge AI under real-world constraints.
DTSTAMP:20260406T234753Z
LOCATION:Auditorium
SUMMARY:From Track to Edge: Shipping Real-Time AI on Embedded Linux - João
  Victor "Teddy" Martins
URL:https://cfp.embedded-recipes.org/er2026/talk/USZT33/
END:VEVENT
END:VCALENDAR
