BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//cfp.embedded-recipes.org//er2026//speaker//UJEVAG
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-er2026-7LYZUY@cfp.embedded-recipes.org
DTSTART;TZID=CET:20260528T161500
DTEND;TZID=CET:20260528T165500
DESCRIPTION:For years\, using hardware acceleration for machine learnin
 g at the edge meant being tethered to a vendor’s Board Support Package
 (BSP): a world of stagnant kernels\, proprietary binary blobs\, and zero
 auditability. Companies that wanted to run modern ML workloads had to a
 ccept the outdated software or spend months fighting it\, risking vendo
 r lock-in either way.\n\nThat era is ending. Thanks to recent work in t
 he Linux accel subsystem and Mesa\, a truly open-source AI stack is now
 a reality. This talk covers what the mainline stack currently support
 s and the four existing hardware drivers: Etnaviv (Vivante)\, Rocket (R
 ockchip)\, Ethos-U (Arm)\, and Thames (TI C7x). I will also explain wha
 t is missing and what is coming next.
DTSTAMP:20260406T234929Z
LOCATION:Auditorium
SUMMARY:Four NPUs\, One Stack\, Zero Blobs: Edge AI Acceleration in Mainlin
 e - Tomeu Vizoso
URL:https://cfp.embedded-recipes.org/er2026/talk/7LYZUY/
END:VEVENT
END:VCALENDAR
