Stochastic Training for Side-Channel Resilient AI

18:57
 

The content is provided by EDGE AI FOUNDATION. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by EDGE AI FOUNDATION or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://hu.player.fm/legal.

Protecting valuable AI models from theft is becoming a critical concern as more computation moves to edge devices. This fascinating exploration reveals how sophisticated attackers can extract proprietary neural networks directly from hardware through side-channel attacks - not as a theoretical possibility, but as practical demonstrations on commercial devices, including Nvidia, ARM, and NXP hardware and Google's Coral edge TPUs.
The speakers present a novel approach to safeguarding existing hardware without requiring new chip designs or access to proprietary compilers. By leveraging the inherent randomness in neural network training, they demonstrate how training multiple versions of the same model and unpredictably switching between them during inference can significantly reduce vulnerability to these attacks.
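
To make that concrete, here is a minimal sketch of the switching idea in Python (our illustration, not the speakers' code; `train_fn` and the model call signature are assumed stand-ins). Because weight initialization and batch shuffling are random, repeated training runs of the same architecture produce functionally similar but numerically distinct weights, and randomizing which copy answers each query keeps any single set of parameters from dominating the side-channel trace:

```python
import random

def train_ensemble(train_fn, data, k=4):
    # Run the same training pipeline k times; SGD's inherent randomness
    # (weight init, data shuffling) yields k distinct-but-equivalent models.
    return [train_fn(data, seed=s) for s in range(k)]

def stochastic_predict(models, x):
    # Draw a model uniformly at random per query, so an attacker's
    # repeated side-channel measurements average over k weight sets.
    return random.choice(models)(x)
```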
Most impressively, they overcome the limitations of edge TPUs by cleverly repurposing ReLU activation functions to emulate conditional logic on hardware that lacks native support for control flow. This allows implementation of security measures on devices that would otherwise be impossible to modify. Their technique achieves approximately 50% reduction in side-channel leakage with minimal impact on model accuracy.
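
These notes don't include the speakers' exact construction, but a standard way to get a branch-free if-else out of nothing more than addition and ReLU is sketched below (the bound `M` and non-negative activations are our assumptions). With a gate bit s in {0, 1}, the large offset M pushes the unselected branch below zero, where ReLU clamps it away:

```python
import numpy as np

M = 1e4  # assumed upper bound on any activation magnitude

def relu(x):
    return np.maximum(x, 0.0)

def relu_select(s, a, b):
    # Branch-free "a if s == 1 else b" using only add/subtract and ReLU,
    # valid for activations 0 <= a, b <= M:
    #   s = 1: relu(a) + relu(b - M) = a + 0 = a
    #   s = 0: relu(a - M) + relu(b) = 0 + b = b
    return relu(a - M * (1.0 - s)) + relu(b - M * s)
```

Fed a random gate bit at inference time, a gadget like this lets a compiled graph route data through different weight sets on an accelerator with no native conditionals.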
The presentation walks through the technical implementation details, showing how layer-wise parameter selection can provide quadratic security improvements compared to whole-model switching approaches. For anyone working with AI deployment on edge devices, this represents a critical advancement in protecting intellectual property and preventing system compromise through model extraction.
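
As a rough illustration of the layer-wise variant (again our sketch, assuming the k models share one architecture and were trained so their layers stay interchangeable), the donor model is drawn independently for every layer rather than once per query:

```python
import random

def layerwise_predict(models, x):
    # models: k trained copies of the same architecture, each exposing
    # an ordered list of callable layers. Drawing the donor per layer
    # gives up to k**L distinct weight combinations instead of k.
    n_layers = len(models[0].layers)
    h = x
    for i in range(n_layers):
        donor = random.choice(models)  # independent draw per layer
        h = donor.layers[i](h)
    return h
```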
Try implementing this stochastic training approach on your edge AI systems today to enhance security against physical attacks. Your valuable AI models deserve protection as they move closer to end users and potentially hostile environments.

Send us a text

Support the show

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Chapters

1. Introduction to AI Model Theft (00:00:00)

2. Security Challenges of Existing Devices (00:00:52)

3. Approach to Secure Edge TPUs (00:01:35)

4. Neural Network Training Fundamentals (00:04:29)

5. Proposed Security Solution (00:07:43)

6. Building If-Else with ReLU (00:10:36)

7. Layer-wise Model Selection (00:13:10)

8. Testing and Results (00:16:33)

9. Conclusion and Future Directions (00:18:40)

59 episodes
