EP241 From Black Box to Building Blocks: More Modern Detection Engineering Lessons from Google

31:33
 
Content is provided by Anton Chuvakin. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Anton Chuvakin or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://hu.player.fm/legal.

Guest:

Topics:

  • On the 3rd anniversary of Curated Detections, you've grown from 70 rules to over 4700. Can you walk us through that journey? What were some of the key inflection points and what have been the biggest lessons learned in scaling a detection portfolio so massively?
  • Historically, the SecOps Curated Detections content was opaque, which understandably led to a bit of customer friction. We’ve recently made nearly all of that content transparent and editable by users. What were the challenges in that transition?
  • You make a distinction between "Detection-as-Code" and a more mature "Software Engineering" paradigm. What gets better for a security team when they move beyond just version control and a CI/CD pipeline and start incorporating things like unit testing, readability reviews, and performance testing for their detections? (A first illustrative sketch follows this list.)
  • The idea of a "Goldilocks Zone" for detections is intriguing – not too many, not too few. How do you find that balance, and what are the metrics that matter when measuring the effectiveness of a detection program? You mentioned customer feedback is important, but a full confusion matrix isn't possible – why is that?
  • You talk about enabling customers to use your "building blocks" to create their own detections. Can you give us a practical example of how a customer might use a building block for something like detecting VPN and Tor traffic to augment their security? (A second sketch below illustrates the composition idea.)
  • You have started using LLMs for reviewing the explainability of human-generated metadata. Can you expand on that? What have you found are the ripe areas for AI in detection engineering, and can you share any anecdotes of where AI has succeeded and where it has failed? (A third sketch below shows what such a review loop might look like.)
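
To make the unit-testing point concrete, here is a minimal sketch, assuming detections are expressed as plain Python predicates over normalized event dicts. The event schema and rule logic are hypothetical and are not the actual SecOps implementation; the point is only that a detection written this way can be exercised by ordinary tests in CI.

```python
# A minimal sketch of unit-testing a detection, assuming detections are plain
# Python predicates over normalized event dicts (hypothetical schema).
import unittest


def detects_suspicious_service_creation(event: dict) -> bool:
    """Hypothetical detection: flag service creation from a temp directory."""
    return (
        event.get("event_type") == "SERVICE_CREATION"
        and event.get("image_path", "").lower().startswith("c:\\windows\\temp\\")
    )


class TestSuspiciousServiceCreation(unittest.TestCase):
    def test_fires_on_known_bad(self):
        event = {"event_type": "SERVICE_CREATION",
                 "image_path": "C:\\Windows\\Temp\\payload.exe"}
        self.assertTrue(detects_suspicious_service_creation(event))

    def test_stays_quiet_on_benign(self):
        event = {"event_type": "SERVICE_CREATION",
                 "image_path": "C:\\Program Files\\Vendor\\agent.exe"}
        self.assertFalse(detects_suspicious_service_creation(event))


if __name__ == "__main__":
    unittest.main()
```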
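For the building-blocks question, a rough illustration of the composition idea, again in Python with made-up field names and indicator values: a shared "is this VPN/Tor traffic?" predicate that a customer combines with their own context (here, a privileged-user list) to form a higher-signal detection.

```python
# A minimal sketch of composing a vendor "building block" with customer context.
# The indicator set and field names are hypothetical, for illustration only.

TOR_AND_VPN_EXIT_IPS = {"198.51.100.7", "203.0.113.42"}  # stand-in indicator feed


def is_vpn_or_tor_traffic(event: dict) -> bool:
    """Building block: does this network event touch a known VPN/Tor exit node?"""
    return event.get("destination_ip") in TOR_AND_VPN_EXIT_IPS


def privileged_user_over_anonymizer(event: dict, privileged_users: set) -> bool:
    """Customer detection: reuse the building block, add their own context."""
    return is_vpn_or_tor_traffic(event) and event.get("user") in privileged_users


if __name__ == "__main__":
    event = {"destination_ip": "203.0.113.42", "user": "svc-backup-admin"}
    print(privileged_user_over_anonymizer(event, {"svc-backup-admin"}))  # True
```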
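For the LLM metadata-review question, a sketch of what such a review loop might look like. `call_llm` is a deliberate placeholder for whatever model client is actually in use, and the prompt and metadata fields are illustrative assumptions, not the process described on the show.

```python
# A minimal sketch of using an LLM to review whether a rule's human-written
# metadata is explainable. `call_llm` is a placeholder for a real model client.

REVIEW_PROMPT = """You are reviewing detection rule metadata for clarity.
Given the rule name and description below, answer:
1. Would an on-call analyst understand what behavior this rule detects?
2. List any jargon or missing context that should be fixed.

Rule name: {name}
Description: {description}
"""


def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model client of choice")


def review_rule_metadata(name: str, description: str) -> str:
    """Return the model's critique of one rule's name and description."""
    return call_llm(REVIEW_PROMPT.format(name=name, description=description))
```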

Resources

248 episodes
