
Inside the Black Box: The Urgency of AI Interpretability

Duration: 1:02:17
 
Content provided by Lightspeed Venture Partners. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Lightspeed Venture Partners or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://hu.player.fm/legal.

Recorded live at Lightspeed’s offices in San Francisco, this special episode of Generative Now dives into the urgency and promise of AI interpretability. Lightspeed partner Nnamdi Iregbulem spoke with Anthropic researcher Jack Lindsey and Goodfire co-founder and Chief Scientist Tom McGrath, who previously co-founded Google DeepMind’s interpretability team. They discuss opening the black box of modern AI models to understand their reliability, spot real-world safety concerns, and build future AI systems we can trust.

Episode Chapters:

00:42 Welcome and Introduction

00:36 Overview of Lightspeed and AI Investments

03:19 Event Agenda and Guest Introductions

05:35 Discussion on Interpretability in AI

18:44 Technical Challenges in AI Interpretability

29:42 Advancements in Model Interpretability

30:05 Smarter Models and Interpretability

31:26 Models Doing the Work for Us

32:43 Real-World Applications of Interpretability

34:32 Anthropic's Approach to Interpretability

39:15 Breakthrough Moments in AI Interpretability

44:41 Challenges and Future Directions

48:18 Neuroscience and Model Training Insights

54:42 Emergent Misalignment and Model Behavior

01:01:30 Concluding Thoughts and Networking

Stay in touch:

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.


