
🎯 Outsider's Guide to AI Risk Management Frameworks: NIST Generative AI | irResponsible AI EP5S01

34:32
The content is provided by Upol Ehsan and Shea Brown. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by Upol Ehsan and Shea Brown or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://hu.player.fm/legal.

Got questions, comments, or topics you want us to cover? Text us!

In this episode we discuss AI Risk Management Frameworks (RMFs), focusing on NIST's Generative AI Profile:
✅ Demystify misunderstandings about AI RMFs: what they are for, what they are not for
✅ Unpack challenges of evaluating AI frameworks
✅ Explain how the inert knowledge in frameworks needs to be activated through processes and user-centered design to bridge the gap between theory and practice
What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.
đŸŽ™ïžWho are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
CHAPTERS:
00:00 - What will we discuss in this episode?
01:22 - What are AI Risk Management Frameworks?
03:03 - Understanding NIST's Generative AI Profile
04:00 - What's the difference between NIST's AI RMF and the GenAI Profile?
08:38 - What are other equivalent AI RMFs?
10:00 - How do we engage with AI Risk Management Frameworks?
14:28 - Evaluating the Effectiveness of Frameworks
17:20 - Challenges of Framework Evaluation
21:05 - Evaluation Metrics are NOT always quantitative
22:32 - Frameworks are inert: they need to be activated
24:40 - The Gap of Implementing a Framework in Practice
26:45 - User-centered Design solutions to address the gap
28:36 - Consensus-based framework creation is a chaotic process
30:40 - A tip for small businesses to amplify their profile in RAI
31:30 - Takeaways
#ResponsibleAI #ExplainableAI #podcasts #aiethics

Support the show

What can you do?
🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!
Follow us for more Responsible AI:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/


