Responsible AI: Does it help or hurt innovation? With Anthony Habayeb

45:59
 
Manage episode 416901919 series 3475282
Content is provided by Dr. Andrew Clark & Sid Mangalik, Dr. Andrew Clark, and Sid Mangalik. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dr. Andrew Clark & Sid Mangalik, Dr. Andrew Clark, and Sid Mangalik, or by their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://hu.player.fm/legal.

Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.
Show notes

Prologue: Why responsible AI? Why now? (00:00:00)

  • Deviating from our normal topics about modeling best practices
  • Context about where regulation plays a role in industries besides big tech
  • Can we learn from other industries about the role of "responsibility" in products?

Special guest, Anthony Habayeb (00:02:59)

  • Introductions and start of the discussion
  • Of all the companies you could build around AI, why governance?

Is responsible AI the right phrase? (00:11:20)

  • Should we even call good modeling and business practices "responsible AI"?
  • Is having responsible AI a “want to have” or a “need to have”?

Importance of AI regulation and responsibility (00:14:49)

  • People in the AI and regulation worlds have started pushing back on Responsible AI.
  • Do regulations impede freedom?
  • Discussing the big picture of responsibility and governance: Explainability, repeatability, records, and audit

What about bias and fairness? (00:22:40)

  • You can have fair models that operate with bias
  • In practice, bias analysis identifies inequities that models have learned
  • Fairness means correcting for those societal biases to level the playing field so that safer business and modeling practices can prevail

Responsible deployment and business management (00:35:10)

  • Discussion about what organizations get right about responsible AI
  • And what organizations can get completely wrong if they aren't careful.

Embracing responsible AI practices (00:41:15)

  • Getting your teams, companies, and individuals involved in the movement towards building AI responsibly

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Chapters

1. Prologue: Why responsible AI? Why now? (00:00:00)

2. Special guest, Anthony Habayeb (00:02:59)

3. Is responsible AI the right phrase? (00:11:20)

4. Importance of AI regulation and responsibility (00:14:49)

5. What about bias and fairness? (00:22:40)

6. Responsible deployment and business management (00:35:10)

7. Embracing responsible AI practices (00:41:15)
