Content provided by Upol Ehsan and Shea Brown. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Upol Ehsan and Shea Brown or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://hu.player.fm/legal.

đŸ”„ Generative AI Use Cases: What's Legit and What's Not? | irResponsible AI EP6S01

26:54
 

Got questions or comments or topics you want us to cover? Text us!

In this episode of irResponsible AI, we discuss:
✅ GenAI is cool, but do you really need it for your use case?
✅ How can companies end up doing irresponsible AI by using GenAI for the wrong use cases?
✅ How can we get out of this problem?
What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.
đŸŽ™ïžWho are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
CHAPTERS:
00:00 - Introduction
01:28 - Misuse of Generative AI
02:27 - The glue example from Google's GenAI
03:18 - The Challenge of Public Trust and Misinformation
03:45 - Why is this a serious problem?
04:49 - Why should businesses need to worry about it?
05:32 - Auditing Generative AI Systems and Liability Risks
07:18 - Why is this GenAI hype happening?
09:20 - Competitive Pressure and Funding Influence
14:29 - How to avoid failure: investing in Problem Understanding
14:48 - Good use cases of GenAI
17:05 - LLMs are only useful if you know the answer
17:30 - Text-based video editing as a good example
21:40 - Need for GenAI literacy amongst tech execs
23:30 - Takeaways
#ResponsibleAI #ExplainableAI #podcasts #aiethics

Support the show

What can you do?
🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!
Follow us for more Responsible AI:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/

