Generative AI Use Cases: What's Legit and What's Not? | irResponsible AI EP6S01
Got questions or comments or topics you want us to cover? Text us!
In this episode of irResponsible AI, we discuss:
✅ GenAI is cool, but do you really need it for your use case?
✅ How can companies end up doing irresponsible AI by using GenAI for the wrong use cases?
✅ How can we get out of this problem?
What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.
🎙️ Who are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren't at the table don't end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He's the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts' personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
CHAPTERS:
00:00 - Introduction
01:28 - Misuse of Generative AI
02:27 - The Google GenAI glue example
03:18 - The Challenge of Public Trust and Misinformation
03:45 - Why is this a serious problem?
04:49 - Why do businesses need to worry about it?
05:32 - Auditing Generative AI Systems and Liability Risks
07:18 - Why is this GenAI hype happening?
09:20 - Competitive Pressure and Funding Influence
14:29 - How to avoid failure: investing in Problem Understanding
14:48 - Good use cases of GenAI
17:05 - LLMs are only useful if you know the answer
17:30 - Text-based video editing as a good example
21:40 - Need for GenAI literacy amongst tech execs
23:30 - Takeaways
#ResponsibleAI #ExplainableAI #podcasts #aiethics