
Ep 7 - Responding to a world with AGI - Richard Dazeley (Prof AI & ML, Deakin University)

1:10:05
 
The content is provided by Soroush Pour. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Soroush Pour or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://hu.player.fm/legal.

In this episode, we speak with Prof Richard Dazeley about the implications of a world with AGI and how we can best respond. We discuss what he thinks AGI will actually look like, as well as the technical and governance responses we should put in place today and in the future to ensure a safe and positive future with AGI.
Prof Richard Dazeley is the Deputy Head of School at the School of Information Technology at Deakin University in Melbourne, Australia. He’s also a senior member of the International AI Existential Safety Community of the Future of Life Institute. His research at Deakin University focuses on aligning AI systems with human preferences, a field better known as “AI alignment”.
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Richard --
* Bio: https://www.deakin.edu.au/about-deakin/people/richard-dazeley
* Twitter: https://twitter.com/Sprocc2
* Google Scholar: https://scholar.google.com.au/citations?user=Tp8Sx6AAAAAJ
* Australian Responsible Autonomous Agents Collective: https://araac.au/
* Machine Intelligence Research Lab at Deakin Uni: https://blogs.deakin.edu.au/mila/
-- Further resources --
* [Book] Life 3.0 by Max Tegmark: https://en.wikipedia.org/wiki/Life_3.0
* [Policy paper] FLI - Policymaking in the Pause: https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf
* Cyc project: https://en.wikipedia.org/wiki/Cyc
* Paperclips game: https://en.wikipedia.org/wiki/Universal_Paperclips
* Reward misspecification - See "Week 2" of this free online course: https://course.aisafetyfundamentals.com/alignment
-- Corrections --
From Richard, referring to dialogue around the ~4 min mark:
"it was 1956 not 1957. Minsky didn’t make his comment until 1970. It was H. A. Simon and Allen Newell that said ten years after the Dartmouth conference and that was in 1958."
Related, other key statements & dates from Wikipedia (https://en.wikipedia.org/wiki/History_of_artificial_intelligence):
* 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."
* 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."
* 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
* 1970, Marvin Minsky: "In from three to eight years we will have a machine with the general intelligence of an average human being."
Recorded July 10, 2023
