Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://hu.player.fm/legal.
“Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades.” by Raemon
I wrote my recent Accelerando post mostly to stand on its own as a takeoff scenario. But the reason it's on my mind is that, if I imagine being very optimistic about how a smooth AI takeoff goes, but where an early step wasn't "fully solve the unbounded alignment problem, and then end up with extremely robust safeguards[1]"...
...then my current guess is that a Reasonably Nice Smooth Takeoff still results in all or at least most biological humans dying (or "dying out", or at best being ambiguously-consensually uploaded), like, 10-80 years later.
Slightly more specific about the assumptions I'm trying to inhabit here:
- It's politically intractable to get a global halt or globally controlled takeoff.
- Superintelligence is moderately likely to be somewhat nice.
- We'll get to run lots of experiments on near-human AI that will be reasonably informative about how things will generalize to the somewhat-superhuman level.
- We get to ramp up [...]
Outline:
(03:50) There is no safe muddling through without perfect safeguards
(06:24) i. Factorio
(06:27) (or: It's really hard to not just take people's stuff, when they move as slowly as plants)
(10:15) Fictional vs Real Evidence
(11:35) Decades. Or: thousands of years of subjective time, evolution, and civilizational change.
(12:23) This is the Dream Time
(14:33) Is the resulting posthuman population morally valuable?
(16:51) The Hanson Counterpoint: So you're against ever changing?
(19:04) Can't superintelligent AIs/uploads coordinate to avoid this?
(21:18) How Confident Am I?
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
October 2nd, 2025
Source:
https://www.lesswrong.com/posts/v4rsqTxHqXp5tTwZh/nice-ish-smooth-takeoff-with-imperfect-safeguards-probably
---
Narrated by TYPE III AUDIO.