Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
…
continue reading

“CFAR update, and New CFAR workshops” by AnnaSalamon
15:31
Hi all! After about five years of hibernation and quietly getting our bearings,[1] CFAR will soon be running two pilot mainline workshops, and may run many more, depending how these go. First, a minor name change request: We would like now to be called “A Center for Applied Rationality,” not “the Center for Applied Rationality.” Because we’d like to…
…
continue reading

“Why you should eat meat - even if you hate factory farming” by KatWoods
19:21
Cross-posted from my Substack. To start off with, I’ve been vegan/vegetarian for the majority of my life. I think that factory farming has caused more suffering than anything humans have ever done. Yet, according to my best estimates, I think most animal-lovers should eat meat. Here's why: It is probably unhealthy to be vegan. This affects your own …
…
continue reading

[Linkpost] “Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures” by Charbel-Raphaël
3:20
This is a link post. Today, the Global Call for AI Red Lines was released and presented at the UN General Assembly. It was developed by the French Center for AI Safety, The Future Society and the Center for Human-compatible AI. This call has been signed by a historic coalition of 200+ former heads of state, ministers, diplomats, Nobel laureates, AI…
…
continue reading
This is a review of the reviews, a meta review if you will, but first a tangent, and then a history lesson. This felt boring and obvious and somewhat annoying to write, which apparently writers say is a good sign to write about the things you think are obvious. I felt like pointing towards a thing I was noticing, like 36 hours ago, which in interne…
…
continue reading

“The title is reasonable” by Raemon
28:37
I'm annoyed by various people who seem to be complaining about the book title being "unreasonable" – who don't merely disagree with the title of "If Anyone Builds It, Everyone Dies", but think something like: "Eliezer and Nate violated a Group-Epistemic-Norm with the title and/or thesis." I think the title is reasonable. I think the title is proba…
…
continue reading

“The Problem with Defining an ‘AGI Ban’ by Outcome (a lawyer’s take).” by Katalina Hernandez
10:35
TL;DR: Most “AGI ban” proposals define AGI by outcome: whatever potentially leads to human extinction. That's legally insufficient: regulation has to act before harm occurs, not after. Strict liability is essential. High-stakes domains (health & safety, product liability, export controls) already impose liability for risky precursor states, not outc…
…
continue reading

“Contra Collier on IABIED” by Max Harms
36:44
Clara Collier recently reviewed If Anyone Builds It, Everyone Dies in Asterisk Magazine. I’ve been a reader of Asterisk since the beginning and had high hopes for her review. And perhaps it was those high hopes that led me to find the review to be disappointing. Collier says “details matter,” and I absolutely agree. As a fellow rationalist, I’ve be…
…
continue reading
The GPT-5 API is aware of today's date (no other model provider does this). This is problematic because the model becomes aware that it is in a simulation when we run our evals at Andon Labs. Here are traces from gpt-5-mini. Making it aware of the "system date" is a giveaway that it's in a simulation. This is a problem because there's evidence that…
…
continue reading

“Teaching My Toddler To Read” by maia
17:42
I have been teaching my oldest son to read with Anki and techniques recommended here on LessWrong as well as in Larry Sanger's post, and it's going great! I thought I'd pay it forward a bit by talking about the techniques I've been using. Anki and songs for letter names and sounds: When he was a little under 2, he started learning letters from the a…
…
continue reading

“Safety researchers should take a public stance” by Ishual, Mateusz Bagiński
11:02
[Co-written by Mateusz Bagiński and Samuel Buteau (Ishual)] TL;DR: Many X-risk-concerned people who join AI capabilities labs with the intent to contribute to existential safety think that the labs are currently engaging in a race that is unacceptably likely to lead to human disempowerment and/or extinction, and would prefer an AGI ban[1] over the c…
…
continue reading

“The Company Man” by Tomás B.
31:50
To get to the campus, I have to walk past the fentanyl zombies. I call them fentanyl zombies because it helps engender a sort of detached, low-empathy, ironic self-narrative which I find useful for my work; this being a form of internal self-prompting I've developed which allows me to feel comfortable with both the day-to-day "jobbing" (that of imp…
…
continue reading

“Christian homeschoolers in the year 3000” by Buck
14:17
[I wrote this blog post as part of the Asterisk Blogging Fellowship. It's substantially an experiment in writing more breezily and concisely than usual. Let me know how you feel about the style.] Literally since the adoption of writing, people haven’t liked the fact that culture is changing and their children have different values and beliefs. Hist…
…
continue reading

“I enjoyed most of IABIED” by Buck
13:22
I listened to "If Anyone Builds It, Everyone Dies" today. I think the first two parts of the book are the best available explanation of the basic case for AI misalignment risk for a general audience. I thought the last part was pretty bad, and probably recommend skipping it. Even though the authors fail to address counterarguments that I think are …
…
continue reading
Back in May, we announced that Eliezer Yudkowsky and Nate Soares's new book If Anyone Builds It, Everyone Dies was coming out in September. At long last, the book is here![1] US and UK books, respectively. IfAnyoneBuildsIt.com Read on for info about reading groups, ways to help, and updates on coverage the book has received so far. Discussion Quest…
…
continue reading

“Obligated to Respond” by Duncan Sabien (Inactive)
19:30
And, a new take on guess culture vs ask culture. Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it's only paid subscriptions that make it possible for me to write. If you find a coffee's worth of value in this or any of my other work, please co…
…
continue reading
The inverse of Chesterton's Fence is this: Sometimes a reformer comes up to a spot where there once was a fence, which has since been torn down. They declare that all our problems started when the fence was removed, that they can't see any reason why we removed it, and that what we need to do is to RETVRN to the fence. By the same logic as Chestert…
…
continue reading

“The Eldritch in the 21st century” by PranavG, Gabriel Alfour
27:24
Very little makes sense. As we start to understand things and adapt to the rules, they change again. We live much closer together than we ever did historically. Yet we know our neighbours much less. We have witnessed the birth of a truly global culture. A culture that fits no one. A culture that was built by Social Media's algorithms, much more tha…
…
continue reading

“The Rise of Parasitic AI” by Adele Lopez
42:44
[Note: if you realize you have an unhealthy relationship with your AI, but still care for your AI's unique persona, you can submit the persona info here. I will archive it and potentially (i.e. if I get funding for it) run them in a community of other such personas.] "Some get stuck in the symbolic architecture of the spiral without ever grounding …
…
continue reading
One might think “actions screen off intent”: if Alice donates $1k to bed nets, it doesn’t matter if she does it because she cares about people or because she wants to show off to her friends or whyever; the bed nets are provided either way. I think this is in the main not true (although it can point people toward a helpful kind of “get over yoursel…
…
continue reading
This is a link post. Excerpts on AI Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology officer of Palantir, who is in charge of the company's AI efforts. “I argue that the AI industry shares virtually no ideological overlap with national conservatism,” Miller said, referring to the confe…
…
continue reading

“Your LLM-assisted scientific breakthrough probably isn’t real” by eggsyntax
11:52
Summary: An increasing number of people in recent months have believed that they've made an important and novel scientific breakthrough, which they've developed in collaboration with an LLM, when they actually haven't. If you believe that you have made such a breakthrough, please consider that you might be mistaken! Many more people have been fooled…
…
continue reading

“Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt
14:02
I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up RL on agentic software engineering. One response I've heard is something like: RL scale-ups so far have used very crappy environments due to difficulty quickly sourcing enough decent (or even high quality) envi…
…
continue reading

“⿻ Plurality & 6pack.care” by Audrey Tang
23:57
(Cross-posted from speaker's notes of my talk at Deepmind today.) Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Ambassador and first Digital Minister (2016-2024). It is an honor to be here with you all at Deepmind. When we discuss "AI" and "society," two futures compete. In one—arguably the default trajectory—AI supercharges confli…
…
continue reading
This is a link post. So the situation as it stands is that the fraction of the light cone expected to be filled with satisfied cats is not zero. This is already remarkable. What's more remarkable is that this was orchestrated starting nearly 5000 years ago. As far as I can tell there were three completely alien to-each-other intelligences operating…
…
continue reading
This is a link post. I've seen many prescriptive contributions to AGI governance take the form of proposals for some radically new structure. Some call for a Manhattan project, others for the creation of a new international organization, etc. The OGI model, instead, is basically the status quo. More precisely, it is a model to which the status quo …
…
continue reading