AI researcher burnout: the greatest existential threat to humanity?
Read the urgent conclusions of our new report
Drawing on multiple interviews with members of the AI research community, bolstered by mathematical modelling, we find that the future of humanity hinges on whether we take sufficiently urgent action to safeguard AI researchers from potential burnout.
The field of long-termism has previously understood the greatest existential risks to humanity to include all-powerful superintelligence, nuclear proliferation, bioweapons, nanotechnology, etc.
Our new report, involving extensive interviews and rigorous mathematical modelling, shows a far greater risk that has been hiding in plain sight: AI researcher burnout.
Our calculation is simple:
A burnout rate of 0.001% of AI alignment researchers per year
Leads to a 0.002% increase in the likelihood of AGI being misaligned
Thereby increasing the existential risk of AGI to humanity by 0.003%
Which, given how many humans we expect to live until the end of the universe, means:
1,000,000,000,000,000,000,000,000 potential future humans are murdered every time an AI alignment researcher has a bad day.
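For readers who prefer their doomsday arithmetic in executable form, here is a minimal sketch of the calculation above. Note that the total number of future humans is an assumed figure (roughly 3.3 × 10^28); the report itself does not state one.

```python
# Minimal sketch of the report's back-of-envelope arithmetic.
# FUTURE_HUMANS is an assumed figure, not one given in the report.

FUTURE_HUMANS = 3.3e28                      # assumed humans until the end of the universe
BURNOUT_RATE = 0.00001                      # 0.001% of alignment researchers burn out per year
MISALIGNMENT_INCREASE = 2 * BURNOUT_RATE    # 0.002% rise in the likelihood of misaligned AGI
XRISK_INCREASE = 3 * BURNOUT_RATE           # 0.003% rise in existential risk to humanity

lives_lost_per_bad_day = FUTURE_HUMANS * XRISK_INCREASE
print(f"{lives_lost_per_bad_day:.1e} potential future humans per bad day")
# ~1e24, i.e. 1,000,000,000,000,000,000,000,000
```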
Digging into this further, our report finds that:
The quality of the first coffee consumed in the morning by any given AI alignment researcher has outsize consequences, with a poor brew leading to a population the size of China being wiped out.
Each painful break-up between AI alignment researchers causes a rupture to humanity equivalent to 1,000 Hiroshimas. All AI alignment researchers should therefore be enrolled in couples therapy, irrespective of whether they are in an active relationship — or, if they are polyamorous, in pre-emptive thruples, fourples or fiveples therapy.
There is an opportunity to safeguard the future by immediately building new theme parks close to AI alignment hubs. Assuming the parks grant AI researchers maximum-priority queue jumps, the contentment they generate for people working in the field will save trillions of potential future human lives, as well as quite a few chickens.
Crucially, our research finds that these theme parks should focus on water rides. Potentially nausea-inducing rollercoasters could set off a devastating butterfly effect that would cause AGI to spiral into a doom loop.
Do you know an unhappy AI alignment researcher? Please let us know as soon as possible — we will arrange for a crisis team to be dispatched.
If you want to share this article, it might be better to use this link from the CAAAC site.
♦️ ABOUT ATTENTION
You’re reading Attention, a new publication to make tech fun again (about).
We’re the makers of the Center for the Alignment of AI Alignment Centers (CAAAC) — you probably got here after signing up to CAAAC’s newsletter.
We’ll keep sending you more cutting-edge analysis from CAAAC for a few more weeks. After that, this newsletter will go back to being regular old Attention.
We won’t be offended if you don’t want to stick around — but if you do, you’ll be rewarded with creative, fun projects that cast light on the world of tech in a way we don’t think anywhere else does. Here’s some other stuff we’ve done:
The Box — the world’s first anti-deepfake wearable, coming to MozFest this year
A review of Prime, the Amazon restaurant
News: Elon Musk gives away billions to feed starving children on Mars
News: OpenAI releases new AI agent for your hungover colleague Greg
If you need or want convincing that this Attention thing is worth sticking around for, I’m happy to give you a sneak preview of our next project — just hit reply to this email.
Cheers,
Louis (founder and editor of Attention)
✏️ WRITE/CODE WITH US
Want to write/code something fun about AI, or tech in general? We’d love to chat. Email hi [at] louis [dot] work.
🔗 FOR YOUR ATTENTION
Where I put things that you might enjoy.
Back soon with the next piece from CAAAC!
Photo by Resume Genius