Anonymous founders of the Effective Accelerationism (e/acc) movement @Bayeslord and Beff Jezos (@BasedBeff) join Erik Torenberg, Dan Romero, and Nathan Labenz to debate their views on AI safety. We record our interviews with Riverside. Go to bit.ly/Riverside_MoZ and use code ZEN for 20% off.
(3:00) Intro to effective accelerationism
(8:00) Differences between effective accelerationism and effective altruism
(23:00) Effective accelerationism is bottom-up
(42:00) Transhumanism
(46:00) “Equanimity amidst the singularity”
(48:30) Why AI safety is the wrong frame
(56:00) Pushing back against effective accelerationism
(1:06:00) The case for AI safety
(1:24:00) Upgrading civilizational infrastructure
(1:33:00) Effective accelerationism is antifragile
(1:39:00) Will we botch AI like we botched nuclear?
(1:46:00) Hidden costs of emphasizing downsides
(2:00:00) Are we in the same position as Neanderthals, before humans?
(2:09:00) “Doomerism has an unpriced opportunity cost of upside”
More show notes and reading material are available on our Substack: momentofzen.substack.com/
Thank you to Secureframe for sponsoring (use code "Moment of Zen" for a 20% discount) and to Graham Bessellieu for production.
Music License: AUWPOHS6DAPPCYV1