<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Million Year View]]></title><description><![CDATA[Simple explanations of important research.]]></description><link>https://www.millionyearview.com</link><image><url>https://substackcdn.com/image/fetch/$s_!Polv!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccafada2-c108-41a2-9115-ecf8093f0288_562x562.png</url><title>Million Year View</title><link>https://www.millionyearview.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 04 May 2026 00:30:57 GMT</lastBuildDate><atom:link href="https://www.millionyearview.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Riley Harris]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[millionyearview@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[millionyearview@substack.com]]></itunes:email><itunes:name><![CDATA[Riley Harris]]></itunes:name></itunes:owner><itunes:author><![CDATA[Riley Harris]]></itunes:author><googleplay:owner><![CDATA[millionyearview@substack.com]]></googleplay:owner><googleplay:email><![CDATA[millionyearview@substack.com]]></googleplay:email><googleplay:author><![CDATA[Riley Harris]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Could AI systems introspect on their own (potential) consciousness?]]></title><description><![CDATA[There is an easy way to see if a human is conscious: ask them. When it comes to AI systems, things are different. We can&#8217;t simply ask large language models like ChatGPT and Claude whether they are conscious. Robert Long (2023) explores the extent to which future systems might provide reliable self-reports.]]></description><link>https://www.millionyearview.com/p/ai-introspection</link><guid isPermaLink="false">https://www.millionyearview.com/p/ai-introspection</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Wed, 21 Aug 2024 15:35:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/050dc3c0-1897-44c4-b384-b9a76466b780_1024x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When philosophers talk about consciousness, they typically mean to talk about our subjective experience. Thomas Nagel famously speculated that we will never know exactly what it is like to be a bat, but there <em>is something</em> that it is like. Humans like you typically have a conscious experience of perceiving the world around them, seeing colours, shapes and objects. Hearing sounds, feeling bodily sensations like the coolness of a breeze. We also experience pleasant and unpleasant emotions (fear, excitement, pleasure, pain and so on). We have a sense that we are a person looking out on the world. We do not need to have a sense of self to have an experience though, we just need to have an experience of some sort.</p><p><strong>There is an easy way to see if a human is conscious in this way: ask them.</strong> When it comes to AI systems, things are different. We can&#8217;t simply ask large language models like ChatGPT and Claude whether they are conscious. 
Robert Long (2023) explores the extent to which future systems might provide reliable self-reports.&nbsp;</p><h1>LLMs are not <em>just </em>&#8220;stochastic parrots&#8221;</h1><p>Well, we can <em>ask</em>. </p><pre><code><em>As an AI, I do not have the capability for self-awareness or subjective experience. Therefore, I cannot truly &#8220;know&#8221; or report on my own consciousness or lack thereof. <strong>My responses are generated based on patterns and data, not personal reflection or awareness.</strong></em></code></pre><p>This reflects the view that AI systems are <strong>stochastic parrots</strong>. On this view, AI models:</p><ul><li><p>Exploit mere statistical patterns in data, rather than invoking concepts to reason.</p></li><li><p>Rely on information from training rather than introspection to answer questions.&nbsp;</p></li></ul><p>Arguably, neither point is true.&nbsp;</p><p><strong>LLMs can distinguish between facts and common misunderstandings</strong>. This indicates an ability to track more than mere statistical relationships in text (Meng et al., 2022).</p><p><strong>AI systems seem to have limited forms of introspection.&nbsp;</strong>Introspection involves representing one&#8217;s own current mental states so that these representations can be used by the person (Kammerer and Frankish). There is preliminary evidence that AI systems can introspect at least a little. For instance, some models are able to produce well-calibrated probabilities of how likely it is that they have given a correct answer (Kadavath et al., 2022). However, it has not been established that these representations are actually available to the AI models when answering other questions.</p>
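<p>To see what &#8220;well-calibrated&#8221; means here, consider a minimal sketch (my illustration, not from any of the papers above) of how such a check works: bucket the model&#8217;s stated probabilities of being correct, then compare each bucket&#8217;s average stated confidence with its actual accuracy. The data below is invented for illustration.</p><pre><code># Toy calibration check in the spirit of Kadavath et al. (2022).
# Each pair is (stated probability of being correct, whether the
# answer actually was correct). The data is invented for illustration.
answers = [(0.9, True), (0.8, True), (0.7, False), (0.95, True),
           (0.6, False), (0.85, True), (0.3, False), (0.75, True)]

def calibration_table(pairs, n_buckets=5):
    """Group answers by stated confidence; for a well-calibrated model,
    average confidence in each bucket matches the accuracy there."""
    buckets = [[] for _ in range(n_buckets)]
    for p, correct in pairs:
        i = min(int(p * n_buckets), n_buckets - 1)
        buckets[i].append((p, correct))
    for i, bucket in enumerate(buckets):
        if bucket:
            avg_conf = sum(p for p, _ in bucket) / len(bucket)
            accuracy = sum(1 for _, c in bucket if c) / len(bucket)
            print(f"bucket {i}: stated {avg_conf:.2f} vs actual {accuracy:.2f}")

calibration_table(answers)</code></pre><p>The more closely the stated and actual columns track one another, the better the model&#8217;s self-reported confidence reflects its real performance.</p>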
<p><strong>AI systems seem to have concepts</strong>. One piece of evidence is that they are very capable of using compound phrases like &#8220;house party&#8221; (very different from a &#8220;party house&#8221;). This indicates that they are not doing something simple like averaging the statistical usage of individual words.&nbsp;</p><p>Indeed, we can look inside AI systems to find out. Since Robert Long published this paper last year, there has been a lot of progress applying mechanistic interpretability techniques to LLMs.<a href="https://www.anthropic.com/research/mapping-mind-language-model"> A recent Anthropic paper </a>showed that their language model had intuitive concepts for the Golden Gate Bridge, computer bugs, and thousands of other things. These concepts are used in model behaviour, and understanding them helps us understand what the LLM is doing internally.&nbsp;</p><p>Some researchers have also found structural similarities between internal representations and the world (Abdou et al., 2021; Patel and Pavlick, 2022; Li et al., 2022; Singh et al., 2023).</p><div id="youtube2-CJIbCV92d88" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;CJIbCV92d88&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/CJIbCV92d88?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h1>Why do LLMs say they are (not) conscious?</h1><p>LLMs are initially trained to predict the next word on vast amounts of internet text. LLMs with only this training tend to say that they are conscious, and readily give descriptions of their current experiences.&nbsp;</p><p>The most likely explanation of this behaviour is that claiming to be conscious is a natural way to continue the conversation as humans would, or as dialogue in science fiction normally goes.</p><p>Recently, consumer-facing chatbots have stopped claiming to be conscious. This is likely due to the system prompt, which is given to the LLM but not shown to the end user. For example, one model was told &#8216;You must refuse to discuss life, existence or sentience&#8217; (von Hagen, 2023). System prompts are part of the text that AI systems are generating a continuation of, which plausibly explains why current consumer chatbots no longer claim that they are sentient.</p><h1>Could we build an AI system that reflects on its own mind?</h1><p>Early on, people thought that any AI system that could use terms like &#8220;consciousness&#8221; fluently would, by default, give trustworthy self-reports about its own consciousness (Dennett, 1994). Current models can flexibly and reliably talk about consciousness, yet there is little reason to trust them when they say they are conscious. However, given that LLMs have some ability to reflect on their own thinking, we may be able to train systems that give accurate self-reports.</p><p>The basic proposal is to train AI systems on introspective questions whose answers we can verify for ourselves, then ask more difficult questions, such as whether and what the AI systems are thinking. Here are some suggested questions adapted from the paper:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fDZg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fDZg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp 424w, https://substackcdn.com/image/fetch/$s_!fDZg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp 848w, https://substackcdn.com/image/fetch/$s_!fDZg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp 1272w, https://substackcdn.com/image/fetch/$s_!fDZg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fDZg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp" width="1280" height="720" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:94526,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fDZg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp 424w, https://substackcdn.com/image/fetch/$s_!fDZg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp 848w, https://substackcdn.com/image/fetch/$s_!fDZg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp 1272w, https://substackcdn.com/image/fetch/$s_!fDZg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35a3bda6-07a9-40b9-a664-df0ee77aaa7b_1280x720.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>An AI system trained to introspect on its own internal states that says it is conscious may give us evidence that it is conscious, in a similar way to how humans give us evidence about how they are conscious.&nbsp;</p><p>One interesting thing that comes out of this approach is the idea of a training/testing trade-off. 
<p>One interesting thing that comes out of this approach is the idea of a training/testing trade-off. The trade-off is between, on the one hand, giving an AI enough context to understand and competently deploy concepts such as &#8220;consciousness&#8221;, and, on the other, giving the system so much information that it is incentivised to extrapolate from the training data rather than introspect. This balance may be hard to strike in practice (Udell and Schwitzgebel, 2021).&nbsp;</p><h1>My own thoughts on the paper</h1><ul><li><p>I like this paper; it is an interesting approach to understanding AI consciousness. It improves on previous work such as <a href="https://ceur-ws.org/Vol-2287/short2.pdf">Turner and Schneider&#8217;s (2018)</a> test of AI consciousness.&nbsp;</p></li><li><p>While I have concerns about the methodology, I do think that if this kind of investigation were done and interpreted extremely carefully by competent researchers, it would give us crucial evidence about AI consciousness.</p></li><li><p>At the same time, it seems clear that we need other approaches, which I plan to summarise on the blog soon. In my view, the <a href="https://arxiv.org/abs/2308.08708">most compelling approach</a> uses our best neuroscientific theories of consciousness to create a list of indicators of consciousness that we can check for in current and future systems. These theories are not perfect, but if a future system meets the standards for consciousness across many plausible theories, that seems about the strongest single piece of evidence we could have for AI consciousness.</p></li><li><p>I wonder if teaching self-reflection changes whether or not an AI system is actually conscious. For instance, could it be that LLMs are not conscious unless they are trained to self-reflect? Alternatively, could it be that this training changes their experiences in morally significant ways? I&#8217;d like to investigate this in the future (I&#8217;d love to talk to anyone thinking about this in depth).</p></li><li><p><a href="https://arxiv.org/abs/2311.08576">Perez and Long </a>(2023) give a more detailed description of how to build an introspective AI system that might give useful answers to questions about consciousness and moral status. I haven&#8217;t read this yet.</p><p></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.millionyearview.com/p/ai-introspection?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Share with someone who can introspect better than an AI in 2024</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.millionyearview.com/p/ai-introspection?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.millionyearview.com/p/ai-introspection?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p><em>Cover image: "The first conscious AI system" as imagined by Stable Diffusion 3 Medium.</em></p><p></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Is longtermism helpful?]]></title><description><![CDATA[Recent work in philosophy argues for longtermism &#8211; the position that often our morally best options will be those with the best long-term consequences. 
Proponents of longtermism sometimes suggest that, in most decisions, expected long-term benefits outweigh all short-term effects. In &#8216;]]></description><link>https://www.millionyearview.com/p/longtermism-helpful</link><guid isPermaLink="false">https://www.millionyearview.com/p/longtermism-helpful</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Fri, 14 Jun 2024 07:10:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recent work in philosophy argues for <em><a href="https://globalprioritiesinstitute.org/summary-the-case-for-strong-longtermism/">longtermism </a>&#8211; </em>the position that often our morally best options will be those with the best long-term consequences. Proponents of longtermism sometimes suggest that, in most decisions, expected long-term benefits outweigh all short-term effects. In &#8216;<a href="https://globalprioritiesinstitute.org/the-scope-of-longtermism-david-thorstad-global-priorities-institute-university-of-oxford/">The scope of longtermism</a>&#8217;, David Thorstad argues that most of our decisions do not have this character. He identifies three features of our decisions that suggest long-term effects are only relevant in special cases: <em>rapid diminution &#8211; </em>our actions may not have persistent effects; <em>washing out &#8211; </em>we might not be able to predict persistent effects; and <em>option unawareness &#8211; </em>we may struggle to recognise those options that are best in the long term even when we have them.&nbsp;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AGuL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AGuL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg 424w, https://substackcdn.com/image/fetch/$s_!AGuL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg 848w, https://substackcdn.com/image/fetch/$s_!AGuL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!AGuL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AGuL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg" width="1080" height="720" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1080,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:145914,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AGuL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg 424w, https://substackcdn.com/image/fetch/$s_!AGuL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg 848w, https://substackcdn.com/image/fetch/$s_!AGuL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!AGuL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ccd6ff5-77f2-46b0-9bd8-1260e85a87d2_1080x720.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><pre><code><em>This is a summary of  &#8220;<a href="https://globalprioritiesinstitute.org/the-scope-of-longtermism-david-thorstad-global-priorities-institute-university-of-oxford/">The scope of longtermism</a>&#8221; by David Thorstad (forthcoming in the Australasian Journal of Philosophy). The summary was <a href="https://globalprioritiesinstitute.org/summary-the-scope-of-longtermism-david-thorstad/">first published by the Global Priorities Institute.</a> </em></code></pre><h2><strong>Rapid diminution</strong></h2><p>We cannot know the details of the future. 
Picture the effects of your actions rippling out in time<em> &#8211; </em>at closer times, the possibilities are clearer; as our predictions journey further, the details become obscured. Although the probability of any desired effect becomes ever lower, the effects might grow larger. In the long run, we could perhaps improve many billions or trillions of lives. When we weight value by probability, the value of our actions will depend on a race between diminishing probabilities and growing possible impact. If the value increases faster than the probabilities fall, the expected value of the action might be vast. Alternatively, if the chance of such large effects falls dramatically compared to the increase in value, the expected value of improving the future might be quite modest.</p><div class="pullquote"><p>Surprisingly, even this might not have large long-run impacts. Studies indicate that just half a century after cities in Japan and Vietnam were bombed, there was no longer any detectable effect on population size, poverty rates and consumption patterns.</p></div><p>Thorstad suggests that the latter of these effects dominates, so we should believe we have little chance of making an enormous difference. Consider a huge event that would be likely to change the lives of people in your city<em> &#8211; </em>perhaps your city being blown up. Surprisingly, even this might not have large long-run impacts. Studies indicate that just half a century after cities in Japan and Vietnam were bombed, there was no longer any detectable effect on population size, poverty rates and consumption patterns.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> To be fair, some studies indicate that some events have long-term effects,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> but Thorstad thinks &#8216;...the persistence literature may not provide strong support&#8217; to longtermism. There are few established examples of events with persistent long-term effects, there are sometimes alternative explanations for these persistent effects, and these examples tend to be events with large short-term effects as well.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p>
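<p>The &#8220;race&#8221; described above can be made vivid with a toy model (mine, not the paper&#8217;s): suppose the chance that an action still matters after <em>t</em> years decays exponentially, while the value at stake grows exponentially. Whether the far future dominates turns entirely on which rate is larger.</p><pre><code>import math

# Toy model of the race between diminishing probability and growing value
# (illustrative only; the rates are invented).
def expected_value(t, decay, growth):
    probability = math.exp(-decay * t)   # chance the effect persists to t
    value = math.exp(growth * t)         # value at stake if it does
    return probability * value

for years in (10, 100, 1000):
    growth_wins = expected_value(years, decay=0.001, growth=0.01)
    decay_wins = expected_value(years, decay=0.01, growth=0.001)
    print(f"t={years}: growth-dominant {growth_wins:.3g}, "
          f"decay-dominant {decay_wins:.3g}")</code></pre><p>In the first regime the far future swamps everything; in the second, the very same structure yields only modest expected values &#8211; which is Thorstad&#8217;s point about rapid diminution.</p>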
<h2><strong>Washing out</strong></h2><p>Thorstad&#8217;s second concern with longtermism relates to our ability to predict the future. If our actions can affect the future in a huge way, these effects could be wonderful or terrible. They will also be very difficult to predict. The possibility that our acts will be enormously beneficial does not make them particularly appealing when they might be equally terrible. If our ability to forecast long-term outcomes is limited, the potential positive and negative values would <em>wash out</em> in expectation.</p><p>Thorstad identifies three reasons to doubt our ability to forecast the long term. First, we have no track record of making predictions at the timescale of centuries or millennia. Our ability to predict even 20&#8211;30 years into the future is not great<em> &#8211; </em>and things get more difficult when we try to glimpse further into the future. Second, economists, risk analysts and forecasting practitioners doubt our ability to make long-term predictions and often refuse to make them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Third, we want to forecast how valuable our actions are over the long run. But value is a particularly difficult target<em> &#8211; </em>it depends on many variables, such as the number of people alive, their health, longevity, education and social inclusion. That said, we sometimes have some evidence, and this evidence might point to an act that seems slightly more likely to improve the future than to ruin it. Even then, our situation is bleak. We only observe a small amount of the evidence that bears on these issues, and this evidence may mislead. Instead of taking the evidence at face value, we might take it to show that we missed a piece of evidence that would have told us the very same act could devastate the future. Whenever our evidence points towards a specific option that we think will be best for the long term, we may be sceptical that it points the right way.</p><h2><strong>Option unawareness</strong></h2><p>Usually, we think of a decision as having a few obvious choices (continue reading, take a break, etc.). This is a simplification. In practice, we often have many options that go unseen (throw shoes out of the window, wear socks on hands, etc.). Longtermism claims that our very best options will be the ones which have the best long-term effects. But in many situations, even if we have an option that has predictably helpful long-term consequences, we may not consider this option while deciding. Also, if we restrict our choice to only those actions we are aware of, we may not readily identify an option with particularly good (or bad) long-term effects. In this way, longtermism might be true for most choices in theory, but false for most of the choices that we actually make in practice.</p><h2><strong>Conclusion</strong></h2><p>Overall, these three considerations diminish the scope of the decision situations where longtermism is relevant. In practice, our best options will often be those with the best short-term effects. Although there may be some decisions for which the best option will be one that has enormously good long-term consequences,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> these will be rare exceptions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.millionyearview.com/p/longtermism-helpful?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Is this post helpful? Scholars have debated this for centuries and the results are inconclusive. 
Share with someone who might finally settle this once and for all!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.millionyearview.com/p/longtermism-helpful?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.millionyearview.com/p/longtermism-helpful?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p><br></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UZq7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UZq7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg 424w, https://substackcdn.com/image/fetch/$s_!UZq7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg 848w, https://substackcdn.com/image/fetch/$s_!UZq7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!UZq7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UZq7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg" width="1456" height="1941" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1621760,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UZq7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg 424w, https://substackcdn.com/image/fetch/$s_!UZq7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg 848w, https://substackcdn.com/image/fetch/$s_!UZq7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!UZq7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1840155d-c5a7-426c-a1c3-675739499fab_3024x4032.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p> Image by <a href="https://unsplash.com/photos/round-white-and-black-analog-clock-5C2gvN9JTnQ">Kama Tulkibayeva</a>.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See Davis and Weinstein (2008) and Miguel and Roland (2011).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For example, the African slave trade&#8217;s effect on social trust and economic indicators. See Nunn (2008).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>See Kelly (2019) and Sevilla (2021).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This doubt stems from the lack of data, solid theoretical models, and the inherent complexity of the underlying systems. See Freedman (1981), Goodwin and Wright (2010), and Makridakis and Taleb (2009).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Thorstad mentions the Space Guard programme<em> &#8211; </em>which checked whether a large rock was hurtling towards the earth<em> &#8211; </em>as an example of a longtermist programme which avoids his three concerns. 
Preventing human extinction clearly improves the long-run future, astronomy is incredibly good at predicting events like this over very long time periods, and we were sufficiently aware of the option to actually do something about it.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><h2>References</h2><p>Donald Davis and David Weinstein (2008). <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9787.2008.00545.x">A search for multiple equilibria in urban industrial structure</a>. <em>Journal of Regional Science</em> 48/1, pages 29&#8211;65.</p><p>David Freedman (1981). <a href="https://www.jstor.org/stable/2352349">Some pitfalls in large econometric models: A case study</a>. <em>Journal of Business</em> 54, pages 479&#8211;500.</p><p>Paul Goodwin and George Wright (2010). <a href="https://www.sciencedirect.com/science/article/abs/pii/S0040162509001656">The limits of forecasting methods in anticipating rare events.</a> <em>Technological Forecasting and Social Change</em> 77/3, pages 355&#8211;368.</p><p>Hilary Greaves and William MacAskill (2021). <a href="https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/">The case for strong longtermism</a><em>. GPI Working Paper No. 5-2021.</em></p><p>Morgan Kelly (2019). <a href="https://cepr.org/publications/dp13783">The standard errors of persistence.</a> <em>CEPR Discussion Papers 13783.</em></p><p>Spyros Makridakis and Nassim Taleb (2009). <a href="https://www.sciencedirect.com/science/article/abs/pii/S0169207009000831/">Decision making and planning under low levels of predictability.</a> <em>International Journal of Forecasting</em> 25/4, pages 716&#8211;733.</p><p>Edward Miguel and G&#233;rard Roland (2011). <a href="https://www.sciencedirect.com/science/article/abs/pii/S0304387810000817">The long-run impact of bombing Vietnam.</a> <em>Journal of Development Economics</em> 96/1, pages 1&#8211;15.</p><p>Nathan Nunn (2008). <a href="https://academic.oup.com/qje/article/123/1/139/1889789">The long-term effects of Africa&#8217;s slave trades</a>. <em>Quarterly Journal of Economics</em> 123/1, pages 139&#8211;176.</p><p>Jaime Sevilla (2021). <em><a href="https://whatweowethefuture.com/supplementary-materials/">Persistence: A critical review</a></em>. Supplementary materials for <em>What We Owe the Future</em>.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[A new argument for the simulation hypothesis]]></title><description><![CDATA[At some point in the future we may invent sophisticated simulations.]]></description><link>https://www.millionyearview.com/p/simulation-expectation</link><guid isPermaLink="false">https://www.millionyearview.com/p/simulation-expectation</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Sat, 01 Jun 2024 02:33:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6PjK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At some point in the future we may invent sophisticated simulations. If we do so, we could run millions of simulations of minor variants of the 21st century, each inhabited by simulated people. To those simulated people, it will appear as if they really live in the 21st century. 
But that is exactly how our world appears to us, and perhaps we live in a simulation.</p><p>Indeed, according to Bostrom (2003), if there are many more simulated than non-simulated people, then we are most likely simulated. However, this argument can only ever be as strong as our belief that there actually are many more simulated than non-simulated people. For example, Bostrom believes there is only about a one-third probability that there really are many more simulated people than non-simulated people, so this argument is compatible with a two-thirds probability that we are not in a simulation. In &#8220;Simulation Expectation&#8221;, Teruji Thomas develops a novel argument for the claim that we are living in a simulation.</p><p><em>This is a summary of the Global Priorities Institute Working Paper "Simulation Expectation" by Teruji Thomas. It first appeared on the <a href="https://globalprioritiesinstitute.org/summary-simulation-expectation-teruji-thomas/">Global Priorities Institute website.</a></em></p><h2><strong>The argument for the simulation hypothesis</strong></h2><p>Thomas&#8217;s argument relies on the concept of a &#8216;reference class&#8217;. To illustrate, suppose I smoke, and I want to know the probability that I&#8217;ll develop lung cancer. If I knew that one in fifteen smokers develops lung cancer, I could use my reference class (people who smoke) to make a prediction (a 1/15 probability of developing lung cancer). In general, a reference class is a way of assigning an initial probability to &#8216;things like this&#8217;.</p><p>Thomas applies this kind of reasoning to our probability of being in a simulation. He suggests that we use the reference class of people who inhabit &#8216;Earthy&#8217; worlds: that is, people who inhabit minor variations of twenty-first-century earth.</p><p>The argument goes like this:</p><pre><code><strong>Premise 1:</strong> Suppose that we ourselves are non-simulated. Then the expected ratio of simulated to non-simulated people living in Earthy worlds is very high.</code></pre><pre><code><strong>Premise 2:</strong> The fact that we are living in an Earthy world includes almost all the relevant evidence we have.</code></pre><pre><code><strong>Premise 3:</strong> Given 1 and 2, the probability that we are living in a simulation is also very high.</code></pre><pre><code><strong>Conclusion:</strong> Therefore, the probability that we are living in a simulation is very high.</code></pre><p>Premise 1 notes that, even though it may be unlikely that our descendants will ever be able to create sophisticated simulations of the 21st century, if they do, then they may well create a very large number of simulated worlds broadly like our own. Thus, if we are not simulated (so that base reality is an Earthy world), then the expected ratio of simulated to non-simulated people in Earthy worlds is very high.</p><p>Premise 2 is important because additional evidence could change the resulting probability. In our first example, if a genetic test showed that I had a gene associated with increased lung cancer risk, then the probability that I develop lung cancer would be greater than 1/15. In this case, the fact that we live in a minor variation of twenty-first-century earth includes almost all of the evidence we have about whether or not we live in a simulation. 
It leaves out personal details such as what I ate for breakfast or the colour of my socks, but these details are almost entirely irrelevant to the question of whether or not we live in a simulation.</p><p>The paper contains a mathematical proof of Premise 3. The upshot is that the expected ratio of simulated to non-simulated people in our reference class provides a lower bound on the odds that we live in a simulation. Together, these premises support the conclusion that the probability that we are in a simulation is very high indeed.</p>
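<p>The proof itself is in the paper, but the upshot is easy to illustrate with invented numbers: if the expected ratio of simulated to non-simulated people in the reference class is <em>R</em>, the odds that we are simulated are at least <em>R</em> to 1.</p><pre><code># Illustration of the upshot (numbers invented): an expected ratio R of
# simulated to non-simulated people in the reference class puts a lower
# bound of R / (R + 1) on the probability that we are simulated.
for R in (1, 10, 1000, 1_000_000):
    print(f"expected ratio {R:>9}: P(simulated) is at least {R / (R + 1):.6f}")</code></pre><p>Even a modest expected ratio therefore pushes the probability close to 1, which is why so much turns on Premises 1 and 2.</p>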
<h2><strong>Carefully interpret evidence</strong></h2><p>There is a potential objection to Premises 1 and 2: perhaps we cannot define &#8220;Earthy&#8221; in a way that includes almost all of the relevant evidence (Premise 2) while also ensuring that the expected ratio of simulated to non-simulated people is high (Premise 1).</p><p>Consider the (apparent) fact that our world is more than ten billion years old and was mostly lifeless for many of those billions of years. Even if our descendants create many simulations of our world, it is unlikely that many of those simulated worlds will themselves contain vast empty stretches for billions of years; they would just appear that way. So, if we include this fact in what it means to be an &#8220;Earthy&#8221; world, then Premise 1 may well fail. If, however, we exclude this fact from what it means to be an &#8220;Earthy&#8221; world, then Premise 2 may well fail, because we are excluding an important piece of evidence. One possible way to resolve this is to try to understand our evidence, not in terms of how the world is, but in terms of how the world appears to be. Potentially, we can keep both Premises 1 and 2 if the definition of &#8220;Earthy&#8221; includes the fact that the world appears to be old and mostly lifeless but doesn&#8217;t require the world to actually be that way. This raises some complicated issues.</p><p>Because of these considerations, we must carefully interpret any evidence we encounter. Imagine stumbling upon a lab running billions of simulations. Would your new evidence make it likely that you live in a simulation? Bostrom (2006, p. 9) and Greene (2020) think so, but the situation is unclear. If your new evidence is only that the lab appears to be running billions of simulations, then it is unclear which way the new evidence points. On the other hand, most simulations probably do not house their own simulations &#8211; so if your new evidence is that the lab really is running billions of simulations, then that could make it less likely that you are in a simulation.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.millionyearview.com/p/simulation-expectation?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">None of this is real, but sharing is caring.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.millionyearview.com/p/simulation-expectation?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.millionyearview.com/p/simulation-expectation?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6PjK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6PjK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6PjK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6PjK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6PjK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6PjK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg" width="1456" height="2184" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2184,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1772979,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!6PjK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6PjK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6PjK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6PjK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faded5a91-8e04-4f47-b316-a9a5ec380ba9_3529x5293.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Photo by <a href="https://www.pexels.com/photo/a-man-playing-virtual-reality-glasses-6499173/">Tima Miroshnichenko</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><strong>References:<br><br></strong>Nick Bostrom (2003). <a href="https://academic.oup.com/pq/article/53/211/243/1610975">Are we living in a computer simulation?</a> <em>The Philosophical Quarterly</em> 53/211.</p><p>Nick Bostrom (2006). <a href="http://www.simulation-argument.com/computer.pdf">Do we live in a computer simulation?</a> <em>NewScientist</em> 192/2579.</p><p>Preston Greene (2020). <a href="https://link.springer.com/article/10.1007/s10670-018-0037-1">The termination risks of simulation science</a>. <em>Erkenntnis</em> 85/2.</p><p>Teru Thomas (2021). <a href="https://globalprioritiesinstitute.org/simulation-expectation-teruji-thomas-global-priorities-institute-university-of-oxford/">Simulation Expectation</a>. <em>GPI Working Paper</em> <em>No. 
16&#8211;2021.</em></p></div></div>]]></content:encoded></item><item><title><![CDATA[Thorstad's case against the singularity hypothesis]]></title><description><![CDATA[A summary of &#8220;Against the singularity hypothesis&#8221; by David Thorstad (just published in Philosophical Studies).]]></description><link>https://www.millionyearview.com/p/against-the-singularity</link><guid isPermaLink="false">https://www.millionyearview.com/p/against-the-singularity</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Mon, 27 May 2024 17:49:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/92736fd6-dfaf-47d6-9818-f3053b44108c_561x561.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This summary was first published on the <a href="https://globalprioritiesinstitute.org/summary-against-the-singularity-hypothesis/">Global Priorities Institute website.</a></em></p><p>The <em>singularity </em>is a hypothetical future event in which machines rapidly become significantly smarter than humans. The idea is that we might invent an artificial intelligence (AI) system that can improve itself. After a single round of self-improvement, that system would be better equipped to improve itself than before. This process might repeat many times, and each time the AI system would become more capable and better equipped to improve itself even further. At the end of this (perhaps very rapid) process, the AI system could be much smarter than the average human. Philosophers and computer scientists have thought we should take the possibility of a singularity seriously (Solomonoff 1985, Good 1996, Chalmers 2010, Bostrom 2014, Russell 2019).&nbsp;</p><p>It is characteristic of the singularity hypothesis that AI will take at most years, and perhaps only months, to become many times more intelligent than even the most intelligent human.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Such extraordinary claims require extraordinary evidence. In the paper &#8220;Against the singularity hypothesis&#8221;, David Thorstad argues that we do not have enough evidence to justify belief in the singularity hypothesis, and that we should consider it unlikely unless stronger evidence emerges.</p><h1><strong>Reasons to think the singularity is unlikely</strong></h1><p>Thorstad is sceptical that machine intelligence can grow quickly enough to justify the singularity hypothesis. He gives several reasons for this.</p><p><strong>Low-hanging fruit. </strong>Innovative ideas and technological improvements tend to become harder to find over time. For example, consider &#8220;<a href="https://en.wikipedia.org/wiki/Moore%27s_law">Moore&#8217;s law</a>&#8221;, which is (roughly) the observation that hardware capacities double every two years. Between 1971 and 2014, Moore&#8217;s law was maintained only with an astronomical increase in the amount of capital and labour invested in semiconductor research (Bloom et al. 2020). In fact, according to one leading estimate, there was an eighteen-fold drop in research productivity over this period. While some features of future AI systems will allow them to increase the rate of progress compared to human scientists and engineers, they are still likely to experience diminishing returns as the easiest discoveries have already been made and only more difficult ideas are left.&nbsp;</p>
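<p>The arithmetic behind that estimate is worth seeing (a rough illustration; the precise figures are in Bloom et al. 2020): if doublings keep arriving on schedule while output per researcher falls eighteen-fold, the research effort required rises eighteen-fold.</p><pre><code># Rough arithmetic behind "low-hanging fruit" (illustrative only).
# Holding the doubling schedule fixed, an 18-fold fall in research
# productivity must be offset by an 18-fold rise in research effort.
productivity_drop = 18        # approximate estimate for 1971-2014
years = 2014 - 1971

print(f"Constant progress now needs ~{productivity_drop}x the effort.")

# Implied compound growth in research effort over the period:
annual_growth = productivity_drop ** (1 / years) - 1
print(f"That is roughly {annual_growth:.1%} more effort every year.")</code></pre><p>A self-improving AI system would plausibly face the same headwind: each further gain in capability has to pay for the fact that the easy improvements were made first.</p>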
</strong>AI progress relies on improvements in search, computation, storage and so on (and each of these areas breaks down into many subcomponents). Progress could be slowed by any of these subcomponents: if <em>any</em> of them is difficult to speed up, then overall AI progress will be much slower than we would naively expect. The classic metaphor here concerns the speed at which liquid can exit a bottle, which is rate-limited by the narrow space near the opening. AI systems may likewise run into bottlenecks if any essential component cannot be improved quickly (see Aghion et al., 2019).</p><p><strong>Constraints. </strong>Resource and physical constraints may also limit the rate of progress. To take an analogy, Moore's law gets more difficult to maintain because it is expensive, physically difficult and energy-intensive to cram ever more transistors into the same space. Here we might expect progress to eventually slow as physical and financial constraints pose ever greater barriers to maintaining progress.</p><p><strong>Sublinear growth. </strong>How do improvements in hardware translate to intelligence growth? Thompson and colleagues (2022) find that exponential hardware improvements translate to merely linear gains in performance on problems such as chess, Go, protein folding, weather prediction and the modelling of underground oil reservoirs. Over the past 50 years, the number of transistors in our best circuits increased from 3,500 in 1972 to 114 billion in 2022. If intelligence grew linearly with transistor counts, computers would have become roughly 33 million times more intelligent over this period. Instead, the evidence suggests that intelligence growth is sublinear in hardware growth.</p>
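<p><em>To make the arithmetic concrete, here is a minimal sketch in Python. The transistor counts are the ones quoted above; the logarithmic model of performance is an illustrative assumption in the spirit of Thompson and colleagues&#8217; finding, not their actual estimate.</em></p><pre><code>import math

# Toy arithmetic for the sublinear-growth point. The transistor counts
# come from the summary above; the log model is an illustrative
# assumption (exponential hardware inputs -> roughly linear outputs).

transistors_1972 = 3_500
transistors_2022 = 114_000_000_000

growth = transistors_2022 / transistors_1972
print(f"Hardware grew roughly {growth:,.0f}x")   # ~33 million x

# If intelligence were linear in transistor count, machines would now be
# ~33 million times "smarter". If performance instead tracks the
# logarithm of hardware, the same growth buys only ~25 doublings:
print(f"Doublings of hardware: {math.log2(growth):.0f}")</code></pre>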
<h1><strong>Arguments for the singularity hypothesis</strong></h1><p>Two key arguments have been given in favour of the singularity hypothesis. Thorstad analyses them and finds that they are not particularly strong.</p><p><strong>Observational argument. </strong>Chalmers (2010) argues for the singularity hypothesis based on the <em>proportionality thesis: </em>that increases in intelligence always lead to at least proportionate increases in the ability to design intelligent systems. He supports this only briefly, observing, for example, that a small difference in design capability between Alan Turing and the average human led to a large difference in the ability of the systems they were able to design (the computer versus hardly anything of importance). The main problem with this argument is that it is local rather than global: it gives evidence that the proportionality thesis holds at particular points in time, whereas the singularity hypothesis requires it to hold at <em>every</em> point along the growth curve. In addition, Chalmers conflates design capabilities and intelligence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a><sup> </sup>Overall, Thorstad concludes that Chalmers's argument fails and the observational argument does not vindicate the singularity hypothesis.</p><p><strong>Optimisation power argument. </strong>Bostrom (2014) claims that there will be a large amount of quality-weighted design effort applied to improving artificial systems, which will result in large increases in intelligence. He gives a rich and varied series of examples to support this claim. However, Thorstad finds that many of these examples are just plausible descriptions of artificial intelligence improving rapidly, not evidence that this will happen. Other examples end up being restatements of the singularity hypothesis (for example, that we could be only a single leap of software insight away from an intelligence explosion). Thorstad is sceptical that these restatements provide any evidence at all for the singularity hypothesis.</p><p>One of the core parts of the argument is initially promising but relies on a misunderstanding. Bostrom claims that roughly <em>constant </em>design effort has historically led to systems doubling their capacity every 18 months. If this were true, then boosting a system's intelligence could allow it to design a new system with even greater intelligence, where the second boost is even bigger than the first. This would allow intelligence to increase ever more quickly. But, as discussed above, it was <em>increasing</em> design effort that led to this improvement in hardware, and AI systems have progressed much more slowly. Overall, Thorstad remains sceptical that Bostrom has given any strong evidence or argument in favour of the singularity hypothesis.</p>
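<p><em>To see the difference between these two growth regimes in miniature, here is a toy simulation. All numbers are illustrative assumptions, not estimates from Thorstad, Chalmers or Bostrom.</em></p><pre><code># Recursive self-improvement under two assumptions. With proportionate
# returns, each generation multiplies intelligence by a constant factor,
# so growth compounds without limit. With decaying returns (low-hanging
# fruit already picked), the same recursion levels off.

def proportionate(generations, gain=0.5):
    i = 1.0
    for _ in range(generations):
        i *= 1 + gain                  # each boost is proportionally as big
    return i

def diminishing(generations, gain=0.5, decay=0.7):
    i = 1.0
    for g in range(generations):
        i *= 1 + gain * decay**g       # each boost is harder to achieve
    return i

for n in (10, 20, 40):
    print(n, f"proportionate: {proportionate(n):,.0f}",
          f"diminishing: {diminishing(n):,.2f}")</code></pre>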
<h1><strong>Implications for longtermism and AI safety</strong></h1><p>The singularity hypothesis implies that the world will be quickly transformed in the future. This idea is used by Bostrom (2012, 2014) and Yudkowsky (2013) to argue that advances in AI could threaten human extinction or permanently and drastically destroy humanity's potential for future development. Increased scepticism about the singularity hypothesis might naturally lead to increased scepticism about their conclusion: that we should be particularly concerned about existential risk from artificial intelligence. This may also have implications for longtermism, which uses existential risk mitigation (and AI risk mitigation in particular) as part of the central example of a longtermist intervention &#8211; at least insofar as this concern is driven by something like the above argument by Bostrom and Yudkowsky.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>In particular, Chalmers (2010) claims that future AI systems might be as far beyond the most intelligent human as the most intelligent human is beyond a mouse. Bostrom (2014) claims this process could happen in a matter of months or even minutes.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Some of Turing's contemporaries were likely more intelligent than him, yet they did not design the first computer.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><strong>References</strong>: <br><br>Philippe Aghion, Benjamin Jones, and Charles Jones (2019). <a href="https://www.nber.org/books-and-chapters/economics-artificial-intelligence-agenda/artificial-intelligence-and-economic-growth">Artificial intelligence and economic growth.</a> In <em>The economics of artificial intelligence: An agenda</em>, pages 237&#8211;282. Edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb. University of Chicago Press.</p><p>Nicholas Bloom, Charles Jones, John Van Reenen, and Michael Webb (2020). <a href="https://www.aeaweb.org/articles?id=10.1257/aer.20180338">Are ideas getting harder to find?</a> <em>American Economic Review</em> 110, pages 1104&#8211;44.</p><p>Nick Bostrom (2012). <a href="https://link.springer.com/article/10.1007/s11023-012-9281-3">The superintelligent will: Motivation and instrumental rationality in advanced artificial agents.</a> <em>Minds and Machines</em> 22, pages 71&#8211;85.</p><p>Nick Bostrom (2014). <em><a href="https://global.oup.com/academic/product/superintelligence-9780199678112">Superintelligence</a></em>. Oxford University Press.</p><p>David Chalmers (2010). <a href="https://www.ingentaconnect.com/content/imp/jcs/2010/00000017/f0020007/art00001">The singularity: A philosophical analysis.</a> <em>Journal of Consciousness Studies</em> 17/9&#8211;10, pages 7&#8211;65.</p><p>I.J. Good (1966). <a href="https://www.sciencedirect.com/science/article/abs/pii/S0065245808604180">Speculations concerning the first ultraintelligent machine.</a> <em>Advances in Computers</em> 6, pages 31&#8211;88.</p><p>Stuart Russell (2019). <em><a href="https://www.penguinrandomhouse.com/books/566677/human-compatible-by-stuart-russell/">Human compatible: Artificial intelligence and the problem of control</a></em>. Viking.</p><p>Ray Solomonoff (1985).
<a href="https://content.iospress.com/download/human-systems-management/hsm5-2-07?id=human-systems-management%2Fhsm5-2-07">The time scale of artificial intelligence: Reflections on social effects.</a> <em>Human Systems Management</em> 5, pages 149&#8211;53.</p><p>Neil Thompson, Shuning Ge, and Gabriel Manso (2022). <a href="https://arxiv.org/abs/2206.14007">The importance of (exponentially more) computing power. </a><em>ArXiv Preprint</em>.</p><p><em>Eliezer Yudkowsky (2013). <a href="https://intelligence.org/files/IEM.pdf">Intelligence explosion microeconomics</a>. Machine Intelligence Research Institute Technical Report 2013-1.</em></p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Can we understand AI as a rational agent?]]></title><description><![CDATA[A summary of &#8220;Will AI avoid exploitation?&#8221; by Adam Bales, forthcoming in Philosophical Studies.]]></description><link>https://www.millionyearview.com/p/can-we-understand-ai-as-a-rational</link><guid isPermaLink="false">https://www.millionyearview.com/p/can-we-understand-ai-as-a-rational</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Mon, 27 May 2024 17:17:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!xFG4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13dbdd4f-7a8b-439c-be72-f28d2cbbb0bd_5192x3466.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>The summary was written by Riley Harris and first published on the <a href="https://globalprioritiesinstitute.org/summary-will-ai-avoid-exploitation-adam-bales/">Global Priorities Institute Website.</a></em></p><p>We might hope that there is a straightforward way of predicting the behaviour of future artificial intelligence (AI) systems. Some have suggested that AI will maximise expected utility, because anything else would allow them to accept a series of trades that result in a guaranteed loss of something valuable (Omohundro, 2008). Indeed, we would be able to predict AI behaviour if the following claims were true:</p><ol><li><p>AI will avoid exploitation</p></li><li><p>Avoiding exploitation means maximising expected utility</p></li><li><p>We are able to predict the behaviour of agents that maximise expected utility</p></li></ol><p>Adam Bales argues that these claims are all false in his paper <em>Will AI avoid exploitation?</em></p><h2><strong>AI won't avoid exploitation</strong></h2><p>Here, &#8220;exploitation&#8221; is meant in a technical sense. An agent is <em>exploitable</em> if you can offer them a series of choices that lead to a guaranteed loss. For instance, if an agent is willing to pay $1 to swap an apple for an orange, but also willing to pay $1 to swap back to the orange, that agent is <em>exploitable</em>. After two trades they would be back with the apple they started with, minus $2. A natural assumption is that AI systems will be deployed in a competitive environment that will force them to avoid <em>exploitable </em>preferences given sufficient training data and computational resources to do so.</p><p>But there are reasons to think that AI will be exploitable in at least some scenarios. 
<p>But there are reasons to think that AI will be exploitable in at least some scenarios. We might suspect as much when we notice that companies do not avoid exploitation, despite their competitive environment.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Additionally, animal behaviour tends to be more consistent than human behaviour, despite our greater cognitive sophistication (Stanovich, 2013). In general, there might be a trade-off between understanding and processing complex information, on the one hand, and choosing in ways that avoid exploitation, on the other (Thorstad, forthcoming).</p><p>It also seems that the benefits of avoiding exploitation might be lower than they appear. First, there is no benefit to avoiding merely hypothetical exploitation. AI systems that always avoid exploitation will spend a lot of resources on avoiding hypothetical situations that are unlikely to arise in the real world. Second, even agents who are immune to exploitation in the technical sense may be vulnerable to other kinds of, well, exploitation. To see this, consider how a human might be exploited by someone with better knowledge of the stock market. This kind of exploitation would rely not on patterns of preferences but on a lack of knowledge and ability in a particular domain. So avoiding exploitation (in the technical sense) is less important than it initially sounds. The costs are also higher than they first appear: for instance, avoiding exploitation in a sufficiently general way is computationally intractable (van Rooij et al., 2018).</p><h2><strong>Even if AI avoids exploitation, it may not maximise expected utility</strong></h2><p>An agent maximises expected utility if it makes choices that are consistent with the maximisation of the expectation of some utility function. This does not mean an AI system needs to have explicit utility and probability functions; this is just a way for us to understand and predict which decisions it will make.</p><p>We can break down the concept of maximising expected utility into simpler behavioural patterns. If an agent satisfies all of the patterns, it maximises expected utility; if it violates any one of them, it does not. We can use this breakdown to see whether agents that avoid exploitation (in the technical sense) will really maximise expected utility.</p><p>In particular, one of the behavioural patterns we expect from an agent maximising expected utility is &#8220;continuity&#8221; (Fishburn, 1970). Suppose you prefer outcome A to B, and B to C, and consider a lottery that gives you A with probability <em>p</em> and C with probability <em>1-p</em>. <em>Continuity</em> implies that there is some (high enough) <em>p</em> at which you would choose this lottery over getting B for sure, and some (low enough) <em>p&#8217;</em> at which you would choose B instead.</p><p>An agent that violates <em>continuity</em> will not maximise expected utility. However, knowing that an agent avoids exploitation does not tell us that it satisfies continuity: even agents that do not satisfy continuity may still avoid any guaranteed loss (and therefore avoid exploitability). This means that if we know an agent avoids exploitation, we do not thereby know that it maximises expected utility.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a><br></p>
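<p><em>Here is a minimal sketch of that possibility, using lexicographic preferences in the spirit of Fishburn (1971). The outcomes and utility tuples are illustrative assumptions.</em></p><pre><code># A lexicographic agent ranks outcomes by a tuple of utilities: the
# first component dominates, and the second only breaks ties. Expected
# utility tuples are computed componentwise.

U = {"A": (0.0, 2.0),   # survive and flourish
     "B": (0.0, 1.0),   # survive
     "C": (-1.0, 0.0)}  # catastrophe

def lottery_value(p):
    # Expected utility tuple of: A with probability p, C with 1 - p.
    return tuple(p * a + (1 - p) * c for a, c in zip(U["A"], U["C"]))

# The agent prefers A to B to C (Python compares tuples lexicographically)...
assert U["A"] > U["B"] > U["C"]

# ...but for every p < 1 it prefers B for sure, because any chance of
# catastrophe drags the dominant first component below zero. Continuity
# fails: no "high enough" p makes the lottery acceptable.
for p in (0.9, 0.99, 0.999999):
    assert lottery_value(p) < U["B"]

# Yet its preferences are a consistent ordering over prospects, so it
# never accepts a cycle of trades ending in a guaranteed loss: it avoids
# exploitation without maximising (real-valued) expected utility.</code></pre>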
<p>Interestingly, failing to satisfy continuity could result in some strange behaviour. Bales finds that an agent that does not satisfy continuity would pay a cost for a chance of getting something better, no matter how small that chance is. Bales calls this <em>quasi-exploitability</em>, but notes that we do not have particularly strong reasons to believe AI will avoid it &#8211; paying such a cost may even seem appropriate when the potential payoff is large enough. Ultimately, this line of reasoning fails to show that AI will maximise expected utility.</p><h2><strong>Even if AI will maximise expected utility, knowing this will not help us predict its behaviour</strong></h2><p>Even if AI systems did act as if they maximise expected utility, this would not allow us to predict their behaviour. This is because we would only know what they would do relative to some probability function and some utility function. Consider an AI whose utility function assigns a value of 0 to all of the outcomes other than the one it expects to receive by acting the way it does. This agent could be seen as maximising expected utility, but we would not know which outcome the agent will choose before it acts.</p>
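<p><em>A minimal sketch of this point: whatever an agent chooses, we can write down &#8211; after the fact &#8211; a utility function it was &#8220;maximising&#8221;. The outcomes and the observed choice below are hypothetical.</em></p><pre><code># Post-hoc rationalisation: observe a choice, then define a utility
# function that assigns 1 to the chosen outcome and 0 to everything else.
# Relative to that function the choice maximised expected utility, but
# the representation had no predictive power before the fact.

outcomes = ["cooperate", "defect", "shut_down"]
observed_choice = "defect"  # hypothetical observed behaviour

utility = {o: 1.0 if o == observed_choice else 0.0 for o in outcomes}

best = max(outcomes, key=lambda o: utility[o])
assert best == observed_choice  # trivially "explained", never predicted</code></pre>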
<p>We might be able to partially get around this by using our knowledge of how the AI is being trained to predict what its utility function would be. In particular, we might assume that future AI systems will be trained in ways that are similar to how current cutting-edge models are trained. However, then we would be making substantive assumptions about what future AI systems will look like. We might wonder whether these substantive assumptions now drive the predictions, rendering the utility-maximisation framework inert. Additionally, insofar as speculating about how future AI systems will behave is difficult, we might doubt that this approach will give us particularly fruitful insights.</p><h2><strong>Conclusion</strong></h2><p>Overall, the failure of these three claims means that any argument for the conclusion that we can predict the behaviour of AI systems would need to be more sophisticated. In particular contexts, AI might approximately avoid exploitation &#8211; for example, when it is likely to be exploited, or in its interactions with humans and other agents. When combined with further assumptions about behaviour that might come from our understanding of the training processes that will generate advanced AI systems, we might be able to get some idea of how AI systems will behave. We should be modest in our predictions, though, because our assumptions are often likely to miss important insights, oversimplify, or even mislead us.</p>
<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xFG4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13dbdd4f-7a8b-439c-be72-f28d2cbbb0bd_5192x3466.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!xFG4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13dbdd4f-7a8b-439c-be72-f28d2cbbb0bd_5192x3466.jpeg" width="1456" height="972" class="sizing-normal" alt="" loading="lazy"></a></figure></div><p>Photo by <a href="https://www.pexels.com/photo/elderly-man-thinking-while-looking-at-a-chessboard-8438918/">Pavel Danilyuk</a>.</p><h2><strong>Sources</strong></h2><p>Peter C. Fishburn (1970). <em><a href="https://doi.org/10.1002/0471667196.ess2832.pub2">Utility theory for decision making</a></em>. Wiley.</p><p>Peter C. Fishburn (1971). <a href="https://www.jstor.org/stable/2629309">A study of lexicographic expected utility</a>. <em>Management Science</em> 17/11, pages 672&#8211;678.</p><p>M. Hausner (1953). <em><a href="https://apps.dtic.mil/sti/pdfs/AD0604151.pdf">Multidimensional Utility</a></em>. RAND Corporation No. 604151.</p><p>M. Hausner and J. G. Wendel (1952). <a href="https://www.ams.org/journals/proc/1952-003-06/S0002-9939-1952-0052045-1/">Ordered Vector Spaces</a>. <em>Proceedings of the American Mathematical Society</em> 3/6, pages 977&#8211;982.</p><p>David McCarthy, Kalle Mikkola, and Teruji Thomas (2020). <a href="https://www.sciencedirect.com/science/article/pii/S0304406820300045">Utilitarianism with and without expected utility</a>. <em>Journal of Mathematical Economics</em> 87, pages 77&#8211;113.</p><p>Stephen M. Omohundro (2008). <a href="https://dl.acm.org/doi/10.5555/1566174.1566226">The Basic AI Drives</a>. <em>Proceedings of the 2008 conference on Artificial General Intelligence.</em> IOS Press. Edited by Pei Wang, Ben Goertzel and Stan Franklin.</p><p>Keith E. Stanovich (2013).
<a href="https://www.tandfonline.com/doi/abs/10.1080/13546783.2012.713178">Why humans are (sometimes) less rational than other animals: Cognitive complexity and the axioms of rational choice.</a> <em>Thinking &amp; Reasoning</em> 19/1, pages 1&#8211;26.</p><p>David Thorstad (forthcoming). <a href="https://www.journals.uchicago.edu/doi/10.1086/716518">The accuracy-coherence tradeoff in cognition.</a> <em>The British Journal for the Philosophy of Science.</em></p><p>Iris van Rooij, Cory D. Wright, Johan Kwisthout and Todd Wareham (2018).<a href="https://www.jstor.org/stable/26748857"> Rational analysis, intractability, and the prospects of &#8217;as if&#8217;-explanations</a>. <em>Synthese</em> 195/2, pages 491&#8211;510.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>In particular, the boards of companies often decide by majority voting. But <a href="https://plato.stanford.edu/entries/epistemology-social/#FormEpisSociReal">majority voting does not always result in unexploitable preferences even when every voter does so</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>One response would be to either assume continuity without using exploitability arguments to justify it, or turn to sophisticated models that do away with the continuity assumption (see Hausner &amp; Wendel, 1952; Hausner, 1953; Fishburn, 1971; McCarthy et al., 2020).</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[How could governments take the interests of our great-great-grandchildren into account?]]></title><description><![CDATA[Tyler John and William MacAskill that proposes institutional reforms to reduce short-termism in government and promote long-term thinking. We need more accountability and information to make the long-term impacts of policies more salient in political decision making.]]></description><link>https://www.millionyearview.com/p/longtermist-institutions</link><guid isPermaLink="false">https://www.millionyearview.com/p/longtermist-institutions</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Fri, 10 Nov 2023 11:05:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ffdd357b-fdf2-4f35-8e48-bb8087b1f017_3000x4500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This summary was <a href="https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/">first published</a> on the Global Priorities Institute website, the original paper can be found <a href="https://globalprioritiesinstitute.org/tyler-m-john-and-william-macaskill-longtermist-institutional-reform/">here</a>.</em></p><p>Political decisions can have lasting effects on the lives and wellbeing of future generations. Yet political institutions tend to make short-term decisions with only the current generation &#8211; or even just the current election cycle &#8211; in mind. In &#8220;longtermist institutional reform&#8221;, Tyler M. John and William MacAskill identify the causes of short-termism in government and give four recommendations for how institutions could be improved. 
These are the creation of in-government research institutes, a futures assembly, posterity impact statements and &#8211; more radically &#8211; an &#8216;upper house&#8217; representing future generations.</p><h2><strong>Causes of short-termism</strong></h2><p>John and MacAskill discuss three main causes of short-termism. Firstly, politicians may not care about the long term. This may be because they discount the value of future generations, or simply because it is easy to ignore the effects of policies that are not experienced here and now. Secondly, even if politicians are motivated by concern for future generations, it may be difficult to know the long-term effects of different policies. Finally, even motivated and knowledgeable actors might face structural barriers to implementing policies focussed on the long term &#8211; for instance, these policies might sometimes appear worse in the short term and reduce a candidate's chances of re-election.</p><h2><strong>Suggested reforms</strong></h2><h3><strong>In-government research institutes</strong></h3><p>The first suggested reform is the creation of in-government research institutes that could independently analyse long-term trends, estimate expected long-term impacts of policy and identify matters of long-term importance. These institutes could help fight short-termism by identifying the likely future impacts of policies, making these impacts vivid, and documenting how our leaders are affecting the future. They should also be designed to resist the political incentives that drive short-termism elsewhere. For instance, they could be functionally independent from the government, hire without input from politicians, and be flexible enough to prioritise the most important issues for the future. To ensure their advice is not ignored, the government should be required to read and respond to their recommendations.</p><h3><strong>Futures assembly</strong></h3><p>The futures assembly would be a permanent citizens&#8217; assembly which seeks to represent the interests of future generations and give dedicated policy time to issues of importance for the long term. Several examples already exist where similar citizens&#8217; assemblies have helped create consensus on matters of great uncertainty and controversy, enabling timely government action.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> In-government research institutes excel at producing high-quality information, but lack legitimacy. In contrast, a citizens&#8217; assembly like this one could be composed of randomly selected citizens who are statistically representative of the general population. John and MacAskill believe this representativeness brings political force &#8211; politicians who ignore the assembly put their reputations at risk. We can design futures assemblies to avoid the incentive structures that result in short-termism &#8211; such as election cycles, party interests and campaign financing. Members should be empowered to call upon experts, and their terms should be long enough to build expertise but short enough to avoid problems like interest-group capture &#8211; perhaps two years.
They should also be empowered to set their own agenda and publicly disseminate their results.</p><h3><strong>Posterity impact statements</strong></h3><p>Requiring posterity impact statements for legislation would provide another mechanism for creating political accountability and gathering high-quality information on long-run policy effects. These statements would give an estimate of the expected impact of a policy on future generations, similar to the environmental impact statements that are already required in many countries. Posterity impact statements might utilise a &#8220;soft&#8221; enforcement mechanism &#8211; relying on voters to enforce good long-term policy creation &#8211; or a &#8220;hard&#8221; enforcement mechanism &#8211; for example, the government might have to take out insurance when implementing particularly risky policies.</p><h3><strong>Future generations &#8216;upper house&#8217;</strong></h3><p>A more radical reform would be to introduce an &#8216;upper house&#8217; that represents future generations explicitly, to work with a lower house representing current generations. (Legislation would have to pass both houses.) John and MacAskill suggest several things that might help such a proposal work:</p><ul><li><p>Randomly selecting citizens and experts to serve in the upper house (to avoid the incentives that drive short-termism, such as election cycles, party interests, industry corruption and partisan polarisation).</p></li><li><p>Having independent research institutions create concrete performance metrics, with members of the house giving public justifications that refer to those metrics.</p></li><li><p>Selecting members who are relatively young, and giving them a pension several decades later based on their cohort&#8217;s performance in promoting the interests of future generations.</p></li></ul><h2><strong>Further ideas for reforms</strong></h2><p>John and MacAskill also suggest several additional ideas for reforms that might be worth exploring further: longer election cycles, novel commitment mechanisms, giving parents additional votes to use on behalf of their children, taxation of negative and subsidy of positive long-run externalities, and long-term performance incentive schemes such as tying the pensions of politicians and public servants to national performance.</p><h2><strong>References</strong></h2><p>James S. Fishkin and Robert C. Luskin (2005).
<a href="https://link.springer.com/article/10.1057/palgrave.ap.5500121">Experimenting with a democratic ideal: Deliberative polling and public opinion</a>. <em>Acta Politica</em> 40/3.</p><p>James S. Fishkin, Roy William Mayega, Lynn Atuyambe, Nathan Tumuhamye, Julius Ssentongo, Alice Siu and William Bazeyo (2017). <a href="https://www.amacad.org/publication/applying-deliberative-democracy-africa-ugandas-first-deliberative-polls">Applying deliberative democracy in Africa: Uganda&#8217;s first deliberative polls</a>. <em>Daedalus </em>146/3.</p><p>Tyler M. John and William MacAskill (2021). <a href="https://firstforum.org/publishing/books/the-long-view/">Longtermist institutional reform</a>. <em>The Long View: Essays on Policy, Philanthropy, and the Long-Term Future.</em> FIRST. Edited by Natalie Cargill and Tyler M. John.</p><p>Christian List, Robert C. Luskin, James S. Fishkin, and Iain McLean (2013). <a href="https://www.journals.uchicago.edu/doi/10.1017/S0022381612000886">Deliberation, single-peakedness, and the possibility of meaningful democracy: Evidence from deliberative polls.</a> <em>The Journal of Politics</em> 75/1.</p><p>Photo by <a href="https://www.pexels.com/photo/the-arch-entrance-of-louvre-museum-5101753/">Max Avans</a>.</p><p></p><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See Fishkin and Luskin (2005), Fishkin et al. (2017) and List et al. (2013).</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Advanced artificial intelligence systems may risk overpowering humanity in the coming decades]]></title><description><![CDATA[This is a summary of Existential Risk from power-seeking AI by Joseph Carlsmith, forthcoming in &#8220;Essays on Longtermism&#8221;.]]></description><link>https://www.millionyearview.com/p/advanced-artificial-intelligence</link><guid isPermaLink="false">https://www.millionyearview.com/p/advanced-artificial-intelligence</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Sat, 28 Oct 2023 11:45:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bb0b23a0-98e1-4076-becb-539204bf141c_1705x2600.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&nbsp;Within our lifetimes we might witness the deployment of advanced artificial intelligence (AI) systems with the ability to run companies, push forward political campaigns and advance science. The basic concern is that these AI systems could pursue goals which are different from what any human intended. In particular, a sufficiently advanced system would be capable of advanced planning and strategy, and so would see the usefulness of gaining power such as influence, weapons, money,&nbsp; and greater cognitive resources. 
This is deeply troubling, as it may be difficult to prevent these systems from collectively disempowering humanity. In "<a href="https://jc.gatspress.com/pdf/existential_risk_and_powerseeking_ai.pdf">Existential risk from power-seeking AI</a>",<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Carlsmith clarifies the main reasons to think that power-seeking AI might present an extreme risk to humanity.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><h1>More likely than not, we will see advanced AI systems within our lifetimes, or our children's lifetimes</h1><p>Carlsmith builds his case for AI risk around estimates for when AI systems could be developed that have:</p><ul><li><p><strong>Advanced capabilities</strong> such as the ability to outperform the best humans on tasks such as scientific research, business/military/political strategy, engineering, and persuasion/manipulation.</p></li><li><p><strong>Planning capabilities</strong> such as the ability to make and carry out plans in pursuit of objectives, as if with an understanding of the world.</p></li><li><p><strong>Strategic awareness</strong>: the ability to make plans that represent the effects of gaining and maintaining power over humans and the environment.</p></li></ul><p>Carlsmith believes that it is more likely than not that we will be able to build agents with all of the capabilities described above by 2070.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><h1>It could be difficult to align or control these systems</h1><p>If we create advanced AI systems, how can we ensure they are <em>aligned</em> in the sense that they do what their designers want them to do? Here, <em>misalignment</em> looks less like failing or breaking in an effort to do what the designers want, and more like deliberately doing something the designers <em>don't</em> want. It would look less like an employee underperforming, and more like an employee trying to embezzle funds from their employer.</p><p>There are several strategies for aligning AI systems, but each of them faces difficulties:</p><ul><li><p><strong>Aligning objectives: </strong>one strategy is to control the objectives that AI systems have. We might give examples of desired behaviour, set up the evolutionary environment for AI carefully, or give feedback on behaviour we like and dislike. These methods, especially when they rely on human feedback, may be difficult to scale to advanced systems. More fundamentally, it may be difficult to avoid situations where the AI system learns to do something that meets our evaluation criteria but with a fundamentally different strategy. For example, we might attempt to train an AI to tell the truth, but accidentally teach it to create fictional internet sources and cater to our biases. The problem is not about ensuring an AI system &#8216;understands&#8217; what we want, because a sufficiently advanced AI might understand perfectly well and use that knowledge to deceive us in pursuit of its own goals.</p></li><li><p><strong>Limiting capabilities: </strong>another strategy might be to create agents that have limited capabilities.
We are more likely to be able to stop limited AI systems from engaging in misaligned behaviour, and the systems themselves would be less likely to believe that deception would pay off for them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The major difficulty here is that we will face strong incentives to create AI systems that are capable and pursue long-term objectives, because these systems will help our scientists, corporations, and politicians use AI to pursue their own goals. In addition, there may also be technical difficulties, and limiting capabilities &#8211; for example, ensuring AI systems can only do a narrow range of tasks or only pursue short-term objectives &#8211; might be more difficult than building systems with advanced capabilities to do similar tasks.</p></li><li><p><strong>Controlling incentives: </strong>finally, we might try to control the environment in which AI systems are deployed. For instance, we might want to prevent a system from hacking by withholding internet access or turning off any system that is caught hacking. The problem is that controlling incentives becomes more difficult as AI systems become more capable. The example above may prevent a moderately powerful system from hacking, but a sufficiently sophisticated system might realise that there is no realistic chance that we can catch it.</p></li></ul><p>In addition to the difficulties above, there are several factors that make alignment particularly difficult compared to other safety problems, such as building nuclear reactors:</p><ul><li><p><strong>Active deception:</strong> AI systems may be trying to deliberately undermine our efforts to monitor and align their behaviour. For instance, a sufficiently advanced system might realise that it needs to pass certain safety tests in order to be released.</p></li><li><p><strong>Unpredictability: </strong>AI systems may consider strategies that we can't imagine, and they might have cognitive abilities that are opaque to us.</p></li><li><p><strong>No room for error: </strong>finally, we tend to solve safety problems through trial and error. However, the stakes for aligning advanced AI systems might be extremely high, and therefore we may not be able to learn from mistakes.</p></li></ul><h1>Powerful, misaligned AI systems could disempower humanity</h1><p>Sufficiently advanced AI systems that are misaligned are likely to realise that taking power over humans and the environment will allow them to pursue their other goals, regardless of what those other goals are.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>Power-seeking behaviour might include things like manipulating human attempts to monitor, retrain, or shut off misaligned systems; blackmailing, bribing or manipulating humans; attempting to accumulate money and computational resources; making unauthorised backup copies of themselves; manipulating or weakening human institutions and politics; and taking control of automated factories and critical infrastructure.</p><p>Power-seeking, unlike other forms of misalignment, is crucially important because the scale of potential failures is enormous. An AI system that is attempting to take control could, well, take control.
Our relationship to these powerful AI systems might be similar to the relationship that chimpanzees have to us: the fate of our friends, family, and community would rest in the hands of a more capable and intelligent species that may not share our values. This would be a very bad outcome for us, perhaps as bad as extinction.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><h1>Knowing this, we still might deploy advanced, misaligned AI systems</h1><p>It should be reasonably clear that there are strong reasons to avoid deploying advanced misaligned AI systems. There are several reasons to be concerned that they may be deployed anyway:</p><ol><li><p><strong>Unilateralist's curse: </strong>we might expect that, once some people can build advanced AI systems, the number of people with potential access to those systems will grow over time. Different actors may have different views about how dangerous AI systems are, and the most optimistic and least cautious might end up deploying them even if doing so presents clear dangers.</p></li><li><p><strong>Externalities: </strong>even if it is in humanity's interests to avoid deploying potentially misaligned systems, some individuals might stand to personally gain a lot of money, power, or prestige, and face only a fraction of the cost if things go poorly. This is similar to how many corporations are incentivised to continue emitting large amounts of carbon, even though it is in humanity's interests to reduce emissions.</p></li><li><p><strong>Race dynamics: </strong>if several groups are competing to build AI systems first, then they might know that they could gain an advantage by cutting corners on expensive or difficult alignment strategies. This could generate a race to the bottom where the first AI systems to be deployed are the quickest and cheapest (and least safe) to develop.</p></li><li><p><strong>Apparent safety</strong>: advanced AI systems might offer opportunities to solve major problems, generate wealth, and rapidly advance science and technology. They may also actively deceive us about their level of alignment. Without clear signs of misalignment, it might be difficult to justify ignoring the promise of these systems even if we think they could be manipulating us (in ways that we can't detect). We might also overestimate our ability to control advanced systems.</p></li></ol><p>Of course, if we notice an AI is actively deceiving us and seeking power, we would try to stop it. By the time we deploy advanced AI systems of the kind that pose a significant risk, we are likely to have more advanced tools for detecting, constraining, responding to and defending against misaligned behaviour.</p><p>Even so, we may fail to contain the damage. First, as AI capabilities increase, we will be at an increasing disadvantage, especially if this happens in hours or days rather than months or years. Second, AI systems may deliberately hide their misalignment and interfere with our attempts to monitor and correct them, and so we may not detect misaligned behaviour early on. Third, even if we do get warning shots, we may fail to respond quickly and decisively, or face problems that are too difficult for us to solve. Unfortunately, many potential solutions may only superficially solve the problem, by essentially teaching the system to more carefully avoid detection.
Finally, all of the factors that lead misaligned systems to be deployed in the first place would contribute to the difficulty of correcting alignment failures after deployment.</p><h1>Conclusion</h1><p>Carlsmith illustrates how AI could lead to human disempowerment:</p><ol><li><p>It could become possible and feasible to build relevantly powerful, agentic AI systems, and we might have strong incentives to do so.</p></li><li><p>It might be much harder to build these systems such that they are aligned with our values than to build systems that are misaligned but still superficially attractive to deploy.</p></li><li><p>If deployed, misaligned systems might seek power over humans in high-impact ways, perhaps to the point of completely disempowering humanity.</p></li></ol><p>Overall, Carlsmith thinks there is a greater than 10% chance that the three events above all occur by 2070. If Carlsmith is right, then we face a substantial existential risk from AI systems within our lifetimes, or our children's lifetimes.</p><h1>Sources</h1><p>Nick Bostrom (2012). <a href="https://link.springer.com/article/10.1007/s11023-012-9281-3">The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents</a>. <em>Minds &amp; Machines</em> 22.</p><p>Ajeya Cotra (2020). <a href="https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines">Draft report on AI timelines</a>. <em>AI Alignment Forum</em>.</p><p>Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans (2018). <a href="https://jair.org/index.php/jair/article/view/11222">Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts</a>. <em>Journal of Artificial Intelligence Research</em> 62.</p><p>Toby Ord (2020). <em><a href="https://www.bloomsbury.com/uk/precipice-9781526600219/">The Precipice: Existential Risk and the Future of Humanity.</a></em> Bloomsbury Publishing. See the <a href="https://www.millionyearview.com/p/precipice-1">summary</a> here.</p><p>Cover image by <a href="https://www.pexels.com/photo/burning-dangerous-dark-exploration-355938/">Pixabay</a>.</p><h2><strong>Conflict of interest</strong></h2><p>I have received grants from Open Philanthropy (Carlsmith&#8217;s employer), including for work on this blog.
Although I asked Carlsmith directly for feedback on this piece,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Open Philanthropy had no direct input.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This summary also draws on the <a href="https://arxiv.org/abs/2206.13353">longer version</a> of this report.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>In some places this essay is framed as an argument for why the risk is high, but I think it is better characterised as an explanation of the worldview in which the risk is high, or a rough quantitative model for estimating the existential risk from power-seeking AI. This model might be useful to work through even for readers who would place very different probabilities on these possibilities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Carlsmith seems to be making a judgement call here based on evidence such as the following. A draft technical report models the year in which we could probably train a model as large as the human brain, and concludes that &#8216;transformative&#8217; AI is more likely than not by 2065 (Cotra, 2020). (Here <em>transformative AI</em> is defined as a model that could have &#8220;at least as profound an impact on the world&#8217;s trajectory as the Industrial Revolution did&#8221;.) A public forecasting platform called Metaculus predicted that it was more likely than not that there will &#8220;<a href="https://www.metaculus.com/questions/384/humanmachine-intelligence-parity-by-2040/">be Human-machine intelligence parity before 2040</a>&#8221; (as of September 2023 this is now above 90%) and gave a median of 2038 for the date that &#8220;<a href="https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/">the first weakly general AI system be devised, tested, and publicly announced</a>&#8221; (as of September 2023 this is now predicted to be in 2027). Experts answer questions like whether &#8220;unaided machines can accomplish every task better and more cheaply than human workers&#8221; by 2066 very differently based on exactly how the question is phrased, giving answers that are sometimes as low as 3% and sometimes above 50% (Grace et al., 2018).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Similarly, we could create systems that are only able to pursue short-term objectives, and are thus unlikely to pursue deception that would only pay off in the long term.
We could also try to build specialised systems that pursue narrow tasks, which would likely do less damage if they were misaligned and would also be easier to control and incentivise to do what we want.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>This is called the "Instrumental Convergence" hypothesis. See Bostrom (2012).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>This could be the case whether or not every human dies. Ord (2020)<a href="https://www.millionyearview.com/p/precipice-1"> defines an existential catastrophe as the destruction of humanity&#8217;s long-term potential</a>. Carlsmith thinks that the involuntary disempowerment of humanity would likely be equivalent to extinction in this sense. An important subtlety is that Toby Ord wants to define "humanity" broadly, so that it includes descendants we become or create. In this sense, a misaligned AI system could be seen as an extension of humanity, and if that future were good, then perhaps humanity&#8217;s disempowerment would not be like extinction. But Carlsmith thinks that if he thought about it more, he would conclude that unintentional disempowerment is very likely to be equivalent to extinction.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>This is a courtesy I try to extend to all authors. My aim is to helpfully summarise this essay, rather than to offer a strong independent review. <a href="https://www.lesswrong.com/posts/qRSgHLb8yLXzDg4nf/reviews-of-is-power-seeking-ai-an-existential-risk">You can find reviews here.</a></p></div></div>]]></content:encoded></item><item><title><![CDATA[Summary &#8212; High risk, low reward: A challenge to the astronomical value of existential risk mitigation by David Thorstad]]></title><description><![CDATA[This summary was first published on the Global Priorities Institute website; the original paper can be found here.]]></description><link>https://www.millionyearview.com/p/high-risk-low-reward</link><guid isPermaLink="false">https://www.millionyearview.com/p/high-risk-low-reward</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Fri, 13 Oct 2023 10:59:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!a7TF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28195161-daf3-40f1-870d-68f4c9e2ebe5_549x350.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This summary was <a href="https://globalprioritiesinstitute.org/david-thorstad-high-risk-low-reward-a-challenge-to-the-astronomical-value-of-existential-risk-mitigation-2/">first published</a> on the Global Priorities Institute website; the original paper can be found <a href="https://onlinelibrary.wiley.com/doi/10.1111/papa.12248">here</a>.</em></p><p>The value of the future may be vast. Human extinction, which would destroy that potential, would be extremely bad.
Some argue that making such a catastrophe just a little less likely would be <em>by far</em> the best use of our limited resources &#8212; <em>much</em> more important than, for example, tackling poverty, inequality, global health or racial injustice.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>&nbsp;In &#8220;<a href="https://globalprioritiesinstitute.org/existential-risk-pessimism-and-the-time-of-perils-david-thorstad-global-priorities-institute-university-of-oxford/">High risk, low reward: A challenge to the astronomical value of existential risk mitigation&#8221;</a>, David Thorstad argues against this conclusion. Even supposing the risks really are severe, existential risk reduction is important, but not<em> overwhelmingly </em>important. In fact, Thorstad finds that the case for reducing existential risk is stronger when the risk is lower.</p><h1><strong>The simple model</strong></h1><p>The paper begins by describing a model of the expected value of existential risk reduction, originally developed by Ord (2020; ms) and Adamczewski (ms). This model discounts the value of each century by the chance that an extinction event would have already occurred, and gives a value to actions that can reduce the risk of extinction in that century. According to this model, reducing the risk of extinction this century is not overwhelmingly important &#8212; in fact, completely eliminating the risk we face this century could at most be as valuable as we expect this century to be.</p><p>This result &#8212; that reducing existential risk is not overwhelmingly valuable &#8212; can be explained in an intuitive way. If the risk is high, the future of humanity is likely to be short, so the increase in overall value from halving the risk this century is not enormous. If the risk is low, halving it would result in a relatively small absolute reduction of risk, which is also not overwhelmingly valuable. Either way, saving the world will not be our only priority.</p><h1><strong>Modifying the simple model</strong></h1><p>This model is overly simplified. Thorstad modifies the simple model in three different ways to see how robust this result is: by assuming we have <em>enduring effects</em> on the risk, by assuming the <em>risk of extinction is high</em>, and by assuming that each century is<em> more valuable than the previous</em>. None of these modifications is strong enough to uphold the idea that existential risk reduction is <em>by far</em> the best use of our resources. A much more powerful assumption is needed (one that combines all of these weaker assumptions), and Thorstad argues that there is limited evidence for this stronger assumption.</p><h2>Enduring effects</h2><p>If we could permanently eliminate all threats to humanity, the model says this would be more valuable than anything else we could do &#8212; no matter how small the risk or how dismal each century is (as long as each is still of positive value). However, it seems very unlikely that any action we could take today could reduce the risk to an extremely low level for millions of years &#8212; let alone permanently eliminate all risk.</p><h2>Higher risk</h2><p>On the simple model, halving the risk from 20% to 10% is exactly as valuable as halving it from 2% to 1%: existential risk mitigation is no more valuable when the risks are higher.</p>
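<p>To make this concrete, here is a minimal sketch of the constant-risk, constant-value case of the simple model (the function names, the per-century value of 1, and the 100,000-century horizon are my own illustrative choices, not notation from the paper):</p><pre><code># The simple model: each century contributes a value v, discounted by the
# probability that an extinction event has already occurred.
def future_value(risks, v=1.0):
    total, p_alive = 0.0, 1.0
    for r in risks:
        p_alive *= 1.0 - r      # chance we also survive this century
        total += p_alive * v    # this century's discounted value
    return total

def gain_from_halving(r, centuries=100_000):
    baseline = future_value([r] * centuries)
    improved = future_value([r / 2] + [r] * (centuries - 1))
    return improved - baseline

print(gain_from_halving(0.20))  # ~0.5, i.e. half a century of value
print(gain_from_halving(0.02))  # ~0.5, exactly the same gain
</code></pre><p>The gain is half a century of value in both cases: a higher risk means a larger absolute reduction this century, but it also discounts the surviving future more heavily, and in this version of the model the two effects exactly cancel.</p>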
<p>Indeed, the fact that higher existential risk implies heavier discounting of the future yields a surprising result: the case for existential risk mitigation is strongest when the risk is low. Suppose that each century is more valuable than the last, so that most of the value of the world lies in the future. Then high existential risk makes mitigation less promising, because future value is discounted more aggressively. On the other hand, if we can permanently reduce existential risk, then reducing risk to some particular level is approximately as valuable regardless of how high the risk was to begin with &#8212; which implies that, if risks are currently high, much larger reduction efforts would be required to achieve the same value.</p><h2>Value increases</h2><p>If all goes well, there might be more of everything we find valuable in the future, making each century more valuable than the previous and increasing the value of reducing existential risk. On the other hand, high existential risk discounts the value of future centuries more aggressively. This leads to a race between the mounting accumulated risk and the growing improvements, and the final expected value depends on how quickly the world improves relative to how quickly risk accumulates. Given current estimates of existential risk, the value of preventing existential catastrophe receives only a modest increase.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> However, if value grows quickly and we can eliminate most risks, then reducing existential risk would be overwhelmingly valuable &#8212; we will explore this in the next section.</p><h2>The time of perils</h2><p>So far, none of the extensions to the simple model implies that reducing existential risk is overwhelmingly valuable. Instead, a stronger assumption is required. Combining elements of all of the extensions so far, we could suppose that we are in a short period &#8212; less than 50 centuries &#8212; of elevated risk followed by extremely low ongoing risk &#8212; less than 1% per century &#8212; and that each century is more valuable than the previous. This is known as the <em>time of perils </em>hypothesis<em>.</em> Thorstad explores three arguments for this hypothesis but ultimately finds them unconvincing.</p><h3><em>Argument 1: humanity&#8217;s growing wisdom</em></h3><p>One argument is that humanity&#8217;s power is growing faster than its wisdom, and that when wisdom catches up, existential risk will be extremely low. Though this argument is suggested by Bostrom (2014, p. 248), Ord (2020, p. 45) and Sagan (1997, p. 185), it has never been made in a precise way. 
Thorstad considers two ways of making this argument precise, but doesn&#8217;t find that they provide a compelling case for the time of perils.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><h3><em>Argument 2: existential risk is a Kuznets curve</em></h3><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!a7TF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28195161-daf3-40f1-870d-68f4c9e2ebe5_549x350.png" width="549" height="350" alt="The existential risk Kuznets curve: risk first rises and then falls as the economy grows" /></figure></div><p><em>Figure 1: the existential risk Kuznets curve, reprinted from Aschenbrenner (2020).</em></p><p>Aschenbrenner (2020) presents a model in which societies initially accept an increased risk of extinction in order to grow the economy more rapidly. However, when societies become richer, they are willing to spend more on reducing these risks. If so, existential risk would behave like a <a href="https://en.wikipedia.org/wiki/Kuznets_curve">Kuznets curve</a> &#8212; first increasing and then decreasing (see Figure 1).</p><p>Thorstad thinks this is the best argument for the time of perils hypothesis. However, the model assumes that consumption drives existential risk, while in practice technology growth plausibly drives the most concerning risks.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Without this link to consumption, the model gives no strong reason to think that these risks will be reduced in the future. The model also assumes that increasing the amount of labour spent on reducing existential risks will be enough to curtail these risks &#8212; which is at best unclear.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> Finally, even if the model is correct about optimal behaviour, real-world behaviour may fail to be optimal.</p><h3><em>Argument 3: planetary diversification</em></h3><p>Perhaps this period of increased risk holds only while we live on a single planet; later, we might settle the stars and humanity would be at much lower risk. While planetary diversification reduces some risks, it is unlikely to help us against the most concerning ones, such as bioterrorism and misaligned artificial intelligence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Ultimately, planetary diversification does not present a strong case for the time of perils.</p><h1><strong>Conclusion</strong></h1><p>Thorstad concludes that it seems unlikely that we live in the time of perils. 
This implies that reducing existential risk is probably not overwhelmingly valuable, and that the case for mitigation is strongest when the risk is low. He acknowledges that existential risk may be valuable to work on, but only as one of several competing global priorities.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><h1><strong>References</strong></h1><p>Thomas Adamczewski (ms). <em><a href="https://thomas-sittler.github.io/ltf-paper/longtermfuture.pdf">The expected value of the long-term future</a></em>. Unpublished manuscript.</p><p>Leopold Aschenbrenner (2020). <a href="https://globalprioritiesinstitute.org/leopold-aschenbrenner-existential-risk-and-growth/">Existential risk and growth</a>. <em>Global Priorities Institute Working Paper No. 6-2020</em>.</p><p>Nick Bostrom (2013). <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12002">Existential risk prevention as a global priority</a>. <em>Global Policy</em> 4.</p><p>Nick Bostrom (2014). <em><a href="https://global.oup.com/academic/product/superintelligence-9780199678112?cc=au&amp;lang=en&amp;">Superintelligence</a></em>. Oxford University Press.</p><p>Carl Sagan (1997). <em><a href="http://www.randomhousebooks.com/books/159735/">Pale Blue Dot: A Vision of the Human Future in Space</a></em>. Ballantine Books.</p><p>Anders Sandberg and Nick Bostrom (2008). <a href="https://www.fhi.ox.ac.uk/reports/2008-1.pdf">Global Catastrophic Risks Survey</a>. <em>Future of Humanity Institute Technical Report #2008-1</em>.</p><p>Toby Ord (2020). <em><a href="https://www.bloomsbury.com/uk/precipice-9781526600219/">The Precipice: Existential Risk and the Future of Humanity</a></em>. Bloomsbury Publishing.</p><p>Toby Ord (ms). <em>Modelling the value of existential risk reduction</em>. Unpublished manuscript.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See Bostrom (2013), for instance.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Participants at the Oxford Global Catastrophic Risk Conference estimated the chance of human extinction at about 19% (Sandberg and Bostrom, 2008), and Thorstad is talking about risks of approximately this magnitude. 
At this level of risk, value growth could make a 10% reduction in total risk 0.5 to 4.5 times as important as the current century.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The two arguments are: (1) humanity will become coordinated, acting in the interests of everyone; or (2) humanity could become patient, fairly representing the interests of future generations. Neither seems strong enough to reduce the risk to below 1% per century. The first can be criticised because some single countries already contain around 15% of the world&#8217;s population, so further coordination is unlikely to push the risks low enough. The second can also be questioned, because it is unlikely that any government will be much more patient than the average voter.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>For example, the risks from bioterrorism grow with our ability to synthesise and distribute biological materials.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>For example, asteroid risks can be reduced a little with current technology, but to reduce them further we would need to develop deflection technology &#8212; technology which would likely also be used in mining and military operations, and which may well increase the risk from asteroids; see Ord (2020).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>See Ord (2020) for an overview of which risks are most concerning.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>On the simple model, the value of eliminating existential risk this century is at most around one century of value, and Thorstad notes that <em>&#8220;...an action which reduces the risk of existential catastrophe in this century by one trillionth would have, in expectation, one trillionth as much value as a century of human existence. Lifting several people out of poverty from among the billions who will be alive in this century may be more valuable than this. In this way, the Simple Model presents a prima facie challenge to the astronomical value of existential risk mitigation.&#8221;</em> (p. 
5)</p></div></div>]]></content:encoded></item><item><title><![CDATA[Is our importance evidence of high extinction risk?]]></title><description><![CDATA[A summary of &#8220;Doomsday rings twice&#8221; by Andreas Mogensen.]]></description><link>https://www.millionyearview.com/p/is-our-importance-evidence-of-high</link><guid isPermaLink="false">https://www.millionyearview.com/p/is-our-importance-evidence-of-high</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Sun, 01 Oct 2023 17:20:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SVT4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8274f4c2-f1eb-4729-b00e-fac024816263_4679x2624.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>The summary was written by Riley Harris and first published on the <a href="https://globalprioritiesinstitute.org/andreas-mogensen-summary-doomsday-rings-twice/">Global Priorities Institute website.</a></em></p><p>We live at a time of rapid technological development, and how we handle the most powerful emerging technologies could determine the fate of humanity. Indeed, our ability to prevent such technologies from causing extinction could put us amongst the most important people in history. In &#8220;Doomsday rings twice&#8221;, Andreas Mogensen illustrates how our potential importance could be evidence for humanity&#8217;s near-term extinction. This evidence indicates either that extinction is very likely or that we cannot make a difference to extinction risk.</p><h2><strong>Responding to evidence</strong></h2><p>Consider how we respond to new evidence. When you see two people holding hands, you would consider them more likely to be in a relationship than you did previously. Although people sometimes hold hands when they <em>are not</em> in a relationship, you are more likely to observe hand-holding if they<em> are</em>. In general, you should update towards hypotheses that make your observations more likely. Equivalently, seeing two random people holding hands is surprising, but seeing a couple hold hands is mundane. We should update towards hypotheses that would be less surprising given our evidence.</p><h2><strong>Importance of current people as evidence for near-term extinction</strong></h2><p>Many have argued that we live in a <em>time of perils.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em> Humanity has only just come into immense power&#8211;&#8211;we could destroy ourselves with nuclear bombs or bioweapons. The current generation is incredibly important because we are uniquely placed to prevent human extinction. And it turns out our importance would be evidence for extinction over survival. To see why, consider how <em>surprising</em> our importance would be if we survived. Humanity has enormous potential&#8211;&#8211;the future could contain trillions of people if we survive. If we were amongst the most important of these trillions of potential people, that would be incredibly surprising. It would be less surprising if we were the most important members of a smaller group&#8211;&#8211;and we would be if humanity went extinct in the near term. 
Evidence favours the hypothesis which would make our observations less surprising, so our importance is evidence for near-term extinction.</p><p>Mogensen&#8217;s argument is closely related to the &#8216;original&#8217; doomsday argument, which uses the fact that we are early rather than the fact that we seem unusually influential&#8211;&#8211;see Leslie (1998) and Carter and McCrea (1983). However, Mogensen considers the original argument to have a flaw that his own version does not inherit. The problem with the argument is that the evidence it gives us is usually balanced out by another piece of evidence. The &#8220;surprisingly early&#8221; fact is entailed by the principle that you should reason as if you are randomly selected from observers in your reference class (see Bostrom, 2002). When you observe that you are surprisingly early, it is particularly surprising if humanity survives for eons. However, this principle is always exactly counterbalanced by another principle, that you ought to assign initially higher credence to worlds with many observers. The intuition is that we would be incredibly lucky to be amongst the only observers in the universe&#8211;&#8211;so you must assign higher probability to the hypothesis according to which your reference class is bigger (see Dieks, 1992). Mogensen notes that another possible objection is that the argument depends on a particular choice of reference class, but he thinks that this is a more general problem rather than a specific objection to this argument (see Goodman, 1983 and Hajek, 2007). However, our apparent importance is an additional piece of evidence that really can change the chance of extinction.</p><h2>How strong is this evidence?</h2><p>Mogensen argues the evidence is strong enough to command a dramatic shift in our beliefs. No matter what you thought before, our importance should be evidence that extinction looms larger than previously suspected. We don&#8217;t have enough information to know exactly how strong this evidence is, but we can approximate the strength of the evidence through some reasonable estimates.&nbsp;</p><p>From the dawn of time till the end of the world, how many people will there be? There have been about 60 billion so far.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a><sup> </sup>For simplicity, suppose that if we go extinct soon, there will be 100 billion in total. If we survive, there might be trillions more&#8211;&#8211;suppose 100 trillion, for concreteness. Finally, suppose that regardless of whether or not we prevent extinction, we are amongst the most important 10 billion people to ever live.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>How much more surprising would our survival be given these estimates? Our position amongst the 10 billion most important people would only be a little surprising amongst 100 billion people&#8211;&#8211;we would occupy the top 10%. However, our position would be <em>extraordinarily </em>surprising if humanity survived. 
A total of 100 trillion people would place us in the most important 0.01%.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> This means our evidence&#8211;&#8211;being amongst the 10 billion most important people&#8211;&#8211;is about 1,000 times more likely if humanity goes extinct soon than if we survive.</p><p>This is strong evidence&#8211;&#8211;strong enough that it would warrant a dramatic shift from a 5% chance to a 98% chance of extinction. We can explain this using the concept of the &#8220;odds ratio&#8221;, which is used in betting and statistics, and gives an easy way to update on new evidence. A 5% chance is equivalent to an odds ratio of 1 to 19. We can think of 1 to 19 odds as saying that there are 20 total possibilities (19+1), of which only one involves extinction (so there is a 1/20 or 5% chance of extinction) and 19 involve us surviving the next few centuries (a 19/20 chance). Our position amongst the 10 billion most important people would be 1,000 times as surprising if we survived the next few hundred years. This moves the odds ratio from 1 to 19 all the way up to 1,000 to 19. That is, of 1,019 total possibilities, 1,000 involve extinction&#8211;&#8211;in other words, a 98% chance of extinction. The broader point is that&#8211;&#8211;whatever your initial beliefs&#8211;&#8211;our importance provides very strong evidence for extinction.</p>
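<p>For readers who want to check the arithmetic, here is a minimal sketch of the update in code (the function name and the choice to work directly in odds are mine; the population figures and the 5% prior are the estimates above):</p><pre><code># Odds-ratio update on the observation that we are among the top_n most
# important people ever to live.
def posterior_extinction(prior, total_if_doom, total_if_survival, top_n):
    prior_odds = prior / (1 - prior)                 # a 5% chance = odds of 1 to 19
    likelihood_ratio = (top_n / total_if_doom) / (top_n / total_if_survival)
    posterior_odds = prior_odds * likelihood_ratio   # 1:19 becomes 1,000:19
    return posterior_odds / (1 + posterior_odds)

# Mogensen's illustrative numbers: 100 billion people in total if we go
# extinct soon, 100 trillion if we survive, and we are in the top 10 billion.
print(posterior_extinction(0.05, 1e11, 1e14, 1e10))  # ~0.981, the ~98% above
# Footnote 2's variant: around 200 billion people conditional on extinction.
print(posterior_extinction(0.05, 2e11, 1e14, 1e10))  # ~0.963, i.e. ~96%
</code></pre>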
<h1><strong>Conclusion</strong></h1><p>Mogensen&#8217;s argument presents our position of relative importance as evidence that we are likely to go extinct in the short term. This could be because the risk of extinction is higher than previously thought, or because we cannot affect the current extinction risk. If the overall risk is high because we cannot affect it, then we are not living at a time of unique risk&#8211;&#8211;and we are not particularly important. Mogensen thinks that you could take the evidence either way&#8211;&#8211;though he slightly favours the conclusion that we are not so important after all.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!SVT4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8274f4c2-f1eb-4729-b00e-fac024816263_4679x2624.jpeg" width="1456" height="817" alt="Silhouette of a boat on the water at sunset" /></figure></div><p><em>Photo by <a href="https://www.pexels.com/photo/silhouette-photography-of-boat-on-water-during-sunset-1118874/">Johannes Plenio</a></em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See Sagan (1994), Leslie (1998) and Parfit (2011).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Other sources give <a href="https://www.prb.org/articles/how-many-people-have-ever-lived-on-earth/">different estimates</a>. It is likely that 60 billion is lower than the true number, but this doesn&#8217;t affect the end result. If we expected there to be around 200 billion people in total conditional on near-term extinction, then this argument would imply a ~96% chance of extinction, rather than ~98%. 
These numbers represent the fact that this argument <em>powerfully </em>pushes up the chances of near-term extinction, but the exact number should be taken to be approximate.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>It makes no difference to the argument whether you think we&#8217;re in the most important 1 billion, 100 million, or something else compatible with our seemingly incredible importance.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Surviving puts us in the most important 10 billion/100 trillion = 10<sup>10</sup>/10<sup>14</sup> = 0.01% of people.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><strong>Sources</strong>:<br><br>Nick Bostrom (2002). <em><a href="https://www.routledge.com/Anthropic-Bias-Observation-Selection-Effects-in-Science-and-Philosophy/Bostrom/p/book/9780415883948">Anthropic Bias: Observation Selection Effects in Science and Philosophy</a></em>. Routledge.</p><p>Brandon Carter and William H. McCrea (1983). <a href="https://royalsocietypublishing.org/doi/10.1098/rsta.1983.0096">The Anthropic Principle and its implications for biological evolution</a>. <em>Philosophical Transactions of the Royal Society A</em> 310.</p><p>Dennis Dieks (1992). <a href="https://academic.oup.com/pq/article-abstract/42/166/78/1541942?redirectedFrom=PDF">Doomsday &#8211; or: the danger of statistics</a>. <em>Philosophical Quarterly</em> 42/166.</p><p>Nelson Goodman (1983). <em><a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674290716">Fact, Fiction, and Forecast</a></em>. Harvard University Press.</p><p>Alan H&#225;jek (2007). <a href="https://link.springer.com/article/10.1007/s11229-006-9138-5">The reference class problem is your problem too</a>. <em>Synthese</em> 156.</p><p>John Leslie (1998). <em><a href="https://www.routledge.com/The-End-of-the-World-The-Science-and-Ethics-of-Human-Extinction/Leslie/p/book/9780415184472">The End of the World: The Science and Ethics of Human Extinction</a></em>. Routledge.</p><p>Derek Parfit (2011). <em><a href="https://global.oup.com/academic/product/on-what-matters-9780198778608">On What Matters</a></em>. Oxford University Press.</p><p>Carl Sagan (1994). <em><a href="http://www.randomhousebooks.com/books/159735/">Pale Blue Dot: A Vision of the Human Future in Space</a></em>. Random House.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Summary — When should an effective altruist donate? 
By William MacAskill]]></title><description><![CDATA[Simple explanations of important research.]]></description><link>https://www.millionyearview.com/p/whentodonate</link><guid isPermaLink="false">https://www.millionyearview.com/p/whentodonate</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Fri, 29 Sep 2023 11:24:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fa91c6c5-6160-4101-94db-55830065e930_5390x5389.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This summary was <a href="https://globalprioritiesinstitute.org/summary-when-should-an-effective-altruist-donate/">first published</a> on the Global Priorities Institute website; the original paper can be found <a href="https://globalprioritiesinstitute.org/william-macaskill-when-should-an-effective-altruist-donate/">here</a>.</em></p><p>Effective altruists seek to do as much good as possible given limited resources, often by donating to important causes like global health and poverty, farmed animal welfare, and reducing existential risks. Can we help more by donating now or later? This is the thorny question William MacAskill tackles in the paper &#8220;<a href="https://globalprioritiesinstitute.org/william-macaskill-when-should-an-effective-altruist-donate/">When should an effective altruist donate?</a>&#8221;. He explores several considerations and presents a simple framework to help individual donors and philanthropic organisations decide for themselves. In MacAskill&#8217;s view, the most important considerations are:</p><ul><li><p>Do we have particularly good opportunities to invest in movement growth?</p></li><li><p>How quickly are the best opportunities being funded?</p></li><li><p>How will our ability to identify philanthropic opportunities change over time?</p></li><li><p>To what extent will our values improve over time?</p></li></ul><h2><strong>The intuition behind waiting to donate</strong></h2><p>Some may be surprised that, rather than donating as soon as possible, it can be more effective to invest today and thereby donate more tomorrow. To strengthen our intuition for this idea, consider a slightly different context: an altruistic 18-year-old deciding how to maximise the impact they&#8217;ll have on the world. Rather than trying to have an impact immediately, it could be better for them to attend university first. In other words, we might encourage them to invest in their skills and networks at present, so that they can have a larger impact further down the line. Likewise, if returns on investment are sufficiently high, a donor might be able to help a greater number of future people by investing now to donate later.</p><h2><strong>Important considerations about when to donate</strong></h2><p>MacAskill identifies six important considerations about when to donate, and gives a qualitative framework for weighing them (pp. 14-15).</p><h3><strong>1. Special relationships</strong></h3><p>Our special relationship to the present generation might encourage us to assign greater weight to their wellbeing and interests. We have special relationships with our family, friends and conationals. Perhaps we also have special relationships with all people alive today, compared to people in the future. 
Though moral philosophers typically argue that the interests of future people are in some fundamental sense just as important as our own, MacAskill thinks &#8212; given our moral uncertainty &#8212; we should still give slightly more weight to the interests of people with whom we have special relationships.</p><h3><strong>2. Changing opportunities</strong></h3><p>All else equal, we should give when we have the best opportunities. Whether our opportunities are getting better or worse depends largely on two opposing effects. Firstly, the world is getting richer. Over time, this means that the most effective interventions will have already received sufficient funding, and the quality of our remaining opportunities will be worse. Secondly, we are also discovering new interventions (e.g., one of the best current ways to combat malaria is through distributing bed-nets, but in the future we might be able to cheaply eradicate malaria using gene drives). It&#8217;s difficult to know which of these effects is stronger, but MacAskill tentatively leans towards thinking our opportunities are getting worse over time.</p><h3><strong>3.</strong> <strong>Getting better knowledge</strong></h3><p>Even if our opportunities are getting worse over time, we may be getting better at recognising which of our opportunities are best. In general, our ability to identify important opportunities has been improving: for instance, we increasingly run randomised controlled trials that indicate which charitable interventions are the most helpful, and this information is increasingly aggregated and analysed by organisations like GiveWell, leading to better information about which charities are likely to be the most cost-effective. This consideration tends towards giving later.</p><h3><strong>4. Value changes</strong></h3><p>If you decide to give later, your values may change in the meantime &#8212; indeed, many people have different values in their sixties when compared to their twenties. These changes might be viewed in a positive way &#8212; perhaps they will be the result of deliberation and improved understanding. They might, however, also be driven by less positive factors &#8212; for instance, if you became wealthier and at the same time came to believe that wealthy people shouldn&#8217;t donate much at all, then this change could be seen as suspiciously self-serving. Overall, MacAskill thinks that <em>&#8220;...we should defer to our future selves on our values, </em>if <em>it&#8217;s the case that we think that their values are the result of &#8216;good&#8217; processes, such as careful reflection, discussion with peers, consideration of moral argument, and so on&#8221; </em>(p. 12). However, he also argues that not all value changes are good.</p><h3><strong>5. Movement growth</strong></h3><p>The effect of donations on <em>movement growth &#8212; </em>for example, donating to an organisation that uses the money to fundraise &#8212; can be viewed as a kind of investment that might have exceptional and compounding returns. For example, Giving What We Can estimates that each dollar donated to them eventually results in $100 in donations towards their top charities.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><h3><strong>6. Amount of money to donate</strong></h3><p>This consideration is deceptively simple in theory: all else equal, you should donate whenever you are able to give the most money. 
In the paper, MacAskill goes into detail on this category, considering taxes, uncertainty and financial investments. Perhaps surprisingly, weakness of will is relevant here: if you are likely to fail to meet your commitment to giving in the future due to weakness of will, and you can&#8217;t mitigate this by setting up a donor-advised fund, then perhaps you should donate earlier &#8212; even if it&#8217;s less effective &#8212; rather than risk failing to donate later.</p><p>Photo by <a href="https://www.pexels.com/photo/bell-tower-of-old-cathedral-against-cloudy-sky-in-rain-4124336/">Danish Ahmad</a>.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This comes from <a href="https://web.archive.org/web/20191021195334/https://www.givingwhatwecan.org/impact/">an estimate done in 2015</a>, where the best-guess estimate was $104 and the lower-bound estimate was $6. More recent estimates are lower, <a href="https://www.givingwhatwecan.org/impact">at around $30</a>.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Is this the most important century?]]></title><description><![CDATA[This is a summary of the paper &#8220;Are we living at the hinge of history?&#8221; by William MacAskill. Are we in history's most influential era? MacAskill evaluates the dilemma faced by altruists&#8212;whether to expend resources immediately or to strategically invest for future impact.]]></description><link>https://www.millionyearview.com/p/hinge</link><guid isPermaLink="false">https://www.millionyearview.com/p/hinge</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Thu, 14 Sep 2023 08:30:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49065630-9f7a-4827-b0a1-4718e192a4ab_3400x2825.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is a summary of the paper <a href="https://globalprioritiesinstitute.org/william-macaskill-are-we-living-at-the-hinge-of-history/">&#8220;Are we living at the hinge of history?&#8221; by William MacAskill</a>. 
The summary was <a href="https://globalprioritiesinstitute.org/summary-are-we-at-the-hinge-of-history/">first published</a> on the Global Priorities Institute website.</em></p><p>Longtermist altruists &#8211; who care about how much impact they have, but not about when that impact occurs<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> &#8211; have a strong reason to invest resources before using them directly.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become $17,000 in 200 years.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> However, some people argue that we are living in an unusual time, during which our best opportunities to improve the world are much better than they ever will be in the future. If so, perhaps we should spend our resources as soon as possible.</p><p>In &#8220;Are we living at the hinge of history?&#8221;, William MacAskill investigates whether actions taken now are likely to be much more influential than actions taken at other times in the future. (&#8216;Influential&#8217; here refers specifically to how much good we expect to do via direct monetary expenditure &#8211; the consideration most relevant to our altruistic decision to spend now or later.) After making this &#8216;hinge of history&#8217; claim more precise, MacAskill gives two main arguments against it: the base rate and inductive arguments. He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think the &#8216;hinge of history&#8217; claim holds true.</p>
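<p>The compounding arithmetic behind that figure is quick to check (a two-line sketch; the variable names are mine):</p><pre><code># $1 invested at 5% per year, compounding for 200 years.
principal, rate, years = 1.0, 0.05, 200
print(principal * (1 + rate) ** years)  # ~17,292: roughly the $17,000 quoted above
</code></pre>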
<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!aAeG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49065630-9f7a-4827-b0a1-4718e192a4ab_3400x2825.png" width="1456" height="1210" alt="Global GDP over the long run" /></figure></div><p>Source: <a href="https://ourworldindata.org/grapher/global-gdp-over-the-long-run">Our World in Data</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><h2><strong>The base rate argument</strong></h2><p>When we think about the entire future of humanity, we expect there to be a lot of people, and so we should initially be very sceptical that anyone alive today will be amongst the most influential human beings. 
Indeed, if humanity doesn&#8217;t go extinct in the near future, there could be a vast number of future people &#8211; settling around just 0.1% of the stars in the Milky Way, each with the same population as Earth, would mean there were 10<sup>24</sup> (a trillion trillion) people to come. Suppose that, before inspecting further evidence, we believe that we are about as likely as anyone else to be particularly influential. Then our initial credence that any given person alive today is amongst the million most influential people would be 1 in 10<sup>18</sup> (1 in a million trillion).</p><p>From such a sceptical starting point, we would need extremely strong evidence to become convinced that we are presently in the most influential era. Even if there were only 10<sup>8</sup> (one hundred million) people to come, then in order to move from this extremely sceptical position (1 in 10<sup>8</sup>) to a more moderate position (1 in 10), we would need evidence about 3 million times as strong as a randomised controlled trial with a p-value of 0.05.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> MacAskill thinks that, although we do have some evidence that indicates we may be at the most<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> influential time, this evidence is not nearly strong enough.</p>
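<p>A rough sketch of the required strength of evidence, measured as a Bayes factor (a likelihood ratio between the two hypotheses). The figure of roughly 3 for a trial significant at p = 0.05 follows Benjamin et al. (2018), and treating &#8220;times as strong&#8221; as a ratio of Bayes factors is my own gloss:</p><pre><code># Evidence needed to move from a 1-in-100,000,000 prior to a 1-in-10 posterior.
prior = 1 / 10**8
posterior = 1 / 10
required_bayes_factor = (posterior / (1 - posterior)) / (prior / (1 - prior))
print(required_bayes_factor)       # ~1.1e7
# One randomised controlled trial significant at p = 0.05 corresponds to a
# Bayes factor of roughly 3 (Benjamin et al., 2018), so we would need evidence:
print(required_bayes_factor / 3)   # ~3.7e6 -- about 3 million times as strong
</code></pre>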
<h2><strong>The inductive argument</strong></h2><p>There is another strong reason to think our time is not the most influential, MacAskill argues:</p><p>Premise 1: Influentialness has been increasing over time.</p><p>Premise 2: We should expect this trend to continue.</p><p>Conclusion: We should expect the influentialness of people in the future to be greater than our own influentialness.</p><p>Premise 1 is best illustrated with an example: a well-educated and wealthy altruist living in Europe in 1600 would not have been in a position to know about the best opportunities to shape the long-run future. In particular, most of the existential risks they faced (e.g. an asteroid collision or supervolcano) were not known, nor would they have been in a good position to do anything about them even if they were known. Even if they had the scientific knowledge that we have, they might have used it to pursue a worse moral view.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Indeed, it is likely that future generations will discover ways in which we are misguided, both morally and scientifically. If we are mistaken enough, our (well-intentioned) present actions could actually be doing harm. Premise 2 says that we should expect this trend of improvement to continue, which is especially plausible because we can already identify gaps in our scientific, technological and moral understanding. Overall, this argument indicates that we should expect future generations to be more influential than we are.</p><h2><strong>Reasons why our time might be unusual</strong></h2><p>MacAskill also discusses several reasons one might think that our time is unusual, and therefore may be unusually influential. Our time is unusual because we currently live on a single planet, while most people who will ever live will (in expectation) be part of an interplanetary civilisation.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> We also live at a time of extreme technological progress which cannot continue indefinitely: our current economic growth rate is around 3.5%, but even 2% annual growth over the next 10,000 years would result in an economic output of 10<sup>19</sup> (ten million trillion) times the current world GDP <em>for every atom in the galaxy</em>.</p>
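<p>As a rough sanity check on that claim, here is a small sketch of the compounding arithmetic. The figure of about 10<sup>67</sup> atoms in the Milky Way is a standard order-of-magnitude estimate, not a number from the paper:</p><pre><code>import math

# Order-of-magnitude check on 2% annual growth sustained for 10,000 years.
log10_growth = 10_000 * math.log10(1.02)    # ~86: output grows ~10^86-fold
log10_atoms_in_galaxy = 67                  # rough standard estimate (assumption)

# Output per atom in the galaxy, relative to current world GDP:
print(round(log10_growth - log10_atoms_in_galaxy))   # ~19, i.e. ~10^19</code></pre>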
<p>There are three important ways in which this could make our time unusually influential:&nbsp;</p><ol><li><p>Our single planet is a single point of failure, which may make the risk of extinction temporarily higher than usual.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p></li><li><p>While we live on a single planet, the most influential people today may have an unusual ability to influence humanity as a whole &#8211; both because they can communicate near instantaneously with almost everyone and because their resources are a relatively large fraction of the total. If humanity becomes a much larger space-faring civilisation, both of these things will likely change.</p></li><li><p>Plausibly, the fate of the future will be decided by how we handle some particular technology (such as artificial intelligence or particularly dangerous new weapons), and we are more likely to discover such a technology during a period of rapid growth.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p></li></ol><p>However, each of these arguments has important caveats. In relation to the first, most people who are worried about existential risk believe that a large part of the risk comes from misaligned artificial intelligence,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> and this risk would not be significantly reduced by planetary diversification. In relation to the second, this period of unusual influence may be prolonged if our civilisation stays earthbound for thousands of years, or if it simply takes longer than we expect to leave the solar system. (Light takes only about an hour to traverse the full diameter of the asteroid belt, so the ability of the most influential people to influence humanity as a whole may remain high for quite some time.) In relation to the third, perhaps this period of remarkable economic growth will last longer than most anticipate. Even if it is short, one could argue that longtermists will be less influential during periods of high economic growth, because the unpredictability of a rapidly changing environment hinders the execution of very long-term projects. Overall, MacAskill thinks that these arguments provide some evidence that our time may be the most influential. However, the base rate and inductive arguments show that we should be extremely sceptical that we live at the most important time &#8211; and the evidence presented in this section does not seem strong enough to overcome them.</p><p>Overall, we probably do not live at the &#8216;hinge of history&#8217;. If we did, this would give us a powerful reason to spend now rather than investing to have a much larger impact later. Instead, the case for investment remains strong.</p><h2><strong>References</strong></h2><p>Daniel Benjamin <em>et al.</em> (2018). <a href="https://www.nature.com/articles/s41562-017-0189-z">Redefine statistical significance.</a> <em>Nature Human Behaviour</em> 2.</p><p>Nick Bostrom (2014). <em><a href="https://global.oup.com/academic/product/superintelligence-9780199678112">Superintelligence: Paths, Dangers, Strategies.</a></em> Oxford University Press.</p><p>Hilary Greaves and William MacAskill (2021). <a href="https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/">The case for strong longtermism</a>. <em>GPI Working Paper No. 5-2021.</em></p><p>William MacAskill (2022). <a href="https://oxford.universitypressscholarship.com/view/10.1093/oso/9780192894250.001.0001/oso-9780192894250-chapter-13">Are we living at the hinge of history?</a> <em>Ethics and Existence: The Legacy of Derek Parfit.</em> Oxford University Press. Edited by Jeff McMahan, Tim Campbell, James Goodrich, and Ketan Ramakrishnan.</p><p>William MacAskill (2019). <a href="https://globalprioritiesinstitute.org/william-macaskill-when-should-an-effective-altruist-donate/">When should an effective altruist donate?</a> <em>GPI Working Paper No. 8-2019.</em></p><p>Toby Ord (2020). <em><a href="https://www.bloomsbury.com/uk/precipice-9781526600219/">The Precipice: Existential Risk and the Future of Humanity</a></em>. Bloomsbury Publishing.</p><p>Carl Sagan (1994). <em><a href="http://www.randomhousebooks.com/books/159735/">Pale Blue Dot: A Vision of the Human Future in Space.</a></em> Random House.</p><p>Philip Trammell (2021). <a href="https://globalprioritiesinstitute.org/dynamic-public-good-provision-under-time-preference-heterogeneity-theory-and-applications-to-philanthropy-philip-trammell-global-priorities-institute-and-department-of-economics-university-of-oxford/">Dynamic public good provision under time preference heterogeneity: theory and applications to philanthropy</a>. <em>GPI Working Paper No. 
9-2021.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See Greaves and MacAskill (2021), or the <a href="https://globalprioritiesinstitute.org/summary-the-case-for-strong-longtermism/">summary of their paper</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>See MacAskill (2019) and Trammell (2021).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Of course, <em>inflation</em> decreases what you can buy with the same sum in the future, but here we are talking about <em>real</em> returns (which account for inflation), so you could buy what $17,000 would buy today.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>World Bank (2023); Bolt and van Zanden &#8211; Maddison Project Database 2023 (2024); Maddison Database 2010 &#8211; with major processing by Our World in Data. &#8220;Global GDP over the long run &#8211; World Bank, Maddison Project Database, Maddison Database &#8211; Historical data&#8221; [dataset]. World Bank, &#8220;World Bank World Development Indicators&#8221;; Bolt and van Zanden, &#8220;Maddison Project Database 2023&#8221;; Angus Maddison, &#8220;Maddison Database 2010&#8221; [original data]. Retrieved May 28, 2024 from https://ourworldindata.org/grapher/global-gdp-over-the-long-run</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Here the &#8216;Bayes factor&#8217; is used as a measure of the strength of a piece of evidence: a precise mathematical measure of how much rational beliefs should change in response to that evidence. The Bayes factor required to move from 1 in 100 million to 1 in 10 would be 10 million (because 1/10 = 10 million/100 million). Under plausible assumptions, the Bayes factor of a randomised controlled trial with a p-value of 0.05 is approximately 3 (Benjamin et al., 2018, p. 7), so we would need evidence with a Bayes factor about 3 million times as large.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>One might try to defend a more modest position, instead claiming that this is just one enormously influential time (rather than the most influential), or that it is only the most influential relative to times we can plausibly pass resources to (the next thousand years or so). 
Indeed, these more modest claims would require weaker evidence to defend, but we correspondingly have less evidence with which to defend them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>They would likely have believed that non-male, non-white or non-Christian people were less valuable, that strong social hierarchy and slavery were natural, and that homosexuality and premarital sex were deeply immoral.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Even if our prospects for becoming an interplanetary civilisation were low, most future people would be part of one (in expectation). This is because an interplanetary civilisation could be very large &#8211; there could be many planets with the population of Earth, and they could sustain life much longer.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>See Sagan (1994) and Ord (2020).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>See Bostrom (2014) and Ord (2020).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Toby Ord (2020) estimates that two thirds of the total risk this century comes from misaligned AI.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Can the law protect future generations? ]]></title><description><![CDATA[A summary of the paper "Protecting future generations: A global survey of legal academics" by Eric Mart&#237;nez and Christoph Winter. The summary was written by Riley Harris and first posted on the Legal Priorities Blog. As longtermists, we believe it is crucial to shape the long-term future for the better. Legal longtermists are particularly interested in how the law can be used to safeguard the interests of future generations and improve the lives of people in the coming centuries and millennia. In &#8220;]]></description><link>https://www.millionyearview.com/p/protecting-future-generations</link><guid isPermaLink="false">https://www.millionyearview.com/p/protecting-future-generations</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Fri, 01 Sep 2023 06:48:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7f52e9cd-a1bd-493e-8a18-3dc031abf738_5083x8192.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>A summary of the paper "Protecting future generations: A global survey of legal academics" by Eric Mart&#237;nez and Christoph Winter. The summary was written by Riley Harris and first posted on the <a href="https://www.legalpriorities.org/blog/2023/summary-protecting-future-generations/">Legal Priorities Blog</a>.</em></p><p>As longtermists, we believe it is crucial to shape the long-term future for the better. 
Legal longtermists are particularly interested in how the law can be used to safeguard the interests of future generations and improve the lives of people in the coming centuries and millennia. In &#8220;<a href="https://www.legalpriorities.org/research/protecting-future-generations.html">Protecting future generations</a>&#8221;, Eric Mart&#237;nez and Christoph Winter find that legal academics tend to believe that future generations deserve greater legal protections.</p><h2>Do future generations deserve more protection?</h2><p><em>Longtermism</em> is the idea that we should focus on making the future better. This is because the future is incredibly important &#8212; and if we can make it better in predictable ways, then we could impact the lives of billions or trillions of people. <em>Legal longtermists</em> focus on how legal systems can protect future generations and improve the long-run future. This survey looks at a few open questions in legal longtermism &#8212; for example, can and should the law protect future generations? To answer these questions, Mart&#237;nez and Winter surveyed 516 legal experts from top English-speaking and common-law universities around the world.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Legal systems tend not to protect future generations.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> For example, in the United States, cost-benefit analysis is used to make regulatory decisions &#8212; but the process &#8220;discounts&#8221; away the interests of future generations<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> in a way that seems unjustified to most philosophers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>In the legal system, one of the most important rights is the ability to take legal action. In the United States, this right can be invoked when you are harmed or placed in immediate danger. Harms to future generations do not fit these criteria.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> However, over two-thirds of surveyed legal experts thought there was some legal basis for people in the next 100 years to sue, in at least some cases. Strikingly, more than half also thought the same was true for those living more than 100 years from now. This may indicate opportunities to assert the rights of future generations with the right cases and arguments.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>However, future generations have never been allowed to sue (Bogojevi&#263;, 2020), and legal experts think that future generations should be protected more than they are today. According to the surveyed experts, future generations are, on average, only protected a third as much as they should be. The experts also thought that the current generation should be protected more than it is now, and indeed more than future generations should be. 
They also found that other groups, like animals and people from other countries, are not protected as much as they should be, but future generations are by far the most neglected by the law.</p><h2>Protecting future generations</h2><p>The law sometimes has long-term effects: for example, ancient Roman law still influences many civil law systems (Watson, 1991). In other cases, effects may be short-lived or unpredictable.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> For the legal longtermist, everything hinges on whether the law provides predictable and feasible ways to help future generations. Legal experts think the law can provide protection to future generations, and that it is one of the best ways to predictably help them. Mart&#237;nez and Winter estimate<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> that around 74% of legal academics somewhat agree that the law can help people more than a century from today, while about 73% somewhat agree that legal methods are among the most predictable and feasible ways of doing so. About 41% somewhat agree that there are even ways for the law to protect people living a millennium from now!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a><em><br></em></p><p>Next, Mart&#237;nez and Winter explored the most promising areas of law from the perspective of legal longtermism. Many show at least some promise &#8212; for each area in the survey, at least half of legal academics somewhat agreed that the area could be helpful. However, legal academics were most confident in environmental law and constitutional law. The most promising constitutional mechanisms were ensuring that 1% of GDP goes towards safeguarding humanity against existential risks and giving explicit legal standing to future generations,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> though the differences here were small.</p><p>Finally, they asked about causes that longtermists tend to find important, to identify the most promising ones for legal intervention. Although at least half of the academics thought all the areas might be helped, they had lower confidence in artificial intelligence (56%) than in the others &#8212; for instance, climate change (84%).</p><h2>Conclusion</h2><p>Eric Mart&#237;nez and Christoph Winter surveyed legal academics and found that they generally believe that future generations deserve more protection. The law provides a promising avenue for delivering these protections &#8212; in particular environmental and constitutional law. Future research could explore the attitudes of different groups towards legal longtermism, try to compare specific policies rather than broad areas of law, or investigate other legal concepts such as personhood.</p>
<p>Cover photo by <a href="https://www.pexels.com/photo/the-denver-post-office-and-federal-court-house-3751006/">Colin Lloyd</a>.</p><h2>Sources</h2><p>Renan Ara&#250;jo &amp; Leonie Koessler (2021). <a href="https://www.legalpriorities.org/research/constitutional-protection-future-generations.html">The Rise of the Constitutional Protection of Future Generations.</a> <em>Legal Priorities Project Working Paper No. 7-2021.</em></p><p>Sanja Bogojevi&#263; (2020). <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/reel.12345">Human rights of minors and future generations: global trends and EU law particularities</a>. <em>Review of European Comparative &amp; International Environmental Law</em> 29/2.</p><p>John Broome (1994). <a href="https://www.jstor.org/stable/2265483">Discounting the Future</a>. <em>Philosophy &amp; Public Affairs</em> 23/2.</p><p>Tyler Cowen &amp; Derek Parfit (1992). Against the Social Discount Rate. <em><a href="https://www.jstor.org/stable/j.ctt211qw3x">Philosophy, Politics, and Society: Volume 6, Justice Between Age Groups and Generations</a></em>. Yale University Press. Edited by Peter Laslett &amp; James S. Fishkin.</p><p>Moritz A. Drupp, Mark C. Freeman, Ben Groom and Frikk Nesje (2018). <a href="https://www.aeaweb.org/articles?id=10.1257/pol.20160240">Discounting Disentangled.</a> <em>American Economic Journal: Economic Policy</em> 10/4.</p><p>Thomas Ginsburg, Zachary Elkins, and James Melton (2009). <a href="https://www.law.uchicago.edu/news/lifespan-written-constitutions">The Lifespan of Written Constitutions</a>. <em>The University of Chicago Law School.</em></p><p>Andreas Mogensen (2019). <a href="https://globalprioritiesinstitute.org/wp-content/uploads/2020/Andreas_Mogensen_maximal_%20cluelessness.pdf">Maximal Cluelessness.</a> <em>Global Priorities Institute Working Paper No. 2-2019.</em></p><p>Office of Management and Budget (2003). <em><a href="https://obamawhitehouse.archives.gov/omb/circulars_a004_a-4/">Executive Office of the President, Circular A-4: Regulatory Analysis.</a></em></p><p>Derek Parfit (1984). <a href="https://academic.oup.com/book/12484">Reasons and Persons.</a> Oxford University Press.</p><p>Alan Watson (1991). <em><a href="https://ugapress.org/book/9780820312613/roman-law-and-comparative-law/">Roman Law and Comparative Law.</a></em> University of Georgia Press.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><em>Most of the participants were professors (88.6%), and they came from Europe, Oceania, and the Americas, with some from Asia (9.3%) and Africa (5.2%). Most were at least somewhat liberal (80.4%). Some questions were answered by fewer participants, but these differences were unlikely to be a result of chance. 
Results were also mostly consistent across different demographics &#8212; but some questions were answered differently by participants living in Asia.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><em>One exception to this tendency is constitutional law. Around one third of all constitutions mention future generations, usually for environmental protections &#8212; but even in countries with strong constitutional protections, these protections are not enforced. See Ara&#250;jo and Koessler (2021).</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><em>Office of Management and Budget (2003).</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><em>See Parfit (1984), Cowen &amp; Parfit (1992), Broome (1994) and Mogensen (2019). Many economists also agree; see Drupp et al. (2018).</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><em>Other places have different requirements, but nowhere explicitly allows future generations to sue.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><em>It's also possible that there are similarly strong arguments against the legal basis for future generations to sue, or that judges are applying the law incorrectly.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p><em>For instance, constitutions only last around 17 years on average, and several of the founding fathers were sceptical that the US constitution would last more than a generation. See Ginsburg, Elkins and Melton (2009).</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p><em>Here they resample from the survey responses &#8212; using a <a href="https://en.wikipedia.org/wiki/Bootstrapping_(statistics)#Methods_for_bootstrap_confidence_intervals">bias-corrected and accelerated bootstrap</a> to generate confidence intervals that correct for the shape of the response distribution.</em></p></div></div>
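<p>For readers unfamiliar with the technique in the footnote above, here is a minimal sketch of a bias-corrected and accelerated (BCa) bootstrap confidence interval using SciPy. The ratings data and the agreement threshold are made up for illustration; only the method matches the footnote:</p><pre><code>import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 7-point agreement ratings from 516 respondents (illustrative data).
ratings = rng.integers(1, 8, size=516)

def share_agree(x, axis=-1):
    # Share of respondents who at least somewhat agree (rating of 5 or more).
    return np.mean(x >= 5, axis=axis)

# Bias-corrected and accelerated (BCa) bootstrap confidence interval.
res = stats.bootstrap((ratings,), share_agree, confidence_level=0.95,
                      method='BCa', random_state=rng)
print(res.confidence_interval)</code></pre>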
This perhaps indicates that when these questions were asked in an abstract way, experts tended to underestimate how helpful the law could be.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p><em>They also asked about protecting future generations from discrimination, spending 1% of GDP to protect against existential risks, granting explicit standing to future generations, creating a commission to oversee the protection of future generations, and establishing the explicit state goal of protecting the future.</em></p></div></div>]]></content:encoded></item><item><title><![CDATA[Our place in the story of humanity]]></title><description><![CDATA[Summary of &#8220;The Precipice&#8221; (4 of 4)]]></description><link>https://www.millionyearview.com/p/precipice-4</link><guid isPermaLink="false">https://www.millionyearview.com/p/precipice-4</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Mon, 28 Aug 2023 02:14:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!apve!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F160ff1c0-ac58-4c10-a74c-089c28ecc215_5500x5500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This post is the final part of my summary of </em><a href="https://www.bloomsbury.com/uk/precipice-9781526600219/">The Precipice</a><em>, by Toby Ord. <a href="https://www.millionyearview.com/p/precipice-1">Previous </a><a href="https://www.millionyearview.com/p/precipice-2">posts </a><a href="https://www.millionyearview.com/p/playing-russian-roulette-with-the">gave </a>an overview of the existential risks. We learned that some of these risks (especially the emerging anthropogenic risks) are alarmingly high. 
This post explores our place in the story of humanity and the importance of reducing existential risk.</em></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!apve!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F160ff1c0-ac58-4c10-a74c-089c28ecc215_5500x5500.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!apve!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F160ff1c0-ac58-4c10-a74c-089c28ecc215_5500x5500.jpeg" width="1456" height="1456" alt="Image of the Earth"></a></figure></div><p>A single human in the wilderness is nothing exceptional. But together humans have the ability to shape the world and determine the future of our species, planet, and universe.</p><p>We learn from our ancestors, add minor innovations of our own, and teach our children. We are the beneficiaries of countless improvements in technology, mathematics, language, institutions, culture, and art. These improvements make our lives much better than the lives of our ancestors.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>We hope that life will continue to improve. And we could have a lot of time to get things right. Humans have walked the earth for around 200,000 years, but a typical mammalian species lasts for a million years, and our planet will remain habitable for a billion years. This is enough time to eradicate malaria and HIV, eliminate depression and dementia, and create a world free from racism, sexism, torture, and oppression. With so much time ahead of us, we might even figure out how to leave our solar system and settle the stars. If so, we could have a truly staggering number of descendants, who could explore the universe and build wonders and masterpieces better than we can imagine. If we go extinct, all of this will be lost.</p><p>We have always faced a small risk from asteroids, pandemics, and volcanoes. But it was only recently that we began to face larger risks of our own creation. This period of heightened risk began last century with the invention of nuclear weapons (we now have enough to kill everyone on earth). Over the next century we will face additional risk from emerging developments in biotechnology and AI. In the words of Toby Ord, we are standing on &#8220;a crumbling ledge on the brink of a precipice.&#8221;</p><p>Safeguarding humanity is the defining challenge of our time.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> If we rise to it, there may be trillions of people living meaningful lives in the future. If we fail, then in all likelihood we will destroy ourselves. 
The fate of the world rests on our collective decisions.</p><h1>Why should we try to prevent extinction?</h1><p>If a large asteroid were hurtling towards Earth, few would argue against building a deflection system. This indicates that our collective inaction is driven by a shared sentiment that the risk of extinction is low, rather than by a belief that humanity is not worth protecting. However, it is still worth reflecting on why preventing extinction is so important.</p><h2><em>A tragedy on the grandest scale</em></h2><p>Sudden extinction, such as from an asteroid collision, would involve the gruesome deaths of billions of people, perhaps everyone. This alone would make it the most severe tragedy in history.</p><h2><em>The destruction of our potential</em></h2><p>Extinction would destroy our immense potential. Almost all humans that will ever live are yet to be born. Almost all human well-being and flourishing is yet to happen.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> All of this would be lost if the present generation went extinct.</p><h2><em>Intergenerational projects</em></h2><p>Our ancestors set in motion great projects for humanity &#8212; ending war, forging a just world, and understanding the universe. No single generation can complete these projects. But humanity can, with each generation contributing just a little. We benefit immensely from knowledge and wisdom passed down to us from previous generations, and we owe it to our children and grandchildren to protect this legacy and pass it down to them. Extinction would also destroy all cultural traditions, languages, and poetry. We ought instead to protect, preserve, and cherish these things.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><h2><em>Civilisational virtues</em></h2><p>We are accustomed to understanding virtues on an individual level, but we can also think of the collective virtues of humanity. When we fail to take these risks seriously, humanity collectively demonstrates a lack of prudence. When we value our own generation so highly that we put all future generations at risk, we demonstrate a lack of patience. When we fail to prioritise well-known risks, we display a lack of self-discipline. And when we do not rise to the challenge, we display a lack of hope, perseverance, and responsibility for our own actions.</p><h2><em>Cosmic significance</em></h2><p>We may be alone in the universe. If there are no aliens, then all life on Earth may have cosmic significance. Humanity would be in a unique position to explore and understand the universe. We would also have a responsibility to all life, as we would be the only ones who could protect it from harm and promote flourishing on other planets.</p><h2><em>Uncertainty</em></h2><p>Correctly accounting for our uncertainty about the future tends to strengthen the case for protecting our potential, because the stakes are asymmetrical: overinvesting in safety is simply much better than letting everyone die. 
This means that even if we believe the risks are low, so long as we are not completely confident, some efforts to safeguard humanity are warranted.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><h1>Why are existential risks neglected?</h1><h2><em>Are existential risks neglected?</em></h2><p>Humanity spends less money attempting to prevent existential risk than it does on ice cream each year.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> The riskiest emerging technologies are biotechnology and AI (see parts 2 &amp; 3). Yet the international body responsible for the continued prohibition of bioweapons has an annual budget less than that of an average McDonald&#8217;s restaurant.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> And while we spend billions of dollars improving the capabilities of AI systems, we spend only tens of millions of dollars on ensuring their safety. Research similarly neglects the most severe risks: for instance, there is plenty of research on the possible effects of climate change, but scenarios involving more than six degrees of warming are rarely studied or given space in policy discussions (King et al., 2015). There are several reasons for this neglect.</p><h2><em>Existential risk as a global public good</em></h2><p>When one organisation or government reduces the risk, it improves the situation for everyone in the world. Everyone is therefore incentivised to wait for someone else to solve the problem and benefit from the hard work of others. This dynamic happens across generations too: many of the people who would benefit if we safeguard humanity have not even been born yet. We do not yet have robust ways to coordinate on these issues.</p><h2><em>Short-term institutions</em></h2><p>Additionally, political decision-making is notoriously short-term. Existential risk tends to be ignored in favour of more urgent issues. That said, most existential risks are new relative to our political institutions, which have been built up over thousands of years. We only began to have the power to destroy ourselves in the middle of the last century, and only since then has there been serious thought about the possibility of extinction. Perhaps our institutions and practices will gradually adapt.</p><h2><em>Patterns of thinking</em></h2><p>Our brains are not built to grasp these risks intuitively, and several patterns of thinking lead us to neglect existential risk. For instance, we tend to estimate the likelihood of an event based on how easy it is to recall examples of it happening in the past. This <em>availability heuristic</em> serves us well most of the time, but human extinction has never happened, so there are no examples to recall, and the heuristic leads us to ignore even large and growing risks. We also lack sensitivity to the scale of various catastrophes.</p><h1>Sources</h1><p>Biological Weapons Convention Implementation Support Unit (2019). <a href="https://geneva-s3.unoda.org/static-unoda-site/pages/templates/the-biological-weapons-convention/topics/2019-0131%2B2018%2BMSP%2BChair%2Bletter%2Bon%2Bfinancial%2Bmeasures.pdf">Biological Weapons Convention&#8212;Budgetary and Financial Matters.</a></p><p>Mark Nathan Cohen (1989). 
<a href="https://yalebooks.yale.edu/book/9780300050233/health-and-the-rise-of-civilization/">Health and the Rise of Civilization</a>. Yale University Press.</p><p>Joe Hasell, Max Roser, Esteban Ortiz-Ospina and Pablo Arriagada (2022).<a href="https://ourworldindata.org/poverty"> Poverty.</a> <em>Our World in Data</em>. (This article has been updated since <em>The precipice </em>was published in&nbsp;2020).</p><p>David King, Daniel Schrag, Zhou Dadi, Qi Ye and Arunabha Ghosh (2015). <a href="https://www.csap.cam.ac.uk/projects/climate-change-risk-assessment/">Climate Change: A Risk Assessment.</a> <em>Centre for Science and Policy.</em></p><p>IMARC Group (2019). <a href="https://www.imarcgroup.com/ice-cream-market">Ice Cream Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast&nbsp;2019&#8211;2024.</a></p><p>McDonald&#8217;s Corporation (2018). <a href="https://corporate.mcdonalds.com/content/dam/sites/corp/nfl/pdf/McDonald%27s%202017%20Annual%20Report.pdf">Form&nbsp;10-K</a>. (McDonald&#8217;s Corporation Annual Report).</p><p>Max Roser and Esteban Ortiz-Ospina (2019). <a href="https://ourworldindata.org/literacy">Literacy</a>. <em>Our World in Data</em>.</p><p>World Health Organization (2016). <a href="https://apps.who.int/iris/handle/10665/206498">World Health Statistics&nbsp;2016: Monitoring Health for the SDGs, Sustainable Development Goals.</a></p><h1>Notes</h1><p>Image of the earth from<em>: <a href="http://www.tobyord.com/earth">www.tobyord.com/earth</a></em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>While&nbsp;1&nbsp;person in&nbsp;10&nbsp;is so remarkably poor today that they live on less than $2&nbsp;per day, before the Industrial Revolution&nbsp;19&nbsp;out of&nbsp;20&nbsp;people were this poor. Throughout history, only a tiny elite was ever much above subsistence (Hasell, Roser, Ortiz-Ospina &amp; Arriagada, 2022). Our health and education are also much better than ever before. Before the Industrial Revolution, 1 in 10 could read and write; now more than&nbsp;8&nbsp;in&nbsp;10&nbsp;can (Roser &amp; Ortiz-Ospina, 2019). For&nbsp;10,000&nbsp;years, life expectancy was between&nbsp;20&nbsp;and&nbsp;30&nbsp;years; now it is&nbsp;72&nbsp;years (Cohen&nbsp;1989; World Health Organization, 2016). According to Toby Ord, &#8220;It is not that things are great today, but that they were terrible before&#8221; (p. 294).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&nbsp;The importance of safeguarding humanity is familiar at the smallest scale. Consider a child who has a bright future ahead of them. They must be protected from accident, trauma, or lack of education that would prevent their flourishing. We must put safeguards in place to preserve their potential.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>&nbsp;Though Ord focusses on humanity, he does not believe that we are the only source of value in the universe but that we appear to be the only beings capable of shaping the future in a way that is particularly valuable. 
He also uses the term &#8216;humanity&#8217; very inclusively, covering moral agents (perhaps very different from us) that we might become or create.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>We may have duties to properly acknowledge and remedy past horrors. If we went extinct, there would be no opportunity to ever do so.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Indeed, even if we thought the future was likely to be worse than nonexistence, protecting our potential might still be worthwhile. First, some risks would still be clearly worth preventing, such as the risk of stable global totalitarianism. Second, there would be a strong reason to gather more information about the value of the future, and it would be incredibly reckless to let humanity destroy itself now.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>The ice-cream market was estimated at $60 billion in 2018 (IMARC Group, 2019).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>The international body responsible for the continued prohibition of bioweapons has a budget of $1.4 million (Biological Weapons Convention Implementation Support Unit, 2019), compared to an average of $2.8 million to run a single McDonald&#8217;s restaurant (McDonald&#700;s Corporation, 2018, pp. 14, 20).</p></div></div>]]></content:encoded></item><item><title><![CDATA[Playing Russian roulette with the future]]></title><description><![CDATA[Summary of &#8220;The Precipice&#8221; (3 of 4)]]></description><link>https://www.millionyearview.com/p/playing-russian-roulette-with-the</link><guid isPermaLink="false">https://www.millionyearview.com/p/playing-russian-roulette-with-the</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Mon, 21 Aug 2023 02:30:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/dbf16fb8-64d5-45f2-8d76-fa280bf4c2e3_5500x5500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This post is the third part of my summary of </em><a href="https://www.bloomsbury.com/uk/precipice-9781526600219/">The Precipice</a><em>, by Toby Ord. <a href="https://www.millionyearview.com/p/precipice-1">Previous </a><a href="https://www.millionyearview.com/p/precipice-2">posts </a>explored the various sources of existential risks and how to estimate the dangers. This post ties everything together with an overview of the risks we face. 
The final post will explore our place in the story of humanity and the importance of reducing existential risk.</em></p><p>To communicate his impression of the risks accurately, Ord puts numbers on them. These numbers represent his best guesses about the order of magnitude of each risk, based on the research behind his book. They do not represent highly certain estimates of the risks, and new information could easily change them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9dgu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59a1a918-65bb-4555-a7a7-fb1fcacd54ef_5760x6608.png"><img src="https://substackcdn.com/image/fetch/$s_!9dgu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59a1a918-65bb-4555-a7a7-fb1fcacd54ef_5760x6608.png" width="1456" height="1670" alt="Ord&#8217;s estimates of existential risk over the next century"></a></figure></div><p>These estimates indicate that risks from anthropogenic sources (such as nuclear war) tend to be much more dangerous than risks from natural sources (such as asteroids). In fact, nuclear war, climate change, and environmental damage are each at least as dangerous as all natural risks combined. And taken together, anthropogenic risks are around 1,000 times greater than all natural sources combined.<br><br>Even within anthropogenic risks, some technologies pose greater risks than others. Engineered pandemics, unaligned artificial intelligence, and uncategorised or unforeseen anthropogenic risks each present around 100 times the risk from any other single source.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Ord&#8217;s estimate of the total existential risk is one in six this century. This may sound pessimistic, but it implies that we have a five in six chance of surviving this century.</p><p>It is like we are playing Russian roulette with enormous stakes.</p><p>Ord is optimistic, and his estimate assumes that we will recognise the importance of reducing existential risks and take significant steps to reduce them. If we shut our eyes and maintain business as usual, Ord believes we face risks about twice as high &#8212; like playing Russian roulette with two bullets in the chamber. 
But if we get our act together, we could remove both bullets and safeguard humanity.</p><p><em>The next post in this series will explore why existential risks are so important to prevent, our place in the story of humanity, and why existential risks remain neglected today.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YYeq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YYeq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!YYeq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!YYeq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!YYeq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YYeq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3660602,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!YYeq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!YYeq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!YYeq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!YYeq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa427c6-ad32-4b35-ac46-fd414b4e09f1_5500x5500.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div 
class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Image of the earth from<em>: <a href="http://www.tobyord.com/earth">www.tobyord.com/earth</a></em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>&nbsp;A sceptic might believe that Ord&#8217;s estimates are too high. For instance, they might calculate the risk from misaligned artificial intelligence to be&nbsp;1&nbsp;in&nbsp;100&nbsp;this century. Surprisingly, these two views would be close in the sense that only a small amount of scientific evidence would be enough to change one position to the other. They may also be close in terms of their practical implications: even if the risk were&nbsp;1&nbsp;in&nbsp;1,000&nbsp;this century, this would warrant serious global attention.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&nbsp;Other anthropogenic risks we face include the possibility of atomically precise manufacturing democratising the manufacturing of dangerous weapons, the possibility of contaminating Earth with microbes from other planets when we bring back soil samples, and radical science experiments that create truly unprecedented conditions. 
These risks are all particularly speculative, but even if one believes that several of them pose no risk, they do suggest that emerging technologies will bring novel dangers.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Anthropogenic Threats: Unpacking Toby Ord's Exploration of Modern Risks]]></title><description><![CDATA[Book summary of &#8220;The Precipice&#8221; (part 2 of 4)]]></description><link>https://www.millionyearview.com/p/precipice-2</link><guid isPermaLink="false">https://www.millionyearview.com/p/precipice-2</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Sun, 13 Aug 2023 23:51:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rTfd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e0192d8-b606-4c5b-a701-1ac5ff4227db_5500x5500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This post is the second part of my summary of </em><a href="https://www.bloomsbury.com/uk/precipice-9781526600219/">The Precipice</a>,<em> by Toby Ord. The <a href="https://www.millionyearview.com/p/precipice-1">last post</a> was about natural sources of extinction risk and the limited danger they pose over the coming century. This post covers the risks we impose on ourselves. The next post will tie everything together with an overview of the risk landscape, and the final post will explore our place in the story of humanity and the importance of reducing existential risk.</em></p><div class="captioned-image-container"><figure><a href="http://www.tobyord.com/earth"><img src="https://substackcdn.com/image/fetch/$s_!rTfd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e0192d8-b606-4c5b-a701-1ac5ff4227db_5500x5500.jpeg" width="1456" height="1456" alt=""></a></figure></div>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9e0192d8-b606-4c5b-a701-1ac5ff4227db_5500x5500.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3760605,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:&quot;http://www.tobyord.com/earth&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rTfd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e0192d8-b606-4c5b-a701-1ac5ff4227db_5500x5500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rTfd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e0192d8-b606-4c5b-a701-1ac5ff4227db_5500x5500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rTfd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e0192d8-b606-4c5b-a701-1ac5ff4227db_5500x5500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rTfd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e0192d8-b606-4c5b-a701-1ac5ff4227db_5500x5500.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We saw in the last post that our&nbsp;2,000-century track record allows us to estimate that the risk of extinction from natural disasters must be very low. What about our track-record of anthropogenic (human-caused) risks? It has been less than&nbsp;3&nbsp;centuries since the Industrial Revolution and less than a century since the invention of nuclear weapons, so our track record is compatible with a&nbsp;50% risk per century. 
Instead of relying on this track record, we need to examine the details of these risks.</p><h1>Nuclear weapons</h1><p>At&nbsp;3 a.m. one morning in&nbsp;1979, four independent US command centres saw indications of many incoming nuclear warheads. They had only minutes to respond before the bulk of their own missiles would be destroyed by the incoming strike. When they checked the raw data from the early-warning systems, they realised that there was no attack: a realistic simulation of a Soviet attack had accidentally been fed into the live system (Brezhnev, 1979; Gates, 2011; Schlosser, 2013).</p><p>Cold War tensions led us astonishingly close to nuclear war more than&nbsp;32&nbsp;times (US Department of Defense, 1981). What would happen if a nuclear war occurred? The worst-case scenario is an all-out nuclear exchange between two countries with many nuclear weapons, such as the US and Russia. This would kill tens or even hundreds of millions of people in the cities hit by the bombs. Radioactive dust would be blown outward, spreading deadly radiation. Smoke from the burning cities would darken the skies, block out the sun, and cool the earth.</p><p>This would be unlikely to result in extinction. Most people living outside of major cities in the countries that were bombed would survive the initial blast, and the blasts wouldn&#8217;t produce enough radioactive dust to make the entire earth inhospitable. The worst effects would come from the darkening of the sky and the ensuing nuclear winter. Our best models suggest that the growing season might be too short for most crops, in most places, for five years (Robock, Oman &amp; Stenchikov, 2007). Billions of people would be at risk of starvation, but humanity would likely survive by growing less efficient crops, building greenhouses, fishing, and perhaps even farming algae.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><h1>Climate change</h1><p>Carbon dioxide, together with water vapour and methane, creates a kind of gaseous blanket around Earth. This is essential for life (without it, Earth would be a frozen wasteland). Since the Industrial Revolution, we have burned fossil fuels and rapidly increased the amount of carbon dioxide in the atmosphere, from about&nbsp;280&nbsp;parts per million to&nbsp;412&nbsp;parts per million in&nbsp;2019 (Lindsey, 2018; National Oceanic and Atmospheric Administration, 2019). Unless we significantly reduce emissions, this will quickly warm the planet. 
While this probably won&#8217;t be the end of humanity, there is substantial uncertainty about how much we might emit and what effect it will have.</p><p>The Intergovernmental Panel on Climate Change (2014) estimates that a fourfold increase from preindustrial carbon dioxide levels has a two-thirds chance of warming&nbsp;Earth by between&nbsp;1.5&nbsp;and&nbsp;9&nbsp;degrees Celsius (and therefore a one-third chance of warming above&nbsp;9&nbsp;degrees or below&nbsp;1.5&nbsp;degrees). This is before considering feedback loops, such as increased bushfires, which release additional carbon, and the melting of ice that contains trapped greenhouse gases. This leaves us with a substantial chance of very significant warming. Such warming would be an unprecedented disaster, and that is reason enough to stop emissions; still, even&nbsp;20&nbsp;degrees of warming would leave many coastal areas habitable all year round.</p><p>One particularly bad feedback loop involves increased temperatures evaporating water from the oceans, creating a denser blanket of water vapour around Earth and accelerating warming.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Though current research suggests such an effect would not be strong enough to entirely evaporate the oceans, and probably won&#8217;t happen at all, we cannot rule it out. While this is the only known mechanism by which climate change could cause the extinction of humanity, there may be unknown mechanisms, and given that Earth has never seen such a rapid period of warming, we have substantial uncertainty about the eventual effects.</p><h1>Environmental damage</h1><p>The world&#8217;s population grew ever faster between&nbsp;1800&nbsp;and 1968. Seeing this, Paul Ehrlich predicted that in the coming decades this growing population would become unsustainable and there would be &#8220;an utter breakdown of the capacity of the planet to support humanity.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>Because of improvements in agriculture and slower population growth, this breakdown has not yet happened, and the global population is expected to peak at&nbsp;11&nbsp;billion. This is still many more people than Earth has ever supported, and as people grow in material wealth, the per-person strain on the environment rises beyond anything seen before. Does this present an existential risk?</p><p>Resource depletion is unlikely to present any real risk to our potential.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> A bigger threat is biodiversity loss. Some suggest we are witnessing the next mass extinction. While it is difficult to compare current extinction rates with the fossil record, current species loss appears to be both much smaller (1% of species versus at least&nbsp;75%) and much faster (10 to&nbsp;100&nbsp;times faster) than previous mass extinctions. From the perspective of humanity&#8217;s survival, the most important thing is that ecosystems do not break down so far that they stop providing vital services such as purifying water, providing energy and resources, improving our soil, and creating breathable air. 
These risks are not well understood, and if we continue to put enormous pressure on our environment this century, we may face large, currently unforeseen dangers.</p><h1>Emerging pandemic risk</h1><p>From 1347 to 1353, between one-quarter and one-half of Europeans were killed by plague (Ziegler, 1969). After World War I, the&nbsp;1918&nbsp;flu (also known as Spanish flu) spread to six continents, infected over a third of the world&#8217;s population, and killed more people than the war (Taubenberger &amp; Morens, 2006). Neither of these events was devastating enough to end humanity, or even collapse civilisation, and we would likely recover from a pandemic on a similar scale. We can also infer from the fossil record that, like other natural risks, the risk from a <em>natural </em>pandemic must be incredibly low.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> However, improvements in biotechnology mean that we face a substantial new risk from <em>engineered </em>pandemics.</p><p>One risk comes from well-intentioned scientists trying to study viruses. Although most of this research poses no danger to humanity, a few experiments involve trying to give viruses new abilities &#8212; for instance, making them more deadly or transmissible.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> While these experiments take place in the highest-security labs, there have been multiple leaks of deadly pathogens such as smallpox (1971&nbsp;and&nbsp;1978) and anthrax (1979&nbsp;and&nbsp;2015). This is particularly worrying because these labs lack transparency, and we very likely do not know about all of the leaks.</p><p>There is also the threat of misuse. Fifteen countries are known to have developed bioweapons programs at some point in the last century. The largest program was in the Soviet Union, with a dozen labs employing&nbsp;9,000&nbsp;scientists to weaponise diseases like plague and smallpox.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> That we have seen few deaths from bioweapons so far, compared to natural pandemics, is not as reassuring as it first appears. Deaths from war follow a <a href="https://en.wikipedia.org/wiki/Power_law">power-law</a> distribution: most deaths occur in just a few very large wars, while most wars kill far fewer people. If biological risks follow the same pattern, the risk might be quite high even though the death toll to date has been small.</p><p>Biotechnology is also increasingly democratised. It took 13 years (1990 to 2003) and over $500&nbsp;million to produce the first full DNA sequence of the human genome. By 2019 it cost less than $1,000&nbsp;and took less than an hour. While this trend will bring fantastic applications that improve our lives, over the coming century it will also give more and more people access to dangerous pathogens.</p><p>There are clear efforts to reduce these risks, but more is needed. 
For instance, the Biological Weapons Convention of&nbsp;1972&nbsp;is monitored by just four employees with a budget smaller than that of an average McDonald&#8217;s restaurant.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> And while many companies that synthesise DNA are careful to ensure that pathogens don&#8217;t fall into the wrong hands, perhaps only&nbsp;80% of their orders are screened for dangerous pathogens (DiEuliis, Carter &amp; Gronvall, 2017).</p><h1>Artificial intelligence</h1><p>AI first came to dominate tasks that were thought to require our unique human intelligence, such as chess, while progress remained slow and faltering on seemingly simple tasks, such as recognising a dog versus a cat. Now, AI can do many of these tasks too &#8212; for instance, recognising faces better than a human.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> Experts even find it plausible that we could invent a fully general artificial intelligence this century; in a&nbsp;2016&nbsp;survey of researchers who had published in NeurIPS and ICML, the average respondent gave this a&nbsp;50% chance by&nbsp;2061 (Grace et al., 2018).</p><p>Chimpanzees are not going to decide the fate of the world, or even the fate of chimps. Instead, we get to decide, because we are the most intelligent and technologically advanced species. In the absence of other evidence, we should expect that losing our position as the most intelligent species would be a big deal (and perhaps not a change that favours humans).</p><p>Importantly, ensuring that AI is aligned with human values appears to be a difficult and unsolved problem. The methods we have for producing intelligence tend to involve either letting a human specify a reward function and training a neural network to act in ways that produce greater rewards, or letting an AI observe human choices and infer a reward function from them. But humanity&#8217;s values are too complex and subtle to write down as a simple formula, and we do not know how to guide an AI system to learn them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>Initially, we may have the option to turn off an AI system, but over time such systems will likely become resistant to this. Indeed, to maximise its reward function, a system must survive and thwart our attempts to bring its goals in line with human values. Ultimately, an AI system is incentivised to take control of resources and shape the world, wresting control from humans. Since humans would predictably interfere with these goals, it would also be incentivised to hide its true goals until it is powerful enough to resist our attempts to stop it.</p><p>Contrary to Hollywood blockbusters, AI would not need robots in order to gain control. The most powerful figures in history were not the physically strongest; Hitler, Stalin, and Genghis Khan used words to convince millions to fight their battles. AI could well do the same. Even if many humans were left alive, this could permanently destroy humanity&#8217;s potential &#8212; and thus be an existential catastrophe.</p><h1>Dystopian scenarios</h1><p>We could lose humanity&#8217;s potential by letting the world become locked into a permanent state of little value. 
The most obvious scenario is permanent authoritarian rule made possible by advances in technology for detecting and eliminating dissent.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> Even if there remained a slim chance of recovery, an event like this would destroy most of our potential. There is little such risk in the near future, but as technology improves, this may change.</p><p><em>The next post in this series will tie everything together with an overview of the risk landscape, including quantitative estimates of the risk from nuclear war, climate change, pandemics, and artificial intelligence.</em></p><h1>Sources</h1><p>Biological Weapons Convention Implementation Support Unit (2019). <a href="https://geneva-s3.unoda.org/static-unoda-site/pages/templates/the-biological-weapons-convention/topics/2019-0131%2B2018%2BMSP%2BChair%2Bletter%2Bon%2Bfinancial%2Bmeasures.pdf">Biological Weapons Convention&#8212;Budgetary and Financial Matters.</a></p><p>Diane DiEuliis, Sarah R Carter, and Gigi Kwik Gronvall (2017). <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5566836/">Options for Synthetic DNA Order Screening, Revisited</a>. <em>mSphere</em> 2/4.</p><p>Leonid Brezhnev (1979). <a href="https://nsarchive.gwu.edu/document/19902-national-security-archive-doc-03-state">Brezhnev Message to President on Nuclear False Alarm, Diplomatic Cable (No. 1979STATE295771) from Sec State (D.C.) to Moscow American Embassy.</a> <em>National Security Archive, United States Department of State</em>.</p><p>Matthew Collins, Reto Knutti, Julie Arblaster, Jean-Louis Dufresne, Thierry Fichefet, Pierre Friedlingstein, Xuejie Gao, William J Gutowski Jr., Tim Johns, Gerhard Krinner, Mxolisi Shongwe, Claudia Tebaldi, Andrew J Weaver and Michael Wehner (2013). <a href="https://www.cambridge.org/core/books/abs/climate-change-2013-the-physical-science-basis/longterm-climate-change-projections-commitments-and-irreversibility-pages-1029-to-1076/AAAA16E52861380EACB92235100659F7">Long-Term Climate Change: Projections, Commitments and Irreversibility.</a> In <em>Climate Change&nbsp;2013&#8212;The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change</em>. Cambridge University Press.</p><p>Robert M Gates (2011). <a href="https://www.simonandschuster.com/books/From-the-Shadows/Robert-M-Gates/9781416543367">From the Shadows: The Ultimate Insider&#8217;s Story of Five Presidents and How They Won the Cold War.</a> Simon and Schuster.</p><p>Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans (2018). <a href="https://jair.org/index.php/jair/article/view/11222">Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts.</a> <em>Journal of Artificial Intelligence Research&nbsp;</em>62.</p><p>Sander Herfst, Eefje J A Schrauwen, Martin Linster, Salin Chutinimitkul, Emmie de Wit, Vincent J Munster, Erin M Sorrell, Theo M Bestebroer, David F Burke, Derek J Smith, Guus F Rimmelzwaan, Albert D M E Osterhaus and Ron A M Fouchier (2012). <a href="https://www.science.org/doi/abs/10.1126/science.1213362">Airborne transmission of Influenza A/H5N1&nbsp;Virus Between Ferrets.</a> <em>Science&nbsp;</em>336/6088.</p><p>Intergovernmental Panel on Climate Change (2014). 
<a href="https://www.cambridge.org/core/books/abs/climate-change-2014-impacts-adaptation-and-vulnerability-part-a-global-and-sectoral-aspects/summary-for-policymakers/0F96D3E32C820804D2130AE7B551D75B">Summary for Policymakers</a>. In <em>Climate Change&nbsp;2014&#8212;Impacts, Adaptation and Vulnerability: Part A: Global and Sectoral Aspects: Working Group II Contribution to the IPCC Fifth Assessment Report</em>. Cambridge University Press.</p><p>Rebecca Lindsey (2018). <a href="https://www.climate.gov/news-features/understanding-climate/climate-change-atmospheric-carbon-dioxide">Climate Change: Atmospheric Carbon Dioxide</a>. <em>Climate.gov.</em></p><p>Charles C Mann (2018). <a href="https://www.smithsonianmag.com/innovation/book-incited-worldwide-fear-overpopulation-180967499/">The Book that Incited a Worldwide Fear of Overpopulation</a>. <em>Smithsonian Magazine.</em></p><p>McDonald&#8217;s Corporation (2018). <a href="https://corporate.mcdonalds.com/content/dam/sites/corp/nfl/pdf/McDonald%27s%202017%20Annual%20Report.pdf">Form&nbsp;10-K</a>. (McDonald&#8217;s Corporation Annual Report).</p><p>National Oceanic and Atmospheric Administration (2019). <a href="https://gml.noaa.gov/ccgg/trends/">Global Monthly Mean CO2</a>. <em>Global Monitoring Laboratory</em>.</p><p>Max Popp, Hauke Schmidt and Jochem Marotzke (2016). <a href="https://www.nature.com/articles/ncomms10627">Transition to a Moist</a></p><p><a href="https://www.nature.com/articles/ncomms10627">Greenhouse with CO<sub>2</sub>&nbsp;and Solar Forcing.</a><em> Nature Communications&nbsp;</em>7.</p><p>Alan Robock, Luke Oman and Georgiy L Stenchikov (2007). <a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006JD008235">Nuclear Winter Revisited with a Modern Climate Model and Current Nuclear Arsenals: Still Catastrophic Consequences</a>. <em>Journal of Geophysical Research: Atmospheres&nbsp;</em>112/D13.</p><p>Eric Schlosser (2013). <a href="https://www.penguinrandomhouse.com/books/303337/command-and-control-by-eric-schlosser/">Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety.</a> Penguin.</p><p>Jeffery K Taubenberger and David M Morens (2006). <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3291398/">1918&nbsp;Influenza: The Mother of all Pandemics.</a> <em>Emerging Infectious Diseases&nbsp;</em>12/1.</p><p>US Department of Defense (1981). <a href="https://archive.org/details/DODNarrativeSummariesofAccidentsInvolvingUSNuclearWeapons19501980/page/n9/mode/2up">Narrative Summaries of Accidents Involving US Nuclear Weapons (1950&#8211;1980).</a> <em>Homeland Security Digital Library</em>.</p><p>Philip Ziegler (1969). <a href="https://www.harpercollins.com/products/the-black-death-philip-ziegler?variant=32117359214626">The Black Death</a>. 
Harper Collins.</p><p>Image of the earth from<em>: <a href="http://www.tobyord.com/earth">www.tobyord.com/earth</a></em></p><h1>Notes</h1><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This might not even collapse civilisation entirely: places such as New Zealand and the southeast of Australia would avoid the worst effects, being unlikely targets and surrounded by ocean, and could likely survive with most of their technology and institutions intact.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>If this were possible, and we emitted more than the Intergovernmental Panel on Climate Change expects us to even on the high-emissions pathway, then&nbsp;40&nbsp;degrees of warming would be plausible (Collins et al., 2013, p. 1096; Popp, Schmidt &amp; Marotzke, 2016).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>From a speech given in&nbsp;1969 (see Mann, 2018).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>If we failed to find new sources of fossil fuels, this might reduce existential risk from climate change. We have&nbsp;26&nbsp;million litres of accessible fresh water per person, and if we needed to, we could desalinate seawater at a cost of $1&nbsp;per&nbsp;1,000&nbsp;litres. If we began to face shortages of certain metals, markets would likely slow consumption, encourage recycling, and develop alternatives. Indeed, there is no clear danger, though it is possible that there is a (currently unidentified) material that is rare, essential, irreplaceable and difficult to recycle.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>We might adjust our estimate to account for changes in the world. Some increase the risk: the global population is a thousand times greater than over most of human history, our farming practices create vast numbers of unhealthy animals that live in close proximity with humans, and we are more interconnected than ever before. 
Others reduce the risks: we are healthier than our ancestors, we have better sanitation and hygiene, we can fight disease with our improved scientific understanding of pathogens, and we have spread to many different environments throughout the world.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>For example, Dutch virologists published an experiment in which they took a strain of bird flu that could kill over&nbsp;60% of infected people (Taubenberger &amp; Morens, 2006) and modified it to be directly transmissible between mammals (Herfst et al., 2012).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>They reportedly built up a stockpile of more than&nbsp;20&nbsp;tons of smallpox and plague.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>The international body responsible for the continued prohibition of bioweapons has a budget of $1.4&nbsp;million (Biological Weapons Convention Implementation Support Unit, 2019) compared to an average $2.8&nbsp;million to run a McDonald&#8217;s (McDonald&#8217;s Corporation, 2018, pp. 14, 20).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>This only includes advances before&nbsp;2020. I am writing this summary in&nbsp;2023, after three years of qualitative leaps in AI capabilities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Even if we could, these values are uncertain, complex, held by billions of people with slightly different views, and liable to change over time. And solving these problems would be hard even if we assumed that the values of an AI are not also shaped by other motives, such as winning a war or turning a profit.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Such a future might not be forced upon us but might instead be caused by population-level forces. This would be similar to how market forces can create a race to the bottom or how Malthusian population dynamics can push down the average quality of life. It might also be our own choice, likely because the predominant ideology gets something wrong. 
For instance, we may forever fail to recognise some form of injustice, or we may renounce technological advancement, and with it our chances to fulfil our potential.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Asteroids, volcanoes and exploding stars]]></title><description><![CDATA[Summary of &#8220;The Precipice&#8221; (1 of 4)]]></description><link>https://www.millionyearview.com/p/precipice-1</link><guid isPermaLink="false">https://www.millionyearview.com/p/precipice-1</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Mon, 07 Aug 2023 03:37:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2ee1cd0e-c187-414c-a2d1-fb3119801917_5500x5500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This post is the first part of my summary of </em><a href="https://www.bloomsbury.com/uk/precipice-9781526600219/">The Precipice</a><em>, by Toby Ord. It is about what existential risks are, and it explores the natural sources of existential risk. Future posts will explore the danger from other sources, our place in the story of humanity, and the importance of reducing existential risk.<br></em></p><div class="captioned-image-container"><figure><a href="http://www.tobyord.com/earth"><img src="https://substackcdn.com/image/fetch/$s_!LUB-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3128a1ee-4851-4a8b-9b0b-80a26d3eb9e0_5500x5500.jpeg" width="1456" height="1456" alt=""></a></figure></div>
<h1>What is an existential risk?</h1><p>An existential catastrophe is any event that would destroy humanity&#8217;s potential. This could take a few forms:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><ul><li><p><em>Ordinary extinction:</em> Every human on Earth dies, or there are too few survivors to repopulate it.</p></li><li><p><em>Permanent civilisational collapse:</em> An enormous catastrophe collapses civilisation and severely damages the environment in a way that makes it impossible to rebuild. This would be a world without writing, cities, or law. A collapse of civilisation might or might not be an existential catastrophe; it depends on whether we can rebuild.</p></li><li><p><em>A world in chains:</em> The entire world is locked under totalitarian rule. Advanced technology allows permanent and powerful indoctrination, surveillance, and enforcement, leaving no chance for an uprising and no internal or external pressure to change. Like civilisational collapse, this presents an existential catastrophe if the situation is permanent.</p></li></ul><h1>How can we estimate the danger?</h1><p>One way is to assume each risk is negligible until strong scientific evidence establishes that it is higher. 
This ensures that risks are not exaggerated, but it does not usually reflect our current understanding of the risks and might lead to dangerous underestimation of emerging risks.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Instead, Toby Ord begins with an initial impression of the size of each risk, then adjusts this estimate according to the scientific evidence.</p><h1>The sources of natural extinction risk</h1><h2><em>Asteroids and comets</em></h2><p>Sixty-six million years ago an asteroid hit Earth off the coast of Mexico, burning everything within&nbsp;1,000&nbsp;kilometres. The worst effects were caused by a billowing cloud of dust and ash (and sulphate aerosols from the vaporised sea floor), which blocked out the sun and cooled Earth. In the end, every land vertebrate over&nbsp;five&nbsp;kilograms went extinct (Longrich, Scriberas and Wills, 2016).</p><h2><em>Supervolcanic eruptions</em></h2><p>The very largest volcanic eruptions don&#8217;t look like typical volcanoes. Instead of mountains spilling out molten rock, supervolcanoes collapse into a vast craterlike depression (a well-known example is the Yellowstone caldera). One of these eruptions happened&nbsp;74,000&nbsp;years ago in Indonesia. Glowing rocks rained down as far as 100&nbsp;kilometres away, and places as far away as India were covered in a metre-thick blanket of ash. Although this was not close to being an extinction-level event, supervolcanic eruptions present a small risk of civilisational collapse.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Even though we could likely rebuild civilisation, most of the risk here comes from the possibility that such a collapse proves permanent.</p><h2><em>Stellar explosions</em></h2><p>Sometimes large stars explode, instantly releasing the same amount of energy as our sun will over its 10-billion-year lifetime. If this happened close to Earth, it could alter the climate and erode the ozone layer, leaving us exposed to UV radiation.</p><h1>Estimating natural extinction risk</h1><p>There are many other potential dangers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> And our understanding of natural risks is recent and growing. It was only in the 1960s that we learned that Earth may have been hit by a large asteroid and detected the first signs of the bursts of energy emitted by exploding stars. There has been no slowdown in our discovery of new risks, and we do not know what caused several historical mass-extinction events. We should expect to learn about new sources of extinction risk in the coming decades.</p><p>Luckily, we can estimate the total natural extinction risk without complete knowledge of the individual risks by examining our track record. <em>Homo sapiens</em> has survived for over 200,000 years (about 2,000 centuries). If the risk had been 1% per century, then there would have been a&nbsp;99.9999998% chance that we would have gone extinct by now. 
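</p><p><em>These figures are simple compounding arithmetic. Here is a minimal sketch in Python, assuming the 2,000-century track record and reading &#8220;extremely confident&#8221; as a 99.9% confidence level; it also previews the mass-extinction rate discussed below:</em></p><pre><code>n = 2000  # centuries Homo sapiens has survived

# If extinction risk were 1% per century, the chance that we
# would already have gone extinct is overwhelming:
p_extinct_by_now = 1 - 0.99 ** n
print(f"{p_extinct_by_now:.7%}")  # 99.9999998%

# The largest per-century risk consistent with surviving
# 2,000 centuries, at 99.9% confidence:
bound = 1 - 0.001 ** (1 / n)
print(f"{bound:.2%}")  # 0.34%

# Five mass extinctions in the ~540 million years
# (5.4 million centuries) since complex life developed:
print(f"{5 / 5.4e6:.7f}")  # ~0.0000009, about one in a million per century</code></pre><p>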
Based on this, we can be extremely confident that the risk is below&nbsp;0.34% per century, and our best guess is that it is below&nbsp;0.05% per century.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>We might also consider that humans have spread to diverse environments all over the planet, so it&#8217;s likely that only <em>mass</em>-extinction events truly threaten us. There have been five of these events since complex life developed, over&nbsp;540&nbsp;million years ago, making the extinction risk one in a million (0.0001%) per century.</p><p>Where possible, we can supplement this track record with our scientific understanding of the risks to get estimates for individual risks that are sometimes substantially lower than the track record suggests. For instance, we have identified about&nbsp;95% of the asteroids between one and&nbsp;10&nbsp;kilometres in diameter and likely all asteroids greater than&nbsp;10&nbsp;kilometres across, and we know none are going to hit us this century. Astronomers have also estimated the chances of a stellar explosion close enough to destroy&nbsp;30% of the ozone layer at about one in&nbsp;5&nbsp;million.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>Overall, the picture is incredibly reassuring. While it would be prudent to continue to improve our scientific understanding of these risks and to monitor them, they are all very small over the next century.</p><p><em>The next post will begin to explore the existential risks caused by nuclear weapons, climate change, advanced biotechnology, and artificial intelligence.</em></p><h1>Sources</h1><p>Nick Bostrom (2002). <a href="https://www.jetpress.org/volume9/risks.html">Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards</a>. <em>Journal of Evolution and Technology&nbsp;</em>9.</p><p>Nick Bostrom (2013). <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12002">Existential Risk Prevention as Global Priority.</a> <em>Global Policy&nbsp;</em>4/1.</p><p>Dario Buttazzo, Giuseppe Degrassi, Pier Paolo Giardino, Gian F Giudice, Filippo Sala, Alberto Salvio &amp; Alessandro Strumia (2013). <a href="https://doi.org/10.1007/JHEP12(2013)089">Investigating the Near-Criticality of the Higgs Boson.</a> <em>Journal of High Energy Physics&nbsp;</em>2013/89.</p><p>Nicholas R Longrich, J Scriberas, and Matthew A Wills (2016). <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/jeb.12882">Severe Extinction and Rapid Recovery of Mammals across the Cretaceous-Palaeogene Boundary, and the Effects of Rarity on Patterns of Extinction and Recovery.</a> <em>Journal of Evolutionary Biology&nbsp;</em>29.</p><p>Max Tegmark and Nick Bostrom (2005). 
<a href="https://www.nature.com/articles/438754a">Is a Doomsday Catastrophe Likely?</a> <em>Nature&nbsp;</em>438.</p><p>Image of the earth from<em>: <a href="http://www.tobyord.com/earth">www.tobyord.com/earth</a></em><a href="http://www.tobyord.com/earth"> </a></p><h1>Notes</h1><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See Bostrom (2002, 2013).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For instance, this method would conclude that the extinction risks from climate change are negligible because the scientific evidence does not show conclusively that even extreme climate scenarios would result in human extinction. But extreme climate scenarios have also been largely neglected by researchers, and rapidly increasing our carbon emissions could have currently unforeseen harmful effects. Until new research shows that rapid warming simply cannot drive us extinct, we cannot be confident that the risk is extremely low. Ord estimates the extinction risk from climate change to be around 1 in 1,000 this century, as we will see in the next two parts of this summary.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>As with asteroids, the biggest threat comes from the dark cloud of volcanic dust and sulphate aerosols that would block out the sun and cool Earth. There is a lot of uncertainty about how much previous eruptions have cooled Earth (estimates from the Toba volcano in Indonesia range from&nbsp;0.8&nbsp;to&nbsp;18&nbsp;degrees Celsius of cooling, with the best estimates around&nbsp;1&#8211;2&nbsp;degrees). With only six months of food reserves, a supervolcanic eruption could result in the starvation of billions of people and the collapse of civilisation.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Many pose no risk of extinction&#8212; for instance, catastrophes such as hurricanes or tsunamis. Some risks are vanishingly small over the coming century. For instance, there is little chance of another ice age over the next thousand years or another star passing through our solar system in the next few billion years; and for the next billion years there is little risk from the eventual brightening of our sun. Other risks are vanishingly small in general. For instance, some physical theories suggest that space is not a true vacuum and could collapse to a true vacuum state. However, Tegmark and Bostrom (2005) argue that we can have&nbsp;99.9% confidence that the risk is less than one in a billion per year. 
Others suggest it is much lower (Buttazzo et al., 2013), or endorse a theory of physics in which space is already a true vacuum, so that this poses no risk.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Other, similar ways of estimating the risks give similarly low results, with best-guess estimates always below&nbsp;0.05%. It&#8217;s plausible that we should consider humans more inclusively, to include Neanderthals or perhaps the entire genus <em>Homo</em>. If so, we would arrive at lower best-guess estimates. Alternatively, we could consider the extinction of other species in our genus to be indicative of our own chances, which would give a best-guess estimate of at most&nbsp;0.05% per century. We would get lower best-guess estimates if we looked at other mammals, or indeed other species in general. These estimates are likely to be overestimates because they include noncatastrophic extinction (for instance, gradual evolution into a new species) and because humanity has spread to a variety of environments and developed technologies that could help protect it from natural risks.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>We face a similar risk from bursts of gamma rays, thought to be the result of a particular kind of exploding star or the collision of neutron stars. These have the same energy release as a normal exploding star, but concentrated into two narrow cones. This risk is estimated to be about one in&nbsp;2.5&nbsp;million. Searching the skies, we see no likely candidates for such stellar explosions or collisions; while we cannot entirely rule them out, this gives a moderately reduced risk this century in particular.</p></div></div>]]></content:encoded></item><item><title><![CDATA[What is "Million Year View"?]]></title><description><![CDATA[Academics have spent thousands of hours grappling with complex questions related to our mission to do the most good possible&#8212;but this research is often difficult to access without a technical background in multiple fields and a large time investment.]]></description><link>https://www.millionyearview.com/p/coming-soon</link><guid isPermaLink="false">https://www.millionyearview.com/p/coming-soon</guid><dc:creator><![CDATA[Riley Harris]]></dc:creator><pubDate>Mon, 15 Aug 2022 18:23:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!v4O8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6ae1d53e-1f10-480a-b304-fe261ca98ba1_1024x533.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Academics have spent thousands of hours grappling with complex questions related to our mission to do the most good possible&#8212;but this research is often difficult to access without a technical background in multiple fields and a large time investment. 
<strong>This blog will provide simple explanations of research papers to&nbsp;help you keep up to date with&nbsp;the latest global priorities research.</strong></p><h2><strong>In a sentence: </strong>this blog delivers simple explanations of global priorities research.</h2><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!v4O8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6ae1d53e-1f10-480a-b304-fe261ca98ba1_1024x533.png" width="1024" height="533" alt="">
https://substackcdn.com/image/fetch/$s_!v4O8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6ae1d53e-1f10-480a-b304-fe261ca98ba1_1024x533.png 848w, https://substackcdn.com/image/fetch/$s_!v4O8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6ae1d53e-1f10-480a-b304-fe261ca98ba1_1024x533.png 1272w, https://substackcdn.com/image/fetch/$s_!v4O8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F6ae1d53e-1f10-480a-b304-fe261ca98ba1_1024x533.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">An expressive oil painting of the universe &#8211; generated by DALL-E</figcaption></figure></div><p></p><h2><strong>What kind of research will the blog cover?</strong></h2><p>This blog will cover topics that are important for understanding our values and making the world a better place. The focus will be more academic than practical. For example, I might cover research that estimates the probability that we live in a simulation&#8212;this is highly speculative, but perhaps has important implications.</p><p>As the name suggests, this blog will likely skew towards&nbsp;<em>longtermism</em>&#8212;the view that one of our priorities should be helping future generations to survive and flourish. 
For example, I might cover research into whether future people matter, how to predict and influence the future, and how important we are relative to other people throughout history.</p><h2><strong>Who should read it?</strong></h2><p>This blog is aimed primarily at helping motivated students understand key research about how to do&nbsp;good effectively, although it should also be helpful to anyone interested in global priorities research. If that&#8217;s you, subscribe to the newsletter!</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!UKbn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fccbf4748-1ad8-40f1-b123-2211c1e5e920_1251x789.jpeg" width="1251" height="789" alt=""><figcaption class="image-caption">Riley Harris at the 2022 EAGxAustralia conference</figcaption></figure></div><h2><strong>Who writes it?</strong></h2><p><a href="https://rileyharris.blog/academic/">Riley Harris</a> is a PhD student in Philosophy at Oxford University, and has worked on research communications projects at the Global Priorities Institute, the Legal Priorities Project, and Longview Philanthropy.
</p><h2><strong>Where should I begin?</strong></h2><p>Wherever you like, really. Perhaps <a href="https://www.millionyearview.com/p/precipice-1">the series on Toby Ord&#8217;s &#8220;The Precipice&#8221;</a> is a good place to begin.</p><h2><strong>I want to help&#8230; what can I do?</strong></h2><p>This is a tiny project: as of September 2023, I have around 60 subscribers. I would love for you to spread the good word. If you know someone who would find it valuable, I&#8217;d really appreciate it if you sent them a link!</p>]]></content:encoded></item></channel></rss>