
Dario Amodei

Machines of Loving Grace1

How AI Could Transform the World for the Better

I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.

First, however, I wanted to briefly explain why I and Anthropic haven’t talked that much about powerful AI’s upsides, and why we’ll probably continue, overall, to talk a lot about risks. In particular, I’ve made this choice out of a desire to maximize leverage (the benefits of AI seem likely to arrive regardless, while the risks require active effort to avert), to avoid the perception of propaganda, to avoid grandiosity, and to avoid “sci-fi” baggage.

Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well.

The list of positive applications of powerful AI is extremely long (and includes robotics, manufacturing, energy, and much more), but I’m going to focus on a small number of areas that seem to me to have the greatest potential to directly improve the quality of human life. The five categories I am most excited about are:

  1. Biology and physical health
  2. Neuroscience and mental health
  3. Economic development and poverty
  4. Peace and governance
  5. Work and meaning

My predictions are going to be radical as judged by most standards (other than sci-fi “singularity” visions2), but I mean them earnestly and sincerely. Everything I’m saying could very easily be wrong (to repeat my point from above), but I’ve at least attempted to ground my views in a semi-analytical assessment of how much progress in various fields might speed up and what that might mean in practice. I am fortunate to have professional experience in both biology and neuroscience, and I am an informed amateur in the field of economic development, but I am sure I will get plenty of things wrong. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that group.

Basic assumptions and framework

To make this whole essay more precise and grounded, it’s helpful to specify clearly what we mean by powerful AI (i.e. the threshold at which the 5-10 year clock starts counting), and to lay out a framework for thinking about the effects of such AI once it’s present.

What powerful AI (I dislike the term AGI)3 will look like, and when (or if) it will arrive, is a huge topic in itself. It’s one I’ve discussed publicly and could write a completely separate essay on (I probably will at some point). Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all. I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside, assume it will come reasonably soon, and focus on what happens in the 5-10 years after that. I also want to assume a definition of what such a system will look like, what its capabilities are and how it interacts, even though there is room for disagreement on this.

By powerful AI, I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:

  * In terms of pure intelligence4, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, and so on.
  * Beyond being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access.
  * It does not just passively answer questions; it can be given tasks that take hours, days, or weeks to complete, and then goes off and does them autonomously, the way a capable employee would, asking for clarification as necessary.
  * It has no physical embodiment, but it can control existing physical tools, robots, or laboratory equipment through a computer.
  * The resources used to train the model can be repurposed to run millions of instances of it, each of which can absorb information and generate actions at perhaps 10x-100x human speed5, acting independently on unrelated tasks or collaborating as needed.

We could summarize this as a “country of geniuses in a datacenter”. 

Clearly such an entity would be capable of solving very difficult problems, very fast, but it is not trivial to figure out how fast. Two “extreme” positions both seem false to me. First, you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust. 

Second, and conversely, you might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little6. This seems equally implausible to me—I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world (which our postulated country of geniuses can, including by directing or assisting teams of humans). 

I think the truth is likely to be some messy admixture of these two extreme pictures, something that varies by task and field and is very subtle in its details. I believe we need new frameworks to think about these details in a productive way. 

Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes. I believe that in the AI age, we should be talking about the marginal returns to intelligence7, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI. 

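To make the idea of marginal returns to intelligence concrete, here is a minimal toy model of my own (illustrative only, not drawn from the economics literature): research output is capped by the scarcest of several complementary factors, so the marginal return to added intelligence is high while intelligence is the bottleneck, and falls to zero once something else is.

```python
# A toy "factors of production" model: output is limited by the scarcest
# of several complementary inputs (Leontief-style production).

def output(intelligence, experiment_throughput, data_quality):
    """Research output, capped by whichever factor is scarcest."""
    return min(intelligence, experiment_throughput, data_quality)

def marginal_return_to_intelligence(intelligence, experiment_throughput,
                                    data_quality, step=1.0):
    """Extra output from one more unit of intelligence, all else fixed."""
    return (output(intelligence + step, experiment_throughput, data_quality)
            - output(intelligence, experiment_throughput, data_quality))

# While intelligence is the limiting factor, its marginal return is high...
print(marginal_return_to_intelligence(10, 50, 50))   # 1.0
# ...but once experiments or data become the bottleneck, more intelligence
# adds nothing until those factors are relieved.
print(marginal_return_to_intelligence(100, 50, 50))  # 0.0
```

The real economy is of course smoother than a hard minimum, but the qualitative point survives under gentler assumptions: as intelligence becomes abundant, the complementary factors set the pace.
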
My guess at a list of factors that limit or are complementary to intelligence includes:

  * Speed of the outside world. Intelligent agents need to operate interactively in the world in order to accomplish things and also to learn8, and the physical world only runs so fast.
  * Need for data. Sometimes raw data is simply lacking, and in its absence more intelligence does not help.
  * Intrinsic complexity. Some systems are inherently unpredictable or chaotic9, and even the most powerful AI cannot predict or untangle them much better than a human or a computer today.
  * Constraints from humans. Many things cannot be done without breaking laws, harming humans, or disrupting society, and an aligned AI would not want to do them.
  * Physical laws. Certain physical laws appear to be unbreakable; no amount of intelligence allows faster-than-light travel or a perpetual motion machine.

There is a further distinction based on timescales. Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).

Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute)10. The key question is how fast it all happens and in what order.

With the above framework in mind, I’ll try to answer that question for the five areas mentioned in the introduction.

1. Biology and health

Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life. In the last century some of the most ancient human afflictions (such as smallpox) have finally been vanquished, but many more still remain, and defeating them would be an enormous humanitarian accomplishment. Beyond even curing disease, biological science can in principle improve the baseline quality of human health, by extending the healthy human lifespan, increasing control and freedom over our own biological processes, and addressing everyday problems that we currently think of as immutable parts of the human condition. 

In the “limiting factors” language of the previous section, the main challenges with directly applying intelligence to biology are data, the speed of the physical world, and intrinsic complexity (in fact, all three are related to each other). Human constraints also play a role at a later stage, when clinical trials are involved. Let’s take these one by one. 

Experiments on cells, animals, and even chemical processes are limited by the speed of the physical world: many biological protocols involve culturing bacteria or other cells, or simply waiting for chemical reactions to occur, and this can sometimes take days or even weeks, with no obvious way to speed it up. Animal experiments can take months (or more) and human experiments often take years (or even decades for long-term outcome studies). Somewhat related to this, data is often lacking—not so much in quantity, but quality: there is always a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on, or that intervenes causally in a given process, or that directly measures some effect (as opposed to inferring its consequences in some indirect or noisy way). Even massive, quantitative molecular data, like the proteomics data that I collected while working on mass spectrometry techniques, is noisy and misses a lot (which types of cells were these proteins in? Which part of the cell? At what phase in the cell cycle?). 

In part responsible for these problems with data is intrinsic complexity: if you’ve ever seen a diagram showing the biochemistry of human metabolism, you’ll know that it’s very hard to isolate the effect of any part of this complex system, and even harder to intervene on the system in a precise or predictable way. And finally, beyond just the intrinsic time that it takes to run an experiment on humans, actual clinical trials involve a lot of bureaucracy and regulatory requirements that (in the opinion of many people, including me) add unnecessary additional time and delay progress.  

Given all this, many biologists have long been skeptical of the value of AI and “big data” more generally in biology. Historically, mathematicians, computer scientists, and physicists who have applied their skills to biology over the last 30 years have been quite successful, but have not had the truly transformative impact initially hoped for. Some of the skepticism has been reduced by major and revolutionary breakthroughs like AlphaFold (which has just deservedly won its creators the Nobel Prize in Chemistry) and AlphaProteo11, but there’s still a perception that AI is (and will continue to be) useful in only a limited set of circumstances. A common formulation is “AI can do a better job analyzing your data, but it can’t produce more data or improve the quality of the data. Garbage in, garbage out”.  

But I think that pessimistic perspective is thinking about AI in the wrong way. If our core hypothesis about AI progress is correct, then the right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on. It is by speeding up the whole research process that AI can truly accelerate biology. I want to repeat this because it’s the most common misconception that comes up when I talk about AI’s ability to transform biology: I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do. 

To get more specific on where I think acceleration is likely to come from, a surprisingly large fraction of the progress in biology has come from a truly tiny number of discoveries, often related to broad measurement tools or techniques12 that allow precise but generalized or programmable intervention in biological systems. There’s perhaps ~1 of these major discoveries per year and collectively they arguably drive >50% of progress in biology. These discoveries are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control over biological processes. A few discoveries per decade have enabled both the bulk of our basic scientific understanding of biology, and have driven many of the most powerful medical treatments. 

Some examples include:

  * CRISPR, which allows targeted editing of genes.
  * Various kinds of microscopy for watching what is going on at a precise level, from advanced light microscopy to electron microscopy and cryo-EM.
  * Genome sequencing and synthesis, which has dropped in cost by several orders of magnitude over the last couple of decades.
  * Optogenetic techniques that allow neurons to be switched on and off with light.
  * mRNA vaccines, which in principle allow us to design a vaccine against almost anything and then rapidly adapt it.
  * Cell therapies such as CAR-T, which re-engineer a patient’s own immune cells to attack cancer.

I’m going to the trouble of listing all these technologies because I want to make a crucial claim about them: I think their rate of discovery could be increased by 10x or more if there were a lot more talented, creative researchers. Or, put another way, I think the returns to intelligence are high for these discoveries, and that everything else in biology and medicine mostly follows from them. 

Why do I think this? Because of the answers to some questions that we should get in the habit of asking when we’re trying to determine “returns to intelligence”. First, these discoveries are generally made by a tiny number of researchers, often the same people repeatedly, suggesting skill and not random search (the latter might suggest lengthy experiments are the limiting factor). Second, they often “could have been made” years earlier than they were: for example, CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the 80’s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They also are often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity. 

Finally, although some of these discoveries have “serial dependence” (you need to make discovery A first in order to have the tools or knowledge to make discovery B)—which again might create experimental delays—many, perhaps most, are independent, meaning many at once can be worked on in parallel. Both these facts, and my general experience as a biologist, strongly suggest to me that there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward. 

Thus, it’s my guess that powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years.14 Why not 100x? Perhaps it is possible, but here both serial dependence and experiment times become important: getting 100 years of progress in 1 year requires a lot of things to go right the first time, including animal experiments and things like designing microscopes or expensive lab facilities. I’m actually open to the (perhaps absurd-sounding) idea that we could get 1000 years of progress in 5-10 years, but very skeptical that we can get 100 years in 1 year. Another way to put it is I think there’s an unavoidable constant delay: experiments and hardware design have a certain “latency” and need to be iterated upon a certain “irreducible” number of times in order to learn things that can’t be deduced logically. But massive parallelism may be possible on top of that15.  

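The arithmetic behind “10x but probably not 100x” can be sketched with a toy model (illustrative numbers of my own throughout): independent discoveries can be farmed out across many parallel research efforts, but a chain of serially dependent discoveries cannot finish faster than its length times the irreducible experiment latency.

```python
# A back-of-envelope model (my own construction) of why massive parallelism
# compresses a century of discoveries into years, while an "irreducible"
# experimental latency prevents compressing it into months.

def time_to_finish(num_discoveries, longest_serial_chain,
                   iterations_per_discovery, experiment_latency_days,
                   parallel_teams):
    # Independent discoveries parallelize across teams; a serial chain of
    # dependent discoveries cannot beat chain_length * iterations * latency,
    # no matter how many teams (or AI instances) exist.
    serial_floor = (longest_serial_chain * iterations_per_discovery *
                    experiment_latency_days)
    parallel_time = (num_discoveries * iterations_per_discovery *
                     experiment_latency_days) / parallel_teams
    return max(serial_floor, parallel_time)

# Illustrative numbers only: 500 discoveries, dependency chains up to 20
# deep, ~10 experimental iterations each, 30-day experiments.
human_pace = time_to_finish(500, 20, 10, 30, parallel_teams=5)
ai_pace = time_to_finish(500, 20, 10, 30, parallel_teams=1000)
print(human_pace / 365)  # ~82 years at human pace
print(ai_pace / 365)     # ~16 years with massive parallelism: the serial
                         # floor binds, so adding more teams stops helping
```
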
What about clinical trials? Although there is a lot of bureaucracy and slowdown associated with them, the truth is that a lot (though by no means all!) of their slowness ultimately derives from the need to rigorously evaluate drugs that barely work or ambiguously work. This is sadly true of most therapies today: the average cancer drug increases survival by a few months while having significant side effects that need to be carefully measured (there’s a similar story for Alzheimer’s drugs). This leads to huge studies (in order to achieve statistical power) and difficult tradeoffs which regulatory agencies generally aren’t great at making, again because of bureaucracy and the complexity of competing interests. 

When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.  

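The statistical-power point above is worth making quantitative. The standard two-arm sample-size formula (a textbook calculation, not something specific to this essay) shows that required trial size scales with the inverse square of the effect size, which is why drugs that barely work need enormous trials while therapies with unmistakable effects can be validated quickly:

```python
# Textbook two-arm sample-size calculation: participants per arm needed to
# detect a mean difference of `effect_size_sd` standard deviations.
from statistics import NormalDist

def n_per_arm(effect_size_sd, alpha=0.05, power=0.8):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size_sd ** 2

print(round(n_per_arm(0.1)))  # barely works:        ~1570 per arm
print(round(n_per_arm(0.5)))  # moderate effect:     ~63 per arm
print(round(n_per_arm(2.0)))  # unmistakably works:  ~4 per arm
```
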
Finally, on the topic of clinical trials and societal barriers, it is worth pointing out explicitly that in some ways biomedical innovations have an unusually strong track record of being successfully deployed, in contrast to some other technologies16. As mentioned in the introduction, many technologies are hampered by societal factors despite working well technically. This might suggest a pessimistic perspective on what AI can accomplish. But biomedicine is unique in that although the process of developing drugs is overly cumbersome, once developed they generally are successfully deployed and used. 

To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century. 

Although predicting what powerful AI can do in a few years remains inherently difficult and speculative, there is some concreteness to asking “what could humans do unaided in the next 100 years?”. Simply looking at what we’ve accomplished in the 20th century, or extrapolating from the first 2 decades of the 21st, or asking what “10 CRISPR’s and 50 CAR-T’s” would get us, all offer practical, grounded ways to estimate the general level of progress we might expect from powerful AI. 

Below I try to make a list of what we might expect. This is not based on any rigorous methodology, and will almost certainly prove wrong in the details, but it’s trying to get across the general level of radicalism we should expect:

  * Reliable prevention and treatment of nearly all natural infectious disease.
  * Elimination of most cancer.
  * Very effective prevention, and effective cures, for most genetic disease.
  * Prevention of Alzheimer’s and much better treatment of most other ailments, such as diabetes and heart disease.
  * Greatly expanded “biological freedom”: far more control over weight, physical appearance, reproduction, and other biological processes.
  * A doubling of the healthy human lifespan.

It is worth looking at this list and reflecting on how different the world will be if all of it is achieved 7-12 years from now (which would be in line with an aggressive AI timeline). It goes without saying that it would be an unimaginable humanitarian triumph, the elimination all at once of most of the scourges that have haunted humanity for millennia. Many of my friends and colleagues are raising children, and when those children grow up, I hope that any mention of disease will sound to them the way scurvy, smallpox, or bubonic plague sounds to us. That generation will also benefit from increased biological freedom and self-expression, and with luck may also be able to live as long as they want. 

It’s hard to overestimate how surprising these changes will be to everyone except the small community of people who expected powerful AI. For example, thousands of economists and policy experts in the US currently debate how to keep Social Security and Medicare solvent, and more broadly how to keep down the cost of healthcare (which is mostly consumed by those over 70 and especially those with terminal illnesses such as cancer). The situation for these programs is likely to be radically improved if all this comes to pass20, as the ratio of working age to retired population will change drastically. No doubt these challenges will be replaced with others, such as how to ensure widespread access to the new technologies, but it is worth reflecting on how much the world will change even if biology is the only area to be successfully accelerated by AI. 

2. Neuroscience and mind 

In the previous section I focused on physical diseases and biology in general, and didn’t cover neuroscience or mental health. But neuroscience is a subdiscipline of biology and mental health is just as important as physical health. In fact, if anything, mental health affects human well-being even more directly than physical health. Hundreds of millions of people have very low quality of life due to problems like addiction, depression, schizophrenia, low-functioning autism, PTSD, psychopathy21, or intellectual disabilities. Billions more struggle with everyday problems that can often be interpreted as much milder versions of one of these severe clinical disorders. And as with general biology, it may be possible to go beyond addressing problems to improving the baseline quality of human experience. 

The basic framework that I laid out for biology applies equally to neuroscience. The field is propelled forward by a small number of discoveries often related to tools for measurement or precise intervention – in the list of those above, optogenetics was a neuroscience discovery, and more recently CLARITY and expansion microscopy are advances in the same vein, in addition to many of the general cell biology methods directly carrying over to neuroscience. I think the rate of these advances will be similarly accelerated by AI and therefore that the framework of “100 years of progress in 5-10 years” applies to neuroscience in the same way it does to biology and for the same reasons. As in biology, the progress in 20th century neuroscience was enormous – for example we didn’t even understand how or why neurons fired until the 1950’s. Thus, it seems reasonable to expect AI-accelerated neuroscience to produce rapid progress over a few years. 

There is one thing we should add to this basic picture, which is that some of the things we’ve learned (or are learning) about AI itself in the last few years are likely to help advance neuroscience, even if it continues to be done only by humans. Interpretability is an obvious example: although biological neurons superficially operate in a completely different manner from artificial neurons (they communicate via spikes and often spike rates, so there is a time element not present in artificial neurons, and a bunch of details relating to cell physiology and neurotransmitters modifies their operation substantially), the basic question of “how do distributed, trained networks of simple units that perform combined linear/non-linear operations work together to perform important computations” is the same, and I strongly suspect the details of individual neuron communication will be abstracted away in most of the interesting questions about computation and circuits22. As just one example of this, a computational mechanism discovered by interpretability researchers in AI systems was recently rediscovered in the brains of mice.

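To make that shared abstraction concrete, here is a minimal sketch (my own illustration, not a biophysical model): once spike timing is averaged into a firing rate, both kinds of neuron reduce to “a weighted sum of inputs passed through a nonlinearity”.

```python
# Both an artificial unit and a rate-coded description of a biological
# neuron are "simple units performing combined linear/non-linear operations".
import numpy as np

def artificial_unit(inputs, weights, bias):
    """A standard artificial neuron: linear combination + ReLU."""
    return max(0.0, float(np.dot(weights, inputs) + bias))

def rate_coded_neuron(input_rates, synaptic_weights, threshold, max_rate=100.0):
    """Biological neurons communicate via spikes, but averaged over time the
    firing *rate* is again roughly a thresholded, saturating function of a
    weighted sum of input rates (a deliberate simplification)."""
    drive = float(np.dot(synaptic_weights, input_rates)) - threshold
    return np.clip(drive, 0.0, max_rate)

x = np.array([5.0, 2.0, 0.5])   # input activations / firing rates
w = np.array([0.8, -0.3, 1.2])  # weights / synaptic strengths
print(artificial_unit(x, w, bias=0.1))         # 4.1
print(rate_coded_neuron(x, w, threshold=1.0))  # 3.0
```
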
It is much easier to do experiments on artificial neural networks than on real ones (the latter often requires cutting into animal brains), so interpretability may well become a tool for improving our understanding of neuroscience. Furthermore, powerful AI’s will themselves probably be able to develop and apply this tool better than humans can.

Beyond just interpretability though, what we have learned from AI about how intelligent systems are trained should (though I am not sure it has yet) cause a revolution in neuroscience. When I was working in neuroscience, a lot of people focused on what I would now consider the wrong questions about learning, because the concept of the scaling hypothesis / bitter lesson didn’t exist yet. The idea that a simple objective function plus a lot of data can drive incredibly complex behaviors makes it more interesting to understand the objective functions and architectural biases and less interesting to understand the details of the emergent computations. I have not followed the field closely in recent years, but I have a vague sense that computational neuroscientists have still not fully absorbed the lesson. My attitude to the scaling hypothesis has always been “aha – this is an explanation, at a high level, of how intelligence works and how it so easily evolved”, but I don’t think that’s the average neuroscientist’s view, in part because the scaling hypothesis as “the secret to intelligence” isn’t fully accepted even within AI.  

I think that neuroscientists should be trying to combine this basic insight with the particularities of the human brain (biophysical limitations, evolutionary history, topology, details of motor and sensory inputs/outputs) to try to figure out some of neuroscience’s key puzzles. Some likely are, but I suspect it’s not enough yet, and that AI neuroscientists will be able to more effectively leverage this angle to accelerate progress. 

I expect AI to accelerate neuroscientific progress along four distinct routes, all of which can hopefully work together to cure mental illness and improve function:

  * First, by accelerating traditional molecular biology, chemistry, and genetics as applied to the brain, exactly as described in the previous section.
  * Second, by designing and helping to run much finer-grained tools for neural measurement and intervention, in the vein of the optogenetics, CLARITY, and expansion microscopy mentioned above.
  * Third, by advancing computational neuroscience itself, including by exporting insights (such as those from interpretability) gained from studying AI systems.
  * Fourth, through behavioral interventions delivered directly: a sufficiently capable AI coach or therapist could be far more available, and perhaps eventually more skilled, than human ones.

It’s my guess that these four routes of progress working together would, as with physical disease, be on track to lead to the cure or prevention of most mental illness in the next 100 years even if AI was not involved – and thus might reasonably be completed in 5-10 AI-accelerated years. Concretely my guess at what will happen is something like:

  * Most mental illness, including depression, schizophrenia, addiction, and PTSD, can probably be cured or very effectively treated.
  * “Structural” conditions that today seem intractable, such as psychopathy or some intellectual disabilities, may become addressable.
  * The milder, everyday versions of these problems that billions of people struggle with will be greatly improved.
  * The baseline of human experience can be raised, expanding “cognitive and mental freedom” and human cognitive and emotional abilities.

One topic that often comes up in sci-fi depictions of AI, but that I intentionally haven’t discussed here, is “mind uploading”, the idea of capturing the pattern and dynamics of a human brain and instantiating them in software. This topic could be the subject of an essay all by itself, but suffice it to say that while I think uploading is almost certainly possible in principle, in practice it faces significant technological and societal challenges, even with powerful AI, that likely put it outside the 5-10 year window we are discussing. 

In summary, AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness as well as greatly expand “cognitive and mental freedom” and human cognitive and emotional abilities. It will be every bit as radical as the improvements in physical health described in the previous section. Perhaps the world will not be visibly different on the outside, but the world as experienced by humans will be a much better and more humane place, as well as a place that offers greater opportunities for self-actualization. I also suspect that improved mental health will ameliorate a lot of other societal problems, including ones that seem political or economic. 

3. Economic development and poverty

The previous two sections are about developing new technologies that cure disease and improve the quality of human life. However an obvious question, from a humanitarian perspective, is: “will everyone have access to these technologies?”

It is one thing to develop a cure for a disease, it is another thing to eradicate the disease from the world. More broadly, many existing health interventions have not yet been applied everywhere in the world, and for that matter the same is true of (non-health) technological improvements in general. Another way to say this is that living standards in many parts of the world are still desperately poor: GDP per capita is ~$2,000 in Sub-Saharan Africa as compared to ~$75,000 in the United States. If AI further increases economic growth and quality of life in the developed world, while doing little to help the developing world, we should view that as a terrible moral failure and a blemish on the genuine humanitarian victories in the previous two sections. Ideally, powerful AI should help the developing world catch up to the developed world, even as it revolutionizes the latter.  

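The gap above is, at bottom, compound-growth arithmetic. A quick calculation (the $2,000 and $75,000 figures are from the paragraph above; the growth rates are illustrative) shows why sustained growth rates, rather than one-off transfers, dominate catch-up timelines:

```python
# Years for GDP per capita to grow from `start` to `target` at a constant
# annual growth rate, holding the target fixed for simplicity.
import math

def years_to_catch_up(start, target, annual_growth):
    return math.log(target / start) / math.log(1 + annual_growth)

for rate in (0.02, 0.06, 0.10, 0.20):
    print(f"{rate:.0%} growth: {years_to_catch_up(2_000, 75_000, rate):.0f} years")
# 2% growth:  183 years
# 6% growth:  62 years
# 10% growth: 38 years
# 20% growth: 20 years
```
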
I am not as confident that AI can address inequality and economic growth as I am that it can invent fundamental technologies, because technology has such obvious high returns to intelligence (including the ability to route around complexities and lack of data) whereas the economy involves a lot of constraints from humans, as well as a large dose of intrinsic complexity. I am somewhat skeptical that an AI could solve the famous “socialist calculation problem”23, and I don’t think governments will (or should) turn over their economic policy to such an entity, even if it could do so. There are also problems like how to convince people to take treatments that are effective but that they may be suspicious of. 

The challenges facing the developing world are made even more complicated by pervasive corruption in both private and public sectors. Corruption creates a vicious cycle: it exacerbates poverty, and poverty in turn breeds more corruption. AI-driven plans for economic development need to reckon with corruption, weak institutions, and other very human challenges.  

Nevertheless, I do see significant reasons for optimism. Diseases have been eradicated and many countries have gone from poor to rich, and it is clear that the decisions involved in these tasks exhibit high returns to intelligence (despite human constraints and complexity). Therefore, AI can likely do them better than they are currently being done. There may also be targeted interventions that get around the human constraints and that AI could focus on. More importantly though, we have to try. Both AI companies and developed world policymakers will need to do their part to ensure that the developing world is not left out; the moral imperative is too great. So in this section, I’ll continue to make the optimistic case, but keep in mind everywhere that success is not guaranteed and depends on our collective efforts.  

Below I make some guesses about how I think things may go in the developing world over the 5-10 years after powerful AI is developed:

  * Distribution of health interventions. The biggest gains may come simply from distributing interventions that already exist or that AI develops; AI can also help optimize the logistics and economics of that distribution.
  * Economic growth. Catch-up growth of the kind East Asia achieved in the late 20th century, or faster, seems possible if AI supplies the planning, technology transfer, and institutional expertise that is often the binding constraint.
  * Food security. Advances in crop technology and supply chains could drive a second Green Revolution.
  * Climate change. AI-accelerated technology could speed both mitigation and adaptation, which matter disproportionately for the developing world.
  * Inequality within countries. This seems harder to address than inequality between countries, though health interventions in particular may narrow the gaps.
  * The opt-out problem. Some people, in developed and developing countries alike, may distrust and refuse AI-enabled benefits, much as anti-vaccine movements refuse vaccines; this may be self-correcting over time, but it is worth taking seriously.

Overall, I am optimistic about quickly bringing AI’s biological advances to people in the developing world. I am hopeful, though not confident, that AI can also enable unprecedented economic growth rates and allow the developing world to at least surpass where the developed world is now. I am concerned about the “opt out” problem in both the developed and developing world, but suspect that it will peter out over time and that AI can help accelerate this process. It won’t be a perfect world, and those who are behind won’t fully catch up, at least not in the first few years. But with strong efforts on our part, we may be able to get things moving in the right direction—and fast. If we do, we can make at least a downpayment on the promises of dignity and equality that we owe to every human being on earth.  

4. Peace and governance 

Suppose that everything in the first three sections goes well: disease, poverty, and inequality are significantly reduced and the baseline of human experience is raised substantially. It does not follow that all major causes of human suffering are solved. Humans are still a threat to each other. Although there is a trend of technological improvement and economic development leading to democracy and peace, it is a very loose trend, with frequent (and recent) backsliding. At the dawn of the 20th Century, people thought they had put war behind them; then came the two world wars. Thirty years ago Francis Fukuyama wrote about “the End of History” and a final triumph of liberal democracy; that hasn’t happened yet. Twenty years ago US policymakers believed that free trade with China would cause it to liberalize as it became richer; that very much didn’t happen, and we now seem headed for a second cold war with a resurgent authoritarian bloc. And plausible theories suggest that internet technology may actually advantage authoritarianism, not democracy as initially believed (e.g. in the “Arab Spring” period). It seems important to try to understand how powerful AI will intersect with these issues of peace, democracy, and freedom. 

Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace, in the same way that I think it will structurally advance human health and alleviate poverty. Human conflict is adversarial and AI can in principle help both the “good guys” and the “bad guys”. If anything, some structural factors seem worrying: AI seems likely to enable much better propaganda and surveillance, both major tools in the autocrat’s toolkit. It’s therefore up to us as individual actors to tilt things in the right direction: if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome. I feel even more strongly about this than I do about international inequality: the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment on all of our parts, as it often has in the past. 

I think of the issue as having two parts: international conflict, and the internal structure of nations. On the international side, it seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries. 

My current guess at the best way to do this is via an “entente strategy”26, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe. 

If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies, and may be able to parlay their AI superiority into a durable advantage. This could optimistically lead to an “eternal 1991”—a world where democracies have the upper hand and Fukuyama’s dreams are realized. Again, this will be very difficult to achieve, and will in particular require close cooperation between private AI companies and democratic governments, as well as extraordinarily wise decisions about the balance between carrot and stick. 

Even if all that goes well, it leaves the question of the fight between democracy and autocracy within each country. It is obviously hard to predict what will happen here, but I do have some optimism that given a global environment in which democracies control the most powerful AI, then AI may actually structurally favor democracy everywhere. In particular, in this environment democratic governments can use their superior AI to win the information war: they can counter influence and propaganda operations by autocracies and may even be able to create a globally free information environment by providing channels of information and AI services in a way that autocracies lack the technical ability to block or monitor. It probably isn’t necessary to deliver propaganda, only to counter malicious attacks and unblock the free flow of information. Although not immediate, a level playing field like this stands a good chance of gradually tilting global governance towards democracy, for several reasons.  

First, the increases in quality of life in Sections 1-3 should, all things equal, promote democracy: historically they have, to at least some extent. In particular I expect improvements in mental health, well-being, and education to increase democracy, as all three are negatively correlated with support for authoritarian leaders. In general people want more self-expression when their other needs are met, and democracy is among other things a form of self-expression. Conversely, authoritarianism thrives on fear and resentment.  

Second, there is a good chance free information really does undermine authoritarianism, as long as the authoritarians can’t censor it. And uncensored AI can also bring individuals powerful tools for undermining repressive governments. Repressive governments survive by denying people a certain kind of common knowledge, keeping them from realizing that “the emperor has no clothes”. For example Srđa Popović, who helped to topple the Milošević government in Serbia, has written extensively about techniques for psychologically robbing authoritarians of their power, for breaking the spell and rallying support against a dictator. A superhumanly effective AI version of Popović (whose skills seem like they have high returns to intelligence) in everyone’s pocket, one that dictators are powerless to block or censor, could create a wind at the backs of dissidents and reformers across the world. To say it again, this will be a long and protracted fight, one where victory is not assured, but if we design and build AI in the right way, it may at least be a fight where the advocates of freedom everywhere have an advantage. 

As with neuroscience and biology, we can also ask how things could be “better than normal”—not just how to avoid autocracy, but how to make democracies better than they are today. Even within democracies, injustices happen all the time. Rule-of-law societies make a promise to their citizens that everyone will be equal under the law and everyone is entitled to basic human rights, but obviously people do not always receive those rights in practice. That this promise is even partially fulfilled makes it something to be proud of, but can AI help us do better? 

For example, could AI improve our legal and judicial system by making decisions and processes more impartial? Today people mostly worry in legal or judicial contexts that AI systems will be a cause of discrimination, and these worries are important and need to be defended against. At the same time, the vitality of democracy depends on harnessing new technologies to improve democratic institutions, not just responding to risks. A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone. 

For centuries, legal systems have faced the dilemma that the law aims to be impartial, but is inherently subjective and thus must be interpreted by biased humans. Trying to make the law fully mechanical hasn’t worked because the real world is messy and can’t always be captured in mathematical formulas. Instead legal systems rely on notoriously imprecise criteria like “cruel and unusual punishment” or “utterly without redeeming social importance”, which humans then interpret—and often do so in a manner that displays bias, favoritism, or arbitrariness. “Smart contracts” in cryptocurrencies haven’t revolutionized law because ordinary code isn’t smart enough to adjudicate all that much of interest. But AI might be smart enough for this: it is the first technology capable of making broad, fuzzy judgements in a repeatable and mechanical way. 

I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy, real world situations feels like it should have some serious positive applications to law and justice. At the very least, such systems could work alongside humans as an aid to decision-making. Transparency would be important in any such system, and a mature science of AI could conceivably provide it: the training process for such systems could be extensively studied, and advanced interpretability techniques could be used to see inside the final model and assess it for hidden biases, in a way that is simply not possible with humans. Such AI tools could also be used to monitor for violations of fundamental rights in a judicial or police context, making constitutions more self-enforcing. 

In a similar vein, AI could be used to both aggregate opinions and drive consensus among citizens, resolving conflict, finding common ground, and seeking compromise. Some early ideas in this direction have been undertaken by the computational democracy project, including collaborations with Anthropic. A more informed and thoughtful citizenry would obviously strengthen democratic institutions. 

There is also a clear opportunity for AI to be used to help provision government services—such as health benefits or social services—that are in principle available to everyone but in practice often severely lacking, and worse in some places than others. This includes health services, the DMV, taxes, social security, building code enforcement, and so on. Having a very thoughtful and informed AI whose job is to give you everything you’re legally entitled to by the government in a way you can understand—and who also helps you comply with often confusing government rules—would be a big deal. Increasing state capacity both helps to deliver on the promise of equality under the law, and strengthens respect for democratic governance. Poorly implemented services are currently a major driver of cynicism about government27. 

All of these are somewhat vague ideas, and as I said at the beginning of this section, I am not nearly as confident in their feasibility as I am in the advances in biology, neuroscience, and poverty alleviation. They may be unrealistically utopian. But the important thing is to have an ambitious vision, to be willing to dream big and try things out. The vision of AI as a guarantor of liberty, individual rights, and equality under the law is too powerful a vision not to fight for. A 21st century, AI-enabled polity could be both a stronger protector of individual freedom, and a beacon of hope that helps make liberal democracy the form of government that the whole world wants to adopt. 

5. Work and meaning 

Even if everything in the preceding four sections goes well—not only do we alleviate disease, poverty, and inequality, but liberal democracy becomes the dominant form of government, and existing liberal democracies become better versions of themselves—at least one important question still remains. “It’s great we live in such a technologically advanced world as well as a fair and decent one”, someone might object, “but with AI’s doing everything, how will humans have meaning? For that matter, how will they survive economically?”. 

I think this question is more difficult than the others. I don’t mean that I am necessarily more pessimistic about it than I am about the other questions (although I do see challenges). I mean that it is fuzzier and harder to predict in advance, because it relates to macroscopic questions about how society is organized that tend to resolve themselves only over time and in a decentralized manner. For example, historical hunter-gatherer societies might have imagined that life is meaningless without hunting and various kinds of hunting-related religious rituals, and would have imagined that our well-fed technological society is devoid of purpose. They might also have not understood how our economy can provide for everyone, or what function people can usefully serve in a mechanized society. 

Nevertheless, it’s worth saying at least a few words, while keeping in mind that the brevity of this section is not at all to be taken as a sign that I don’t take these issues seriously—on the contrary, it is a sign of a lack of clear answers. 

On the question of meaning, I think it is very likely a mistake to believe that tasks you undertake are meaningless simply because an AI could do them better. Most people are not the best in the world at anything, and it doesn’t seem to bother them particularly much. Of course today they can still contribute through comparative advantage, and may derive meaning from the economic value they produce, but people also greatly enjoy activities that produce no economic value. I spend plenty of time playing video games, swimming, walking around outside, and talking to friends, all of which generates zero economic value. I might spend a day trying to get better at a video game, or faster at biking up a mountain, and it doesn’t really matter to me that someone somewhere is much better at those things. In any case I think meaning comes mostly from human relationships and connection, not from economic labor. People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies28. The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don’t seem to me to matter very much. 

The economic piece actually seems more difficult to me than the meaning piece. By “economic” in this section I mean the possible problem that most or all humans may not be able to contribute meaningfully to a sufficiently advanced AI-driven economy. This is a more macro problem than the separate problem of inequality, especially inequality in access to the new technologies, which I discussed in Section 3. 

First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the “10%” expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AI’s are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach “a country of geniuses in a datacenter”.  

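The comparative-advantage logic above is standard Ricardian arithmetic, and a worked example (with illustrative numbers of my own) may help. Even when the AI is absolutely better at every task, a human who takes over the task where the AI’s edge is smallest frees the AI for the task where its edge is largest, and total output rises:

```python
# Textbook comparative advantage: units each party produces per full day.
ai    = {"research": 100, "lab_work": 10}
human = {"research": 1,   "lab_work": 5}

LAB_WORK_NEEDED = 5  # units of lab work the project needs per day

# AI working alone: it must burn half its day covering the lab work.
ai_alone_research = ai["research"] * (1 - LAB_WORK_NEEDED / ai["lab_work"])

# AI + human, each specializing: the human covers all the lab work
# (its opportunity cost is 0.2 research units per lab unit, vs. 10 for
# the AI), and the AI does pure research.
team_research = ai["research"]

print(ai_alone_research)  # 50.0 units of research per day
print(team_research)      # 100  units of research per day
# The human, though slower at everything, effectively adds 50 units of
# research output per day: the "highly leveraged 10%" effect in the text.
```
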
However, I do think in the long run AI will become so broadly effective and so cheap that this will no longer apply. At that point our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized. 

While that might sound crazy, the fact is that civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism. I suspect that some new and stranger thing will be needed, and that it’s something no one today has done a good job of envisioning. It could be as simple as a large universal basic income for everyone, although I suspect that will only be a small part of a solution. It could be a capitalist economy of AI systems, which then give out resources (huge amounts of them, since the overall economic pie will be gigantic) to humans based on some secondary economy of what the AI systems think makes sense to reward in humans (based on some judgment ultimately derived from human values). Perhaps the economy runs on Whuffie points. Or perhaps humans will continue to be economically valuable after all, in some way not anticipated by the usual economic models. All of these solutions have tons of possible problems, and it’s not possible to know whether they will make sense without lots of iteration and experimentation. And as with some of the other challenges, we will likely have to fight to get a good outcome here: exploitative or dystopian directions are clearly also possible and have to be prevented. Much more could be written about these questions and I hope to do so at some later time.  

Taking stock 

Through the varied topics above, I’ve tried to lay out a vision of a world that is both plausible if everything goes right with AI, and much better than the world today. I don’t know if this world is realistic, and even if it is, it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people. Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits.  

But it is a world worth fighting for. If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them. I don’t mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a long-held set of ideals materialize in front of us all at once. I think many will be literally moved to tears by it. 

Throughout writing this essay I noticed an interesting tension. In one sense the vision laid out here is extremely radical: it is not what almost anyone expects to happen in the next decade, and will likely strike many as an absurd fantasy. Some may not even consider it desirable; it embodies values and political choices that not everyone will agree with. But at the same time there is something blindingly obvious—something overdetermined—about it, as if many different attempts to envision a good world inevitably lead roughly here. 

In Iain M. Banks’ The Player of Games29, the protagonist—a member of a society called the Culture, which is based on principles not unlike those I’ve laid out here—travels to a repressive, militaristic empire in which leadership is determined by competition in an intricate battle game. The game, however, is complex enough that a player’s strategy within it tends to reflect their own political and philosophical outlook. The protagonist manages to defeat the emperor in the game, showing that his values (the Culture’s values) represent a winning strategy even in a game designed by a society based on ruthless competition and survival of the fittest. A well-known post by Scott Alexander has the same thesis—that competition is self-defeating and tends to lead to a society based on compassion and cooperation. The “arc of the moral universe” is another similar concept. 

I think the Culture’s values are a winning strategy because they’re the sum of a million small decisions that have clear moral force and that tend to pull everyone together onto the same side. Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren’t. It is easy to argue that children shouldn’t die of disease if we can prevent it, and easy from there to argue that everyone’s children deserve that right equally. From there it is not hard to argue that we should all band together and apply our intellects to achieve this outcome. Few disagree that people should be punished for attacking or hurting others unnecessarily, and from there it’s not much of a leap to the idea that punishments should be consistent and systematic across people. It is similarly intuitive that people should have autonomy and responsibility over their own lives and choices. These simple intuitions, if taken to their logical conclusion, lead eventually to rule of law, democracy, and Enlightenment values. If not inevitably, then at least as a statistical tendency, this is where humanity was already headed. AI simply offers an opportunity to get us there more quickly—to make the logic starker and the destination clearer. 

Nevertheless, it is a thing of transcendent beauty. We have the opportunity to play some small role in making it real. 


Thanks to Kevin Esvelt, Parag Mallick, Stuart Ritchie, Matt Yglesias, Erik Brynjolfsson, Jim McClave, Allan Dafoe, and many people at Anthropic for reviewing drafts of this essay. 

To the winners of the 2024 Nobel Prize in Chemistry, for showing us all the way.

Footnotes 

  1. https://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace

  2. I do anticipate some minority of people’s reaction will be “this is pretty tame”. I think those people need to, in Twitter parlance, “touch grass”. But more importantly, tame is good from a societal perspective. I think there’s only so much change people can handle at once, and the pace I’m describing is probably close to the limits of what society can absorb without extreme turbulence.

  3. I find AGI to be an imprecise term that has gathered a lot of sci-fi baggage and hype. I prefer "powerful AI" or "Expert-Level Science and Engineering," which get at what I mean without the hype.

  4. In this essay, I use "intelligence" to refer to a general problem-solving capability that can be applied across diverse domains. This includes abilities like reasoning, learning, planning, and creativity. While I use "intelligence" as a shorthand throughout this essay, I acknowledge that the nature of intelligence is a complex and debated topic in cognitive science and AI research. Some researchers argue that intelligence isn't a single, unified concept but rather a collection of separate cognitive abilities. Others contend that there's a general factor of intelligence (g factor) underlying various cognitive skills. That’s a debate for another time.

  5. This is roughly the current speed of AI systems – for example, they can read a page of text in a couple of seconds and write a page of text in maybe 20 seconds, which is 10-100x the speed at which humans can do these things. Over time, larger models tend to make this slower, but more powerful chips tend to make it faster; to date the two effects have roughly canceled out.

  6. This might seem like a strawman position, but careful thinkers like Tyler Cowen and Matt Yglesias have raised it as a serious concern (though I don’t think they fully hold the view), and I don’t think it is crazy.

  7. The closest economics work that I’m aware of to tackling this question is work on “general purpose technologies” and “intangible investments” that serve as complements to general purpose technologies.

  8. This learning can include temporary, in-context learning, or traditional training; both will be rate-limited by the physical world.

  9. In a chaotic system, small errors compound exponentially over time, so that even an enormous increase in computing power leads to only a small improvement in how far ahead it is possible to predict, and in practice measurement error may degrade this further.
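    As a rough sketch of why, assuming errors grow exponentially at some fixed rate λ: an initial error δ₀ grows as δ(t) = δ₀·e^(λt), so the horizon at which the error reaches a tolerance Δ is T = (1/λ)·ln(Δ/δ₀). A thousandfold improvement in measurement precision or compute shrinks δ₀ by 1,000x but extends T by only ln(1,000)/λ ≈ 6.9/λ, an additive rather than multiplicative gain.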

  10. Another factor is of course that powerful AI itself can potentially be used to create even more powerful AI. My assumption is that this might (in fact, probably will) occur, but that its effect will be smaller than you might imagine, precisely because of the “decreasing marginal returns to intelligence” discussed here. In other words, AI will continue to get smarter quickly, but its effect will eventually be limited by non-intelligence factors, and analyzing those is what matters most to the speed of scientific progress outside AI.

  11. These achievements have been an inspiration to me and perhaps the most powerful existing example of AI being used to transform biology.

  12. “Progress in science depends on new techniques, new discoveries and new ideas, probably in that order.” - Sydney Brenner

  13. Thanks to Parag Mallick for suggesting this point.

  14. I didn't want to clog up the text with speculation about what specific future discoveries AI-enabled science could make, but here is a brainstorm of some possibilities:
    — Design of better computational tools like AlphaFold and AlphaProteo — that is, a general AI system speeding up our ability to make specialized AI computational biology tools.
    — More efficient and selective CRISPR.
    — More advanced cell therapies.
    — Materials science and miniaturization breakthroughs leading to better implanted devices.
    — Better control over stem cells, cell differentiation, and de-differentiation, and a resulting ability to regrow or reshape tissue.
    — Better control over the immune system: turning it on selectively to address cancer and infectious disease, and turning it off selectively to address autoimmune diseases.

  15. AI may of course also help with being smarter about choosing what experiments to run: improving experimental design, learning more from a first round of experiments so that the second round can narrow in on key questions, and so on.

  16. Thanks to Matthew Yglesias for suggesting this point.

  17. Fast-evolving diseases, like the multidrug-resistant strains that essentially use hospitals as an evolutionary laboratory to continually improve their resistance to treatment, could be especially stubborn to deal with, and could be the kind of thing that prevents us from getting to 100%.

  18. Note that it may be hard to know we have doubled the human lifespan within the 5-10 year window. While we might have accomplished it, we may not know it yet within the study time-frame.

  19. This is one place where I am willing, despite the obvious biological differences between curing diseases and slowing down the aging process itself, to instead look from a greater distance at the statistical trend and say “even though the details are different, I think human science would probably find a way to continue this trend; after all, smooth trends in anything complex are necessarily made by adding up very heterogeneous components.”

  20. As an example, I’m told that an increase in productivity growth per year of 1% or even 0.5% would be transformative in projections related to these programs. If the ideas contemplated in this essay come to pass, productivity gains could be much larger than this.

  21. The media loves to portray high status psychopaths, but the average psychopath is probably a person with poor economic prospects and poor impulse control who ends up spending significant time in prison.

  22. I think this is somewhat analogous to the fact that many, though likely not all, of the results we’re learning from interpretability would continue to be relevant even if some of the architectural details of our current artificial neural nets, such as the attention mechanism, were changed or replaced in some way.

  23. I suspect it is a bit like a classical chaotic system – beset by irreducible complexity that has to be managed in a mostly decentralized manner. Though as I say later in this section, more modest interventions may be possible. A counterargument, made to me by economist Erik Brynjolfsson, is that large companies (such as Walmart or Uber) are starting to have enough centralized knowledge to understand consumers better than any decentralized process could, perhaps forcing us to revise Hayek’s insights about who has the best local knowledge.

  24. Thanks to Kevin Esvelt for suggesting this point.

  25. For example, cell phones were initially a technology for the rich, but quickly became very cheap with year-over-year improvements happening so fast as to obviate any advantage of buying a “luxury” cell phone, and today most people have phones of similar quality.

  26. This is the title of a forthcoming paper from RAND that lays out roughly the strategy I describe.

  27. When the average person thinks of public institutions, they probably think of their experience with the DMV, IRS, Medicare, or similar functions. Making these experiences more positive than they currently are seems like a powerful way to combat undue cynicism.

  28. Indeed, in an AI-powered world, the range of such possible challenges and projects will be much vaster than it is today.

  29. I am breaking my own rule not to make this about science fiction, but I’ve found it hard not to refer to it at least a bit. The truth is that science fiction is one of our only sources of expansive thought experiments about the future; I think it says something bad that it’s entangled so heavily with a particular narrow subculture.
