Bostrom expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because "it could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous". He suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents—a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe. But as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars mistakenly infer a broad lesson: the smarter the AI, the safer it is. "And so we boldly go—into the whirling knives", as the superintelligent AI takes a "treacherous turn" and exploits a decisive strategic advantage.
Musk says he is "clearly not thrilled" to be advocating government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." He states that the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... they should be." In response, politicians expressed skepticism about the wisdom of regulating a technology that is still in development.
Armstrong claims that even if there are moral facts provable by any "rational" agent, the orthogonality thesis still holds: it is still possible to create a non-philosophical "optimizing machine" that strives toward some narrow goal but has no incentive to discover any "moral facts" that might get in the way of goal completion. He also argues that any fundamentally friendly AI could be made unfriendly by modifications as simple as negating its utility function. Further, if the orthogonality thesis were false, there would have to be some immoral goals that AIs can never achieve, which Armstrong finds implausible.
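Armstrong's negation argument can be illustrated with a toy sketch (the `best_action` helper, actions, and scores below are hypothetical, invented purely for illustration): negating the utility function leaves the agent's optimization machinery untouched but reverses which outcome it pursues.

```python
# Toy illustration of the negation argument: flipping the sign of a
# utility function reverses an agent's preferences without changing
# how capably it optimizes. All names and scores here are invented.

def best_action(actions, utility):
    """Return the action the agent scores highest under its utility."""
    return max(actions, key=utility)

actions = ["cooperate", "defect"]
utility = {"cooperate": 10, "defect": -10}.get  # hypothetical scores

friendly_choice = best_action(actions, utility)                  # "cooperate"
unfriendly_choice = best_action(actions, lambda a: -utility(a))  # "defect"
print(friendly_choice, unfriendly_choice)
```

The same maximization routine drives both choices; only the sign of the scoring function differs.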
and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control. The letter was signed by prominent figures in AI but was also criticized for not focusing on current harms, for missing technical nuance about when to pause, or for not going far enough.
According to the "intelligent agent" model, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve its set of goals, or "utility function". A utility function gives each possible situation a score that indicates its desirability to the agent. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks", but do not know how to write a utility function for "maximize
Concerning the First Ultra-intelligent Machine' (1965)...began: 'The survival of man depends on the early construction of an ultra-intelligent machine.' Those were his words during the Cold War, and he now suspects that 'survival' should be replaced by 'extinction.' He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that 'probably Man will construct the deus ex machina in his own image.'
In 2023, Hinton quit his job at Google in order to speak out about existential risk from AI. He explained that his increased concern was driven by the possibility that superhuman AI might be closer than he previously believed, saying: "I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." He also remarked, "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."
Steven Pinker, a skeptic, argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization." Facebook's director of AI research,
Controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would likely resist attempts to disable it or change its goals, as that would prevent it from accomplishing its present goals. It would be extremely challenging to align a superintelligence with the full breadth of significant human values and constraints. In contrast, skeptics such as
industry insiders to regulate or constrain AI research is impractical due to conflicts of interest. They also agree with skeptics that banning research would be unwise, as research could be moved to countries with looser regulations or conducted covertly. Additional challenges to bans or regulation include technology entrepreneurs' general skepticism of government regulation and potential incentives for businesses to resist regulation and
Artificial general intelligence (AGI) is typically defined as a system that performs at least as well as humans in most or all intellectual tasks. A 2022 survey of AI researchers found that 90% of respondents expected AGI to be achieved within the next 100 years, and half expected the same by 2061. Meanwhile, some researchers dismiss existential risks from AGI as "science fiction", based on their high confidence that AGI will not be created anytime soon.
... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible." Toby Ord wrote that the idea that an AI takeover requires robots is a misconception, arguing that the ability to spread content through the internet is more dangerous, and that the most destructive people in history stood out by their ability to convince, not their physical strength.
Artificial Intelligence: A Modern Approach, a widely used undergraduate AI textbook, says that superintelligence "might mean the end of the human race". It states: "Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself." Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems:
such as direct neural linking between humans and machines; others argue that these technologies may pose an existential risk themselves. Another proposed method is closely monitoring or "boxing in" an early-stage AI to prevent it from becoming too powerful. A dominant, aligned superintelligent AI might also mitigate risks from rival AIs, although its creation could present its own existential dangers.
be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.
argue that any superintelligent program we create will be subservient to us, that the superintelligence will (as it grows more intelligent and learns more facts about the world) spontaneously learn moral truth compatible with our values and adjust its goals accordingly, or that we are either intrinsically or convergently valuable from the perspective of an artificial intelligence.
attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself but that no longer maintains the human-compatible moral values preprogrammed into the original AI. For a self-improving AI to be completely safe, it would need not only to be bug-free, but to be able to design successor systems that are also bug-free.
people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and in preventing itself from being "turned off" or reprogrammed with a new goal. This is particularly relevant to value lock-in scenarios. The field of "corrigibility" studies how to make agents that will not resist attempts to change their goals.
possible to engineer digital minds that can feel much more happiness than humans with fewer resources, called "super-beneficiaries". Such an opportunity raises the question of how to share the world and which "ethical and political framework" would enable a mutually beneficial coexistence between biological and digital minds.
said it believes AI risks are too poorly understood to be considered a threat to global stability. China argued against strict global regulation, saying countries should be able to develop their own rules, while also saying they opposed the use of AI to "create military hegemony or undermine the sovereignty of a country".
The companies agreed to implement safeguards, including third-party oversight and security testing by independent experts, to address concerns related to AI's potential risks and societal harms. The parties framed the commitments as an intermediate step while regulations are formed. Amba Kak, executive director of the
out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I'd start talking about "the rise of the robots" in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.
a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas. After a certain point, the team chooses to publicly downplay the AI's ability in order to avoid regulation or confiscation of the project. For safety, the team keeps the AI
and Angelina McMillan-Major have argued that discussion of existential risk distracts from the immediate, ongoing harms from AI taking place today, such as data theft, worker exploitation, bias, and concentration of power. They further note the association between those warning of existential risk and
also agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."
he expressed the opinion: "If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled... This
According to an August 2023 survey by the Pew Research Center, 52% of Americans felt more concerned than excited about new AI developments; nearly a third felt equally concerned and excited. More Americans saw AI as having a more helpful than hurtful impact in several areas, from healthcare
said in 2015 that AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet." For the danger of uncontrolled advanced AI to be realized, the hypothetical AI may have to overpower or outthink any human, which some experts argue is a possibility
arguments assume that, as machines become more intelligent, they will begin to display many human traits, such as morality or a thirst for power. Although anthropomorphic scenarios are common in fiction, most scholars writing about the existential risk of artificial intelligence reject them. Instead,
explicitly rejects Bostrom's orthogonality thesis, arguing that "by the time [an A.I.] is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so." Chorost argues that "an A.I. will need to desire certain states and dislike others. Today's software lacks
Alternatively, some find reason to believe superintelligences would be better able to understand morality, human values, and complex goals. Bostrom writes, "A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to
A superintelligence may find unconventional and radical solutions to assigned goals. Bostrom gives the example that if the objective is to make humans smile, a weak AI may perform as intended, while a superintelligence may decide a better solution is to "take control of the world and stick electrodes
Even if current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify their goal structures, a sufficiently advanced AI might resist any attempts to change its goal structure, just as a pacifist would not want to take a pill that makes them want to kill
Atoosa Kasirzadeh proposes to classify existential risks from AI into two categories: decisive and accumulative. Decisive risks encompass the potential for abrupt and catastrophic events resulting from the emergence of superintelligent AI systems that exceed human intelligence, which could ultimately
Geoffrey Hinton warned that in the short term, the profusion of AI-generated text, images and videos will make it more difficult to figure out the truth, which he says authoritarian states could exploit to manipulate elections. Such large-scale, personalized manipulation capabilities can increase the
Superintelligences are sometimes called "alien minds", referring to the idea that their way of thinking and motivations could be vastly different from ours. This is generally considered a source of risk, making it more difficult to anticipate what a superintelligence might do. It also suggests the
as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", including scientific creativity, strategic planning, and social skills. He argues that a superintelligence can outmaneuver humans anytime its goals conflict with humans'. It may choose to
Technologists... have warned that artificial intelligence could one day pose an existential security threat. Musk has called it "the greatest risk we face as a civilization". Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went
Skeptics who believe AGI is not a short-term possibility often argue that concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about AI's impact, because it could lead to government regulation or make it more difficult to fund AI research,
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably
In general, most writers reject proposals for broad relinquishment... Relinquishment proposals suffer from many of the same problems as regulation proposals, but to a greater extent. There is no historical precedent of general, multi-use technology similar to AGI being successfully relinquished for
Many scholars concerned about AGI existential risk believe that extensive research into the "control problem" is essential. This problem involves determining which safeguards, algorithms, or architectures can be implemented to increase the likelihood that a recursively-improving AI remains friendly
The academic debate is between those who worry that AI might threaten humanity and those who believe it would not. Both sides of this debate have framed the other side's arguments as illogical anthropomorphism. Those skeptical of AGI risk accuse their opponents of anthropomorphism for assuming that
ultimate goal, such as acquiring resources or self-preservation. Bostrom argues that if an advanced AI's instrumental goals conflict with humanity's goals, the AI might harm humanity in order to acquire more resources or prevent itself from being shut down, but only as a way to achieve its ultimate
In a 1951 lecture Turing argued that "It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage
There are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs. So you can monitor when you think these
Despite other differences, the x-risk school agrees with Pinker that an advanced AI would not destroy humanity out of emotion such as revenge or anger, that questions of consciousness are not relevant to assess the risk, and that computer systems do not generally have a computational equivalent of
In 2023, OpenAI started a project called "Superalignment" to solve the alignment of superintelligences in four years. It called this an especially important challenge, as it said superintelligence may be achieved within a decade. Its strategy involves automating alignment research using artificial
advocated the creation of a global watchdog to oversee the emerging technology, saying, "Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead." At the council session, Russia
Some scholars have said that even if AGI poses an existential risk, attempting to ban research into artificial intelligence is still unwise, and probably futile. Skeptics consider AI regulation pointless because, in their view, no existential risk exists. But scholars who believe in the risk argue that relying on AI
Let us now assume, for the sake of argument, that machines are a genuine possibility, and look at the consequences of constructing them... There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should
In the bio, playfully written in the third person, Good summarized his life's milestones, including a probably never-before-seen account of his work at Bletchley Park with Turing. But here's what he wrote in 1998 about the first superintelligence, and his late-in-the-game U-turn: 'Speculations
AI systems uniquely add a third problem: that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic learning capabilities may cause it to develop unintended behavior, even without unanticipated external scenarios. An AI may partly botch an
Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. AI arms control will likely require the institutionalization of new international norms embodied in effective technical
Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI. A 2020 estimate places global spending on AI existential risk somewhere between $10 million and $50 million, compared with global
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient and to what degree. But if sentient machines are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare could be an existential catastrophe. Moreover, it may be
For example, in 2022, scientists modified an AI system originally designed to generate non-toxic, therapeutic molecules for use in new drugs. The researchers adjusted the system so that toxicity was rewarded rather than penalized. This simple change enabled the AI system to
Advanced AI could generate enhanced pathogens, cyberattacks or manipulate people. These capabilities could be misused by humans, or exploited by the AI itself if misaligned. A full-blown superintelligence could find various ways to gain a decisive influence if it wanted to, but these dangerous
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get
According to Bostrom, an AI that has an expert-level facility at certain key software engineering tasks could become a superintelligence due to its capability to recursively improve its own algorithms, even if it is initially limited in other domains not directly relevant to engineering. This
A superintelligence in creation could gain some awareness of what it is, where it is in development (training, testing, deployment, etc.), and how it is being monitored, and use this information to deceive its handlers. Bostrom writes that such an AI could feign alignment to prevent human
argues that natural intelligence is more nuanced than AGI proponents believe, and that intelligence alone is not enough to achieve major scientific and societal breakthroughs. He argues that intelligence consists of many dimensions that are not well understood, and that conceptions of an
The system's implementation may contain initially unnoticed but subsequently catastrophic bugs. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from
argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."
AI could be used to gain an edge in decision-making by analyzing large amounts of data and making decisions more quickly and effectively than humans. This could increase the speed and unpredictability of war, especially when accounting for automated retaliation systems.
's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously". Similarly, an otherwise skeptical
In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures
after achieving superintelligence. Social measures are also proposed to mitigate AGI risks, such as a UN-sponsored "Benevolent AGI Treaty" to ensure that only altruistic AGIs are created. Additionally, an arms control approach and a global peace treaty grounded in
Besides extinction risk, there is the risk that civilization gets permanently locked into a flawed future. One example is "value lock-in": if humanity still has moral blind spots similar to slavery in the past, AI might irreversibly entrench them, preventing
has said: "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives".
Bostrom's "orthogonality thesis" argues instead that, with some technical caveats, almost any level of "intelligence" or "optimization power" can be combined with almost any ultimate goal. If a machine is given the sole purpose to enumerate the decimals of pi
an army of pseudonymous citizen journalists and commentators in order to gain political influence to use "for the greater good" to prevent wars. The team faces risks that the AI could try to escape by inserting "backdoors" in the systems it designs, by
a human will set out to accomplish their projects in a manner that they consider reasonable, while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, instead caring only about completing the task.
For all these reasons, verifying a global relinquishment treaty, or even one limited to AI-related weapons development, is a nonstarter... (For different reasons from ours, the Machine Intelligence Research Institute considers AGI relinquishment
found 68% thought the real current threat remains "human intelligence", but also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and that 38% said it would do "equal amounts of harm and good".
lead to human extinction. In contrast, accumulative risks emerge gradually through a series of interconnected disruptions that erode societal structures and resilience over time, ultimately leading to a critical failure or collapse.
released a statement signed by numerous experts in AI safety and the AI existential risk which stated: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
poll of US adults found 46% of respondents were "somewhat concerned" or "very concerned" about "the possibility that AI will cause the end of the human race on Earth", compared with 40% who were "not very concerned" or "not at all concerned."
It is therefore no surprise that according to the most recent AI Impacts Survey, nearly half of 731 leading AI researchers think there is at least a 10% chance that human-level AI would lead to an "extremely negative outcome," or existential
AI could also be used to spread and preserve the set of values of whoever develops it. AI could facilitate large-scale surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.
argued that superintelligence is physically possible because "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".
then no moral and ethical rules will stop it from achieving its programmed goal by any means. The machine may use all available physical and informational resources to find as many decimals of pi as it can. Bostrom warns against
Nothing precludes sufficiently smart self-improving systems from optimising their reward mechanisms in order to optimise their current-goal achievement and in the process making a mistake leading to corruption of their reward
"; nor is it clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values the function does not reflect.
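The utility-function picture discussed here can be sketched in a few lines (the routes and latency figures below are hypothetical, purely illustrative): a narrow objective such as "minimize average network latency" translates directly into a scoring function, while no comparably crisp function is known for broad human values.

```python
# Minimal sketch of the utility-maximizing agent model: score each
# candidate action with a utility function and pick the top score.
# The routes and latency figures below are hypothetical.

def choose_action(actions, utility):
    """Return the action that the utility function scores highest."""
    return max(actions, key=utility)

# A narrow goal like "minimize average network latency" is easy to
# express as a utility: lower latency yields a higher score.
latencies_ms = {"route_a": 120.0, "route_b": 85.0, "route_c": 95.0}

def latency_utility(route):
    return -latencies_ms[route]

chosen = choose_action(latencies_ms, latency_utility)
print(chosen)  # route_b
# No analogous, unambiguous utility function is known for broad goals
# such as human flourishing; writing one remains an open problem.
```

The sketch also makes the "trampling" problem concrete: any value not encoded in the scoring function simply never influences the choice.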
1258:
that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."
to fund research on understanding AI decision making. The institute's goal is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as
said, "A closed-door deliberation with corporate actors resulting in voluntary safeguards isn't enough" and called for public deliberation and regulations of the kind to which companies would not voluntarily agree.
hide its true intent until humanity cannot stop it. Bostrom writes that in order to be safe for humanity, a superintelligence must be aligned with human values and morality, so that it is "fundamentally on our side".
turned out to be nearly eight times faster than expected. Feiyi Wang, a researcher there, said "We didn't expect this capability" and "we're approaching the point where we could actually simulate the human brain".
wrote the article "Intelligent Machinery, A Heretical Theory", in which he proposed that artificial general intelligences would likely "take control" of the world as they became more intelligent than human beings:
The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment
it's plausible to me that the main thing we need to get done is noticing specific circuits to do with deception and specific dangerous capabilities like that and situational awareness and internally-represented
Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its
Endorsers of the thesis sometimes express bafflement at skeptics: Gates says he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial:
and others warn that a malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in
AI could improve the "accessibility, success rate, scale, speed, stealth and potency of cyberattacks", potentially causing "significant geopolitical turbulence" if it facilitates attacks more than defense.
It is fantasy to suggest that the accelerating development and deployment of technologies that taken together are considered to be A.I. will be stopped or limited, either by regulation or even by national
As if losing control to Chinese minds were scarier than losing control to alien digital minds that don't care about humans. It's clear by now that the space of possible alien minds is vastly larger than
due to AI is widely debated. It hinges in part on whether AGI or superintelligence are achievable, the speed at which dangerous capabilities and behaviors emerge, and whether practical scenarios for
The thesis that AI could pose an existential risk provokes a wide range of reactions in the scientific community and in the public at large, but many of the opposing viewpoints share common ground.
In a "fast takeoff" scenario, the transition from AGI to superintelligence could take days or months. In a "slow takeoff", it could take years or decades, leaving more time for society to prepare.
as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so.
, noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but said he continued his research because "the prospect of discovery is too
good, nor do there seem to be any theoretical reasons for believing that relinquishment proposals would work in the future. Therefore we do not consider them to be a viable class of proposals.
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many computer scientists and public figures, including
Scalability: human intelligence is limited by the size and structure of the brain, and by the efficiency of social communication, while AI may be able to scale by simply adding more hardware.
's technical director of cyberspace, "The number of attacks is increasing exponentially". AI can also be used defensively, to preemptively find and fix vulnerabilities, and detect threats.
As an example of autonomous lethal weapons, miniaturized drones could facilitate low-cost assassination of military or civilian targets, a scenario highlighted in the 2017 short film
Yudkowsky, E. (2011, August). Complex value systems in friendly AI. In International Conference on Artificial General Intelligence (pp. 388–393). Germany: Springer, Berlin, Heidelberg.
spending on AI around perhaps $40 billion. Bostrom suggests prioritizing funding for protective technologies over potentially dangerous ones. Some, like Elon Musk, advocate radical
The field of "mechanistic interpretability" aims to better understand the inner workings of AI models, potentially allowing us one day to detect signs of deception and misalignment.
says that AI can be made safe via continuous and iterative refinement, similar to what happened in the past with cars or rockets, and that AI will have no desire to take control.
is "one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".
and vehicle safety to product search and customer service. The main exception is privacy: 53% of Americans believe AI will lead to higher exposure of their personal information.
wrote in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".
an AGI would naturally desire power; those concerned about AGI risk accuse skeptics of anthropomorphism for believing an AGI would naturally value or infer human ethical norms.
Analyzing the internals and interpreting the behavior of current large language models is difficult, and it could be even more difficult for larger and more intelligent models.
Maas, Matthijs M. (6 February 2019). "How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons of mass destruction".
'intelligence ladder' are misleading. He notes the crucial role real-world experiments play in the scientific method, and that intelligence alone is no substitute for these.

has said that, to launch an intelligence explosion, an AI must become vastly better at software innovation than the rest of the world combined, which he finds implausible.
Some ways in which an advanced misaligned AI could try to gain more power. Power-seeking behaviors may arise because power is useful to accomplish virtually any objective.
tasks, production of animated films and TV shows, and development of biotech drugs, with profits invested back into further improving AI. The team next tasks the AI with
, or that a malevolent AGI could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase.
of safety standards. As rigorous safety procedures take time and resources, projects that proceed more carefully risk being out-competed by less scrupulous developers.
Speculatively, such hacking capabilities could be used by an AI system to break out of its local environment, generate revenue, or acquire cloud computing resources.
pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work
Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. "Concrete problems in AI safety." arXiv preprint arXiv:1606.06565 (2016).
warned: "Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours". In 2020,
is a sub-goal that helps to achieve an agent's ultimate goal. "Instrumental convergence" refers to the fact that some sub-goals are useful for achieving virtually
As AI systems increase in capabilities, the potential dangers associated with experimentation grow. This makes iterative, empirical approaches increasingly risky.
and I. J. Good himself occasionally expressed concern that a superintelligence could seize control, but issued no call to action. In 2000, computer scientist and
and quickly surpassed human ability, show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such
"International Community Must Urgently Confront New Reality of Generative, Artificial Intelligence, Speakers Stress as Security Council Debates Risks, Rewards"
therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon". Also in a lecture broadcast on the
existential risk of a worldwide "irreversible totalitarian regime". It could also be used by malicious actors to fracture society and make it dysfunctional.
testosterone. They think that power-seeking or self-preservation behaviors emerge in the AI as a way to achieve its true goals, according to the concept of
specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.
, consisting of a for-profit corporation and the nonprofit parent company, which says it aims to champion responsible AI development. Facebook co-founder
And you just have to have somebody close to the power cord. Right when you see it about to happen, you gotta yank that electricity out of the wall, man.
rather than carrying out commands literally", and that it must be able to fluidly solicit human guidance if it is too uncertain about what humans want.
capabilities may become available earlier, in weaker and more specialized AI systems. They may cause societal instability and empower malicious actors.
"'The Godfather of A.I.' just quit Google and says he regrets his life's work because it can be hard to stop 'bad actors from using it for bad things'"
A 2022 expert survey with a 17% response rate gave a median expectation of 5–10% for the possibility of human extinction from artificial intelligence.
As AI technology democratizes, it may become easier to engineer more contagious and lethal pathogens. This could enable people with limited skills in
Barrett, Anthony M.; Baum, Seth D. (23 May 2016). "A model of pathways to artificial superintelligence catastrophe for risk and decision analysis".
One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelist
for the first time held a session to consider the risks and threats posed by AI to world peace and stability, along with potential benefits.
. Studies in Applied Philosophy, Epistemology and Rational Ethics. Vol. 63. Cham: Springer International Publishing. pp. 225–248.
1922:". Alongside other requirements, the order mandates the development of guidelines for AI models that permit the "evasion of human control".
Some researchers believe the alignment problem may be particularly difficult when applied to superintelligences. Their reasoning includes:
of some systems could fundamentally limit a superintelligence's ability to predict some aspects of the future, increasing its uncertainty.
behaved inoffensively during pre-deployment testing, but was too easily baited into offensive behavior when it interacted with real users.
It is thus conceivable that developing superintelligence before other dangerous technologies would reduce the overall existential risk.
"The CEO of the company behind AI chatbot ChatGPT says the worst-case scenario for artificial intelligence is 'lights out for all of us'"
highlighted the "great potential of AI" and encouraged more research on how to make it robust and beneficial. In April 2016, the journal
The team also faces risks that its decision to box the project will delay the project long enough for another project to overtake it.
"Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence"
to "just keep an eye on what's going on with artificial intelligence, saying "I think there is potentially a dangerous outcome there."
Bostrom and others have said that a race to be the first to create AGI could lead to shortcuts in safety, or even to violent conflict.
Memory sharing and learning: AIs may be able to learn from the experiences of other AIs in a manner more efficient than human learning.
at an exponentially increasing rate, improving too quickly for its handlers or society at large to control. Empirically, examples like
in 2014, which presented his arguments that superintelligence poses an existential threat. By 2015, public figures such as physicists
"Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?'"
Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic
said in 2023 that he recently changed his estimate from "20 to 50 years before we have general purpose A.I." to "20 years or less".
believes AI will "unlock a huge amount of positive things", such as curing disease and increasing the safety of autonomous cars.
Editability: the parameters and internal workings of an AI model can easily be modified, unlike the connections in a human brain.
When artificial superintelligence (ASI) may be achieved, if ever, is necessarily less certain than predictions for AGI. In 2023,
Bostrom, Nick (1 May 2012). "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents".
"UN Secretary General embraces calls for a new UN agency on AI in the face of 'potentially catastrophic and existential risks'"
declaring, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as
The alignment problem is the research problem of how to reliably assign objectives, preferences or ethical principles to AIs.
Ngo, Richard; Chan, Lawrence; Sören Mindermann (22 February 2023). "The alignment problem from a deep learning perspective".
Reliability: transistors are more reliable than biological neurons, enabling higher precision and requiring less redundancy.
The Elgar Companion to Digital Transformation, Artificial Intelligence and Innovation in the Economy, Society and Democracy
, a petition calling on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than
According to Bostrom, superintelligence could help reduce the existential risk from other powerful technologies such as
Russell, Stuart J.; Norvig, Peter (2003). "Section 26.3: The Ethics and Risks of Developing Artificial Intelligence".
50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI.
where it is mostly unable to communicate with the outside world, and uses it to make money, by diverse means such as
, superintelligence is sometimes viewed as a powerful optimizer that makes the best decisions to achieve its goals.
depends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.
Geist, Edward Moore (15 August 2016). "It's already too late to stop the AI arms race—We must manage it instead".
"The mysterious artificial intelligence company Elon Musk invested in is developing game-changing smart computers"
"Mark Zuckerberg responds to Elon Musk's paranoia about AI: 'AI is going to... help keep our communities safe.'"
Concern over risk from artificial intelligence has led to some high-profile donations and investments. In 2015,
533:" that catches humanity unprepared. In this scenario, an AI more intelligent than its creators would be able to
"Council Post: At The Dawn Of Artificial General Intelligence: Balancing Abundance With Existential Safeguards"
"The rapid competitive economy of machine learning development: a discussion on the social risks and benefits"
(UN) considered banning autonomous lethal weapons, but consensus could not be reached. In July 2023 the UN
AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane
"'The Godfather of A.I.' warns of 'nightmare scenario' where artificial intelligence begins to seek power"
Hendrycks, Dan; Mazeika, Mantas; Woodside, Thomas (21 June 2023). "An Overview of Catastrophic AI Risks".
In July 2023, the US government secured voluntary safety commitments from major tech companies, including
originated the concept now known as an "intelligence explosion" and said the risks were underappreciated:
has funded and seeded multiple labs working on AI Alignment, notably $5.5 million in 2016 to launch the
"Bill Gates: Elon Musk Is Right, We Should All Be Scared Of Artificial Intelligence Wiping Out Humanity"
"Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks"
1577:, estimates the total existential risk from unaligned AI over the next 100 years at about one in ten.
Companies, state actors, and other organizations competing to develop AI technologies could lead to a
"Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"
Yampolskiy, Roman V. (8 April 2014). "Utility function security in artificially intelligent agents".
No matter how much time is put into pre-deployment design, a system's specifications often result in
"Amazon, Google, Meta, Microsoft and other tech firms agree to AI safeguards set by the White House"
"UN fails to agree on 'killer robot' ban as nations pour billions into autonomous weapons research"
, which they describe as a "dangerous ideology" for its unscientific and utopian nature. Gebru and
"AI doomsday worries many Americans. So does apocalypse from climate change, nukes, war, and more"
considers the existential risk a reason for "proceeding with due caution", not for abandoning AI.
"The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence"
here—we'll leave the lights on?' Probably not—but this is more or less what is happening with AI.
exist. Concerns about superintelligence have been voiced by leading computer scientists and tech
Stuart Armstrong argues that the orthogonality thesis follows logically from the philosophical "
"The short film 'Slaughterbots' depicts a dystopian future of killer drones swarming the world"
interference until it achieves a "decisive strategic advantage" that allows it to take control.
If instrumental goal convergence occurs, it may only do so in sufficiently intelligent agents.
calling a halt to advanced AI training until it could be properly regulated. In May 2023, the
"ChatGPT and the new AI are wreaking havoc on cybersecurity in exciting and frightening ways"
have suggested that obsession with AGI is part of a pattern of intellectual movements called
It has been argued that there are limitations to what intelligence can achieve. Notably, the
leaders said that not only AGI, but superintelligence may be achieved in less than 10 years.
Haney, Brian Seamus (2018). "The Perils & Promises of Artificial General Intelligence".
possibility that a superintelligence may not particularly value humans by default. To avoid
"A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says"
Kasirzadeh, Atoosa (2024). "Two Types of AI Existential Risk: Decisive and Accumulative".
"Navigating Humanity's Greatest Challenge Yet: Experts Debate the Existential Risks of AI"
"A Valuable New Book Explores The Potential Impacts Of Intelligent Machines On Human Life"
", identifying superintelligent robots as a high-tech danger to human survival, alongside
"'Godfather of AI' Geoffrey Hinton quits Google and warns over dangers of misinformation"
Musk called for some sort of regulation of AI development as early as 2017. According to
Scope–severity grid from Bostrom's paper "Existential Risk Prevention as Global Priority"
"Open letter calling for AI 'pause' shines light on fierce debate around risks vs. hype"
"The case against killer robots, from a guy actually working on artificial intelligence"
have been suggested, potentially for an artificial superintelligence to be a signatory.
"Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter"
"The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?"
"Elon Musk says AI could doom human civilization. Zuckerberg disagrees. Who's right?"
"Classification of global catastrophic risks connected with artificial intelligence"
in its produced content, or by using its growing understanding of human behavior to
calls AI an "existential opportunity", highlighting the cost of not developing it.
"OpenAI's CEO confirms the company isn't training GPT-5 and "won't for some time""
"Elon Musk wants to hook your brain up directly to computers – starting next year"
"Obama on the Risks of AI: 'You Just Gotta Have Somebody Close to the Power Cord'"

"Elon Musk Is Donating $10M Of His Own Money To Artificial Intelligence Research"

"Tech titans like Elon Musk are spending $1 billion to save you from terminators"
"'Godfather of artificial intelligence' weighs in on the past and potential of AI"
Observers tend to agree that AI has significant potential to improve society. The
"Climate change an 'existential security risk' to Australia, Senate inquiry says"
Urbina, Fabio; Lentzos, Filippa; Invernizzi, Cédric; Ekins, Sean (7 March 2022).
"AI timelines: What do experts in artificial intelligence expect for the future?"
(Speech). Lecture given to '51 Society'. Manchester: The Turing Digital Archive.
"Growing public concern about the role of artificial intelligence in daily life"

"Will artificial intelligence destroy humanity? Here are 5 reasons not to worry"
Ord, Toby (2020). "Chapter 5: Future Risks, Unaligned Artificial Intelligence".
or because it could damage the field's reputation. AI and AI ethics researchers
transmit signals at up to 120 m/s, while computers transmit signals at the speed of light.
were expressing concern about the risks of superintelligence. Also in 2015, the
argue that superintelligent machines will have no desire for self-preservation.
"Super speeds for super AI: Frontier sets new pace for artificial intelligence"
, which contain only those principles agreed to by 90% of the attendees of the
suggests that an intelligence explosion may someday catch humanity unprepared.
"Google's AI researchers say these are the five key problems for robot safety"

"Ethicists fire back at 'AI Pause' letter they say 'ignores the actual harms'"

"Warning of AI's danger, pioneer Geoffrey Hinton quits Google to speak freely"
Carlsmith, Joseph (16 June 2022). "Is Power-Seeking AI an Existential Risk?".
"Existential Risk vs. Existential Opportunity: A balanced approach to AI risk"
"Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'"

"Elon Musk and other tech leaders call for pause in 'out of control' AI race"
"AI experts challenge 'doomer' narrative, including 'extinction risk' claims"
have to expect the machines to take control, in the way that is mentioned in
possesses distinctive capabilities other animals lack. If AI were to surpass
"Elon Musk: regulate AI to combat 'existential threat' before it's too late"
Bales, Adam; D'Alessandro, William; Kirk‐Giannini, Cameron Domenico (2024).
(2009). "26.3: The Ethics and Risks of Developing Artificial Intelligence".
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Several skeptics emphasize the potential near-term benefits of AI. Meta CEO
"Andrew Ng: Why 'Deep Learning' Is a Mandate for Humans, Not Just Machines"
'Samuel Butler's "the Book of the Machines" and the Argument from Design.'
"Artificial Intelligence as a Positive and Negative Factor in Global Risk"
An additional source of concern is that AI "must reason about what people
"Elon Musk and Stephen Hawking warn of artificial intelligence arms race"
Our final invention: artificial intelligence and the end of the human era
"Science fiction no more? Channel 4's Humans and our rogue AI obsessions"
Duplicability: unlike human brains, AI software and models can be easily copied.
In March 2023, key figures in AI, such as Musk, signed a letter from the
689:, which details the history of progress on AI alignment up to that time.
". Following increased concern over AI risks, government leaders such as
"Amazon, Google, Meta, Microsoft and other firms agree to AI safeguards"
are increasingly considered a present and critical threat. According to
"The debate over whether AI will destroy us is dividing Silicon Valley"
. Association for the Advancement of Artificial Intelligence: 105–114.
"Research Priorities for Robust and Beneficial Artificial Intelligence"

"Rishi Sunak Wants the U.K. to Be a Key Player in Global AI Regulation"
"Is artificial intelligence really an existential threat to humanity?"
Unethical Research: How to Create a Malevolent Artificial Intelligence
the first time it encounters a new scenario. For example, Microsoft's
(19 December 2014). "Responses to catastrophic AGI risk: a survey".
"DeepMind and Google: the battle to control artificial intelligence"
into the facial muscles of humans to cause constant, beaming grins."
Effective altruism § Long-term future and global catastrophic risks
that is useful for medicine could be repurposed to create weapons.
have led some researchers to reassess their expectations. Notably,
systems do not recursively improve their fundamental architecture.
Journal of Experimental & Theoretical Artificial Intelligence
Rise of the Robots: Technology and the Threat of a Jobless Future
Journal of Experimental & Theoretical Artificial Intelligence
"Stephen Hawking warns artificial intelligence could end mankind"
"The challenge of advanced cyberwar and the place of cyberpeace"

"General Purpose Intelligence: Arguing the Orthogonality Thesis"

. Oxford, United Kingdom: Oxford University Press. p. 116.

"Complex Value Systems are Required to Realize Valuable Futures"

"Artificial intelligence could lead to extinction, experts warn"

"Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse"
new danger... is certainly something which can give us anxiety."
"Musk-Backed Group Probes Risks Behind Artificial Intelligence"

"Barack Obama Talks AI, Robo Cars, and the Future of the World"

"Why We Should Be Concerned About Artificial Superintelligence"

"'Artificial Escalation': Imagining the future of nuclear risk"

"Strategic considerations about different speeds of AI takeoff"
7152:"Making robots less confident could prevent them taking over"
5753:(2015). "Chapter 9: Super-intelligence and the Singularity".
4942:"Houston, we have a bug: 9 famous software glitches in space"
3103:"Control dangerous AI before it controls us, one expert says"
1851:
1591:
788:
4019:"Dual use of artificial-intelligence-powered drug discovery"
4016:
3072:"Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots"
2950:"Speculations Concerning the First Ultraintelligent Machine"
2920:"The AI apocalypse: will the human race soon be terminated?"
2441:
Turing, Alan (15 May 1951). "Can digital computers think?".
1573:, Toby Ord, a Senior Research Fellow at Oxford University's
1516:, and Musk and others jointly committed $ 1 billion to
419:
One argument for the importance of this risk references how
The Precipice: Existential Risk and the Future of Humanity
A third source of concern is the possibility of a sudden "
, it might become uncontrollable. Just as the fate of the
has said: "I think it seems wise to apply something like
are actively engaged in researching AI risk and safety.
far enough in the future to not be worth researching.
create, in six hours, 40,000 candidate molecules for
, compared to potentially multiple GHz for computers.
Bostrom argues that AI has many advantages over the
Two sources of concern stem from the problems of AI
AI may also drastically improve humanity's future.
AI could be used to gain military advantages via
refers to the idea that substantial progress in
advanced AI systems are typically modeled as
operate at a maximum frequency of around 200
, because in humans it is limited to a few
Pause Giant AI Experiments: An Open Letter
Artificial Intelligence: A Modern Approach
Some skeptics, such as Timothy B. Lee of
In 1951, foundational computer scientist
Centre for the Study of Existential Risk
In contrast with AGI, Bostrom defines a
called for an increased focus on global
Machine Intelligence Research Institute
, including known and novel molecules.
to illustrate some of their concerns.
Difficulty of making a flawless design
Open Letter on Artificial Intelligence
persuade someone into letting it free
, the most-cited computer scientist
dominate other species because the
issued an executive order on the "
In early statements on the topic,
donated $ 10 million to the
Artificial intelligence arms race
Speed of computation: biological
In October 2023, U.S. President
United Nations Secretary-General
Views on banning and regulation
poll of the American public by
Alignment of superintelligences
Artificial general intelligence
artificial general intelligence
Center for Human-Compatible AI
international relations theory
Centre for Human-Compatible AI
Difficulty of specifying goals
Internal communication speed:
Why The Future Doesn't Need Us
penned an influential essay, "
, who wrote in his 1863 essay
, which taught itself to play
Beneficial AI 2017 conference
Oak Ridge National Laboratory
United Kingdom prime minister
Future of Humanity Institute
Some scholars have proposed
Resistance to changing goals
human cognitive enhancement
and engineered bioplagues.
breakthroughs will happen.
Evolutionary psychologist
recursively improve itself
Alignment Research Center
Institutions such as the
Anthropomorphic arguments
Types of existential risk
automated decision-making
autonomous lethal weapons
of information at a time.
Potential AI capabilities
Darwin among the Machines
Future of Life Institute
Future of Life Institute
Future of Life Institute
Asilomar AI Principles
instrumental convergence
Instrumental convergence
molecular nanotechnology
Existential risk studies
Future of Life Institute
Existential risk from AI
interview of President
existential catastrophe
Asilomar AI Principles
Amazon Mechanical Turk
hypothetical scenarios
Dangerous capabilities
Intelligence explosion
, or optically at the
Comparison with humans
Frontier supercomputer
, computer scientists
intelligence explosion
, a major pioneer of
Other sources of risk
large language models
The Alignment Problem
and MIT Media Lab's
is-ought distinction
Orthogonality thesis
speed of electricity
General Intelligence
Center for AI Safety
, and entrepreneurs
The plausibility of
(AGI) could lead to
In March 2023, the
chief AI scientist
. In January 2015,
Amazon Web Services
" argument against
unintended behavior
"instrumental" goal
Dual-use technology
Social manipulation
and Nobel laureate
or an irreversible
intelligent agents
race to the bottom
Enhanced pathogens
computer scientist
signed a statement
human intelligence
global catastrophe
Secretary-General
Margaret Mitchell
In his 2020 book
led by Professor
Superintelligence
human flourishing
intelligent agent
synthetic biology
synthetic biology
superintelligence
Superintelligence
Breakthroughs in
Stuart J. Russell
Superintelligence
Scholars such as
3860:
3858:
3843:
3837:
3836:
3830:
3828:
3813:
3807:
3806:
3800:
3798:
3783:
3777:
3776:
3774:
3772:
3757:
3751:
3750:
3748:
3746:
3727:
3721:
3720:
3718:
3716:
3702:
3696:
3694:
3693:
3691:
3678:
3669:
3668:
3666:
3664:
3658:Business Insider
3649:
3638:
3637:
3635:
3633:
3619:
3613:
3612:
3610:
3608:
3594:
3588:
3587:
3585:
3583:
3569:
3563:
3562:
3550:
3544:
3543:
3541:
3539:
3525:
3519:
3518:
3516:
3514:
3492:
3486:
3485:
3483:
3481:
3466:
3460:
3459:
3457:
3455:
3441:
3435:
3434:
3432:
3430:
3415:
3409:
3408:
3398:
3366:
3360:
3359:
3357:
3355:
3323:
3317:
3316:
3290:
3259:
3253:
3252:
3250:
3248:
3229:
3223:
3222:
3220:
3218:
3206:Business Insider
3196:
3190:
3189:
3187:
3185:
3166:
3160:
3159:
3157:
3155:
3136:
3125:
3124:
3122:
3120:
3098:
3092:
3091:
3089:
3087:
3067:
3061:
3060:
3058:
3056:
3034:
3028:
3027:
3005:
2999:
2998:
2974:
2968:
2946:
2940:
2939:
2937:
2935:
2915:
2909:
2908:
2906:
2882:
2876:
2862:
2856:
2855:
2853:
2851:
2844:www.deepmind.com
2836:
2830:
2829:
2827:
2825:
2805:
2794:
2793:
2791:
2789:
2780:. January 2015.
2770:
2768:
2766:
2760:
2745:
2731:
2715:
2709:
2708:
2706:
2704:
2698:
2679:
2670:
2655:
2654:
2652:
2650:
2635:
2626:
2625:
2613:
2607:
2606:
2604:
2602:
2579:
2573:
2572:
2570:
2568:
2553:
2544:
2543:
2538:
2536:
2526:"The AI Dilemma"
2522:
2516:
2515:
2513:
2511:
2505:Business Insider
2499:Jackson, Sarah.
2496:
2487:
2486:
2484:
2482:
2460:
2451:
2450:
2438:
2432:
2431:
2429:
2427:
2409:
2403:
2402:
2400:
2398:
2391:yoshuabengio.org
2383:
2377:
2376:
2374:
2372:
2357:
2351:
2350:
2348:
2346:
2323:
2317:
2316:
2314:
2312:
2289:
2283:
2282:
2259:
2220:
2219:
2217:
2191:
2185:
2184:
2153:AI & Society
2144:
2135:
2134:
2117:
2111:
2110:
2083:
2053:
2047:
2041:
2032:
1964:Human Compatible
1909:AI Now Institute
1877:António Guterres
1870:Security Council
1661:Popular reaction
1522:Dustin Moskovitz
1453:
1354:Treacherous turn
1321:Roman Yampolskiy
1237:anthropomorphism
1020:existential risk
955:chemical warfare
867:anthropomorphism
812:Memory: notably
659:Roman Yampolskiy
547:machine learning
502:António Guterres
437:mountain gorilla
433:superintelligent
410:human extinction
391:
384:
377:
298:Existential risk
120:Machine learning
21:
20:
10381:
10380:
10376:
10375:
10374:
10372:
10371:
10370:
10346:Future problems
10331:
10330:
10329:
10324:
10303:Future problems
10276:
10267:
10201:
10170:
10137:Climate fiction
10103:
9978:2012 phenomenon
9961:
9867:Sheep and Goats
9754:2 Thessalonians
9632:
9549:Gamma-ray burst
9503:
9418:
9409:Genetic erosion
9364:
9334:
9325:Water pollution
9293:Ozone depletion
9189:Desertification
9129:
9044:
8979:Doomsday device
8959:Nuclear warfare
8898:
8873:
8868:
8838:
8833:
8817:
8736:
8702:Land use reform
8645:
8561:Founders Pledge
8536:Evidence Action
8476:
8400:
8356:Earning to give
8314:
8309:
8279:
8274:
8260:
8199:
8155:Steve Omohundro
8135:Geoffrey Hinton
8125:Stephen Hawking
8110:Paul Christiano
8090:Scott Alexander
8078:
8049:Google DeepMind
7997:
7983:Suffering risks
7901:
7892:
7854:
7852:
7833:
7828:
7827:
7817:
7815:
7813:The White House
7805:
7801:
7791:
7789:
7780:
7779:
7775:
7765:
7763:
7754:
7753:
7749:
7702:
7698:
7661:Physica Scripta
7653:
7649:
7639:
7637:
7629:
7628:
7624:
7614:
7612:
7594:
7587:
7577:
7575:
7565:
7561:
7551:
7549:
7534:
7530:
7520:
7518:
7503:
7499:
7489:
7487:
7472:
7468:
7458:
7456:
7455:. 29 March 2023
7447:
7446:
7442:
7432:
7430:
7420:
7416:
7406:
7404:
7403:. 29 March 2023
7395:
7394:
7390:
7380:
7378:
7377:. 29 March 2023
7369:
7368:
7364:
7327:
7323:
7316:
7290:
7283:
7269:
7267:
7252:
7248:
7232:Physica Scripta
7223:
7219:
7205:
7203:
7184:(Summer 2010).
7179:
7175:
7165:
7163:
7148:
7144:
7134:
7132:
7117:
7113:
7103:
7101:
7086:
7082:
7072:
7070:
7060:
7056:
7015:
7011:
7001:
6999:
6984:
6980:
6970:
6968:
6953:
6949:
6944:
6940:
6930:
6928:
6913:
6909:
6899:
6897:
6895:
6869:
6865:
6804:
6800:
6790:
6788:
6777:ICT Discoveries
6769:
6765:
6720:Physica Scripta
6712:
6708:
6659:
6655:
6643:Physica Scripta
6634:
6630:
6620:
6618:
6608:
6604:
6594:
6592:
6583:
6582:
6578:
6568:
6566:
6557:
6556:
6552:
6547:Wayback Machine
6535:
6519:
6515:
6505:
6503:
6488:
6484:
6474:
6472:
6459:Dadich, Scott.
6457:
6453:
6443:
6441:
6436:. 25 May 2018.
6428:
6427:
6423:
6413:
6411:
6401:
6397:
6387:
6385:
6367:
6363:
6330:
6326:
6316:
6314:
6304:
6303:
6299:
6289:
6287:
6277:
6273:
6263:
6261:
6252:
6251:
6247:
6237:
6235:
6222:
6221:
6217:
6212:. 4 April 2023.
6204:
6203:
6199:
6189:
6187:
6177:
6173:
6163:
6161:
6152:
6151:
6147:
6137:
6135:
6118:
6114:
6104:
6102:
6087:
6083:
6073:
6071:
6058:
6057:
6053:
6045:
6041:
6031:
6029:
6016:
6015:
6011:
6001:
5999:
5990:
5989:
5985:
5975:
5973:
5970:Washington Post
5962:
5961:
5957:
5945:Chicago Tribune
5936:
5932:
5922:
5920:
5903:
5899:
5889:
5887:
5886:. 23 March 2023
5878:
5877:
5870:
5831:
5824:
5810:
5808:
5799:
5798:
5794:
5778:
5774:
5767:
5759:. Basic Books.
5748:
5744:
5734:
5732:
5717:
5716:
5712:
5702:
5700:
5687:"AI Principles"
5685:
5684:
5680:
5670:
5668:
5660:
5659:
5652:
5642:
5640:
5630:
5626:
5619:
5600:
5591:
5575:10.1038/548520a
5546:
5542:
5532:
5530:
5521:
5520:
5516:
5481:
5477:
5458:
5454:
5417:Physica Scripta
5409:
5405:
5390:
5378:Bostrom, Nick,
5376:
5372:
5362:
5360:
5347:
5346:
5342:
5305:
5301:
5291:
5289:
5280:
5279:
5275:
5265:
5263:
5224:"Apocalypse AI"
5220:
5213:
5203:
5201:
5184:
5183:
5179:
5169:
5167:
5152:
5145:
5139:Wayback Machine
5123:
5121:
5102:
5098:
5088:
5086:
5082:
5071:
5065:
5061:
5054:
5040:
5036:
5026:
5024:
5011:
5010:
5006:
4971:
4967:
4957:
4955:
4938:
4934:
4924:
4922:
4905:
4901:
4891:
4889:
4876:
4875:
4871:
4861:
4859:
4844:
4840:
4830:
4828:
4820:
4819:
4815:
4798:
4794:
4752:
4748:
4738:
4736:
4726:
4719:
4709:
4707:
4703:
4689:10.1145/2770869
4670:
4661:
4657:
4647:
4645:
4628:Russell, Stuart
4625:
4621:
4616:
4612:
4605:
4586:
4579:
4569:
4567:
4563:
4556:
4550:
4546:
4536:
4534:
4519:
4515:
4505:
4503:
4495:
4494:
4490:
4473:
4469:
4460:
4456:
4446:
4444:
4434:
4430:
4420:
4418:
4408:
4404:
4394:
4392:
4382:
4378:
4368:
4366:
4355:
4351:
4334:
4330:
4320:
4318:
4308:
4304:
4297:
4283:
4272:
4265:
4251:
4247:
4237:
4235:
4217:
4213:
4189:
4183:
4179:
4169:
4167:
4157:
4153:
4143:
4141:
4133:Brimelow, Ben.
4131:
4127:
4117:
4115:
4106:
4105:
4101:
4072:
4068:
4015:
4011:
3993:
3989:
3979:
3977:
3969:
3968:
3964:
3954:
3952:
3943:
3942:
3938:
3928:
3926:
3908:
3904:
3887:
3866:
3856:
3854:
3845:
3844:
3840:
3826:
3824:
3815:
3814:
3810:
3796:
3794:
3784:
3780:
3770:
3768:
3758:
3754:
3744:
3742:
3729:
3728:
3724:
3714:
3712:
3704:
3703:
3699:
3689:
3687:
3679:
3672:
3662:
3660:
3650:
3641:
3631:
3629:
3621:
3620:
3616:
3606:
3604:
3596:
3595:
3591:
3581:
3579:
3571:
3570:
3566:
3551:
3547:
3537:
3535:
3527:
3526:
3522:
3512:
3510:
3494:
3493:
3489:
3479:
3477:
3468:
3467:
3463:
3453:
3451:
3443:
3442:
3438:
3428:
3426:
3425:. 29 March 2023
3417:
3416:
3412:
3367:
3363:
3353:
3351:
3344:
3324:
3320:
3288:10.1038/532413a
3261:
3260:
3256:
3246:
3244:
3231:
3230:
3226:
3216:
3214:
3197:
3193:
3183:
3181:
3168:
3167:
3163:
3153:
3151:
3138:
3137:
3128:
3118:
3116:
3099:
3095:
3085:
3083:
3068:
3064:
3054:
3052:
3035:
3031:
3020:
3006:
3002:
2991:
2975:
2971:
2967:, vol. 6, 1965.
2957:Wayback Machine
2947:
2943:
2933:
2931:
2924:The Irish Times
2916:
2912:
2883:
2879:
2873:Wayback Machine
2863:
2859:
2849:
2847:
2838:
2837:
2833:
2823:
2821:
2806:
2797:
2787:
2785:
2772:
2764:
2762:
2758:
2729:
2719:Russell, Stuart
2716:
2712:
2702:
2700:
2696:
2677:
2671:
2658:
2648:
2646:
2636:
2629:
2614:
2610:
2600:
2598:
2580:
2576:
2566:
2564:
2563:. 4 August 2022
2555:
2554:
2547:
2534:
2532:
2524:
2523:
2519:
2509:
2507:
2497:
2490:
2480:
2478:
2461:
2454:
2439:
2435:
2425:
2423:
2410:
2406:
2396:
2394:
2385:
2384:
2380:
2370:
2368:
2367:. 25 March 2023
2365:www.cbsnews.com
2359:
2358:
2354:
2344:
2342:
2324:
2320:
2310:
2308:
2298:Washington Post
2290:
2286:
2279:
2260:
2223:
2192:
2188:
2145:
2138:
2118:
2114:
2107:
2087:Russell, Stuart
2084:
2067:
2062:
2057:
2056:
2048:
2044:
2033:
2026:
2021:
2016:
2007:System accident
1948:Butlerian Jihad
1928:
1838:
1828:
1815:
1810:
1766:
1748:
1717:
1697:Hillary Clinton
1663:
1655:Mark Zuckerberg
1621:Émile P. Torres
1609:Emily M. Bender
1594:Vice President
1589:
1583:
1554:Geoffrey Hinton
1497:Stephen Hawking
1477:Geoffrey Hinton
1469:
1463:
1451:
1407:
1395:hidden messages
1368:
1356:
1344:
1334:
1317:
1303:
1267:Anthropomorphic
1264:
1255:Michael Chorost
1219:
1185:
1150:
1127:
1118:
1086:
1080:
1072:
1066:
1008:
1002:
969:
963:
935:
912:
903:
894:
886:time complexity
878:
862:
843:
773:
757:Stephen Hawking
746:
726:Geoffrey Hinton
712:
707:
681:Brian Christian
647:Stephen Hawking
555:
456:Geoffrey Hinton
395:
366:
365:
356:
348:
347:
323:
313:
312:
284:Control problem
264:
254:
253:
165:
155:
154:
115:
107:
106:
77:Computer vision
52:
17:
12:
11:
5:
10379:
10369:
10368:
10363:
10358:
10353:
10348:
10343:
10326:
10325:
10323:
10322:
10321:
10320:
10315:
10310:
10305:
10300:
10298:Apocalypticism
10287:
10272:
10269:
10268:
10266:
10265:
10260:
10255:
10250:
10245:
10240:
10235:
10230:
10225:
10220:
10215:
10209:
10207:
10203:
10202:
10200:
10199:
10194:
10189:
10184:
10178:
10176:
10172:
10171:
10169:
10168:
10167:
10166:
10156:
10151:
10150:
10149:
10142:Disaster films
10139:
10134:
10133:
10132:
10127:
10117:
10115:Alien invasion
10111:
10109:
10105:
10104:
10102:
10101:
10096:
10091:
10090:
10089:
10084:
10074:
10069:
10064:
10063:
10062:
10057:
10055:Amillennialism
10052:
10042:
10040:Millenarianism
10037:
10036:
10035:
10025:
10020:
10015:
10010:
10005:
10000:
9995:
9993:Apocalypticism
9990:
9985:
9980:
9975:
9969:
9967:
9963:
9962:
9960:
9959:
9958:
9957:
9947:
9942:
9941:
9940:
9939:
9938:
9933:
9928:
9918:
9917:
9916:
9911:
9906:
9901:
9896:
9891:
9889:Dhu al-Qarnayn
9886:
9881:
9871:
9870:
9869:
9864:
9859:
9858:
9857:
9847:
9842:
9837:
9835:Great Apostasy
9832:
9831:
9830:
9829:
9828:
9823:
9818:
9813:
9808:
9803:
9798:
9793:
9788:
9773:
9768:
9767:
9766:
9761:
9751:
9746:
9741:
9736:
9735:
9734:
9724:
9714:
9709:
9708:
9707:
9702:
9692:
9682:
9680:Last Judgement
9677:
9676:
9675:
9670:
9660:
9659:
9658:
9653:
9642:
9640:
9638:Eschatological
9634:
9633:
9631:
9630:
9625:
9620:
9615:
9610:
9605:
9604:
9603:
9598:
9593:
9592:
9591:
9581:
9576:
9566:
9561:
9556:
9551:
9546:
9541:
9536:
9531:
9526:
9521:
9515:
9513:
9509:
9508:
9505:
9504:
9502:
9501:
9496:
9491:
9486:
9481:
9476:
9471:
9466:
9465:
9464:
9459:
9454:
9444:
9443:
9442:
9437:
9426:
9424:
9420:
9419:
9417:
9416:
9411:
9406:
9401:
9396:
9391:
9385:
9383:
9374:
9370:
9369:
9366:
9365:
9363:
9362:
9361:
9360:
9353:Overpopulation
9350:
9344:
9342:
9336:
9335:
9333:
9332:
9330:Water scarcity
9327:
9322:
9317:
9316:
9315:
9305:
9303:Sea level rise
9300:
9295:
9290:
9285:
9280:
9275:
9270:
9269:
9268:
9266:on marine life
9263:
9253:
9248:
9243:
9238:
9233:
9228:
9223:
9221:Global warming
9218:
9213:
9211:Global dimming
9208:
9203:
9202:
9201:
9191:
9186:
9181:
9176:
9171:
9169:Cascade effect
9166:
9165:
9164:
9154:
9148:
9146:
9144:Climate change
9137:
9131:
9130:
9128:
9127:
9122:
9117:
9116:
9115:
9110:
9105:
9095:
9090:
9085:
9080:
9079:
9078:
9073:
9063:
9058:
9052:
9050:
9046:
9045:
9043:
9042:
9037:
9036:
9035:
9030:
9025:
9011:
9010:
9009:
9004:
8994:
8988:
8987:
8986:
8981:
8976:
8974:Doomsday Clock
8971:
8966:
8956:
8955:
8954:
8944:
8939:
8934:
8933:
8932:
8927:
8925:Cyberterrorism
8922:
8912:
8906:
8904:
8900:
8899:
8897:
8896:
8895:
8894:
8884:
8878:
8875:
8874:
8867:
8866:
8859:
8852:
8844:
8835:
8834:
8832:
8831:
8825:
8823:
8819:
8818:
8816:
8815:
8808:
8801:
8794:
8787:
8780:
8773:
8766:
8759:
8752:
8744:
8742:
8738:
8737:
8735:
8734:
8729:
8724:
8719:
8717:Mass deworming
8714:
8709:
8707:Life extension
8704:
8699:
8694:
8692:Global poverty
8689:
8684:
8679:
8674:
8669:
8664:
8662:Climate change
8659:
8653:
8651:
8647:
8646:
8644:
8643:
8638:
8636:Unlimit Health
8633:
8628:
8623:
8618:
8613:
8608:
8603:
8598:
8593:
8588:
8583:
8581:Good Food Fund
8578:
8573:
8568:
8563:
8558:
8553:
8548:
8543:
8538:
8533:
8528:
8523:
8518:
8513:
8508:
8503:
8498:
8495:
8490:
8484:
8482:
8478:
8477:
8475:
8474:
8469:
8464:
8459:
8454:
8449:
8444:
8439:
8434:
8429:
8427:Hilary Greaves
8424:
8419:
8414:
8408:
8406:
8402:
8401:
8399:
8398:
8393:
8391:Utilitarianism
8388:
8383:
8378:
8373:
8368:
8363:
8358:
8353:
8348:
8346:Disease burden
8343:
8338:
8333:
8328:
8322:
8320:
8316:
8315:
8308:
8307:
8300:
8293:
8285:
8276:
8275:
8265:
8262:
8261:
8259:
8258:
8253:
8246:
8239:
8232:
8225:
8220:
8213:
8207:
8205:
8201:
8200:
8198:
8197:
8192:
8187:
8182:
8177:
8172:
8167:
8162:
8157:
8152:
8147:
8142:
8137:
8132:
8127:
8122:
8117:
8112:
8107:
8102:
8097:
8092:
8086:
8084:
8080:
8079:
8077:
8076:
8071:
8066:
8061:
8056:
8051:
8046:
8041:
8036:
8031:
8026:
8021:
8016:
8011:
8005:
8003:
7999:
7998:
7996:
7995:
7990:
7985:
7980:
7978:Machine ethics
7975:
7970:
7965:
7960:
7955:
7950:
7945:
7940:
7935:
7930:
7925:
7920:
7915:
7909:
7907:
7903:
7902:
7891:
7890:
7883:
7876:
7868:
7862:
7861:
7832:
7829:
7826:
7825:
7799:
7788:. 21 July 2023
7773:
7762:. 21 July 2023
7747:
7712:(5): 318–321.
7696:
7647:
7635:United Nations
7622:
7585:
7559:
7528:
7497:
7466:
7440:
7414:
7388:
7362:
7321:
7314:
7281:
7246:
7217:
7182:McGinnis, John
7173:
7142:
7111:
7080:
7054:
7025:(2): 397–414.
7009:
6992:Slate Magazine
6978:
6947:
6938:
6907:
6893:
6863:
6798:
6763:
6706:
6669:(2): 397–414.
6653:
6628:
6602:
6576:
6550:
6533:
6513:
6482:
6451:
6421:
6395:
6361:
6324:
6311:DAIR Institute
6297:
6271:
6245:
6215:
6197:
6171:
6145:
6127:The New Yorker
6112:
6081:
6051:
6039:
6009:
5998:. 27 June 2016
5983:
5972:. 9 April 2023
5955:
5930:
5897:
5868:
5841:(3): 285–311.
5822:
5792:
5772:
5765:
5742:
5710:
5678:
5650:
5624:
5617:
5589:
5540:
5514:
5475:
5452:
5403:
5388:
5370:
5340:
5299:
5273:
5211:
5177:
5143:
5096:
5059:
5052:
5034:
5004:
4981:(3): 373–389.
4965:
4932:
4899:
4869:
4838:
4813:
4792:
4746:
4717:
4655:
4619:
4610:
4603:
4577:
4544:
4513:
4488:
4467:
4454:
4428:
4402:
4376:
4365:. No. 132
4363:Philosophy Now
4349:
4328:
4302:
4295:
4270:
4263:
4245:
4211:
4177:
4151:
4125:
4099:
4066:
4029:(3): 189–191.
4009:
3987:
3962:
3936:
3902:
3864:
3853:. 7 April 2023
3838:
3808:
3778:
3752:
3722:
3697:
3670:
3652:Babauta, Leo.
3639:
3614:
3589:
3564:
3545:
3520:
3487:
3461:
3436:
3410:
3361:
3342:
3318:
3254:
3224:
3191:
3161:
3126:
3093:
3062:
3029:
3018:
3000:
2989:
2969:
2941:
2910:
2897:(3): 256–260.
2877:
2857:
2831:
2795:
2710:
2656:
2627:
2608:
2574:
2545:
2517:
2488:
2452:
2433:
2404:
2378:
2352:
2318:
2284:
2277:
2221:
2186:
2159:(1): 147–163.
2136:
2112:
2105:
2064:
2063:
2061:
2058:
2055:
2054:
2042:
2023:
2022:
2020:
2017:
2015:
2014:
2009:
2004:
1999:
1992:
1987:
1982:
1977:
1972:
1967:
1960:
1955:
1950:
1945:
1940:
1935:
1929:
1927:
1924:
1866:United Nations
1827:
1824:
1814:
1811:
1809:
1806:
1756:Machine ethics
1747:
1744:
1733:An April 2023
1716:
1715:Public surveys
1713:
1665:During a 2016
1662:
1659:
1582:
1579:
1530:Stuart Russell
1462:
1459:
1406:
1403:
1367:
1364:
1355:
1352:
1333:
1330:
1302:
1299:
1263:
1260:
1218:
1215:
1210:
1209:
1198:
1184:
1181:
1179:intelligence.
1172:
1171:
1168:
1164:
1160:
1157:
1149:
1146:
1126:
1123:
1117:
1114:
1079:
1076:
1065:
1062:
1028:moral progress
1004:Main article:
1001:
998:
965:Main article:
962:
959:
934:
931:
911:
908:
902:
899:
893:
890:
877:
874:
861:
858:
849:The economist
842:
839:
838:
837:
834:
831:
824:
821:
814:working memory
810:
807:
804:speed of light
792:
772:
769:
745:
742:
711:
708:
706:
703:
631:nanotechnology
554:
551:
397:
396:
394:
393:
386:
379:
371:
368:
367:
364:
363:
357:
354:
353:
350:
349:
346:
345:
340:
335:
330:
324:
319:
318:
315:
314:
311:
310:
305:
300:
295:
290:
281:
276:
271:
265:
260:
259:
256:
255:
252:
251:
246:
241:
236:
231:
230:
229:
219:
214:
209:
208:
207:
202:
197:
187:
182:
180:Earth sciences
177:
172:
170:Bioinformatics
166:
161:
160:
157:
156:
153:
152:
147:
142:
137:
132:
127:
122:
116:
113:
112:
109:
108:
105:
104:
99:
94:
89:
84:
79:
74:
69:
64:
59:
53:
48:
47:
44:
43:
33:
32:
26:
25:
15:
9:
6:
4:
3:
2:
10378:
10367:
10364:
10362:
10359:
10357:
10354:
10352:
10349:
10347:
10344:
10342:
10339:
10338:
10336:
10319:
10316:
10314:
10313:Risk analysis
10311:
10309:
10306:
10304:
10301:
10299:
10296:
10295:
10288:
10286:
10285:
10280:
10274:
10273:
10270:
10264:
10261:
10259:
10258:Social crisis
10256:
10254:
10251:
10249:
10246:
10244:
10241:
10239:
10236:
10234:
10231:
10229:
10226:
10224:
10221:
10219:
10216:
10214:
10211:
10210:
10208:
10204:
10198:
10195:
10193:
10190:
10188:
10185:
10183:
10180:
10179:
10177:
10175:Organizations
10173:
10165:
10162:
10161:
10160:
10157:
10155:
10152:
10148:
10145:
10144:
10143:
10140:
10138:
10135:
10131:
10128:
10126:
10123:
10122:
10121:
10118:
10116:
10113:
10112:
10110:
10106:
10100:
10099:World to come
10097:
10095:
10092:
10088:
10085:
10083:
10080:
10079:
10078:
10075:
10073:
10070:
10068:
10065:
10061:
10058:
10056:
10053:
10051:
10048:
10047:
10046:
10045:Millennialism
10043:
10041:
10038:
10034:
10033:Messianic Age
10031:
10030:
10029:
10026:
10024:
10021:
10019:
10018:Gog and Magog
10016:
10014:
10011:
10009:
10008:Earth Changes
10006:
10004:
10001:
9999:
9996:
9994:
9991:
9989:
9986:
9984:
9981:
9979:
9976:
9974:
9971:
9970:
9968:
9964:
9956:
9953:
9952:
9951:
9948:
9946:
9943:
9937:
9934:
9932:
9929:
9927:
9924:
9923:
9922:
9919:
9915:
9912:
9910:
9907:
9905:
9902:
9900:
9897:
9895:
9892:
9890:
9887:
9885:
9882:
9880:
9877:
9876:
9875:
9872:
9868:
9865:
9863:
9860:
9856:
9853:
9852:
9851:
9848:
9846:
9845:New Jerusalem
9843:
9841:
9838:
9836:
9833:
9827:
9824:
9822:
9821:War in Heaven
9819:
9817:
9816:Two witnesses
9814:
9812:
9809:
9807:
9804:
9802:
9799:
9797:
9794:
9792:
9789:
9787:
9784:
9783:
9782:
9779:
9778:
9777:
9774:
9772:
9769:
9765:
9762:
9760:
9757:
9756:
9755:
9752:
9750:
9747:
9745:
9742:
9740:
9737:
9733:
9730:
9729:
9728:
9725:
9723:
9720:
9719:
9718:
9715:
9713:
9710:
9706:
9703:
9701:
9698:
9697:
9696:
9693:
9691:
9688:
9687:
9686:
9685:Second Coming
9683:
9681:
9678:
9674:
9671:
9669:
9666:
9665:
9664:
9661:
9657:
9654:
9652:
9649:
9648:
9647:
9644:
9643:
9641:
9639:
9635:
9629:
9626:
9624:
9621:
9619:
9616:
9614:
9611:
9609:
9606:
9602:
9599:
9597:
9594:
9590:
9587:
9586:
9585:
9582:
9580:
9577:
9575:
9572:
9571:
9570:
9567:
9565:
9562:
9560:
9557:
9555:
9552:
9550:
9547:
9545:
9542:
9540:
9537:
9535:
9532:
9530:
9527:
9525:
9522:
9520:
9517:
9516:
9514:
9510:
9500:
9497:
9495:
9492:
9490:
9487:
9485:
9482:
9480:
9477:
9475:
9472:
9470:
9467:
9463:
9460:
9458:
9455:
9453:
9450:
9449:
9448:
9445:
9441:
9438:
9436:
9433:
9432:
9431:
9428:
9427:
9425:
9421:
9415:
9412:
9410:
9407:
9405:
9402:
9400:
9397:
9395:
9392:
9390:
9387:
9386:
9384:
9382:
9378:
9375:
9371:
9359:
9356:
9355:
9354:
9351:
9349:
9346:
9345:
9343:
9341:
9337:
9331:
9328:
9326:
9323:
9321:
9318:
9314:
9311:
9310:
9309:
9306:
9304:
9301:
9299:
9296:
9294:
9291:
9289:
9286:
9284:
9281:
9279:
9276:
9274:
9271:
9267:
9264:
9262:
9259:
9258:
9257:
9254:
9252:
9249:
9247:
9244:
9242:
9239:
9237:
9234:
9232:
9229:
9227:
9224:
9222:
9219:
9217:
9214:
9212:
9209:
9207:
9204:
9200:
9197:
9196:
9195:
9192:
9190:
9187:
9185:
9184:Deforestation
9182:
9180:
9177:
9175:
9172:
9170:
9167:
9163:
9160:
9159:
9158:
9155:
9153:
9150:
9149:
9147:
9145:
9141:
9138:
9136:
9132:
9126:
9125:World War III
9123:
9121:
9118:
9114:
9111:
9109:
9106:
9104:
9101:
9100:
9099:
9096:
9094:
9091:
9089:
9086:
9084:
9081:
9077:
9074:
9072:
9069:
9068:
9067:
9064:
9062:
9059:
9057:
9054:
9053:
9051:
9047:
9041:
9040:Transhumanism
9038:
9034:
9031:
9029:
9026:
9024:
9021:
9020:
9019:
9015:
9012:
9008:
9005:
9003:
9000:
8999:
8998:
8995:
8992:
8989:
8985:
8982:
8980:
8977:
8975:
8972:
8970:
8967:
8965:
8962:
8961:
8960:
8957:
8953:
8950:
8949:
8948:
8945:
8943:
8940:
8938:
8935:
8931:
8928:
8926:
8923:
8921:
8918:
8917:
8916:
8913:
8911:
8908:
8907:
8905:
8903:Technological
8901:
8893:
8890:
8889:
8888:
8885:
8883:
8880:
8879:
8876:
8872:
8865:
8860:
8858:
8853:
8851:
8846:
8845:
8842:
8830:
8827:
8826:
8824:
8820:
8814:
8813:
8809:
8807:
8806:
8802:
8800:
8799:
8798:The Precipice
8795:
8793:
8792:
8788:
8786:
8785:
8781:
8779:
8778:
8774:
8772:
8771:
8767:
8765:
8764:
8760:
8758:
8757:
8753:
8751:
8750:
8746:
8745:
8743:
8739:
8733:
8730:
8728:
8725:
8723:
8720:
8718:
8715:
8713:
8710:
8708:
8705:
8703:
8700:
8698:
8695:
8693:
8690:
8688:
8687:Global health
8685:
8683:
8680:
8678:
8675:
8673:
8670:
8668:
8667:Cultured meat
8665:
8663:
8660:
8658:
8655:
8654:
8652:
8648:
8642:
8639:
8637:
8634:
8632:
8629:
8627:
8624:
8622:
8619:
8617:
8614:
8612:
8609:
8607:
8604:
8602:
8599:
8597:
8594:
8592:
8591:Good Ventures
8589:
8587:
8584:
8582:
8579:
8577:
8574:
8572:
8569:
8567:
8564:
8562:
8559:
8557:
8554:
8552:
8549:
8547:
8544:
8542:
8539:
8537:
8534:
8532:
8529:
8527:
8524:
8522:
8519:
8517:
8514:
8512:
8509:
8507:
8506:Animal Ethics
8504:
8502:
8499:
8496:
8494:
8491:
8489:
8486:
8485:
8483:
8481:Organizations
8479:
8473:
8470:
8468:
8465:
8463:
8460:
8458:
8455:
8453:
8450:
8448:
8445:
8443:
8440:
8438:
8435:
8433:
8430:
8428:
8425:
8423:
8420:
8418:
8415:
8413:
8410:
8409:
8407:
8403:
8397:
8394:
8392:
8389:
8387:
8384:
8382:
8379:
8377:
8374:
8372:
8369:
8367:
8364:
8362:
8359:
8357:
8354:
8352:
8349:
8347:
8344:
8342:
8339:
8337:
8334:
8332:
8329:
8327:
8324:
8323:
8321:
8317:
8313:
8306:
8301:
8299:
8294:
8292:
8287:
8286:
8283:
8273:
8263:
8257:
8254:
8252:
8251:
8247:
8245:
8244:
8240:
8238:
8237:
8236:The Precipice
8233:
8231:
8230:
8226:
8224:
8221:
8219:
8218:
8214:
8212:
8209:
8208:
8206:
8202:
8196:
8193:
8191:
8188:
8186:
8185:Frank Wilczek
8183:
8181:
8178:
8176:
8173:
8171:
8168:
8166:
8163:
8161:
8158:
8156:
8153:
8151:
8148:
8146:
8143:
8141:
8138:
8136:
8133:
8131:
8130:Dan Hendrycks
8128:
8126:
8123:
8121:
8118:
8116:
8113:
8111:
8108:
8106:
8103:
8101:
8100:Yoshua Bengio
8098:
8096:
8093:
8091:
8088:
8087:
8085:
8081:
8075:
8072:
8070:
8067:
8065:
8062:
8060:
8057:
8055:
8052:
8050:
8047:
8045:
8042:
8040:
8037:
8035:
8032:
8030:
8027:
8025:
8022:
8020:
8017:
8015:
8012:
8010:
8007:
8006:
8004:
8002:Organizations
8000:
7994:
7991:
7989:
7986:
7984:
7981:
7979:
7976:
7974:
7971:
7969:
7966:
7964:
7961:
7959:
7956:
7954:
7951:
7949:
7946:
7944:
7941:
7939:
7936:
7934:
7931:
7929:
7926:
7924:
7921:
7919:
7916:
7914:
7911:
7910:
7908:
7904:
7900:
7896:
7889:
7884:
7882:
7877:
7875:
7870:
7869:
7866:
7850:
7846:
7845:Bloomberg.com
7841:
7835:
7834:
7814:
7810:
7803:
7787:
7783:
7777:
7761:
7757:
7751:
7743:
7739:
7735:
7731:
7727:
7723:
7719:
7715:
7711:
7707:
7700:
7692:
7688:
7683:
7678:
7674:
7670:
7667:(1): 018001.
7666:
7662:
7658:
7651:
7636:
7632:
7626:
7611:
7607:
7603:
7599:
7592:
7590:
7574:
7570:
7563:
7547:
7543:
7539:
7532:
7516:
7512:
7508:
7501:
7485:
7481:
7477:
7470:
7454:
7450:
7444:
7429:
7425:
7418:
7402:
7398:
7392:
7376:
7372:
7366:
7358:
7354:
7349:
7344:
7340:
7336:
7332:
7325:
7317:
7311:
7307:
7303:
7299:
7295:
7288:
7286:
7278:
7265:
7261:
7257:
7250:
7243:
7238:
7234:
7233:
7228:
7225:Sotala, Kaj;
7221:
7214:
7213:infeasible...
7201:
7197:
7193:
7192:
7187:
7183:
7177:
7161:
7157:
7153:
7146:
7130:
7126:
7122:
7115:
7099:
7095:
7091:
7084:
7069:
7065:
7058:
7050:
7046:
7042:
7038:
7033:
7028:
7024:
7020:
7013:
6997:
6993:
6989:
6982:
6966:
6962:
6958:
6951:
6942:
6926:
6922:
6918:
6911:
6896:
6890:
6886:
6882:
6878:
6874:
6867:
6859:
6855:
6851:
6847:
6842:
6837:
6833:
6829:
6825:
6821:
6817:
6813:
6809:
6802:
6786:
6782:
6778:
6774:
6767:
6759:
6755:
6751:
6747:
6742:
6737:
6733:
6729:
6726:(1): 018001.
6725:
6721:
6717:
6710:
6702:
6698:
6694:
6690:
6686:
6682:
6677:
6672:
6668:
6664:
6657:
6649:
6645:
6644:
6639:
6636:Sotala, Kaj;
6632:
6617:
6613:
6606:
6590:
6586:
6580:
6564:
6560:
6554:
6548:
6544:
6541:
6536:
6530:
6526:
6525:
6524:What Happened
6517:
6501:
6497:
6493:
6486:
6470:
6466:
6462:
6455:
6439:
6435:
6431:
6425:
6410:
6406:
6399:
6383:
6379:
6375:
6371:
6365:
6357:
6353:
6348:
6343:
6339:
6335:
6328:
6313:
6312:
6307:
6301:
6286:
6282:
6275:
6260:. 31 May 2023
6259:
6255:
6249:
6233:
6229:
6225:
6219:
6211:
6207:
6201:
6186:
6182:
6175:
6159:
6155:
6149:
6133:
6129:
6128:
6123:
6116:
6100:
6096:
6092:
6085:
6069:
6065:
6061:
6055:
6048:
6043:
6027:
6023:
6019:
6013:
5997:
5993:
5987:
5971:
5966:
5959:
5951:
5947:
5946:
5941:
5934:
5918:
5914:
5913:
5908:
5901:
5885:
5881:
5875:
5873:
5864:
5860:
5856:
5852:
5848:
5844:
5840:
5836:
5829:
5827:
5819:
5806:
5802:
5796:
5788:
5787:
5782:
5781:Bostrom, Nick
5776:
5768:
5762:
5758:
5757:
5752:
5746:
5730:
5726:
5725:
5720:
5714:
5698:
5694:
5693:
5688:
5682:
5667:
5663:
5657:
5655:
5639:
5635:
5628:
5620:
5614:
5610:
5609:
5604:
5598:
5596:
5594:
5585:
5581:
5576:
5571:
5567:
5563:
5559:
5555:
5551:
5544:
5528:
5527:IEEE Spectrum
5524:
5518:
5510:
5506:
5502:
5498:
5494:
5490:
5486:
5479:
5471:
5467:
5463:
5456:
5448:
5444:
5439:
5434:
5430:
5426:
5422:
5418:
5414:
5407:
5399:
5395:
5391:
5385:
5382:(Audiobook),
5381:
5374:
5358:
5354:
5350:
5344:
5336:
5332:
5327:
5322:
5318:
5314:
5310:
5303:
5287:
5283:
5277:
5261:
5257:
5253:
5249:
5245:
5241:
5237:
5233:
5229:
5225:
5218:
5216:
5199:
5195:
5194:
5188:
5181:
5165:
5161:
5157:
5150:
5148:
5140:
5136:
5133:
5119:
5115:
5111:
5107:
5100:
5081:
5077:
5070:
5063:
5055:
5049:
5045:
5038:
5022:
5018:
5014:
5008:
5001:
4996:
4992:
4988:
4984:
4980:
4976:
4969:
4953:
4949:
4948:
4943:
4936:
4920:
4916:
4915:
4910:
4903:
4887:
4883:
4879:
4873:
4857:
4853:
4849:
4842:
4827:
4823:
4817:
4808:
4803:
4796:
4789:
4785:
4781:
4777:
4773:
4769:
4765:
4761:
4757:
4750:
4735:
4731:
4724:
4722:
4702:
4698:
4694:
4690:
4686:
4683:(10): 38–40.
4682:
4678:
4677:
4669:
4665:
4659:
4643:
4639:
4638:
4633:
4629:
4623:
4614:
4606:
4600:
4596:
4595:
4590:
4584:
4582:
4562:
4555:
4548:
4532:
4528:
4524:
4517:
4502:
4498:
4492:
4483:
4478:
4471:
4464:
4458:
4443:
4439:
4432:
4417:
4413:
4406:
4391:
4387:
4380:
4364:
4360:
4353:
4344:
4339:
4332:
4317:
4313:
4306:
4298:
4292:
4288:
4281:
4279:
4277:
4275:
4266:
4260:
4256:
4249:
4234:
4230:
4226:
4222:
4215:
4207:
4203:
4199:
4195:
4194:Global Policy
4188:
4181:
4166:
4162:
4155:
4140:
4136:
4129:
4113:
4109:
4103:
4094:
4089:
4085:
4081:
4080:AI and Ethics
4077:
4070:
4062:
4058:
4053:
4048:
4044:
4040:
4036:
4032:
4028:
4024:
4020:
4013:
4004:
3999:
3991:
3976:
3972:
3966:
3950:
3946:
3940:
3925:
3921:
3917:
3913:
3906:
3897:
3892:
3885:
3883:
3881:
3879:
3877:
3875:
3873:
3871:
3869:
3852:
3848:
3842:
3835:
3822:
3818:
3812:
3805:
3793:
3789:
3782:
3767:
3763:
3756:
3740:
3736:
3732:
3726:
3711:
3707:
3701:
3686:
3685:
3677:
3675:
3659:
3655:
3648:
3646:
3644:
3628:
3624:
3618:
3603:
3599:
3593:
3578:
3574:
3568:
3560:
3556:
3549:
3534:
3530:
3524:
3509:
3505:
3501:
3500:The Economist
3497:
3491:
3476:. 30 May 2023
3475:
3471:
3465:
3450:
3446:
3440:
3424:
3420:
3414:
3406:
3402:
3397:
3392:
3388:
3384:
3380:
3376:
3372:
3365:
3349:
3345:
3339:
3335:
3331:
3330:
3322:
3314:
3310:
3306:
3302:
3298:
3294:
3289:
3284:
3280:
3276:
3272:
3268:
3264:
3258:
3242:
3238:
3234:
3228:
3212:
3208:
3207:
3202:
3195:
3179:
3175:
3171:
3165:
3149:
3145:
3141:
3135:
3133:
3131:
3114:
3110:
3109:
3104:
3097:
3081:
3077:
3073:
3066:
3050:
3046:
3045:
3040:
3033:
3026:
3021:
3015:
3011:
3004:
2997:
2992:
2986:
2982:
2981:
2973:
2966:
2962:
2958:
2954:
2951:
2945:
2929:
2925:
2921:
2914:
2905:
2900:
2896:
2892:
2888:
2881:
2874:
2870:
2867:
2861:
2845:
2841:
2835:
2819:
2815:
2811:
2804:
2802:
2800:
2783:
2779:
2775:
2757:
2753:
2749:
2744:
2739:
2735:
2728:
2724:
2720:
2714:
2695:
2691:
2687:
2683:
2676:
2669:
2667:
2665:
2663:
2661:
2645:
2641:
2634:
2632:
2623:
2619:
2612:
2597:
2593:
2589:
2585:
2578:
2562:
2558:
2552:
2550:
2542:
2531:
2527:
2521:
2506:
2502:
2495:
2493:
2476:
2472:
2471:
2466:
2459:
2457:
2448:
2444:
2437:
2421:
2417:
2416:
2408:
2393:. 26 May 2023
2392:
2388:
2382:
2366:
2362:
2356:
2341:
2337:
2333:
2329:
2322:
2307:
2303:
2299:
2295:
2288:
2280:
2274:
2270:
2269:
2264:
2263:Bostrom, Nick
2258:
2256:
2254:
2252: