Statement on AI Risk
Open letter about extinction risk from AI

On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short statement on AI risk:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

At release time, the signatories included over 100 professors of AI, including the two most-cited computer scientists and Turing laureates Geoffrey Hinton and Yoshua Bengio, as well as the scientific and executive leaders of several major AI companies, and experts in pandemics, climate, nuclear disarmament, philosophy, social sciences, and other fields. Media coverage has emphasized the signatures from several tech leaders; this was followed by concerns in other newspapers that the statement could be motivated by public relations or regulatory capture. The statement was released shortly after an open letter calling for a pause on AI experiments.

The statement is hosted on the website of the AI research and advocacy non-profit Center for AI Safety. It was released with an accompanying text stating that it is still difficult to speak up about extreme risks of AI, and that the statement aims to overcome this obstacle. The center's CEO, Dan Hendrycks, stated that "systemic bias, misinformation, malicious use, cyberattacks, and weaponization" are all examples of "important and urgent risks from AI... not just the risk of extinction" and added, "societies can manage multiple risks at once; it's not 'either/or' but 'yes/and.'"

Among the well-known signatories are: Sam Altman, Bill Gates, Peter Singer, Daniel Dennett, Sam Harris, Grimes, Stuart J. Russell, Jaan Tallinn, Vitalik Buterin, David Chalmers, Ray Kurzweil, Max Tegmark, Lex Fridman, Martin Rees, Demis Hassabis, Dawn Song, Ted Lieu, Ilya Sutskever, Martin Hellman, Bill McKibben, Angela Kane, Audrey Tang, David Silver, Andrew Barto, Mira Murati, Pattie Maes, Eric Horvitz, Peter Norvig, Joseph Sifakis, Erik Brynjolfsson, Ian Goodfellow, Baburam Bhattarai, Kersti Kaljulaid, Rusty Schweickart, Nicholas Fairfax, David Haussler, Peter Railton, Bart Selman, Dustin Moskovitz, Scott Aaronson, Bruce Schneier, Martha Minow, Andrew Revkin, Rob Pike, Jacob Tsimerman, Ramy Youssef, James Pennebaker, and Ronald C. Arkin.

Reception

The Prime Minister of the United Kingdom, Rishi Sunak, retweeted the statement and wrote, "The government is looking very carefully at this." When asked about the statement, the White House Press Secretary, Karine Jean-Pierre, commented that AI "is one of the most powerful technologies that we see currently in our time. But in order to seize the opportunities it presents, we must first mitigate its risks."

Skeptics of the letter point out that AI has failed to reach certain milestones, such as the predictions made about self-driving cars. They also note that many signatories continue to fund AI research, and that the companies involved would benefit from the public perception that AI algorithms are far more advanced than is currently possible. Skeptics, including from Human Rights Watch, have argued that scientists should focus on the known risks of AI instead of being distracted by speculative future risks. Timnit Gebru has criticized elevating the risk of AI agency, especially by the "same people who have poured billions of dollars into these companies." Émile P. Torres and Gebru both argue against the statement, suggesting it may be motivated by TESCREAL ideologies.

See also

Existential risk from artificial general intelligence
AI alignment
Pause Giant AI Experiments: An Open Letter