
Knowledge:Knowledge Signpost/2022-08-01/From the editors


era artists have created quantum leaps in artistic style like cubism, impressionism, etc. The model cannot possibly create anything new. When you learn fine art you don't go look at a Rothko painting and then immediately pick up a bucket of paint; you go through years of learning the foundations of figure drawing, perspective, etc. Artists have an understanding of the world and their own interior life that the model cannot possibly have, and that's why human works, even if derivative, are art and these images are imitation.
For this issue, I used both of these services to generate illustrations for our articles: some came out very impressively, and some came out a little goofy. It was definitely surprising to see it have a coherent response for the prompt "Technoblade's avatar" that actually looked like it – I guess this is what happens when the training set is massive. Anyway, you can see a bunch of these on the issue page. For a comparison between the three models I found usable, see the embedded images above.

, created in 2009) was done that way. Over time, the Google Translate translations have become better and better, and with some languages (e.g. French, Italian, Portuguese) they are now generally so good that only minimal copyediting is necessary. I even occasionally receive compliments from native speakers for the quality of my translations from languages such as French (in which I am self-taught, and which I do not speak well), and Italian (which I cannot read or speak).

"Again, the company integrated certain filters to keep generated images in line with its content policy and has pledged to keep updating those filters. Prompts that seem likely to produce forbidden content are blocked and, in an attempt to prevent deepfakes, it can't exactly reproduce faces it has seen during its training. Thus far, OpenAI has also used human reviewers to check images that have been flagged as possibly problematic."
trained in 2021, and as far as I can tell I am the only journalist who has ever written a monthly recap of the ten longest Knowledge deletion discussions, so it's possible that when instructed to write an AfD report, it simply learned from the best (and/or only). On the other hand, it's also possible that there are just very few ways in which to describe the closure of an AfD, and this happens to be one of them. Who knows.
because", you will probably end up with a bunch of text about how the French are bastards, similar to if you typed that into a search engine. In this particular instance, I did not observe GPT-3 saying anything prejudiced. This may be due to the people I prompted it to emulate; presumably, if I had told it to write in the style of Adolf Hitler, I would have gotten some nasty stuff. My solution to this was to not do that.
This produced surprisingly insightful commentary; Justice GPT-Holmes proved able to summarize minute details of proceedings, including some things I'd missed while originally reading them. In general, he was more well-behaved (and less prone to obscene tirades) than GPT-Thompson, although he did have a tendency for long-winded digressions, and would often quote entire paragraphs from the source text.

, announced by OpenAI in January 2021. Since then, a number of services have become available, which use a variety of architectures to generate images from natural-language prompts (i.e. a prompt phrased in normal language like "a dog eating the Empire State Building", rather than a procedurally defined set of attributes and subjects written in a specialized description language). Among these are
generally that they're ripping off human artists because they were trained on a bunch of images from the Internet, including copyrighted ones. However, it's not clear (at least to me) in what way this process differs from the same being done by human artists. As far as I can tell, this is the way that humans have created art for the last several tens of thousands of years – as far as I can tell,
It's obvious that a computer program capable of writing on the level of humans would have enormous implications for the corporate, academic, journalistic, and literary world. While there are certainly some unrealistically hyped-up claims, it's hard to overstate how much these things are capable of, despite their constraints.
sort of art to writing prompts in a way that causes useful answers to be generated, which gradually became easier to do as time went on. For example, replacing "The following is a summary of the discussion" with "The following is a rigorously accurate summary of the discussion" (yes, this really works).
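For readers who want to reproduce this kind of prompt tweak, here is a minimal sketch of what such a request looked like with the openai Python client as it existed in 2022. The model name, sampling settings, and placeholder transcript are illustrative assumptions, not the exact calls used for these reports.

```python
# Minimal sketch (assumed: 2022-era openai-python client and a GPT-3 completion model).
import openai

openai.api_key = "sk-..."  # placeholder API key

transcript = "..."  # truncated AfD transcript would go here

def summarize(framing: str) -> str:
    # The framing sentence is the only thing that changes between runs.
    response = openai.Completion.create(
        model="text-davinci-002",  # illustrative choice of GPT-3 model
        prompt=f"{transcript}\n\n{framing}\n",
        max_tokens=400,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

plain = summarize("The following is a summary of the discussion:")
strict = summarize("The following is a rigorously accurate summary of the discussion:")
```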
With regards to the process a neural net uses to create images versus a human artist: the model does not experience qualia. It cannot have intent so it cannot create in the way a human can. Humans created art in prehistory without training on other art because it didn't exist, just like in the modern
who have both been pilloried for taking material from elsewhere and doing a weak job of turning it into Knowledge copy. A good bot seems likely to do a better job of such mechanical editing. As the number of active editors and admins suffers atrophy and attrition, I expect that this is the future.
Language models like GPT-3 predict the most likely completion for a given input sequence, based on its training corpus, which is a very broad spectrum of text from the Internet (ranging from old books to forum arguments to furry roleplay). If its prompt is the phrase "I think the French are bastards
Even accounting for the time spent verifying claims, it was still generally faster than writing the articles myself, as it was capable of structuring full paragraphs of text in seconds. It was sometimes time-consuming to re-prompt it when it would write something incorrect or useless, but there is a
While some concerns have been raised about the intellectual property implications of images generated by such models, the determination has been made (at least on Commons) that they're ineligible for copyright due to being the output of a computer algorithm. With respect to moral rights, the idea is
produced output ranging from obscene and irreverent to maliciously slanderous. Notably, this behavior is identical to what Hunter S. Thompson habitually did in real life, and part of why many editors allegedly loathed working with him. Personally, I didn't mind sifting through the diatribes (some of
Similar to the deletion report, input consisted of brief prologues (e.g. "The following is a verbatim transcript of the findings of fact in a Knowledge arbitration request titled 'WikiProject Tropical Cyclones'"). This was followed by the transcript of the relevant pages (whether they were the main
be—the key novelty of machine learning is that the programmers have little idea how it works). I'm sure if you analyse a large range of its output, you'd find it draws Jewish people or fictional people with Jewish-sounding names as having larger noses than non-Jewish people, or something similarly
I am completely blown away by this. I have been following these AI developments for some time, but seeing them used for this application with such coherence is unbelievable. I have many confused and contradictory thoughts about the implications of AI advancement on Wikimedia projects, but for now
An epistemological reason: large language models such as BERT, GPT-3, and the most recent one, BLOOM, are trained using a lot of text from the Internet, including Knowledge. The quality of those models comes from the fact that they are trained on text written by humans. If we use generative language
In terms of typographical errors, it was far better: I don't remember it making a single misspelling. The few grammatical errors it made were minor, and not objectively incorrect (e.g. saying "other editors argue" instead of "other editors argued" for a discussion that the prompt said had already
Maybe it would be worth having a sister project from the foundation: an AI-based encyclopedia (Wiki-AI-pedia). Now, we have a problem. It may be very difficult in the near future to detect contributions generated with generative AI. This will be a big challenge. Imagine an AI which would be
A legal argument: GPT-3 is not open source. It is a proprietary algorithm produced by OpenAI. We should be very suspicious of such a powerful proprietary tool. What happens if the price becomes prohibitive? BLOOM, the most recent model, is not proprietary, but it is not open source either. It uses a responsible AI
One thing I couldn't help but notice was that some of the phrasing was quite similar to phrasing I've used when writing previous deletion reports, like "Ultimately, the discussion resulted in a no consensus decision". My understanding is that GPT-3 included Knowledge in its corpus, and that this version was
DALL-E 2 creates much higher-quality images than what I used, but there's a waitlist for access, and it didn't end up happening by press time (although I did get my friend to generate me one). For a comparison, see below; both were prompted from the string "Teddy bears working on new AI research
I've been doing something broadly similar to your little exercise for quite a few years. I find an interesting high quality article on a foreign language Knowledge, use Google Translate to translate it into English, copyedit the translation, and then publish it on en Knowledge (with appropriate
It's not entirely clear to me what causes this. While there are obviously differences between the writing styles of Gonzo journalism and Supreme Court opinions, there are also inherent differences in the format of AfD discussions (which are a bunch of people replying to each other directly) and
Not exactly. I organized the layout of each article, determined what sections would go where, and had GPT-3 write the body text of each section according to specific prompts (as described above). It was also necessary to format the model's output in MediaWiki markup. Although GPT-3 is more than
A technical argument: Knowledge is not only about writing articles but also about collaborating, explaining decisions, and arguing. I don't think that AIs are able to have a real discussion on a talk page, and we should still remember that AIs don't know what is good, true, or just. Humans
It should be noted that such filters are often a rather ad-hoc measure, with DALL-E 2 believed to merely be adding keywords like "black," "Women," or "Asian American" randomly to text prompts to make the output appear more diverse. It is fairly easy to get past such filters using
Well, we don't do that for human writers, either. I don't even do that for myself – typically, by the time I flag my own stuff to be copyedited, I have gone through multiple stages of writing, rewriting, editing, adding notes for clarification, and deleting unnecessary
Thanks for this stimulating piece. I think that this raises questions about the use of generative language models in Knowledge. Even if the results are mind-blowing, I think that we should refuse their use for several reasons:
This produced a mixture of insightful, incisive, and derisive commentary. GPT-Thompson proved quite capable of accurately summarizing the slings and arrows of every discussion in the report – even though it specifically covers the longest and most convoluted AfDs. "
I don't – every claim that it made was individually fact-checked (we do this for human writers, too). The overwhelming majority were correct, and in the rare cases where it got something wrong, it could be fixed by asking it to complete the prompt
I'll limit myself to one thing I am clear on: whether for good reasons or bad reasons, soon each person in the Wikimedia community will need to be aware of the technological levels of tools like GPT-3, DALL-E and their successors, and this
Large language models are notorious for requiring massive amounts of processing power. I still think it's a bargain: imagine how much it would cost to actually hire Hunter S. Thompson and Oliver Wendell Holmes after you adjusted for
For each discussion in the report, I provided a full transcript of the AfD page (with timestamps and long signatures truncated to aid processing), and prompted GPT-3 for a completion, using some variation on the following:
No. I really did have GPT-3 write these articles. I can show you screenshots of my account on the OpenAI website if you want. Copyediting was minimal, and consisted mostly of reordering entries and removing irrelevant
The GPT-3 used on OpenAI's site has a mandatory content filter model that it goes through; if content is marked as problematic, a warning appears and OpenAI's content policy doesn't allow for reusing the text.
The following text is an article written by United States Supreme Court Justice Oliver Wendell Holmes, summarizing the findings of fact and remedies, and their broader implications for the English Knowledge's
Perhaps as a testament to the rapidity of developments in the field, even Knowledge (famous for articles written within minutes of speeches being made and explosions being heard) currently has a redlink for
case page, arbitration noticeboard posting, preliminary statements, arbitrator statements, or findings of fact and remedies). Afterwards, a prompt was given for a summary, of the following general form:
For anyone who is not familiar with this alphabet soup, I've written a fairly comprehensive overview of the field's origins and history, as well as an explanation of the technologies involved,
Despite being ostensibly written in Thompson's style, these were generally quite straightforward summaries that covered the arguments made during each discussion, with hardly any profanity.
The first is for me to keep droning on about how these models are a big deal, in a boring wall of text that makes increasingly outlandish and far-fetched claims about their capabilities.
Afterwards, I provided the summary itself as a prompt, and asked GPT-Thompson for an "acerbic quip" on each. Unlike the "summary" prompts (in which GPT-Thompson only
I have some concerns, but I also have no idea what I'm doing, so there's that! Fantastic read, and I'm very interested to see the ongoing implications of this tech.
writing, so it's hard to tell precisely how much compute went towards these articles. However, the total cost of all the API requests I've made so far is 48.12 USD.
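For a rough sense of scale, the sketch below converts that figure into a token count. It assumes the 2022 list price of US$0.06 per 1,000 tokens for the largest (Davinci) GPT-3 model; the pricing tier is an assumption, not something stated in the article.

```python
# Back-of-the-envelope scale estimate; assumes Davinci's 2022 list price.
total_cost_usd = 48.12       # figure reported above
usd_per_1k_tokens = 0.06     # assumed Davinci pricing tier

tokens_used = total_cost_usd / usd_per_1k_tokens * 1000
print(f"~{tokens_used:,.0f} tokens across prompts and completions")  # ~802,000
```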
(2020–22) wrote this month's arbitration report (a full explanation of what I did, how I did it, and responses to the most obvious questions can be found below).
capable of writing code (including wikitext), I didn't want to overwhelm it by asking it to do too much stuff at the same time, as this tends to degrade quality.
For the deletion report, GPT-3 was prompted with a transcript of each discussion in the report, and instructed to write a summary of it in the style of deceased
, and more. While a precise definition of all these terms would take multiple paragraphs, the thing they have in common is that a computer is doing some stuff.
models on Knowledge, future language models will be trained on a mixture of human and AI-generated text. I guess that at some point it will become meaningless.
Don't AI models have some sort of system to block "problematic prompts"? I know that DALL-E 2 blocks problematic prompts as per the following quote in
has advanced at a pace which is, depending on who you ask, somewhere between "astounding", "terrifying", "overhyped" and "revolutionary". For example,
Racism is far more nuanced, pernicious and deeply embedded than just saying the N-word or writing in the style of Hitler. To adapt the common phrase "
I would provide a highlights reel of these, but I was serious about the "unprintable" part – you'd be reading about it in the next arbitration report.
Many thanks for this trial which is quite remarkable. A key strength of such bots is that they are good at following rules while humans tend to
922: 862: 932: 1106: 1014: 392:
For the arbitration report, GPT-3 was instructed to write a summary of each page in the style of deceased United States Supreme Court justice
907: 842: 776:
They speak for all that is cruel and stupid and vicious in the American character. They are the racists and hate mongers among us Fuck them.
500:(in conjunction with other technologies) have proven themselves quite capable of image generation. The first of these, broadly speaking, was 1384: 1051: 738:
Kojima, Takeshi; Gu, Shixiang Shane; Reid, Machel; Matsuo, Yutaka; Iwasawa, Yusuke (2022). "Large Language Models are Zero-Shot Reasoners". arXiv:2205.11916.
Sometimes, he would start adding his own recommendations, and I would have to append "Holmes is not himself an arbitrator." to the prompt.
them were quite entertaining), but having to run each prompt several times to get something usable did make it fairly expensive.
The people with the power and money like Google and the WMF will naturally tend to replace human volunteers with such AI bots.
359:", for example, was a whopping 126,000 bytes (and needed to be processed in several segments) but the description was accurate. 1360: 719:
ArbCom proceedings (which are a bunch of individual sections of highly procedural text). More research is needed on this front.
Rise of the machines, or something: The future of stuff? Who knows, but two articles were written by a computer this month.
1207: 1171: 1345: 255:(2019) could write human-level text but was barely capable of staying on topic for more than a couple paragraphs, and 947: 769: 384:
chose to accompany his commentary with unprintable calumny and scathing political rants), the "acerbic quip" prompts
368:"The following text is a summary of the above discussion, written by Gonzo journalist Hunter S. Thompson for 1340: 802: 49: 35: 17: 230: 1079: 393: 762:
Kingdom of Fear: Loathsome Secrets of a Star-crossed Child in the Final Days of the American Century
577: 1241: 1203: 1167: 411: 333:
I have opted for the second. In this issue, two articles have been written by an AI model called
214: 1265:
DALL-E 2, Craiyon and GPT-3 content; I can't see any particular biases in this month's issue. —
248: 1222:, and as such, I would not rely on those filters to protect us from malicious and biased uses. 295: 191: 1118: 673: 275: 677:-style captions for cartoons about Oliver Wendell Holmes being resurrected to write for the 1366: 1113:
The "Damn" part is something I didn't think about and is so true. Thanks for including it!
1075: 1071: 283: 8: 1314: 1227: 1139: 1062: 597:
This is obviously cherry-picked – you didn't just publish the direct output of the model.
571: 1248:. If DALL-E 2 isn't specifically designed to avert stereotypes from the dataset then it 240:, and ask forgiveness for starting the explanation of a 2019 software released in 1951. 1219: 1199: 1163: 739: 480:, a GPT-3 implementation paired with CLIP (Contrastive Language-Image Pre-training) by 352: 301: 1297: 1009: 954: 766: 567: 563: 1271: 1245: 1114: 1099: 1030: 690:
All statements were individually fact-checked, per the "obvious questions" section.
349: 270:(this is what "GPT" stands for) are a family of large language models developed by 263: 244: 226: 1184: 1131: 1033:). It is far better than OpenAI's approach but it also raises lots of questions. 671:
2. The caption was also written by GPT-3, in response to a prompt asking it for
434:(formerly "DALL-E Mini"), a VQGAN- and BART-based generative adversarial network 1310: 1223: 1135: 1093:
experiment in writing is a fascinating way to draw people's attention to it. —
1047: 222: 1378: 1058: 551: 369: 218: 1153: 508:(formerly known as "DALL-E Mini", despite having no relation to DALL-E) and 1293: 1066: 654: 639:
Since I signed up for the GPT-3 beta, I have used it for things other than
1196:
could have mentioned that such models have restrictions to prevent abuse.
1266: 1094: 559: 509: 493:
images where the sky got turned into dogs. This is a little different.
452: 1286:
formerly known as "DALL-E Mini", despite having no relation to DALL-E
1193: 1043: 490: 205: 183: 316:"I see that Knowledge has finally found a use for my old law books." 744: 455:, a diffusion network whose architecture is not publicly documented 279: 1244:": racism in, racism out. Take a look at our excellent article on 737: 505: 431: 213:
There are a few terms that have been thrown around a lot lately:
1130:
attribution). In fact my first ever Knowledge article creation (
668: 501: 481: 477: 271: 334: 256: 252: 1288:"Formerly"? Aw, why? The Java / JavaScript relationship was 590:
So you just pushed a button and the whole thing popped out?
555: 979:
I commissioned GPT-3 to write a poem about this article:
611:
Why not just write the articles yourself at that point?
357:
Ukrainian Insurgent Army war against Russian occupation
286:. Much ink has already been spilled on claims of GPTs' 322:
With that said, there are basically two options here.
372:'s monthly feature on Knowledge deletion processes." 68:
File:A cyborg version of Oliver Wendell Holmes 2.png
968:If your comment has not appeared here, you can try 201:
Here is the deal: it's pretty good at what it does.
329:The second is to show you what I am talking about. 604:How do you know it's not completely full of crap? 1376: 1031:https://huggingface.co/spaces/bigscience/license 622:been closed – this is not even really an error 251:(2018) was a mildly interesting research tool, 1252:perpetuate them (and it's hard to see how it 1061:. For example, consider the recent cases of 181: 1257:offensive. However, this is no criticism of 1001:Long may it reign, and write more reports, 618:So it is a worse version of a human writer? 995:So let's all give three cheers for GPT-3, 1292:the right model to follow on this. /s -- 848:Eyewitness Wikimedian, Vinnytsia, Ukraine 759:Thompson, Hunter S. (November 24, 2011). 743: 1160:Maybe GPT-3 could use a similar system. 758: 971: 14: 1377: 998:The greatest machine we've ever seen, 989:With insights both derisive and sage, 1150:"I heard language models were racist" 636:How much did this whole gimmick cost? 54: 29: 629:I heard language models were racist. 1385:Knowledge Signpost archives 2022-08 992:It's sure to make history's pages. 986:Has written an AfD report so fine, 516:underwater with 1990s technology". 27: 801: 307: 91:Rise of the machines, or something 56: 34: 28: 1396: 953:These comments are automatically 535: 521: 496:In addition to text completion, 460: 439: 418: 190: 168: 158: 148: 138: 128: 118: 108: 566:didn't get in trouble with the 964:add the page to your watchlist 752: 731: 722: 712: 703: 693: 684: 661: 243:In recent years, the field of 13: 1: 983:GPT-3, the glorious machine, 939: 667:This image was generated by 489:We all remember those weird 18:Knowledge:Knowledge Signpost 7: 1319:07:52, 23 August 2022 (UTC) 1302:08:57, 10 August 2022 (UTC) 1192:Exactly. My point is that @ 562:for writing pop music, and 10: 1401: 1279:22:16, 4 August 2022 (UTC) 1232:19:12, 4 August 2022 (UTC) 1213:05:48, 4 August 2022 (UTC) 1188:04:25, 4 August 2022 (UTC) 1177:03:40, 4 August 2022 (UTC) 1144:05:34, 2 August 2022 (UTC) 1123:19:25, 1 August 2022 (UTC) 1107:14:39, 1 August 2022 (UTC) 1084:09:29, 1 August 2022 (UTC) 1052:06:27, 1 August 2022 (UTC) 1015:01:06, 1 August 2022 (UTC) 572:obviously derivative works 765:. Penguin Books Limited. 468:Justice Holmes editing a 394:Oliver Wendell Holmes, Jr 1242:garbage in, garbage out 1333:looking for new talent 1042:expert in vandalism? 1006: 961:. To follow comments, 806: 583:This is a joke, right? 570:(even when he painted 312: 39: 1004:On Knowledge, the 💕! 981: 805: 311: 284:large language models 38: 1072:Hasta la vista, baby 957:from this article's 774:– via Google Books. 1220:prompt engineering 948:Discuss this story 903:On the bright side 868:Arbitration report 807: 794:"From the editors" 447:Justice GPT-Holmes 353:Hunter S. 
Thompson 343:arbitration report 313: 45:← Back to Contents 40: 972:purging the cache 928:From the archives 878:Discussion report 578:Obvious questions 564:Peter Paul Rubens 50:View Latest Issue 1392: 1369: 1274: 1246:algorithmic bias 1102: 1012: 975: 973: 967: 946: 888:Featured content 828:From the editors 825: 817: 810: 793: 779: 778: 756: 750: 749: 747: 735: 729: 726: 720: 716: 710: 707: 701: 697: 691: 688: 682: 665: 558:claims from the 544:1620×1620 pixels 539: 525: 464: 443: 422: 412:Image generation 350:Gonzo journalist 245:machine learning 194: 186: 172: 171: 162: 161: 152: 151: 142: 141: 132: 131: 122: 121: 112: 111: 88:From the editors 62: 60: 58: 1400: 1399: 1395: 1394: 1393: 1391: 1390: 1389: 1375: 1374: 1373: 1372: 1371: 1370: 1365: 1363: 1358: 1353: 1348: 1343: 1336: 1325: 1324: 1272: 1154:an IEEE article 1132:Bernina Railway 1100: 1010: 977: 969: 962: 951: 950: 944:+ Add a comment 942: 938: 937: 936: 913:Recent research 893:Tips and tricks 873:Deletion report 818: 813: 811: 808: 797: 796: 791: 785: 784: 783: 782: 772: 757: 753: 736: 732: 727: 723: 717: 713: 708: 704: 698: 694: 689: 685: 666: 662: 657: 580: 568:Lascaux cavemen 545: 540: 531: 526: 485: 465: 456: 444: 435: 423: 414: 339:deletion report 319: 318: 305: 304: 208: 195: 188: 187: 180: 179: 178: 169: 159: 149: 139: 129: 119: 109: 103: 100: 89: 85: 84: 81: 78: 75: 72: 69: 65: 63: 53: 52: 47: 41: 31: 26: 25: 24: 12: 11: 5: 1398: 1388: 1387: 1364: 1359: 1354: 1349: 1344: 1339: 1338: 1337: 1327: 1326: 1323: 1322: 1321: 1305: 1304: 1282: 1281: 1238: 1237: 1236: 1235: 1234: 1147: 1146: 1126: 1125: 1110: 1109: 1086: 1063:Martinevans123 1039: 1038: 1034: 1026: 952: 949: 941: 940: 935: 930: 925: 920: 915: 910: 905: 900: 895: 890: 885: 883:Traffic report 880: 875: 870: 865: 860: 858:Community view 855: 853:Election guide 850: 845: 840: 835: 833:News and notes 830: 824: 812: 800: 799: 798: 788: 787: 786: 781: 780: 770: 751: 730: 721: 711: 702: 692: 683: 659: 658: 656: 653: 652: 651: 647: 644: 637: 634: 630: 627: 619: 616: 612: 609: 605: 602: 598: 595: 591: 588: 584: 579: 576: 547: 546: 543: 541: 534: 532: 530:256×256 pixels 529: 527: 520: 487: 486: 475: 466: 459: 457: 450: 445: 438: 436: 429: 424: 417: 413: 410: 409: 408: 406:jurisprudence. 
375: 374: 331: 330: 327: 320: 314: 306: 303: 300: 211: 210: 209: 203: 197: 196: 189: 177: 176: 166: 156: 146: 136: 126: 116: 105: 104: 101: 95: 94: 93: 92: 87: 86: 82: 79: 76: 73: 70: 67: 66: 64: 61: 48: 43: 42: 33: 32: 15: 9: 6: 4: 3: 2: 1397: 1386: 1383: 1382: 1380: 1368: 1362: 1357: 1352: 1347: 1342: 1334: 1330: 1320: 1316: 1312: 1307: 1306: 1303: 1299: 1295: 1291: 1287: 1284: 1283: 1280: 1276: 1275: 1268: 1264: 1260: 1255: 1251: 1247: 1243: 1239: 1233: 1229: 1225: 1221: 1216: 1215: 1214: 1211: 1210: 1209: 1205: 1201: 1195: 1191: 1190: 1189: 1186: 1185:🐶 EpicPupper 1181: 1180: 1179: 1178: 1175: 1174: 1173: 1169: 1165: 1159: 1155: 1151: 1145: 1141: 1137: 1133: 1128: 1127: 1124: 1120: 1116: 1112: 1111: 1108: 1104: 1103: 1096: 1092: 1087: 1085: 1081: 1077: 1073: 1068: 1064: 1060: 1056: 1055: 1054: 1053: 1049: 1045: 1035: 1032: 1027: 1023: 1022: 1021: 1017: 1016: 1013: 1005: 1002: 999: 996: 993: 990: 987: 984: 980: 974: 965: 960: 956: 945: 934: 931: 929: 926: 924: 921: 919: 916: 914: 911: 909: 906: 904: 901: 899: 896: 894: 891: 889: 886: 884: 881: 879: 876: 874: 871: 869: 866: 864: 861: 859: 856: 854: 851: 849: 846: 844: 841: 839: 836: 834: 831: 829: 826: 822: 816: 815:1 August 2022 809:In this issue 804: 795: 777: 773: 771:9780241958735 768: 764: 763: 755: 746: 741: 734: 725: 715: 706: 696: 687: 680: 676: 675: 670: 664: 660: 648: 645: 642: 638: 635: 631: 628: 625: 620: 617: 613: 610: 606: 603: 599: 596: 592: 589: 585: 582: 581: 575: 573: 569: 565: 561: 557: 554:does not get 553: 552:Billie Eilish 538: 533: 524: 519: 518: 517: 513: 511: 507: 503: 499: 494: 492: 483: 479: 473: 471: 463: 458: 454: 448: 442: 437: 433: 427: 421: 416: 415: 407: 403: 402: 401: 397: 395: 390: 387: 383: 378: 373: 371: 370:Rolling Stone 366: 365: 364: 360: 358: 354: 351: 346: 344: 340: 336: 328: 325: 324: 323: 317: 310: 299: 297: 293: 289: 285: 281: 277: 274:, similar to 273: 269: 265: 260: 258: 254: 250: 246: 241: 239: 234: 232: 228: 224: 220: 216: 207: 202: 199: 198: 193: 185: 175: 167: 165: 157: 155: 147: 145: 137: 135: 127: 125: 117: 115: 107: 106: 98: 59: 57:1 August 2022 51: 46: 37: 23: 19: 1329:The Signpost 1328: 1289: 1285: 1270: 1262: 1259:The Signpost 1258: 1253: 1249: 1198: 1197: 1162: 1161: 1157: 1149: 1148: 1098: 1090: 1040: 1018: 1007: 1003: 1000: 997: 994: 991: 988: 985: 982: 978: 838:In the media 827: 821:all comments 775: 761: 754: 733: 724: 714: 705: 695: 686: 678: 672: 663: 640: 623: 548: 514: 498:transformers 495: 488: 469: 467: 446: 426:GPT-Thompson 425: 404: 398: 391: 385: 382:occasionally 381: 379: 376: 367: 361: 347: 332: 321: 315: 268:transformers 266:pre-trained 261: 242: 235: 212: 200: 114:PDF download 1367:Suggestions 1115:Lectrician1 1059:cut corners 955:transcluded 918:Serendipity 476:Image from 451:Image from 430:Image from 302:The reports 164:X (Twitter) 1290:definitely 745:2205.11916 674:New Yorker 650:inflation. 510:Midjourney 453:Midjourney 264:generative 102:Share this 97:Contribute 71:Shambibble 22:2022-08-01 1361:Subscribe 1311:Omicron91 1136:Bahnfrend 1029:license ( 959:talk page 491:DeepDream 296:potential 288:sentience 1379:Category 1356:Newsroom 1351:Archives 1091:Signpost 898:In focus 792:Previous 679:Signpost 641:Signpost 601:content. 542:DALL-E 2 478:DALL-E 2 470:Signpost 341:and the 154:Facebook 144:LinkedIn 134:Mastodon 20:‎ | 1294:FeRDNYC 1263:curated 1067:Lugnuts 1011:ASUKITE 923:Gallery 863:Opinion 587:asides. 560:Beatles 528:Craiyon 506:Craiyon 432:Craiyon 1267:Bilorv 1261:using 1095:Bilorv 1076:Andrew 933:Humour 669:DALL-E 646:Damn!! 624:per se 608:again. 
502:DALL-E 482:OpenAI 472:report 386:solely 337:: the 294:, and 272:OpenAI 174:Reddit 124:E-mail 1346:About 1254:could 1208:Light 1172:Light 908:Essay 843:Op-Ed 740:arXiv 655:Notes 335:GPT-3 280:XLnet 257:GPT-3 253:GPT-2 16:< 1341:Home 1315:talk 1298:talk 1273:talk 1250:will 1228:talk 1224:Yitz 1200:Tube 1194:JPxG 1164:Tube 1140:talk 1119:talk 1101:talk 1080:talk 1074:... 1065:and 1048:talk 1044:PAC2 767:ISBN 556:DMCA 292:bias 278:and 276:BERT 262:The 238:here 184:JPxG 1331:is 1078:🐉( 1037:do. 574:). 249:GPT 231:NLP 182:By 99:— 83:300 1381:: 1317:) 1300:) 1277:) 1230:) 1204:of 1168:of 1156:: 1142:) 1121:) 1105:) 1082:) 1050:) 790:← 626:). 345:. 290:, 229:, 227:ML 225:, 223:NN 221:, 219:DL 217:, 215:AI 204:– 74:PD 1335:. 1313:( 1296:( 1269:( 1226:( 1206:· 1202:· 1170:· 1166:· 1138:( 1117:( 1097:( 1046:( 976:. 966:. 823:) 819:( 748:. 742:: 681:. 484:. 474:. 449:. 428:. 206:J 80:0 77:0
