
Belief–desire–intention software model


The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
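
To make this separation concrete, the following minimal sketch (in Python, with illustrative names that are not taken from any particular BDI platform) shows plan selection from a plan library as one activity and step-by-step execution of adopted plans as another:

    # Minimal sketch of the plan-selection / plan-execution split in a
    # BDI-style agent. All names are illustrative assumptions, not the API
    # of any existing BDI framework.

    class Plan:
        def __init__(self, goal, context, steps):
            self.goal = goal        # the desire this plan can achieve
            self.context = context  # predicate over beliefs: is the plan applicable now?
            self.steps = steps      # ordered list of actions (callables)

    def select_plan(plan_library, goal, beliefs):
        """Deliberation: choose an applicable plan from the plan library."""
        for plan in plan_library:
            if plan.goal == goal and plan.context(beliefs):
                return plan
        return None

    def execute(intentions, beliefs):
        """Execution: advance every adopted plan by one step."""
        for steps in intentions:
            if steps:
                steps.pop(0)(beliefs)

    # Adopting a plan (deliberation) and running it (execution) are separate steps.
    beliefs = {"at_home": True}
    library = [Plan("have_keys",
                    lambda b: b.get("at_home", False),
                    [lambda b: b.update(have_keys=True)])]
    intentions = []
    chosen = select_plan(library, "have_keys", beliefs)
    if chosen is not None:
        intentions.append(list(chosen.steps))  # the adopted plan becomes an intention
    execute(intentions, beliefs)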

Overview

In order to achieve this separation, the BDI software model implements the principal aspects of Michael Bratman's theory of human practical reasoning (also referred to as Belief-Desire-Intention, or BDI). That is to say, it implements the notions of belief, desire and (in particular) intention, in a manner inspired by Bratman. For Bratman, desire and intention are both pro-attitudes (mental attitudes concerned with action). He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which it is already committed. The BDI software model partially addresses these issues. Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans. The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed.

An important aspect of the BDI software model (in terms of its research relevance) is the existence of logical models through which it is possible to define and reason about BDI agents. Research in this area has led, for example, to the axiomatization of some BDI implementations, as well as to formal logical descriptions such as Anand Rao and Michael Georgeff's BDICTL. The latter combines a multiple-modal logic (with modalities representing beliefs, desires and intentions) with the temporal logic CTL*. More recently, Michael Wooldridge has extended BDICTL to define LORA (the Logic of Rational Agents) by incorporating an action logic. In principle, LORA allows reasoning not only about individual agents, but also about communication and other interaction in a multi-agent system.
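
As a rough illustration of the flavour of these logics (an informal schema for orientation only, not a quotation of the BDICTL or LORA axioms), the three attitudes appear as modal operators applied to branching-time formulas, and rationality constraints relate them to one another:

    BEL(φ)      the agent believes φ
    DES(φ)      the agent desires (has the goal) φ
    INTEND(φ)   the agent intends φ

    INTEND(φ) → DES(φ)     intentions are restricted to what is desired
    DES(φ)    → BEL(φ)     (for a suitable class of formulas) desires are
                           compatible with what the agent believes possible

Here φ ranges over temporal formulas built with CTL*-style path quantifiers and operators such as "eventually".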

BDI was also extended with an obligations component, giving rise to the BOID agent architecture, which incorporates the obligations, norms and commitments of agents that act within a social environment.

The BDI software model is closely associated with intelligent agents, but does not, of itself, ensure all the characteristics associated with such agents. For example, it allows agents to have private beliefs, but does not force them to be private. It also has nothing to say about agent communication. Ultimately, the BDI software model is an attempt to solve a problem that has more to do with plans and planning (the choice and execution thereof) than it has to do with the programming of intelligent agents. This approach has recently been proposed by Steven Umbrello and Roman Yampolskiy as a means of designing autonomous vehicles for human values.

BDI agents

A BDI agent is a particular type of bounded rational software agent, imbued with particular mental attitudes, viz: Beliefs, Desires and Intentions (BDI).

Architecture

This section defines the idealized architectural components of a BDI system.

Beliefs: Beliefs represent the informational state of the agent – its beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Using the term belief rather than knowledge recognizes that what an agent believes may not necessarily be true (and in fact may change in the future).
Beliefset: Beliefs are stored in a database (sometimes called a belief base or a belief set), although that is an implementation decision.
Desires: Desires represent the motivational state of the agent. They represent objectives or situations that the agent would like to accomplish or bring about. Examples of desires might be: find the best price, go to the party or become rich.
Goals: A goal is a desire that has been adopted for active pursuit by the agent. Usage of the term goals adds the further restriction that the set of active desires must be consistent. For example, one should not have concurrent goals to go to a party and to stay at home – even though they could both be desirable.
Intentions: Intentions represent the deliberative state of the agent – what the agent has chosen to do. Intentions are desires to which the agent has to some extent committed. In implemented systems, this means the agent has begun executing a plan.
Plans: Plans are sequences of actions (recipes or knowledge areas) that an agent can perform to achieve one or more of its intentions. Plans may include other plans: my plan to go for a drive may include a plan to find my car keys. This reflects that in Bratman's model, plans are initially only partially conceived, with details being filled in as they progress.
Events: These are triggers for reactive activity by the agent. An event may update beliefs, trigger plans or modify goals. Events may be generated externally and received by sensors or integrated systems. Additionally, events may be generated internally to trigger decoupled updates or plans of activity.
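
The following Python sketch shows one possible in-memory representation of these components (a belief base with simple forward-chaining rules, desires, goals, intentions, plans and events); the names and representations are assumptions for illustration, not those of any particular BDI system:

    # Illustrative data structures for the architectural components above.
    from dataclasses import dataclass, field

    @dataclass
    class BeliefBase:
        facts: set = field(default_factory=set)
        rules: list = field(default_factory=list)  # (premises, conclusion) pairs

        def forward_chain(self):
            # Apply rules whose premises all hold, adding conclusions as new
            # beliefs, until nothing more can be derived.
            changed = True
            while changed:
                changed = False
                for premises, conclusion in self.rules:
                    if set(premises) <= self.facts and conclusion not in self.facts:
                        self.facts.add(conclusion)
                        changed = True

    @dataclass
    class Plan:
        achieves: str   # the goal this plan can bring about
        body: list      # steps: primitive actions or sub-goals (hierarchical plans)

    @dataclass
    class Agent:
        beliefs: BeliefBase
        desires: set     # everything the agent would like to bring about
        goals: set       # consistent subset of desires adopted for active pursuit
        intentions: list # plans the agent has committed to executing
        events: list = field(default_factory=list)  # pending triggers (percepts, goal changes)

    # Example: an inference rule turns one belief into another.
    bb = BeliefBase(facts={"raining"}, rules=[(("raining",), "ground_wet")])
    bb.forward_chain()
    assert "ground_wet" in bb.facts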

BDI interpreter

This section defines an idealized BDI interpreter that provides the basis of SRI's PRS lineage of BDI systems:

    initialize-state
    repeat
        options: option-generator (event-queue)
        selected-options: deliberate(options)
        update-intentions(selected-options)
        execute()
        get-new-external-events()
        drop-unsuccessful-attitudes()
        drop-impossible-attitudes()
    end repeat
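
A schematic transcription of this loop into Python is sketched below; the helpers are deliberately trivial stubs (assumptions for illustration), since a concrete system supplies its own option generation, deliberation, execution and event handling:

    # Schematic Python rendering of the idealized BDI interpreter loop.
    class AgentState:
        def __init__(self):         # plays the role of initialize-state
            self.event_queue = []   # pending external/internal events
            self.intentions = []    # adopted plans

    def option_generator(event_queue, agent):
        # Map each pending event to the plans it could trigger (stub: one option per event).
        return [("handle", event) for event in event_queue]

    def deliberate(options, agent):
        # Choose which options to commit to (stub: commit to all of them).
        return options

    def update_intentions(selected, agent):
        agent.intentions.extend(selected)

    def execute(agent):
        # Perform one step of an active intention (stub: discard the first one).
        if agent.intentions:
            agent.intentions.pop(0)

    def get_new_external_events(agent):
        agent.event_queue.clear()   # a real system would poll sensors here

    def drop_unsuccessful_attitudes(agent):
        pass                        # remove intentions whose plans have failed

    def drop_impossible_attitudes(agent):
        pass                        # remove intentions that can no longer succeed

    def bdi_interpreter(agent, max_cycles=100):
        for _ in range(max_cycles):                        # repeat ... end repeat
            options = option_generator(agent.event_queue, agent)
            selected = deliberate(options, agent)
            update_intentions(selected, agent)
            execute(agent)
            get_new_external_events(agent)
            drop_unsuccessful_attitudes(agent)
            drop_impossible_attitudes(agent)

    bdi_interpreter(AgentState())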
1206: 1147: 1080: 928: 909: 890: 871: 691: 509: 331: 65: 1102: 940: 828: 811: 739: 562: 545: 524: 192: 158: 97: 1284: 964:"Jason | a Java-based interpreter for an extended version of AgentSpeak" 812:"Hierarchical planning in BDI agent programming languages: a formal approach" 581: 334:
and planning research questions the necessity of having all three attitudes,
150:
This section defines the idealized architectural components of a BDI system.
1192: 786: 753: 460:
Gwendolen (Part of the Model Checking Agent Programming Languages Framework)
387:
IRMA (not implemented but can be considered as PRS with non-reconsideration)
237:: Intentions represent the deliberative state of the agent – what the agent 595: 360:: Most BDI implementations do not have an explicit representation of goals. 902: 1021: 93: 658: 572: 96:(with modalities representing beliefs, desires and intentions) with the 1158: 686:. Lecture Notes in Computer Science. Vol. 3259. pp. 218–233. 653:. Lecture Notes in Computer Science. Vol. 3683. pp. 282–288. 402: 1083:. International Conference on Autonomic and Autonomous Systems (ICAS). 776: 1245: 844: 519: 748:. Lecture Notes in Computer Science. Vol. 1555. pp. 1–10. 963: 809: 746:

BDI agent implementations

'Pure' BDI

PRS – Procedural Reasoning System
IRMA (not implemented but can be considered as PRS with non-reconsideration)
UM-PRS
OpenPRS
dMARS – Distributed Multi-Agent Reasoning System
AgentSpeak(L) – see Jason below
AgentSpeak(RT)
Agent Real-Time System (ARTS)
JAM
JACK Intelligent Agents
JADEX (open source project)
JASON
GORITE
SPARK
3APL
2APL
GOAL agent programming language
CogniTAO (Think-As-One)
Living Systems Process Suite
PROFETA
Gwendolen (part of the Model Checking Agent Programming Languages framework)

Extensions and hybrid systems

JACK Teams
CogniTAO (Think-As-One)
Living Systems Process Suite
Brahms
JaCaMo

See also

Action selection
Artificial intelligence
Belief–desire–intention model
Belief revision
Intelligent agent
Reasoning
Software agent

References

Rao, A. S.; Georgeff, M. P. (1991). "Modeling Rational Agents within a BDI-Architecture". Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning, pp. 473–484.
Rao, A. S.; Georgeff, M. P. (1995). "BDI-agents: From Theory to Practice". Proceedings of the First International Conference on Multiagent Systems (ICMAS'95), San Francisco.
Rao, A. S.; Georgeff, M. P. (1995). "Formal models and decision procedures for multi-agent systems". Technical Note, AAII.
Bratman, M. E. (1999) [1987]. Intention, Plans, and Practical Reason. CSLI Publications. ISBN 1-57586-192-5.
Wooldridge, M. (2000). Reasoning About Rational Agents. The MIT Press. ISBN 0-262-23213-8.
Georgeff, Michael; Pell, Barney; Pollack, Martha E.; Tambe, Milind; Wooldridge, Michael (1999). "The Belief-Desire-Intention Model of Agency". Intelligent Agents V: Agents Theories, Architectures, and Languages. Lecture Notes in Computer Science, Vol. 1555. pp. 1–10. ISBN 978-3-540-65713-2.
Broersen, J.; Dastani, M.; Hulstijn, J.; Huang, Z.; van der Torre, L. (2001). "The BOID architecture: conflicts between beliefs, obligations, intentions and desires". Proceedings of the Fifth International Conference on Autonomous Agents, pp. 9–16. ACM, New York, NY, USA.
Sardina, Sebastian; de Silva, Lavindra; Padgham, Lin (2006). "Hierarchical planning in BDI agent programming languages: a formal approach". Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems.
Guerra-Hernández, Alejandro; El Fallah-Seghrouchni, Amal; Soldano, Henry (2004). "Learning in BDI Multi-agent Systems". Computational Logic in Multi-Agent Systems. Lecture Notes in Computer Science, Vol. 3259. pp. 218–233. ISBN 978-3-540-24010-5.
Phung, Toan; Winikoff, Michael; Padgham, Lin (2005). "Learning Within the BDI Framework: An Empirical Analysis". Knowledge-Based Intelligent Information and Engineering Systems. Lecture Notes in Computer Science, Vol. 3683. pp. 282–288. ISBN 978-3-540-28896-1.
Pokahr, Alexander; Braubach, Lars; Lamersdorf, Winfried (2005). "Jadex: A BDI Reasoning Engine". Multi-Agent Programming. Multiagent Systems, Artificial Societies, and Simulated Organizations, Vol. 15. pp. 149–174. ISBN 978-0-387-24568-3.
Umbrello, Steven; Yampolskiy, Roman V. (2021). "Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles". International Journal of Social Robotics 14 (2): 313–322.
Fichera, Loris; Marletta, Daniele; Nicosia, Vincenzo; Santoro, Corrado (2011). "Flexible Robot Strategy Design Using Belief-Desire-Intention Model". In Obdržálek, David; Gottscheber, Achim (eds.). Research and Education in Robotics – EUROBOT 2010. Communications in Computer and Information Science, Vol. 156. Berlin, Heidelberg: Springer. pp. 57–71. ISBN 978-3-642-27272-1.
Rimassa, G.; Greenwood, D.; Kernland, M. E. (2006). "The Living Systems Technology Suite: An Autonomous Middleware for Autonomic Computing". International Conference on Autonomic and Autonomous Systems (ICAS).
Vikhorev, K.; Alechina, N.; Logan, B. (2009). "The ARTS Real-Time Agent Architecture". Proceedings of the Second Workshop on Languages, Methodologies and Development Tools for Multi-agent Systems (LADS2009), Turin, Italy. CEUR Workshop Proceedings, Vol. 494.
Vikhorev, K.; Alechina, N.; Logan, B. (2011). "Agent programming with priorities and deadlines". Proceedings of the Tenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), Taipei, Taiwan. pp. 397–404.
Elmaliach, Y. (2008). "TAO: A JAUS-based High-Level Control System for Single and Multiple Robots". CogniTeam.
