Belief–desire–intention software model

The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.

Overview

In order to achieve this separation, the BDI software model implements the principal aspects of Michael Bratman's theory of human practical reasoning (also referred to as Belief-Desire-Intention, or BDI). That is to say, it implements the notions of belief, desire and (in particular) intention, in a manner inspired by Bratman.

For Bratman, desire and intention are both pro-attitudes (mental attitudes concerned with action). He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which it is already committed. The BDI software model partially addresses these issues. Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans. The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed.
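
A short sketch can make the hierarchical composition of plans concrete. This is only an illustration, not code from any particular BDI platform; the names used here (Action, Plan, execute) are assumptions made for the example.

# Minimal sketch of hierarchical plans: a plan is a sequence of steps,
# and each step is either a primitive action or another (sub-)plan.
class Action:
    def __init__(self, name, effect):
        self.name = name
        self.effect = effect              # callable that updates the belief set

    def execute(self, beliefs):
        print(f"action: {self.name}")
        self.effect(beliefs)


class Plan:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps                # each step is an Action or a Plan

    def execute(self, beliefs):
        # The overarching plan stays in effect while sub-plans run:
        # a step that is itself a Plan recurses into its own steps.
        print(f"plan: {self.name}")
        for step in self.steps:
            step.execute(beliefs)


# The plan to go for a drive includes the plan to find the car keys.
find_keys = Plan("find car keys", [
    Action("search coat pockets", lambda beliefs: beliefs.add("keys-located")),
])
go_for_a_drive = Plan("go for a drive", [
    find_keys,
    Action("start car", lambda beliefs: beliefs.add("engine-running")),
])

go_for_a_drive.execute(set())

Running the example executes the sub-plan to completion while the enclosing plan remains active, which is the sense of temporal persistence noted above.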

An important aspect of the BDI software model (in terms of its research relevance) is the existence of logical models through which it is possible to define and reason about BDI agents. Research in this area has led, for example, to the axiomatization of some BDI implementations, as well as to formal logical descriptions such as Anand Rao and Michael Georgeff's BDICTL. The latter combines a multiple-modal logic (with modalities representing beliefs, desires and intentions) with the temporal logic CTL*. More recently, Michael Wooldridge has extended BDICTL to define LORA (the Logic of Rational Agents), by incorporating an action logic. In principle, LORA allows reasoning not only about individual agents, but also about communication and other interaction in a multi-agent system.

The BDI software model is closely associated with intelligent agents, but does not, of itself, ensure all the characteristics associated with such agents. For example, it allows agents to have private beliefs, but does not force them to be private. It also has nothing to say about agent communication. Ultimately, the BDI software model is an attempt to solve a problem that has more to do with plans and planning (the choice and execution thereof) than it has to do with the programming of intelligent agents. This approach has recently been proposed by Steven Umbrello and Roman Yampolskiy as a means of designing autonomous vehicles for human values.

BDI agents

A BDI agent is a particular type of bounded rational software agent, imbued with particular mental attitudes, viz: Beliefs, Desires and Intentions (BDI).

Architecture

This section defines the idealized architectural components of a BDI system.

Beliefs: Beliefs represent the informational state of the agent – its beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Using the term belief rather than knowledge recognizes that what an agent believes may not necessarily be true (and in fact may change in the future).

Beliefset: Beliefs are stored in a database (sometimes called a belief base or a belief set), although that is an implementation decision.

Desires: Desires represent the motivational state of the agent. They represent objectives or situations that the agent would like to accomplish or bring about. Examples of desires might be: find the best price, go to the party or become rich.

Goals: A goal is a desire that has been adopted for active pursuit by the agent. Usage of the term goals adds the further restriction that the set of active desires must be consistent. For example, one should not have concurrent goals to go to a party and to stay at home – even though they could both be desirable.

Intentions: Intentions represent the deliberative state of the agent – what the agent has chosen to do. Intentions are desires to which the agent has to some extent committed. In implemented systems, this means the agent has begun executing a plan.

Plans: Plans are sequences of actions (recipes or knowledge areas) that an agent can perform to achieve one or more of its intentions. Plans may include other plans: my plan to go for a drive may include a plan to find my car keys. This reflects that in Bratman's model, plans are initially only partially conceived, with details being filled in as they progress.

Events: These are triggers for reactive activity by the agent. An event may update beliefs, trigger plans or modify goals. Events may be generated externally and received by sensors or integrated systems. Additionally, events may be generated internally to trigger decoupled updates or plans of activity.
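
Taken together, these components amount to a small amount of mutable state plus an event queue. The following is a minimal, illustrative data-structure sketch of that state, not the API of any existing BDI platform; the names (Belief, Event, Intention, BDIAgentState, adopt) are assumptions made for the example.

from collections import deque
from dataclasses import dataclass, field

@dataclass(frozen=True)          # frozen so Belief values are hashable and can live in a set
class Belief:
    statement: str               # e.g. "keys-in-coat-pocket"

@dataclass
class Event:
    description: str             # external (sensor) or internal trigger

@dataclass
class Intention:
    desire: str                  # the desire the agent has committed to
    plan: list                   # the adopted plan (a list of steps)
    next_step: int = 0           # progress marker: the agent has begun executing

@dataclass
class BDIAgentState:
    beliefs: set = field(default_factory=set)        # belief base / beliefset
    desires: set = field(default_factory=set)        # motivational state
    intentions: list = field(default_factory=list)   # deliberative state
    events: deque = field(default_factory=deque)     # event queue

    def adopt(self, desire, plan):
        # Committing to a desire turns it into an intention backed by a plan.
        self.intentions.append(Intention(desire, plan))

state = BDIAgentState()
state.beliefs.add(Belief("keys-in-coat-pocket"))
state.desires.add("go for a drive")
state.events.append(Event("friend suggests a drive"))
state.adopt("go for a drive", ["find car keys", "start car", "drive"])

A goal, in the restricted sense used above, would be a desire admitted to the desires set only after a consistency check against the desires already active; that check is omitted here for brevity.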

BDI interpreter

This section defines an idealized BDI interpreter that provides the basis of SRI's PRS lineage of BDI systems:

initialize-state
repeat
  options: option-generator (event-queue)
  selected-options: deliberate(options)
  update-intentions(selected-options)
  execute()
  get-new-external-events()
  drop-unsuccessful-attitudes()
  drop-impossible-attitudes()
end repeat
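
Read literally, the pseudocode is a sense-deliberate-act loop over an event queue and a set of intentions. The Python sketch below mirrors that structure under stated assumptions: the helper methods (option_generator, deliberate, and the rest) are placeholders to be filled in by a concrete system, not the interface of any published BDI platform.

import collections

class IdealizedBDIInterpreter:
    """Sketch of the abstract interpreter loop above.
    The helper methods are deliberately minimal placeholders."""

    def __init__(self):
        self.beliefs = set()
        self.intentions = []
        self.event_queue = collections.deque()
        self.running = True

    def run(self):
        self.initialize_state()
        while self.running:                              # repeat ... end repeat
            options = self.option_generator(self.event_queue)
            selected = self.deliberate(options)
            self.update_intentions(selected)
            self.execute()
            self.get_new_external_events()
            self.drop_unsuccessful_attitudes()
            self.drop_impossible_attitudes()

    # Placeholders: a concrete BDI system supplies real behavior here.
    def initialize_state(self): ...
    def option_generator(self, event_queue): return []
    def deliberate(self, options): return options
    def update_intentions(self, selected): self.intentions.extend(selected)
    def execute(self): self.running = bool(self.intentions or self.event_queue)
    def get_new_external_events(self): ...
    def drop_unsuccessful_attitudes(self): ...
    def drop_impossible_attitudes(self): ...

Note that the loop only chooses among and advances plans that already exist; synthesizing new plans sits outside the interpreter, which is the separation the model is built around.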
173: 131: 127: 100: 92: 633:

Limitations and criticisms

The BDI software model is one example of a reasoning architecture for a single rational agent, and one concern in a broader multi-agent system. This section bounds the scope of concerns for the BDI software model, highlighting known limitations of the architecture.

Learning: BDI agents lack any specific mechanisms within the architecture to learn from past behavior and adapt to new situations.

Three attitudes: Classical decision theorists and planning research question the necessity of having all three attitudes; distributed AI research questions whether the three attitudes are sufficient.

Logics: The multi-modal logics that underlie BDI (which do not have complete axiomatizations and are not efficiently computable) have little relevance in practice.

Multiple agents: In addition to not explicitly supporting learning, the framework may not be appropriate to learning behavior. Further, the BDI model does not explicitly describe mechanisms for interaction with other agents and integration into a multi-agent system.

Explicit goals: Most BDI implementations do not have an explicit representation of goals.

Lookahead: The architecture does not have (by design) any lookahead deliberation or forward planning. This may not be desirable because adopted plans may use up limited resources, actions may not be reversible, task execution may take longer than forward planning, and actions may have undesirable side effects if unsuccessful.

BDI agent implementations

'Pure' BDI

Procedural Reasoning System (PRS)
IRMA (not implemented, but can be considered as PRS with non-reconsideration)
UM-PRS
OpenPRS
Distributed Multi-Agent Reasoning System (dMARS)
AgentSpeak(L) – see Jason below
AgentSpeak(RT)
Agent Real-Time System (ARTS)
JAM
JACK Intelligent Agents
JADEX (open source project)
JASON
GORITE
SPARK
3APL
2APL
GOAL agent programming language
CogniTAO (Think-As-One)
Living Systems Process Suite
PROFETA
Gwendolen (part of the Model Checking Agent Programming Languages framework)

Extensions and hybrid systems

JACK Teams
CogniTAO (Think-As-One)
Living Systems Process Suite
Brahms
JaCaMo

See also

Action selection
Artificial intelligence
Belief–desire–intention model
Belief revision
Intelligent agent
Reasoning
Software agent

References

Rao, A. S.; Georgeff, M. P. (1991). "Modeling Rational Agents within a BDI-Architecture". In Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning, pp. 473–484.
Rao, A. S.; Georgeff, M. P. (1995). "BDI-agents: From Theory to Practice". In Proceedings of the First International Conference on Multiagent Systems (ICMAS'95), San Francisco.
Rao, A. S.; Georgeff, M. P. (1995). "Formal models and decision procedures for multi-agent systems". Technical Note, AAII.
Bratman, M. E. (1999) [1987]. Intention, Plans, and Practical Reason. CSLI Publications. ISBN 1-57586-192-5.
Wooldridge, M. (2000). Reasoning About Rational Agents. The MIT Press. ISBN 0-262-23213-8.
Georgeff, Michael; Pell, Barney; Pollack, Martha E.; Tambe, Milind; Wooldridge, Michael (1999). "The Belief-Desire-Intention Model of Agency". Intelligent Agents V: Agents Theories, Architectures, and Languages. Lecture Notes in Computer Science, Vol. 1555, pp. 1–10. doi:10.1007/3-540-49057-4_1. ISBN 978-3-540-65713-2.
Broersen, J.; Dastani, M.; Hulstijn, J.; Huang, Z.; van der Torre, L. (2001). "The BOID architecture: conflicts between beliefs, obligations, intentions and desires". Proceedings of the Fifth International Conference on Autonomous Agents, pp. 9–16. ACM, New York, NY, USA.
Pokahr, Alexander; Braubach, Lars; Lamersdorf, Winfried (2005). "Jadex: A BDI Reasoning Engine". Multi-Agent Programming. Multiagent Systems, Artificial Societies, and Simulated Organizations, Vol. 15, pp. 149–174. doi:10.1007/0-387-26350-0_6. ISBN 978-0-387-24568-3.
Sardina, Sebastian; de Silva, Lavindra; Padgham, Lin (2006). "Hierarchical planning in BDI agent programming languages: a formal approach". Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems.
Guerra-Hernández, Alejandro; El Fallah-Seghrouchni, Amal; Soldano, Henry (2004). "Learning in BDI Multi-agent Systems". Computational Logic in Multi-Agent Systems. Lecture Notes in Computer Science, Vol. 3259, pp. 218–233. doi:10.1007/978-3-540-30200-1_12. ISBN 978-3-540-24010-5.
Phung, Toan; Winikoff, Michael; Padgham, Lin (2005). "Learning Within the BDI Framework: An Empirical Analysis". Knowledge-Based Intelligent Information and Engineering Systems. Lecture Notes in Computer Science, Vol. 3683, pp. 282–288. doi:10.1007/11553939_41. ISBN 978-3-540-28896-1.
Umbrello, Steven; Yampolskiy, Roman V. (2021). "Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles". International Journal of Social Robotics 14 (2): 313–322. doi:10.1007/s12369-021-00790-w.
Fichera, Loris; Marletta, Daniele; Nicosia, Vincenzo; Santoro, Corrado (2011). "Flexible Robot Strategy Design Using Belief-Desire-Intention Model". In Obdržálek, David; Gottscheber, Achim (eds.), Research and Education in Robotics – EUROBOT 2010. Communications in Computer and Information Science, Vol. 156. Berlin, Heidelberg: Springer, pp. 57–71. doi:10.1007/978-3-642-27272-1_5. ISBN 978-3-642-27272-1.
Vikhorev, K.; Alechina, N.; Logan, B. (2009). "The ARTS Real-Time Agent Architecture". In Proceedings of the Second Workshop on Languages, Methodologies and Development Tools for Multi-agent Systems (LADS2009), Turin, Italy. CEUR Workshop Proceedings, Vol-494.
Vikhorev, K.; Alechina, N.; Logan, B. (2011). "Agent programming with priorities and deadlines". In Proceedings of the Tenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), Taipei, Taiwan, pp. 397–404.
Rimassa, G.; Greenwood, D.; Kernland, M. E. (2006). "The Living Systems Technology Suite: An Autonomous Middleware for Autonomic Computing". International Conference on Autonomic and Autonomous Systems (ICAS).
Elmaliach, Y. (2008). "TAO: A JAUS-based High-Level Control System for Single and Multiple Robots". CogniTeam.

Text is available under the Creative Commons Attribution-ShareAlike License. Additional terms may apply.