Drools Documentation


The JBoss Drools team [http://www.drools.org/community/team.html]
Version 6.3.0.Final

Contents

Part I. Welcome
  1. Introduction
  2. Release Notes
  3. Compatibility matrix

Part II. KIE
  4. KIE

Part III. Drools Runtime and Language
  5. Hybrid Reasoning
  6. User Guide
  7. Running
  8. Rule Language Reference
  9. Complex Event Processing
  10. Experimental Features

Part IV. Drools Integration
  11. Drools Commands
  12. CDI
  13. Integration with Spring
  14. Android Integration
  15. Apache Camel Integration
  16. Drools Camel Server
  17. JMX monitoring with RHQ/JON

Part V. Drools Workbench
  18. Workbench
  19. Authoring Assets
  20. Workbench Integration
  21. Workbench High Availability

Part VI. KIE Server
  22. KIE Execution Server

Part VII. Drools Examples
  23. Examples
Part I. Welcome

Welcome and Release Notes

Chapter 1. Introduction

1.1. Introduction

It's been a busy year since the last 5.x series release and so much has changed.

One of the biggest complaints during the 5.x series was the lack of a defined methodology for deployment. The mechanism used by Drools and jBPM was very flexible, but it was too flexible. A big focus for 6.0 was streamlining the build, deploy and loading (utilization) aspects of the system. Building and deploying now align with Maven, and utilization is now convention and configuration oriented, instead of programmatic, with sane defaults to minimise the configuration.

The workbench has been rebuilt from the ground up, inspired by Eclipse, to provide a flexible and better integrated solution, with panels and perspectives via plugins. The base workbench has been spun off into a standalone project called UberFire, so that anyone can now build high quality web based workbenches. In the longer term it will facilitate user customised Drools and jBPM installations.

Git replaces JCR as the content repository, offering fast and scalable back-end storage for content with strong tooling support. There has been a refocus on simplicity away from databases, with the aim of storing everything as text files; even metadata is just a file. The database is just there to provide fast indexing and search via Lucene. This allows repositories to be synced and published with established infrastructure, like GitHub.

jBPM has been dramatically beefed up, thanks to the Polymita acquisition, with human tasks, form builders, class modellers, execution servers and runtime management, all fully integrated into the new workbench.

OptaPlanner is now a top level project and getting full time attention.

A new umbrella name, KIE (Knowledge Is Everything), has been introduced to bring our related technologies together under one roof. It also acts as the shared core for our projects, so expect to see it a lot.

1.2. Getting Involved

We are often asked "How do I get involved?". Luckily the answer is simple: just write some code and submit it :) There are no hoops you have to jump through or secret handshakes. We have a very minimal "overhead" that we do request to allow for scalable project development. Below we provide a general overview of the tools and "workflow" we request, along with some general advice.

If you contribute some good work, don't forget to blog about it :)
1.2.1. Sign up to jboss.org

Signing up to jboss.org will give you access to the JBoss wiki, forums and JIRA. Go to http://www.jboss.org/ and click "Register".

1.2.2. Sign the Contributor Agreement

The only form you need to sign is the contributor agreement, which is fully automated via the web. As the agreement page says, "This establishes the terms and conditions for your contributions and ensures that source code can be licensed appropriately".

https://cla.jboss.org/

1.2.3. Submitting issues via JIRA

To be able to interact with the core development team you will need to use JIRA, the issue tracker. This ensures that all requests are logged and allocated to a release schedule, and that all discussions are captured in one place. Bug reports, bug fixes, feature requests and feature submissions should all go here. General questions should be asked on the mailing lists. Minor code submissions, like format or documentation fixes, do not need an associated JIRA issue created.

https://issues.jboss.org/browse/JBRULES (Drools)
https://issues.jboss.org/browse/JBPM (jBPM)
https://issues.jboss.org/browse/GUVNOR (Guvnor)

1.2.4. Fork GitHub

With the contributor agreement signed and your requests submitted to JIRA you should now be ready to code :) Create a GitHub account and fork any of the Drools, jBPM or Guvnor repositories. The fork will create a copy in your own GitHub space which you can work on at your own pace. If you make a mistake, don't worry; blow it away and fork again. Note that each GitHub repository provides you with the clone (checkout) URL; GitHub will provide you with URLs specific to your fork.

https://github.com/droolsjbpm

1.2.5. Writing Tests

When writing tests, try to keep them minimal and self contained. We prefer to keep the DRL fragments within the test, as it makes for quicker reviewing. If there are a large number of rules, then using a String is not practical, so by all means place them in separate DRL files instead, to be loaded from the classpath. If your tests need to use a model, please try to use those that already exist for other unit tests, such as Person, Cheese or Order. If no classes exist that have the fields you need, try to update fields of existing classes before adding a new class. There are a vast number of tests to look over to get an idea; MiscTest is a good place to start.

https://github.com/droolsjbpm/drools/blob/master/drools-compiler/src/test/java/org/drools/integrationtests/MiscTest.java

1.2.6. Commit with Correct Conventions

When you commit, make sure you use the correct conventions. The commit must start with the JIRA issue id, such as JBRULES-220. This ensures the commits are cross referenced via JIRA, so we can see all commits for a given issue in the same place. After the id, the title of the issue should come next. Then use a newline, indented with a dash, to provide additional information related to this commit. Use an additional new line and dash for each separate point you wish to make. You may add additional JIRA cross references to the same commit, if it's appropriate. In general try to avoid combining unrelated issues in the same commit.

Don't forget to rebase your local fork from the original master and then push your commits back to your fork.

1.2.7. Submit Pull Requests

With your code rebased from the original master and pushed to your personal GitHub area, you can now submit your work as a pull request.
If you look at the top of the page in GitHub for your work area there will be a "Pull Request" button. Selecting this will provide a GUI to automate the submission of your pull request.

The pull request then goes into a queue for everyone to see and comment on. Below you can see a typical pull request. Pull requests allow for discussion and show all associated commits and the diffs for each commit. The discussions typically involve code reviews, which provide helpful suggestions for improvements and allow us to leave inline comments on specific parts of the code. Don't be disheartened if we don't merge straight away; it can often take several revisions before we accept a pull request. Luckily GitHub makes it very trivial to go back to your code, do some more commits and then update your pull request to your latest and greatest.

It can take time for us to get round to responding to pull requests, so please be patient. Submitted tests that come with a fix will generally be applied quite quickly, whereas tests submitted on their own will often wait until we get time to also provide a fix for them. Don't forget to rebase and resubmit your request from time to time, otherwise over time it will accumulate merge conflicts and core developers will generally ignore those.

1.3. Installation and Setup (Core and IDE)

1.3.1. Installing and using

Drools provides an Eclipse-based IDE (which is optional), but at its core only Java 1.5 (Java SE) is required.

A simple way to get started is to download and install the Eclipse plug-in - this will also require the Eclipse GEF framework to be installed (see below, if you don't have it installed already). This will provide you with all the dependencies you need to get going: you can simply create a new rule project and everything will be done for you. Refer to the chapter on the Rule Workbench and IDE for detailed instructions on this. Installing the Eclipse plug-in is generally as simple as unzipping a file into your Eclipse plug-in directory.

Use of the Eclipse plug-in is not required. Rule files are just textual input (or spreadsheets as the case may be) and the IDE (also known as the Rule Workbench) is just a convenience. People have integrated the rule engine in many ways, there is no "one size fits all".

Alternatively, you can download the binary distribution, and include the relevant JARs in your project's classpath.

1.3.1.1. Dependencies and JARs

Drools is broken down into a few modules, some are required during rule development/compiling, and some are required at runtime. In many cases, people will simply want to include all the dependencies at runtime, and this is fine. It allows you to have the most flexibility. However, some may prefer to have their "runtime" stripped down to the bare minimum, as they will be deploying rules in binary form - this is also possible. The core runtime engine can be quite compact, and only requires a few hundred kilobytes across 3 JAR files.

The following is a description of the important libraries that make up JBoss Drools:

• knowledge-api.jar - this provides the interfaces and factories. It also helps clearly show what is intended as a user API and what is just an engine API.

• knowledge-internal-api.jar - this provides internal interfaces and factories.

• drools-core.jar - this is the core engine, runtime component. Contains both the RETE engine and the LEAPS engine. This is the only runtime dependency if you are pre-compiling rules (and deploying via Package or RuleBase objects).
• drools-compiler.jar - this contains the compiler/builder components to take rule source and build executable rule bases. This is often a runtime dependency of your application, but it need not be if you are pre-compiling your rules. This depends on drools-core.

• drools-jsr94.jar - this is the JSR-94 compliant implementation; it is essentially a layer over the drools-compiler component. Note that due to the nature of the JSR-94 specification, not all features are easily exposed via this interface. In some cases it will be easier to go direct to the Drools API, but in some environments JSR-94 is mandated.

• drools-decisiontables.jar - this is the decision tables 'compiler' component, which uses the drools-compiler component. This supports both Excel and CSV input formats.

There are quite a few other dependencies which the above components require, most of which are for the drools-compiler, drools-jsr94 or drools-decisiontables module. Some key ones to note are "POI", which provides the spreadsheet parsing ability, and "antlr", which provides the parsing for the rule language itself.

NOTE: if you are using Drools in J2EE or servlet containers and you come across classpath issues with "JDT", then you can switch to the Janino compiler. Set the system property "drools.compiler", for example: -Ddrools.compiler=JANINO.

For up to date info on dependencies in a release, consult the released POMs, which can be found on the Maven repository.

1.3.1.2. Use with Maven, Gradle, Ivy, Buildr or Ant

The JARs are also available in the central Maven repository [http://search.maven.org/#search|ga|1|org.drools] (and also in the JBoss Maven repository [https://repository.jboss.org/nexus/index.html#nexus-search;gav~org.drools~~~~]).

If you use Maven, add KIE and Drools dependencies in your project's pom.xml like this:

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.drools</groupId>
        <artifactId>drools-bom</artifactId>
        <type>pom</type>
        <version>...</version>
        <scope>import</scope>
      </dependency>
      ...
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.kie</groupId>
      <artifactId>kie-api</artifactId>
    </dependency>
    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-compiler</artifactId>
      <scope>runtime</scope>
    </dependency>
    ...
  </dependencies>

This is similar for Gradle, Ivy and Buildr. To identify the latest version, check the Maven repository.

If you're still using Ant (without Ivy), copy all the JARs from the download zip's binaries directory and manually verify that your classpath doesn't contain duplicate JARs.

1.3.1.3. Runtime

The "runtime" requirements mentioned here apply if you are deploying rules in their binary form (either as KnowledgePackage objects, or KnowledgeBase objects, etc.). This is an optional feature that allows you to keep your runtime very light. You may use drools-compiler to produce rule packages "out of process", and then deploy them to a runtime system. This runtime system only requires drools-core.jar and knowledge-api for execution. This is an optional deployment pattern, and many people do not need to "trim" their application this much, but it is an ideal option for certain environments.

1.3.1.4. Installing IDE (Rule Workbench)

The rule workbench (for Eclipse) requires that you have Eclipse 3.4 or greater, as well as Eclipse GEF 3.4 or greater. You can install it either by downloading the plug-in or using the update site.

Another option is to use JBoss IDE, which comes with all the plug-in requirements pre-packaged, as well as a choice of other tools separate to rules. You can choose just to install rules from the "bundle" that JBoss IDE ships with.

1.3.1.4.1. Installing GEF (a required dependency)

GEF is the Eclipse Graphical Editing Framework, which is used for graph viewing components in the plug-in.
If you don't have GEF installed, you can install it using the built-in update mechanism (downloading GEF directly from the Eclipse.org website is not recommended). JBoss IDE has GEF already, as do many other "distributions" of Eclipse, so this step may be redundant for some people.

Open Help->Software updates...->Available Software->Add Site... from the help menu. The location is:

http://download.eclipse.org/tools/gef/updates/releases/

Next you choose the GEF plug-in:

Press next, and agree to install the plug-in (an Eclipse restart may be required). Once this is completed, you can continue on installing the rules plug-in.

1.3.1.4.2. Installing GEF from zip file

To install from the zip file, download and unzip the file. Inside the zip you will see a plug-in directory and the plug-in JAR itself. Place the plug-in JAR into your Eclipse application's plug-in directory, and restart Eclipse.

1.3.1.4.3. Installing Drools plug-in from zip file

Download the Drools Eclipse IDE plug-in from the link below. Unzip the downloaded file in your main Eclipse folder (do not just copy the file there; extract it so that the feature and plugin JARs end up in the features and plugins directories of Eclipse) and (re)start Eclipse.

http://www.drools.org/download/download.html

To check that the installation was successful, try opening the Drools perspective: click the 'Open Perspective' button in the top right corner of your Eclipse window, select 'Other...' and pick the Drools perspective. If you cannot find the Drools perspective as one of the possible perspectives, the installation probably was unsuccessful. Check whether you executed each of the required steps correctly: Do you have the right version of Eclipse (3.4.x)? Do you have Eclipse GEF installed (check whether the org.eclipse.gef_3.4.*.jar exists in the plugins directory in your Eclipse root folder)? Did you extract the Drools Eclipse plug-in correctly (check whether the org.drools.eclipse_*.jar exists in the plugins directory in your Eclipse root folder)? If you cannot find the problem, try contacting us (e.g. on IRC or on the user mailing list); more information can be found on our homepage: http://www.drools.org/

1.3.1.4.4. Drools Runtimes

A Drools runtime is a collection of JARs on your file system that represent one specific release of the Drools project JARs. To create a runtime, you must point the IDE to the release of your choice. If you want to create a new runtime based on the latest Drools project JARs included in the plug-in itself, you can also easily do that. You are required to specify a default Drools runtime for your Eclipse workspace, but each individual project can override the default and select the appropriate runtime for that project specifically.

1.3.1.4.4.1. Defining a Drools runtime

You are required to define one or more Drools runtimes using the Eclipse preferences view. To open your preferences, select the Preferences menu item in the Window menu. A new preferences dialog should show all your preferences. On the left side of this dialog, under the Drools category, select "Installed Drools runtimes". The panel on the right should then show the currently defined Drools runtimes. If you have not yet defined any runtimes, it should look something like the figure below.

To define a new Drools runtime, click on the add button. A dialog as shown below should pop up, requiring the name for your runtime and the location on your file system where it can be found.
In general, you have two options:

1. If you simply want to use the default JARs as included in the Drools Eclipse plug-in, you can create a new Drools runtime automatically by clicking the "Create a new Drools 5 runtime ..." button. A file browser will show up, asking you to select the folder on your file system where you want this runtime to be created. The plug-in will then automatically copy all required dependencies to the specified folder. After selecting this folder, the dialog should look like the figure shown below.

2. If you want to use one specific release of the Drools project, you should create a folder on your file system that contains all the necessary Drools libraries and dependencies. Instead of creating a new Drools runtime as explained above, give your runtime a name and select the location of this folder containing all the required JARs.

After clicking the OK button, the runtime should show up in your table of installed Drools runtimes, as shown below. Click on the checkbox in front of the newly created runtime to make it the default Drools runtime. The default Drools runtime will be used as the runtime of all your Drools projects that have not selected a project-specific runtime.

You can add as many Drools runtimes as you need. For example, the screenshot below shows a configuration where three runtimes have been defined: a Drools 4.0.7 runtime, a Drools 5.0.0 runtime and a Drools 5.0.0.SNAPSHOT runtime. The Drools 5.0.0 runtime is selected as the default one.

Note that you will need to restart Eclipse if you changed the default runtime and want to make sure that all the projects using the default runtime update their classpath accordingly.

1.3.1.4.4.2. Selecting a runtime for your Drools project

Whenever you create a Drools project (using the New Drools Project wizard or by converting an existing Java project to a Drools project using the "Convert to Drools Project" action that is shown when you are in the Drools perspective and you right-click an existing Java project), the plug-in will automatically add all the required JARs to the classpath of your project.

When creating a new Drools project, the plug-in will automatically use the default Drools runtime for that project, unless you specify a project-specific one. You can do this in the final step of the New Drools Project wizard, as shown below, by deselecting the "Use default Drools runtime" checkbox and selecting the appropriate runtime in the drop-down box. If you click the "Configure workspace settings ..." link, the workspace preferences showing the currently installed Drools runtimes will be opened, so you can add new runtimes there.

You can change the runtime of a Drools project at any time by opening the project properties (right-click the project and select Properties) and selecting the Drools category, as shown below. Check the "Enable project specific settings" checkbox and select the appropriate runtime from the drop-down box. If you click the "Configure workspace settings ..." link, the workspace preferences showing the currently installed Drools runtimes will be opened, so you can add new runtimes there. If you deselect the "Enable project specific settings" checkbox, the project will use the default runtime as defined in your global preferences.

1.3.2. Building from source

1.3.2.1. Getting the sources

The source code of each Maven artifact is available in the JBoss Maven repository as a source JAR.
The same source JARs are also included in the download zips. However, if you want to build from source, it's highly recommended to get our sources from our source control.

Drools and jBPM use Git [http://git-scm.com/] for source control. The blessed git repositories are hosted on GitHub [https://github.com]:

• https://github.com/droolsjbpm

Git allows you to fork our code, independently make personal changes on it, yet still merge in our latest changes regularly and optionally share your changes with us. To learn more about Git, read the free book Git Pro [http://progit.org/book/].

1.3.2.2. Building the sources

In essence, building from source is very easy; for example, if you want to build the guvnor project:

$ git clone git@github.com:droolsjbpm/guvnor.git
...
$ cd guvnor
$ mvn clean install -DskipTests -Dfull
...

However, there are a lot of potential pitfalls, so if you're serious about building from source and possibly contributing to the project, follow the instructions in the README file in droolsjbpm-build-bootstrap [https://github.com/droolsjbpm/droolsjbpm-build-bootstrap/blob/master/README.md].

1.3.3. Eclipse

1.3.3.1. Importing Eclipse Projects

With the Eclipse project files generated, they can now be imported into Eclipse. When starting Eclipse, open the workspace in the root of your source checkout.

When calling mvn install, all the project dependencies were downloaded and added to the local Maven repository. Eclipse cannot find those dependencies unless you tell it where that repository is. To do this, set up an M2_REPO classpath variable.

Chapter 2. Release Notes

2.1. What is New and Noteworthy in Drools 6.3.0

2.1.1. Real Time Validation and Verification for the Decision Tables

Decision tables used to have a Validation button for validating the table. This has been removed and the table is now validated after each cell value change. The validation and verification checks include:

• Redundancy
• Subsumption
• Conflicts
• Missing Columns

These checks are explained in detail in the workbench documentation.

2.1.2. Improved DRL Editor

The DRL Editor has undergone a face lift, moving from a plain TextArea to the ACE Editor and a custom DRL syntax highlighter.

Figure 2.1. ACE Editor

2.1.3. Browsing graphs of objects with OOPath

Warning: This feature is experimental.

When the field of a fact is a collection it is possible to bind and reason over all the items in that collection one by one using the from keyword. Nevertheless, when it is required to browse a graph of objects, the extensive use of the from conditional element may result in a verbose and cumbersome syntax like in the following example:

Example 2.1. Browsing a graph of objects with from

rule "Find all grades for Big Data exam" when
    $student: Student( $plan: plan )
    $exam: Exam( course == "Big Data" ) from $plan.exams
    $grade: Grade() from $exam.grades
then
    /* RHS */
end

In this example it has been assumed that the domain model consists of a Student who has a Plan of study: a Plan can have zero or more Exams and an Exam zero or more Grades. Note that only the root object of the graph (the Student in this case) needs to be in the working memory in order to make this work.
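For reference, the sketch below shows the kind of domain model these examples assume; the class shapes, constructors and accessor names are illustrative rather than taken from the documentation.

import java.util.ArrayList;
import java.util.List;

// All types are kept package-private so the whole model fits in a single source file.
class Grade {
    private final int value;
    Grade(int value) { this.value = value; }
    public int getValue() { return value; }
}

class Exam {
    private String course;                             // constrained with course == "Big Data"
    private final List<Grade> grades = new ArrayList<Grade>();
    public String getCourse() { return course; }
    public void setCourse(String course) { this.course = course; }
    public List<Grade> getGrades() { return grades; }  // navigated as .../grades
}

class Plan {
    private final List<Exam> exams = new ArrayList<Exam>();
    public List<Exam> getExams() { return exams; }     // navigated as /plan/exams
}

class Student {
    private Plan plan;                                  // the only fact inserted into working memory
    public Plan getPlan() { return plan; }
    public void setPlan(Plan plan) { this.plan = plan; }
}

Only the Student instance is inserted as a fact; the Plan, Exam and Grade objects are reached by navigating the references from it.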
By borrowing ideas from XPath, this syntax can be made more succinct, as XPath has a compact notation for navigating through related elements while handling collections and filtering constraints. This XPath-inspired notation has been called OOPath since it is explicitly intended to browse graphs of objects. Using this notation the former example can be rewritten as follows:

Example 2.2. Browsing a graph of objects with OOPath

rule "Find all grades for Big Data exam" when
    Student( $grade: /plan/exams{course == "Big Data"}/grades )
then
    /* RHS */
end

Formally, the core grammar of an OOPath expression can be defined in EBNF notation in this way:

OOPExpr = "/" OOPSegment { ( "/" | "." ) OOPSegment } ;
OOPSegment = [ID ( ":" | ":=" )] ID ["[" Number "]"] ["{" Constraints "}"] ;

In practice an OOPath expression has the following features:

• It has to start with /.

• It can dereference a single property of an object with the . operator.

• It can also dereference a property of an object using the / operator; if a collection is returned, it will iterate over the values in the collection.

• While traversing referenced objects it can filter away those not satisfying one or more constraints, written as predicate expressions between curly brackets, like in:

Student( $grade: /plan/exams{course == "Big Data"}/grades )

• Items can also be accessed by their index by putting it between square brackets, like in:

Student( $grade: /plan/exams[0]/grades )

To adhere to Java conventions, OOPath indexes are 0-based, compared to XPath's 1-based indexes.

2.1.3.1. Reactive OOPath

At the moment Drools is not able to react to updates involving a deeply nested object traversed during the evaluation of an OOPath expression. To make these objects reactive to changes it is currently necessary to make them extend the class org.drools.core.phreak.ReactiveObject. It is planned to overcome this limitation by implementing a mechanism that automatically instruments the classes belonging to a specific domain model.

Having extended that class, the domain objects can notify the engine when one of their fields has been updated by invoking the inherited method notifyModification, as in the following example:

Example 2.3. Notifying the engine that an exam has been moved to a different course

public void setCourse(String course) {
    this.course = course;
    notifyModification(this);
}

In this way if an exam is moved to a different course, the rule is re-triggered and the list of grades matching the rule is recomputed.

2.1.4. Kie Navigator View for Eclipse

A new viewer has been added to the Eclipse Tooling. This Kie Navigator View is used to manage Kie Server installations and projects. Please read the chapter Kie Navigator View for more information about this new feature.

2.2. New and Noteworthy in KIE Workbench 6.3.0

2.2.1. Asset locking

To avoid conflicts when editing assets, a new locking mechanism has been introduced that makes sure that only one user at a time can edit an asset. When a user begins to edit an asset, a lock will automatically be acquired. This is indicated by a lock symbol appearing on the asset title bar as well as in the project explorer view.
If a user starts editing an already locked asset, a pop-up notification will appear to inform the user that the asset can't currently be edited, as it is being worked on by another user. As long as the editing user holds the lock, changes by other users will be prevented. Locks will automatically be released when the editing user saves or closes the asset, or logs out of the workbench. Every user also has the option to force a lock release in the metadata tab, if required.

Figure 2.2. Editing an asset automatically acquires a lock

Figure 2.3. Locked assets cannot be edited by other users

2.2.2. Data Modeller Tool Windows

Drools and jBPM configurations, Persistence (see Generation of JPA enabled Data Models) and Advanced configurations were moved into "Tool Windows". "Tool Windows" are a new concept introduced in the latest Uberfire version that enables the development of context-aware screens. Each "Tool Window" contains a domain editor that manages a set of related Data Object parameters.

Figure 2.4. Drools and jBPM domain tool window

Figure 2.5. Persistence tool window

Figure 2.6. Advanced configurations tool window

2.2.3. Generation of JPA enabled Data Models

The data modeller was extended to support the generation of persistable Data Objects. The persistable Data Objects are based on the JPA specification and all the underlying metadata is automatically generated.

• "New -> Data Object": Data Objects can be marked as persistable at creation time.

Figure 2.7. New Data Object

• The Persistence tool window contains the JPA domain editors for both Data Object and Field. Each editor manages the JPA metadata generated by default.

Figure 2.8. Data Object level JPA domain editor

Figure 2.9. Field level JPA domain editor

• A Persistence configuration screen was added to the project editor.

Figure 2.10. Persistence configuration

2.2.4. Data Set Authoring

A new perspective for authoring data set definitions has been added. Data set definitions make it possible to retrieve data from external systems like databases or CSV/Excel files, or even to use a Java class to generate the data. Once the data is available it can be used, for instance, to create charts and dashboards from the Perspective Editor, feeding the charts from any of the available data sets.

Figure 2.11. Data Sets Authoring Perspective

2.3. What is New and Noteworthy in Drools 6.2.0

2.3.1. Propagation modes

The introduction of PHREAK as the default algorithm for the Drools engine made the rules' evaluation lazy. This new lazy behavior allowed a relevant performance boost but, in some very specific cases, breaks the semantics of a few Drools features.

More precisely, in some circumstances it is necessary to propagate the insertion of a new fact into the session immediately. For instance Drools allows a query to be executed in pull only (or passive) mode by prepending a '?' symbol to its invocation, as in the following example:

Example 2.4. A passive query

query Q (Integer i)
    String( this == i.toString() )
end
rule R when
    $i : Integer()
    ?Q( $i; )
then
    System.out.println( $i );
end

In this case, since the query is passive, it shouldn't react to the insertion of a String matching the join condition in the query itself. In other words this sequence of commands:

KieSession ksession = ...
ksession.insert(1);
ksession.insert("1");
ksession.fireAllRules();

shouldn't cause rule R to fire, because the String satisfying the query condition has been inserted after the Integer and the passive query shouldn't react to this insertion. Conversely, the rule should fire if the insertion sequence is inverted, because the insertion of the Integer, when the passive query can already be satisfied by an existing String, will trigger it.

Unfortunately the lazy nature of PHREAK doesn't allow the engine to make any distinction regarding the insertion sequence of the two facts, so the rule will fire in both cases. In circumstances like this it is necessary to evaluate the rule eagerly, as the old RETEOO-based engine did.

In other cases it is required that the propagation is eager, meaning that it is not immediate, but still has to happen before the engine/agenda starts scheduled evaluations. For instance this is necessary when a rule has the no-loop or the lock-on-active attribute, and in fact when this happens this propagation mode is automatically enforced by the engine.

To cover these use cases, and all other situations where an immediate or eager rule evaluation is required, it is possible to declaratively specify so by annotating the rule itself with @Propagation(Propagation.Type), where Propagation.Type is an enumeration with 3 possible values:

• IMMEDIATE means that the propagation is performed immediately.

• EAGER means that the propagation is performed lazily but eagerly evaluated before scheduled evaluations.

• LAZY means that the propagation is totally lazy; this is the default PHREAK behaviour.

This means that the following DRL:

Example 2.5. A data-driven rule using a passive query

query Q (Integer i)
    String( this == i.toString() )
end
rule R @Propagation(IMMEDIATE) when
    $i : Integer()
    ?Q( $i; )
then
    System.out.println( $i );
end

will make rule R fire if and only if the Integer is inserted after the String, thus behaving in accordance with the semantics of the passive query.

2.4. New and Noteworthy in KIE Workbench 6.2.0

2.4.1. Download Repository or Part of the Repository as a ZIP

This feature makes it possible to download a repository, or a folder from the repository, as a ZIP file.

Figure 2.12. Download current repository or project

Figure 2.13. Download a folder

2.4.2. Project Editor permissions

The ability to configure role-based permissions for the Project Editor has been added.

Permissions can be configured using the WEB-INF/classes/workbench-policy.properties file.

The following permissions are supported:

• Save button: feature.wb_project_authoring_save

• Delete button: feature.wb_project_authoring_delete

• Copy button: feature.wb_project_authoring_copy

• Rename button: feature.wb_project_authoring_rename

• Build & Deploy button: feature.wb_project_authoring_buildAndDeploy

2.4.3. Unify validation style in Guided Decision Table Wizard.

All of our new screens use GWT-Bootstrap widgets and alert users to input errors in a consistent way. One of the most noticeable differences was the Guided Decision Table Wizard, which alerted errors in a way inconsistent with our use of GWT-Bootstrap. This Wizard has been updated to use the new look and feel.

Figure 2.14. New Guided Decision Table Wizard validation

2.4.4. Improved Wizards
During the re-work of the Guided Decision Table's Wizard to make its validation consistent with other areas of the application, we took the opportunity to move the Wizard Framework to GWT-Bootstrap too. The resulting appearance is much more pleasing. We hope to migrate more legacy editors to GWT-Bootstrap as time and priorities permit.

Figure 2.15. New Wizard Framework

2.4.5. Consistent behaviour of XLS, Guided Decision Tables and Guided Templates

Consistency is a good thing for everybody. Users can expect different authoring metaphors to produce the same rule behaviour (and developers know when something is a bug!).

There were a few inconsistencies in the way XLS Decision Tables, Guided Decision Tables and Guided Rule Templates generated the underlying rules for empty cells. These have been eliminated, making their operation consistent.

• If all constraints have null values (empty cells) the Pattern is not created. Should you need the Pattern but no constraints, you will need to include the constraint this != null. This operation is consistent with how XLS and Guided Decision Tables have always worked.

• You can define a constraint on a String field for an empty String or white-space by delimiting it with double quotation marks. The enclosing quotation marks are removed from the value when generating the rules. The use of quotation marks for other String values is not required and they can be omitted. Their use is however essential to differentiate a constraint for an empty String from an empty cell, in which case the constraint is omitted.

2.4.6. Improved Metadata Tab

The Metadata tab provided in previous versions was redesigned to provide better browsing and recovery of asset versioning information. Now every workbench editor provides an "Overview tab" that enables the user to manage the following information.

Figure 2.16. Improved Metadata Tab

• Versions history

The versions history shows a tabular view of the asset versions and provides a "Select" button that enables the user to load a previously created version.

Figure 2.17. Versions history

• Metadata

The metadata section gives access to additional file attributes.

Figure 2.18. Metadata section

• Comments area

The redesigned comments area enables much clearer discussions on a file.

• Version selection dropdown

The "Version selector dropdown" located at the menu bar provides the ability to load and restore previous versions from the "Editor tab", without having to open the "Overview tab" to load the "Version history".

Figure 2.19. Version selection dropdown

2.4.7. Improved Data Objects Editor

The Java editor was unified with the standard workbench editors, meaning that every data object is now edited in its own editor window.

Figure 2.20. Improved Data Object Editor

• A "New -> Data Object" option was added to create the data objects.

• An Overview tab was added for every file to manage the file metadata and access the file's version history.

• An editable "Source Tab" was added. Now the Java code can be modified by administrators using the workbench.

• "Editor" - "Source Tab" round trip is provided. This lets administrators make manual changes to the generated Java code and go back to the editor tab to continue working.

• Class usages detection. Whenever a Data Object is about to be deleted or renamed, the project will be scanned for usages of the class. If usages are found (e.g. in DRL files, decision tables, etc.) the user will receive an alert. This will prevent the user from breaking the project build.
Figure 2.21. Usages detection

2.4.8. Execution Server Management UI

A new perspective called Management has been added under the Servers top level menu. This perspective provides users the ability to manage multiple execution servers with multiple containers. Available features include connecting to already deployed execution servers, and creating new containers as well as starting, stopping, deleting or upgrading them.

Figure 2.22. Management perspective

Note: The current version of the Execution Server supports rule based execution only.

2.4.9. Social Activities

A brand new feature called Social Activities has been added under a new top level menu item group called Activity.

This new feature is divided into two different perspectives: the Timeline Perspective and the People Perspective.

The Timeline Perspective shows on the left side the recent assets created or edited by the logged-in user. In the main window there is the "Latest Changes" screen, showing all the recently updated assets and an option to filter the recent updates by repository.

Figure 2.23. Timeline Perspective

The People Perspective is the home page of a user, showing their information (including a Gravatar picture based on the user's e-mail), their connections (the people the user follows) and their recent activities. There is also a way to edit a user's info. The search suggestion can be used to navigate to a user profile, follow that user and see their updates on your timeline.

Figure 2.24. People Perspective

Figure 2.25. Edit User Info

2.4.10. Contributors Dashboard

A brand new perspective called Contributors has been added under a new top level menu item group called Activity. The perspective itself is a dashboard which shows several indicators about the contributions made to the managed organizations / repositories within the workbench. Every time an organization/repository is added to or removed from the workbench the dashboard itself is updated accordingly.

This new perspective allows for the monitoring of the underlying activity on the managed repositories.

Figure 2.26. Contributors perspective

2.4.11. Package selector

The location of new assets whilst authoring was driven by the context of the Project Explorer. This has been replaced with a Package Selector in the New Resource Popup. The location defaults to the Project Explorer context, but different packages can now be more easily chosen.

Figure 2.27. Package selector

2.4.12. Improved visual consistency

All Popups have been refactored to use GWT-Bootstrap widgets. Whilst a simple change, it brings greater visual consistency to the application as a whole.

Figure 2.28. Example Guided Decision Table Editor popup

Figure 2.29. Example Guided Rule Editor popup

2.4.13. Guided Decision Tree Editor

A new editor has been added to support modelling of simple decision trees. See the applicable section within the User Guide for more information about usage.

Figure 2.30. Example Guided Decision Tree

2.4.14. Create Repository Wizard

A wizard has been created to guide the repository creation process. Now the user can decide at repository creation time whether it should be a managed or unmanaged repository and configure all related parameters.

Figure 2.31. Create Repository Wizard 1/2

Figure 2.32. Create Repository Wizard 2/2

2.4.15. Repository Structure Screen
The new Repository Structure Screen lets users manage the projects for a given repository, as well as perform other operations related to managed repositories, such as branch creation, asset promotion and project release.

Figure 2.33. Repository Structure Screen for a Managed Repository

Figure 2.34. Repository Structure Screen for an Unmanaged Repository

2.5. New and Noteworthy in Integration 6.2.0

2.5.1. KIE Execution Server

A new KIE Execution Server was created with the goal of supporting the deployment of kjars and the automatic creation of REST endpoints for remote rules execution. This initial implementation supports provisioning and execution of kjars via REST without any glue code.

A user interface was also integrated into the workbench for remote provisioning. See the workbench's New & Noteworthy for details.

@Path("/server")
public interface KieServer {

    @GET
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response getInfo();

    @POST
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response execute( CommandScript command );

    @GET
    @Path("containers")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response listContainers();

    @GET
    @Path("containers/{id}")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response getContainerInfo( @PathParam("id") String id );

    @PUT
    @Path("containers/{id}")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response createContainer( @PathParam("id") String id, KieContainerResource container );

    @DELETE
    @Path("containers/{id}")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response disposeContainer( @PathParam("id") String id );

    @POST
    @Path("containers/{id}")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response execute( @PathParam("id") String id, String cmdPayload );

    @GET
    @Path("containers/{id}/release-id")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response getReleaseId( @PathParam("id") String id );

    @POST
    @Path("containers/{id}/release-id")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response updateReleaseId( @PathParam("id") String id, ReleaseId releaseId );

    @GET
    @Path("containers/{id}/scanner")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response getScannerInfo( @PathParam("id") String id );

    @POST
    @Path("containers/{id}/scanner")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response updateScanner( @PathParam("id") String id, KieScannerResource resource );
}

Figure 2.35. Kie Server interface
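As a rough illustration of how the annotated interface above maps to plain HTTP calls, the sketch below queries the server information and the deployed containers with a generic JAX-RS 2.0 client. The host, port and context path are assumptions for the example and depend on how the execution server is actually deployed; only the relative paths /server and /server/containers come from the interface. It assumes a JAX-RS 2.0 client implementation (e.g. RESTEasy) is on the classpath.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class KieServerClientSketch {

    public static void main(String[] args) {
        // Base URL is an assumption; adjust it to wherever the execution server is deployed.
        String baseUrl = "http://localhost:8080/kie-server/services/rest";

        Client client = ClientBuilder.newClient();
        try {
            // GET /server - corresponds to KieServer.getInfo()
            String serverInfo = client.target(baseUrl).path("server")
                    .request(MediaType.APPLICATION_JSON).get(String.class);
            System.out.println("Server info: " + serverInfo);

            // GET /server/containers - corresponds to KieServer.listContainers()
            String containers = client.target(baseUrl).path("server").path("containers")
                    .request(MediaType.APPLICATION_JSON).get(String.class);
            System.out.println("Containers: " + containers);
        } finally {
            client.close();
        }
    }
}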
2.6. What is New and Noteworthy in Drools 6.1.0

2.6.1. JMX support for KieScanner

Added support for JMX monitoring and management on KieScanner and KieContainer. To enable it, set the property kie.scanner.mbeans to enabled, for example via the Java command line: -Dkie.scanner.mbeans=enabled.

KieScannerMBean registers under a dedicated JMX name and exposes the following properties:

• Scanner Release Id: the release ID the scanner was configured with. May include Maven version ranges and special keywords like LATEST, SNAPSHOT, etc.

• Current Release Id: the actual release ID the artifact resolved to.

• Status: STARTING, SCANNING, UPDATING, RUNNING, STOPPED, SHUTDOWN

It also exposes the following operations:

• scanNow(): forces an immediate scan of the Maven repository looking for artifact updates

• start(): starts polling the Maven repository for artifact updates based on the polling interval parameter

• stop(): stops automatically polling the Maven repository

2.7. New and Noteworthy in KIE Workbench 6.1.0

2.7.1. Data Modeler - round trip and source code preservation

Full round trip between the Data Modeler and Java source code is now supported. No matter where the Java code was generated (e.g. Eclipse, Data Modeler), the Data Modeler will only update the necessary code blocks to keep the model up to date.

2.7.2. Data Modeler - improved annotations

New annotations @TypeSafe, @ClassReactive, @PropertyReactive, @Timestamp, @Duration and @Expires were added in order to enrich the current set of Drools annotations managed by the Data Modeler.

2.7.3. Standardization of the display of tabular data

We have standardized the display of tabular data with a new table widget.

The new table supports the following features:

• Selection of visible columns
• Resizable columns
• Moveable columns

Figure 2.36. New table

The table is used in the following scenarios:

• Inbox (Incoming changes)
• Inbox (Recently edited)
• Inbox (Recently opened)
• Project Problems summary
• Artifact Repository browser
• Project Editor Dependency grid
• Project Editor KSession grid
• Project Editor Work Item Handlers Configuration grid
• Project Editor Listeners Configuration grid
• Search Results grid

2.7.4. Generation of modify(x) {...} blocks

The Guided Rule Editor, Guided Template Editor and Guided Decision Table Editor have been changed to generate modify(x){...} blocks.

Historically these editors supported the older update(x) syntax and hence rules created within the Workbench would not respond correctly to @PropertyReactive and associated annotations within a model. This has now been rectified with the use of modify(x){...} blocks.

2.8. New and Noteworthy in KIE API 6.0.0

2.8.1. New KIE name

KIE is the new umbrella name used to group together our related projects, as the family continues to grow. KIE is also used for the generic parts of the unified API, such as building, deploying and loading. This replaces the droolsjbpm and knowledge keywords that would have been used before.

Figure 2.37. KIE Anatomy

2.8.2. Maven aligned projects and modules and Maven Deployment

One of the biggest complaints during the 5.x series was the lack of a defined methodology for deployment. The mechanism used by Drools and jBPM was very flexible, but it was too flexible. A big focus for 6.0 was streamlining the build, deploy and loading (utilization) aspects of the system. Building and deploying activities are now aligned with Maven and Maven repositories. The utilization for loading rules and processes is now convention and configuration oriented, instead of programmatic, with sane defaults to minimise the configuration.

Projects can be built with Maven and installed to the local M2_REPO or to remote Maven repositories. Maven is then used to declare and build the classpath of dependencies for KIE to access.

2.8.3. Configuration and convention based projects

The 'kmodule.xml' provides declarative configuration for KIE projects.
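As a rough illustration of what such a declaration can look like, the fragment below declares one KieBase with one KieSession. The session name "ksession1" matches the one used in Example 2.7 below; the kbase name, package and namespace URI are illustrative assumptions, not the exact content of Example 2.6.

<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <!-- "kbase1" groups the rule resources found under the given package -->
  <kbase name="kbase1" packages="org.example.rules">
    <!-- a stateful session the application can retrieve by name -->
    <ksession name="ksession1"/>
  </kbase>
</kmodule>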
Conventions and defaults are used to reduce the amount of configuration needed.

Example 2.6. Declare KieBases and KieSessions

Example 2.7. Utilize the KieSession

KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();

KieSession kSession = kContainer.newKieSession("ksession1");
kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?"));
kSession.fireAllRules();

2.8.4. KieBase Inclusion

It is possible to include all the KIE artifacts belonging to a KieBase into a second KieBase. This means that the second KieBase, in addition to all the rules, functions and processes directly defined in it, will also contain the ones created in the included KieBase. This inclusion can be done declaratively in the kmodule.xml file:

Example 2.8. Including a KieBase into another declaratively

or programmatically using the KieModuleModel:

Example 2.9. Including a KieBase into another programmatically

KieModuleModel kmodule = KieServices.Factory.get().newKieModuleModel();
KieBaseModel kieBaseModel1 = kmodule.newKieBaseModel("KBase2").addInclude("KBase1");

2.8.5. KieModules, KieContainer and KIE-CI

Any Maven produced JAR with a 'kmodule.xml' in it is considered a KieModule. This can be loaded from the classpath or dynamically at runtime from a Resource location. If the kie-ci dependency is on the classpath it embeds Maven and all resolving is done automatically using Maven, which can access local or remote repositories. Settings.xml is obeyed for Maven configuration.

The KieContainer provides a runtime to utilize the KieModule, with versioning built in throughout, via Maven. Kie-ci will create a classpath dynamically from all the Maven declared dependencies for the artifact being loaded. Maven LATEST, SNAPSHOT, RELEASE and version ranges are supported.

Example 2.10. Utilize and Run - Java

KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.newKieContainer(
        ks.newReleaseId("org.mygroup", "myartefact", "1.0") );

KieSession kSession = kContainer.newKieSession("ksession1");
kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?"));
kSession.fireAllRules();

KieContainers can be dynamically updated to a specific version, and resolved through Maven if KIE-CI is on the classpath. For stateful KieSessions the existing sessions are incrementally updated.

Example 2.11. Dynamically Update - Java

kContainer.updateToVersion(
        ks.newReleaseId("org.mygroup", "myartefact", "1.1") );

2.8.6. KieScanner

The KieScanner is a Maven-oriented replacement of the KnowledgeAgent present in Drools 5. It continuously monitors your Maven repository to check if a new release of a Kie project has been installed and, if so, deploys it in the KieContainer wrapping that project. The use of the KieScanner requires kie-ci.jar to be on the classpath.

A KieScanner can be registered on a KieContainer as in the following example.

Example 2.12. Registering and starting a KieScanner on a KieContainer
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0-SNAPSHOT" );
KieContainer kContainer = kieServices.newKieContainer( releaseId );
KieScanner kScanner = kieServices.newKieScanner( kContainer );

// Start the KieScanner polling the Maven repository every 10 seconds
kScanner.start( 10000L );

In this example the KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it. If the KieScanner finds, in the Maven repository, an updated version of the Kie project used by that KieContainer, it automatically downloads the new version and triggers an incremental build of the new project. From this moment all the new KieBases and KieSessions created from that KieContainer will use the new project version.

2.8.7. Hierarchical ClassLoader

The CompositeClassLoader is no longer used, as it was a constant source of performance problems and bugs. Traditional hierarchical classloaders are now used. The root classloader is at the KieContext level, with one child ClassLoader per namespace. This makes it cleaner to add and remove rules, but there can now be no referencing between namespaces in DRL files; i.e. functions can only be used by the namespaces that declared them. The recommendation is to use static Java methods in your project, which are visible to all namespaces; but those cannot (like other classes on the root KieContainer ClassLoader) be dynamically updated.

2.8.8. Legacy API Adapter

The 5.x API for building and running with Drools and jBPM is still available through the Maven dependency "knowledge-api-legacy5-adapter". Because the nature of deployment has significantly changed in 6.0, it was not possible to provide an adapter bridge for the KnowledgeAgent. If any other methods are missing or problematic, please open a JIRA, and we'll fix it for 6.1.

2.8.9. KIE Documentation

While a lot of new documentation has been added for working with the new KIE API, the entire documentation has not yet been brought up to date. For this reason there will be continued references to old terminologies. Apologies in advance, and thank you for your patience. We hope those in the community will work with us to get the documentation updated throughout, for 6.1.

2.9. What is New and Noteworthy in Drools 6.0.0

2.9.1. PHREAK - Lazy rule matching algorithm

The main work done for Drools in 6.0 involves the new PHREAK algorithm. This is a lazy algorithm that should enable Drools to handle a larger number of rules and facts. AgendaGroups can now help improve performance, as rules are not evaluated until the engine attempts to fire them.

Sequential mode continues to be supported for PHREAK, but now 'modify' is allowed. While there is no 'inference' with the sequential configuration, as rules are lazily evaluated, any rule not yet evaluated will see the more recent data as a result of 'modify'. This is more in line with how people intuitively think sequential works.

The conflict resolution order has been tweaked for PHREAK, and is now ordered by salience and then rule order, based on the rule position in the file. Prior to Drools 6.0.0, after salience, it was considered arbitrary. When KieModules and updateToVersion are used for dynamic deployment, the rule order in the file is preserved via the diff processing.

2.9.2. Automatically firing timed rule in passive mode
When the rule engine runs in passive mode (i.e. using fireAllRules) it doesn't, by default, fire the consequences of timed rules unless fireAllRules is invoked again. It is now possible to change this default behavior by configuring the KieSession with a TimedRuleExectionOption, as shown in the following example.

Example 2.13. Configuring a KieSession to automatically execute timed rules

KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption( TimedRuleExectionOption.YES );
KieSession ksession = kbase.newKieSession(ksconf, null);

It is also possible to have finer grained control over the timed rules that have to be automatically executed. To do this it is necessary to set a FILTERED TimedRuleExectionOption, which allows you to define a callback to filter those rules, as done in the next example.

Example 2.14. Configuring a filter to choose which timed rules should be automatically executed

KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption( new TimedRuleExectionOption.FILTERED(new TimedRuleExecutionFilter() {
    public boolean accept(Rule[] rules) {
        return rules[0].getName().equals("MyRule");
    }
}) );

2.9.3. Expression Timers

It is now possible to define both the delay and interval of an interval timer as an expression instead of a fixed value. To do that it is necessary to declare the timer as an expression one (indicated by "expr:"), as in the following example:

Example 2.15. An Expression Timer Example

declare Bean
    delay : String = "30s"
    period : long = 60000
end

rule "Expression timer"
    timer( expr: $d, $p )
when
    Bean( $d : delay, $p : period )
then
end

The expressions, $d and $p in this case, can use any variable defined in the pattern matching part of the rule and can be any String that can be parsed into a time duration, or any numeric value that will be internally converted into a long representing a duration expressed in milliseconds.

Both interval and expression timers can have 3 optional parameters named "start", "end" and "repeat-limit". When one or more of these parameters are used, the first part of the timer definition must be followed by a semicolon ';' and the parameters have to be separated by a comma ',' as in the following example:

Example 2.16. An Interval Timer with a start and an end

timer (int: 30s 10s; start=3-JAN-2010, end=5-JAN-2010)

The value for the start and end parameters can be a Date, a String representing a Date, a long or, more generally, any Number that will be transformed into a Java Date applying the following conversion:

new Date( ((Number) n).longValue() )

Conversely the repeat-limit can only be an integer, and it defines the maximum number of repetitions allowed by the timer. If both the end and the repeat-limit parameters are set, the timer will stop when the first of the two is matched.

The use of the start parameter implies the definition of a phase for the timer, where the beginning of the phase is given by the start itself plus the eventual delay. In other words in this case the timed rule will be scheduled at times:

start + delay + n*period

for up to repeat-limit times and no later than the end timestamp (whichever comes first). For instance the rule having the following interval timer

timer ( int: 30s 1m; start="3-JAN-2010" )

will be scheduled at the 30th second of every minute after the midnight of the 3-JAN-2010.
This also means that if, for example, you turn the system on at midnight of the 3-FEB-2010 it won't be scheduled immediately, but will preserve the phase defined by the timer, and so it will be scheduled for the first time 30 seconds after midnight. If for some reason the system is paused (e.g. the session is serialized and then deserialized after a while) the rule will be scheduled only once to recover from missing activations (regardless of how many activations were missed) and subsequently it will be scheduled again in phase with the timer.

2.9.4. RuleFlowGroups and AgendaGroups are merged

These two groups have been merged and now RuleFlowGroups behave the same as AgendaGroups. The get methods have been left, for deprecation reasons, but both return the same underlying data. When jBPM activates a group it now just calls setFocus. RuleFlowGroups and AgendaGroups, when used together, were a continued source of errors. This also aligns the codebase towards PHREAK and the multi-core exploitation that is planned for the future.

2.10. New and Noteworthy in KIE Workbench 6.0.0

The workbench has had a big overhaul using a new base project called UberFire. UberFire is inspired by Eclipse and provides a clean, extensible and flexible framework for the workbench. The end result is not only a richer experience for our end users, but we can now develop more rapidly with a clean component based architecture. If you like the Workbench experience you can use UberFire today to build your own web based dashboard and console efforts.

As well as the move to UberFire, the other biggest change is the move from JCR to Git; there is a utility project to help with migration. Git is the most scalable and powerful source repository bar none. JGit provides a solid OSS implementation for Git. This addresses the continued performance problems with the various JCR implementations, which would slow down once the number of files and number of versions became too high. There has been a big "low tech" drive to remove complexity. Everything is now stored as a file, including metadata. The database is only there to provide fast indexing and search. So importing and exporting is all standard Git, and external sites, like GitHub, can be used to exchange repositories.

In 5.x developers would work with their own source repository and then push to JCR, via the team provider. This team provider was not full featured and not available outside Eclipse. Git enables our repository to work with any existing Git tool or team provider. While not yet supported in the UI (this will be added over time), it is possible to connect to the repo and tag, branch and restore things.

Figure 2.38. Workbench

The Guvnor brand leaked too much from its intended role; for example the authoring metaphors, like Decision Tables, were considered Guvnor components instead of Drools components. This wasn't helped by the monolithic project structure used in 5.x for Guvnor. In 6.0 Guvnor's focus has been narrowed to encapsulate the set of UberFire plugins that provide the basis for building a web based IDE, such as Maven integration for building and deploying, management of Maven repositories and activity notifications via inboxes. Drools and jBPM build workbench distributions using UberFire as the base and including a set of plugins, such as Guvnor, along with their own plugins for things like decision tables, guided editors, BPMN2 designer and human tasks.

The "Model Structure" diagram outlines the new project anatomy.
The Drools workbench is called KIE-Drools-WB. KIE-WB is the uber workbench that combines all the Guvnor, Drools and jBPM plugins. The jBPM-WB is ghosted out, as it doesn't actually exist, being made redundant by KIE-WB.

Figure 2.39. Module Structure

Important: KIE Drools Workbench and KIE Workbench share a common set of components for generic workbench functionality such as Project navigation, Project definitions, Maven based Projects and the Maven Artifact Repository. These common features are described in more detail throughout this documentation.

The two primary distributions consist of:

• KIE Drools Workbench
  • Drools Editors, for rules and supporting assets.
  • jBPM Designer, for Rule Flow and supporting assets.

• KIE Workbench
  • Drools Editors, for rules and supporting assets.
  • jBPM Designer, for BPMN2 and supporting assets.
  • jBPM Console, runtime and Human Task support.
  • jBPM Form Builder.
  • BAM.

Workbench highlights:

• New flexible Workbench environment, with perspectives and panels.

• New packaging and build system following the KIE API.

• Maven based projects.

• Maven Artifact Repository replaces the Global Area, with full dependency support.

• New Data Modeller replaces the declarative Fact Model Editor, bringing authoring of Java classes to the authoring environment. Java classes are packaged into the project and can be used within rules, processes etc. and externally in your own applications.

• Virtual File System replaces JCR with a default Git based implementation.

• Default Git based implementation supports remote operations.

• External modifications appear within the Workbench.

• Incremental Build system showing near real-time validation results of your project and assets.

The editors themselves are largely unchanged; however, of note, imports have moved from the package definition to individual editors, so you need only import the types used by an asset and not by the package as a whole.

2.11. New and Noteworthy in Integration 6.0.0

2.11.1. CDI

CDI is now tightly integrated into the KIE API. It can be used to inject versioned KieSessions and KieBases.

@Inject
@KBase("kbase1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieBase kbase1v10;

@Inject
@KBase("kbase1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.1")
private KieBase kbase1v11;

Figure 2.40. Side by side version loading for 'jar1.KBase1' KieBase

@Inject
@KSession("ksession1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieSession ksessionv10;

@Inject
@KSession("ksession1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.1")
private KieSession ksessionv11;

Figure 2.41. Side by side version loading for 'jar1.KBase1' KieBase

2.11.2. Spring

Spring has been revamped and is now integrated with KIE. Spring can replace the 'kmodule.xml' with a more powerful Spring version. The aim is for consistency with kmodule.xml.

2.11.3. Aries Blueprints

Aries Blueprints is now also supported, and follows the work done for Spring. The aim is for consistency with Spring and kmodule.xml.

2.11.4. OSGi Ready

All modules have been refactored to avoid package splitting, which was a problem in 5.x. Testing has been moved to PAX.

Chapter 3. Compatibility matrix

Starting from KIE 6.0, Drools (including the workbench), jBPM (including the designer and console) and OptaPlanner follow the same version numbering.

Part II. KIE

KIE is the shared core for Drools and jBPM.
It provides a unified methodology and programming model for building, deploying and utilizing resources.

Chapter 4. KIE

4.1. Overview

4.1.1. Anatomy of Projects

The process of researching an integrated knowledge solution for Drools and jBPM simply used the "droolsjbpm" group name. This name permeates GitHub accounts and Maven POMs. As scopes broadened and new projects were spun off, KIE, an acronym for Knowledge Is Everything, was chosen as the new group name. The KIE name is also used for the shared aspects of the system, such as the unified build, deploy and utilization.

KIE currently consists of the following subprojects:

Figure 4.1. KIE Anatomy

OptaPlanner, a local search and optimization tool, has been spun off from Drools Planner and is now a top level project alongside Drools and jBPM. This was a natural evolution as OptaPlanner, while having strong Drools integration, has long been independent of Drools.

From the Polymita acquisition, along with other things, comes the powerful Dashboard Builder which provides powerful reporting capabilities. Dashboard Builder is currently a temporary name and after the 6.0 release a new name will be chosen. Dashboard Builder is completely independent of Drools and jBPM and will be used by many projects at JBoss, and hopefully outside of JBoss :)

UberFire is the new base workbench project, spun off from a ground up rewrite. UberFire provides Eclipse-like workbench capabilities, with panels and perspectives from plugins. The project is independent of Drools and jBPM and anyone can use it as a basis for building flexible and powerful workbenches. UberFire will be used for console and workbench development throughout JBoss.

It was determined that the Guvnor brand leaked too much from its intended role; for example the authoring metaphors, like Decision Tables, were considered Guvnor components instead of Drools components. This wasn't helped by the monolithic project structure used in 5.x for Guvnor. In 6.0 Guvnor's focus has been narrowed to encapsulate the set of UberFire plugins that provide the basis for building a web based IDE, such as Maven integration for building and deploying, management of Maven repositories and activity notifications via inboxes. Drools and jBPM build workbench distributions using UberFire as the base and including a set of plugins, such as Guvnor, along with their own plugins for things like decision tables, guided editors, BPMN2 designer and human tasks.

The Drools workbench is called Drools-WB. KIE-WB is the uber workbench that combines all the Guvnor, Drools and jBPM plugins. The jBPM-WB is ghosted out, as it doesn't actually exist, being made redundant by KIE-WB.

4.1.2. Lifecycles

The different aspects, or life cycles, of working with the KIE system, whether it's Drools or jBPM, can typically be broken down into the following:

• Author
  • Authoring of knowledge using a UI metaphor, such as: DRL, BPMN2, decision table, class models.

• Build
  • Builds the authored knowledge into deployable units.
  • For KIE this unit is a JAR.

• Test
  • Test KIE knowledge before it's deployed to the application.

• Deploy
  • Deploys the unit to a location where applications may utilize (consume) it.
  • KIE uses a Maven style repository.

• Utilize
  • The loading of a JAR to provide a KIE session (KieSession), with which the application can interact.
  • KIE exposes the JAR at runtime via a KIE container (KieContainer).
  • KieSessions, for the runtime to interact with, are created from the KieContainer.
• Run
  • System interaction with the KieSession, via API.

• Work
  • User interaction with the KieSession, via command line or UI.

• Manage
  • Manage any KieSession or KieContainer.

4.2. Build, Deploy, Utilize and Run

4.2.1. Introduction

6.0 introduces a new configuration and convention approach to building knowledge bases, instead of the programmatic builder approach used in 5.x. The builder is still available to fall back on, as it's used for the tooling integration.

Building now uses Maven, and aligns with Maven practices. A KIE project or module is simply a Maven Java project or module with an additional metadata file, META-INF/kmodule.xml. The kmodule.xml file is the descriptor that selects resources to knowledge bases and configures those knowledge bases and sessions. There is also alternative XML support via Spring and OSGi BluePrints.

While standard Maven can build and package KIE resources, it will not provide validation at build time. There is a Maven plugin which is recommended for getting build time validation. The plugin also generates many classes, making runtime loading faster too.

The example project layout and Maven POM descriptor are illustrated in the screenshot.

Figure 4.2. Example project layout and Maven POM

KIE uses defaults to minimise the amount of configuration, with an empty kmodule.xml being the simplest configuration. There must always be a kmodule.xml file, even if empty, as it's used for discovery of the JAR and its contents.

Maven can either 'mvn install' to deploy a KieModule to the local machine, where all other applications on the local machine can use it, or 'mvn deploy' to push the KieModule to a remote Maven repository. Building the application will pull in the KieModule and populate the local Maven repository in the process.

Figure 4.3. Example project layout and Maven POM

JARs can be deployed in one of two ways: either added to the classpath, like any other JAR in a Maven dependency listing, or dynamically loaded at runtime. KIE will scan the classpath to find all the JARs with a kmodule.xml in them. Each found JAR is represented by the KieModule interface. The terms classpath KieModule and dynamic KieModule are used to refer to the two loading approaches. While dynamic modules support side by side versioning, classpath modules do not. Further, once a module is on the classpath, no other version may be loaded dynamically.

Detailed references for the API are included in the next sections; the impatient can jump straight to the examples section, which is fairly self-explanatory on the different use cases.

4.2.2. Building

Figure 4.4. org.kie.api.core.builder

4.2.2.1. Creating and building a Kie Project

A Kie Project has the structure of a normal Maven project, with the only peculiarity of including a kmodule.xml file defining in a declarative way the KieBases and KieSessions that can be created from it. This file has to be placed in the resources/META-INF folder of the Maven project, while all the other Kie artifacts, such as DRL or Excel files, must be stored in the resources folder or in any other subfolder under it.

Since meaningful defaults have been provided for all configuration aspects, the simplest kmodule.xml file can contain just an empty kmodule tag like the following:

Example 4.1. An empty kmodule.xml file

In this way the kmodule will contain one single default KieBase. All Kie assets stored under the resources folder, or any of its subfolders, will be compiled and added to it.
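As a sketch, such an empty descriptor is just the kmodule root element; the namespace URI shown here is the one commonly used by the 6.x schema and should be treated as an assumption:

<?xml version="1.0" encoding="UTF-8"?>
<!-- the simplest possible kmodule.xml: one implicit default KieBase -->
<kmodule xmlns="http://www.drools.org/xsd/kmodule"/>

Placed in the resources/META-INF folder as described above, this is enough for the JAR to be recognised as a KieModule with a single default KieBase.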
To trigger the building of these artifacts it is enough to create a KieContainer for them. Figure 4.5. KieContainer For this simple case it is enough to create a KieContainer that reads the files to be built from the classpath: Example 4.2. Creating a KieContainer from the classpath KieServices kieServices = KieServices.Factory.get(); KIE 82 KieContainer kContainer = kieServices.getKieClasspathContainer(); KieServices is the interface from where it possible to access all the Kie building and runtime facilities: KIE 83 Figure 4.6. KieServices KIE 84 In this way all the Java sources and the Kie resources are compiled and deployed into the KieCon- tainer which makes its contents available for use at runtime. 4.2.2.2. The kmodule.xml file As explained in the former section, the kmodule.xml file is the place where it is possible to declar- atively configure the KieBase(s) and KieSession(s) that can be created from a KIE project. In particular a KieBase is a repository of all the application's knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. Creating the KieBase can be heavy, whereas session creation is very light, so it is recommended that KieBase be cached where possible to allow for repeated session creation. However end-users usually shouldn't worry about it, because this caching mechanism is already automatically provided by the KieContainer. KIE 85 Figure 4.7. KieBase Conversely the KieSession stores and executes on the runtime data. It is created from the KieBase or more easily can be created directly from the KieContainer if it has been defined in the kmodule.xml file KIE 86 Figure 4.8. KieSession The kmodule.xml allows to define and configure one or more KieBases and for each KieBase all the different KieSessions that can be created from it, as showed by the follwing example: Example 4.3. A sample kmodule.xml file Here the tag contains a list of key-value pairs that are the optional proper- ties used to configure the KieBases building process. For instance this sample kmodule.xml KIE 87 file defines an additional custom operator named supersetOf and implemented by the org.mycompany.SupersetOfEvaluatorDefinition class. After this 2 KieBases have been defined and it is possible to instance 2 different types of KieSes- sions from the first one, while only one from the second. A list of the attributes that can be defined on the kbase tag, together with their meaning and default values follows: Table 4.1. kbase Attributes Attribute name Default value Admitted values Meaning name none any The name with which retrieve this KieBase from the KieContain- er. This is the only mandatory attribute. includes none any comma separated list A comma separated list of other KieBas- es contained in this kmodule. The artifacts of all these KieBases will be also included in this one. packages all any comma separated list By default all the Drools artifacts un- der the resources folder, at any lev- el, are included into the KieBase. This at- tribute allows to lim- it the artifacts that will be compiled in this KieBase to only the ones belonging to the list of packages. default false true, false Defines if this KieBase is the default one for this module, so it can be created from the KieContainer with- out passing any name to it. There can be at most one default KieBase in each mod- ule. 
KIE 88 Attribute name Default value Admitted values Meaning equalsBehavior identity identity, equality Defines the behav- ior of Drools when a new fact is insert- ed into the Working Memory. With identi- ty it always create a new FactHandle un- less the same object isn't already present in the Working Memory, while with equality on- ly if the newly insert- ed object is not equal (according to its equal method) to an already existing fact. eventProcessing- Mode cloud cloud, stream When compiled in cloud mode the KieBase treats events as normal facts, while in stream mode allow temporal reasoning on them. declarativeAgenda disabled disabled, enabled Defines if the Declar- ative Agenda is en- abled or not. Similarly all attributes of the ksession tag (except of course the name) have meaningful default. They are listed and described in the following table: Table 4.2. ksession Attributes Attribute name Default value Admitted values Meaning name none any Unique name of this KieSession. Used to fetch the KieSession from the KieContain- er. This is the only mandatory attribute. type stateful stateful, stateless A stateful session allows to iteratively work with the Working Memory, while a state- KIE 89 Attribute name Default value Admitted values Meaning less one is a one-off execution of a Work- ing Memory with a pro- vided data set. default false true, false Defines if this KieSes- sion is the default one for this module, so it can be created from the KieContainer with- out passing any name to it. In each module there can be at most one default KieSes- sion for each type. clockType realtime realtime, pseudo Defines if events time- stamps are deter- mined by the system clock or by a psuedo clock controlled by the application. This clock is specially useful for unit testing temporal rules. beliefSystem simple simple, jtms, defeasi- ble Defines the type of be- lief system used by the KieSession. As outlined in the former kmodule.xml sample, it is also possible to declaratively create on each KieSession a file (or a console) logger, one or more WorkItemHandlers and some listeners that can be of 3 different types: ruleRuntimeEventListener, agendaEventListener and processEv- entListener Having defined a kmodule.xml like the one in the former sample, it is now possible to simply retrieve the KieBases and KieSessions from the KieContainer using their names. Example 4.4. Retriving KieBases and KieSessions from the KieContainer KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer(); KieBase kBase1 = kContainer.getKieBase("KBase1"); KieSession kieSession1 = kContainer.newKieSession("KSession2_1"); StatelessKieSession kieSession2 = kContainer.newStatelessKieSession("KSession2_2"); KIE 90 It has to be noted that since KSession2_1 and KSession2_2 are of 2 different types (the first is stateful, while the second is stateless) it is necessary to invoke 2 different methods on the KieContainer according to their declared type. If the type of the KieSession requested to the KieContainer doesn't correspond with the one declared in the kmodule.xml file the KieContainer will throw a RuntimeException. Also since a KieBase and a KieSession have been flagged as default is it possible to get them from the KieContainer without passing any name. Example 4.5. Retriving default KieBases and KieSessions from the KieContainer KieContainer kContainer = ... 
KieBase kBase1 = kContainer.getKieBase(); // returns KBase1 KieSession kieSession1 = kContainer.newKieSession(); // returns KSession2_1 Since a Kie project is also a Maven project, the groupId, artifactId and version declared in the pom.xml file are used to generate a ReleaseId that uniquely identifies this project inside your application. This allows creation of a new KieContainer from the project by simply passing its ReleaseId to the KieServices. Example 4.6. Creating a KieContainer of an existing project by ReleaseId KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId ); 4.2.2.3. Building with Maven The KIE plugin for Maven ensures that artifact resources are validated and pre-compiled; it is recommended that this is used at all times. To use the plugin simply add it to the build section of the Maven pom.xml: Example 4.7. Adding the KIE plugin to a Maven pom.xml

<build>
  <plugins>
    <plugin>
      <groupId>org.kie</groupId>
      <artifactId>kie-maven-plugin</artifactId>
      <version>${project.version}</version>
      <extensions>true</extensions>
    </plugin>
  </plugins>
</build>

Building a KIE module without the Maven plugin will copy all the resources, as is, into the resulting JAR. When that JAR is loaded by the runtime, it will attempt to build all the resources then. If there are compilation issues it will return a null KieContainer. It also pushes the compilation overhead to the runtime. In general this is not recommended, and the Maven plugin should always be used. 4.2.2.4. Defining a KieModule programmatically It is also possible to define the KieBases and KieSessions belonging to a KieModule programmatically, instead of the declarative definition in the kmodule.xml file. The same programmatic API also allows explicitly adding the files containing the Kie artifacts instead of automatically reading them from the resources folder of your project. To do that it is necessary to create a KieFileSystem, a sort of virtual file system, and add all the resources contained in your project to it. Figure 4.9. KieFileSystem Like all other Kie core components, you can obtain an instance of the KieFileSystem from the KieServices. The kmodule.xml configuration file must be added to the filesystem. This is a mandatory step. Kie also provides a convenient fluent API, implemented by the KieModuleModel, to programmatically create this file. Figure 4.10. KieModuleModel To do this in practice it is necessary to create a KieModuleModel from the KieServices, configure it with the desired KieBases and KieSessions, convert it to XML and add the XML to the KieFileSystem. This process is shown by the following example: Example 4.8. Creating a kmodule.xml programmatically and adding it to a KieFileSystem KieServices kieServices = KieServices.Factory.get(); KieModuleModel kieModuleModel = kieServices.newKieModuleModel(); KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel( "KBase1" ) .setDefault( true ) .setEqualsBehavior( EqualityBehaviorOption.EQUALITY ) .setEventProcessingMode( EventProcessingOption.STREAM ); KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel( "KSession1" ) .setDefault( true ) .setType( KieSessionModel.KieSessionType.STATEFUL ) .setClockType( ClockTypeOption.get("realtime") ); KieFileSystem kfs = kieServices.newKieFileSystem(); kfs.writeKModuleXML(kieModuleModel.toXML()); At this point it is also necessary to add to the KieFileSystem, through its fluent API, all the other Kie artifacts composing your project.
These artifacts have to be added in the same position of a corresponding usual Maven project. KIE 93 Example 4.9. Adding Kie artifacts to a KieFileSystem KieFileSystem kfs = ... kfs.write( "src/main/resources/KBase1/ruleSet1.drl", stringContainingAValidDRL ) .write( "src/main/resources/dtable.xls", kieServices.getResources().newInputStreamResource( dtableFileStream ) ); This example shows that it is possible to add the Kie artifacts both as plain Strings and as Re- sources. In the latter case the Resources can be created by the KieResources factory, also provided by the KieServices. The KieResources provides many convenient factory methods to convert an InputStream, a URL, a File, or a String representing a path of your file system to a Resource that can be managed by the KieFileSystem. KIE 94 Figure 4.11. KieResources Normally the type of a Resource can be inferred from the extension of the name used to add it to the KieFileSystem. However it also possible to not follow the Kie conventions about file extensions and explicitly assign a specific ResourceType to a Resource as shown below: KIE 95 Example 4.10. Creating and adding a Resource with an explicit type KieFileSystem kfs = ... kfs.write( "src/main/resources/myDrl.txt", kieServices.getResources().newInputStreamResource( drlStream ) .setResourceType(ResourceType.DRL) ); Add all the resources to the KieFileSystem and build it by passing the KieFileSystem to a KieBuilder Figure 4.12. KieBuilder When the contents of a KieFileSystem are successfully built, the resulting KieModule is auto- matically added to the KieRepository. The KieRepository is a singleton acting as a repository for all the available KieModules. KIE 96 Figure 4.13. KieRepository After this it is possible to create through the KieServices a new KieContainer for that KieModule using its ReleaseId. However, since in this case the KieFileSystem doesn't contain any pom.xml file (it is possible to add one using the KieFileSystem.writePomXML method), Kie cannot deter- mine the ReleaseId of the KieModule and assign to it a default one. This default ReleaseId can be obtained from the KieRepository and used to identify the KieModule inside the KieReposi- tory itself. The following example shows this whole process. Example 4.11. Building the contents of a KieFileSystem and creating a KieContainer KieServices kieServices = KieServices.Factory.get(); KieFileSystem kfs = ... kieServices.newKieBuilder( kfs ).buildAll(); KieContainer kieContainer = kieServices.newKieContainer(kieServices.getRepository().getDefaultReleaseId()); At this point it is possible to get KieBases and create new KieSessions from this KieContainer exactly in the same way as in the case of a KieContainer created directly from the classpath. It is a best practice to check the compilation results. The KieBuilder reports compilation results of 3 different severities: ERROR, WARNING and INFO. An ERROR indicates that the compila- tion of the project failed and in the case no KieModule is produced and nothing is added to the KieRepository. WARNING and INFO results can be ignored, but are available for inspection. Example 4.12. Checking that a compilation didn't produce any error KieBuilder kieBuilder = kieServices.newKieBuilder( kfs ).buildAll(); assertEquals( 0, kieBuilder.getResults().getMessages( Message.Level.ERROR ).size() ); KIE 97 4.2.2.5. Changing the Default Build Result Severity In some cases, it is possible to change the default severity of a type of build result. 
For instance, when a new rule with the same name as an existing rule is added to a package, the default behavior is to replace the old rule with the new rule and report it as an INFO. This is probably ideal for most use cases, but in some deployments the user might want to prevent the rule update and report it as an error. Changing the default severity for a result type, configured like any other option in Drools, can be done by API calls, system properties or configuration files. As of this version, Drools supports configurable result severity for rule updates and function updates. To configure it using system properties or configuration files, the user has to use the following properties: Example 4.13. Setting the severity using properties

// sets the severity of rule updates
drools.kbuilder.severity.duplicateRule = <INFO|WARNING|ERROR>
// sets the severity of function updates
drools.kbuilder.severity.duplicateFunction = <INFO|WARNING|ERROR>

4.2.3. Deploying 4.2.3.1. KieBase The KieBase is a repository of all the application's knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. The KieBase can be obtained from the KieContainer containing the KieModule where the KieBase has been defined. Figure 4.14. KieBase Sometimes, for instance in an OSGi environment, the KieBase needs to resolve types that are not in the default class loader. In this case it will be necessary to create a KieBaseConfiguration with an additional class loader and pass it to the KieContainer when creating a new KieBase from it. Example 4.14. Creating a new KieBase with a custom ClassLoader KieServices kieServices = KieServices.Factory.get(); KieBaseConfiguration kbaseConf = kieServices.newKieBaseConfiguration( null, MyType.class.getClassLoader() ); KieBase kbase = kieContainer.newKieBase( kbaseConf ); 4.2.3.2. KieSessions and KieBase Modifications KieSessions will be discussed in more detail in the section "Running". The KieBase creates and returns KieSession objects, and it may optionally keep references to those. When KieBase modifications occur, those modifications are applied against the data in the sessions. This reference is a weak reference and it is also optional, controlled by a boolean flag. 4.2.3.3. KieScanner The KieScanner allows continuous monitoring of your Maven repository to check whether a new release of a Kie project has been installed. A new release is deployed in the KieContainer wrapping that project. The use of the KieScanner requires kie-ci.jar to be on the classpath. Figure 4.15. KieScanner A KieScanner can be registered on a KieContainer as in the following example. Example 4.15. Registering and starting a KieScanner on a KieContainer KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0-SNAPSHOT" ); KieContainer kContainer = kieServices.newKieContainer( releaseId ); KieScanner kScanner = kieServices.newKieScanner( kContainer ); // Start the KieScanner polling the Maven repository every 10 seconds kScanner.start( 10000L ); In this example the KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it, as sketched below.
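A minimal sketch of on-demand scanning, reusing the kScanner instance from Example 4.15 (the stop() call is included only to illustrate halting a previously started background poll):

// Trigger a single, immediate scan of the configured Maven repository.
kScanner.scanNow();
// If background polling was started with start(), it can be halted again.
kScanner.stop();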
If the KieScanner finds in the Maven repository an updated version of the Kie project used by that KieContainer it automati- cally downloads the new version and triggers an incremental build of the new project. From this moment all the new KieBases and KieSessions created from that KieContainer will use the new project version. The KieScanner will only pickup changes to deployed jars if it is using a SNAPSHOT, version range, the LATEST, or the RELEASE setting. Fixed versions will not automatically update at run- time. 4.2.3.4. Maven Versions and Dependencies Maven supports a number of mechanisms to manage versioning and dependencies within appli- cations. Modules can be published with specific version numbers, or they can use the SNAPSHOT suffix. Dependencies can specify version ranges to consume, or take avantage of SNAPSHOT mechanism. StackOverflow provides a very good description for this, which is reproduced below. http://stackoverflow.com/questions/30571/how-do-i-tell-maven-to-use-the-latest-version-of-a- dependency [http://stackoverflow.com/questions/30571/how-do-i-tell-maven-to-use-the-lat- est-version-of-a-dependency] If you always want to use the newest version, Maven has two keywords you can use as an alter- native to version ranges. You should use these options with care as you are no longer in control of the plugins/dependencies you are using. When you depend on a plugin or a dependency, you can use the a version value of LATEST or RELEASE. LATEST refers to the latest released or snapshot version of a particular artifact, the most recently deployed artifact in a particular repository. RELEASE refers to the last non- snapshot release in the repository. In general, it is not a best practice to design software which depends on a non-specific version of an artifact. If you are developing software, you might want to use RELEASE or LATEST as a convenience so that you don't have to update version numbers when a new release of a third-party library is released. When you release software, you should always make sure that your project depends on specific versions to reduce the chances of your build or your project being affected by a software release not under your control. Use LATEST and RELEASE with caution, if at all. See the POM Syntax section of the Maven book for more details. http://books.sonatype.com/mvnref-book/reference/pom-relationships-sect-pom-syntax.html [http://books.sonatype.com/mvnref-book/reference/pom-relationships-sect-pom-syntax.html] http://books.sonatype.com/mvnref-book/reference/pom-relationships-sect-project- dependencies.html Here's an example illustrating the various options. 
In the Maven repository, com.foo:my-foo has the following metadata:

<metadata>
  <groupId>com.foo</groupId>
  <artifactId>my-foo</artifactId>
  <version>2.0.0</version>
  <versioning>
    <release>1.1.1</release>
    <versions>
      <version>1.0</version>
      <version>1.0.1</version>
      <version>1.1</version>
      <version>1.1.1</version>
      <version>2.0.0</version>
    </versions>
    <lastUpdated>20090722140000</lastUpdated>
  </versioning>
</metadata>

If a dependency on that artifact is required, you have the following options (other version ranges can be specified of course, just showing the relevant ones here): Declare an exact version (will always resolve to 1.0.1): [1.0.1] Declare an explicit version (will always resolve to 1.0.1 unless a collision occurs, when Maven will select a matching version): 1.0.1 Declare a version range for all 1.x (will currently resolve to 1.1.1): [1.0.0,2.0.0) Declare an open-ended version range (will resolve to 2.0.0): [1.0.0,) Declare the version as LATEST (will resolve to 2.0.0): LATEST Declare the version as RELEASE (will resolve to 1.1.1): RELEASE Note that by default your own deployments will update the "latest" entry in the Maven metadata, but to update the "release" entry, you need to activate the "release-profile" from the Maven super POM. You can do this with either "-Prelease-profile" or "-DperformRelease=true". 4.2.3.5. Settings.xml and Remote Repository Setup The Maven settings.xml file is used to configure Maven execution. Detailed instructions can be found at the Maven website: http://maven.apache.org/settings.html The settings.xml file can be located in 3 locations; the actual settings used are a merge of those 3 locations. • The Maven install: $M2_HOME/conf/settings.xml • A user's install: ${user.home}/.m2/settings.xml • The folder location specified by the system property kie.maven.settings.custom The settings.xml is used to specify the location of remote repositories. It is important that you activate the profile that specifies the remote repository; typically this can be done using "activeByDefault":

<profiles>
  <profile>
    <id>profile-1</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    ...
  </profile>
</profiles>

Maven provides detailed documentation on using multiple remote repositories: http://maven.apache.org/guides/mini/guide-multiple-repositories.html 4.2.4. Running 4.2.4.1. KieBase The KieBase is a repository of all the application's knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. The KieBase can be obtained from the KieContainer containing the KieModule where the KieBase has been defined. Example 4.16. Getting a KieBase from a KieContainer KieBase kBase = kContainer.getKieBase(); 4.2.4.2. KieSession The KieSession stores and executes on the runtime data. It is created from the KieBase. Figure 4.16. KieSession Example 4.17. Create a KieSession from a KieBase KieSession ksession = kbase.newKieSession(); 4.2.4.3. KieRuntime 4.2.4.3.1. KieRuntime The KieRuntime provides methods that are applicable to both rules and processes, such as setting globals and registering channels. ("Exit point" is an obsolete synonym for "channel".) Figure 4.17. KieRuntime 4.2.4.3.1.1. Globals Globals are named objects that are made visible to the rule engine, but in a way that is fundamentally different from the one for facts: changes in the object backing a global do not trigger reevaluation of rules. Still, globals are useful for providing static information, as an object offering services that are used in the RHS of a rule, or as a means to return objects from the rule engine. When you use a global on the LHS of a rule, make sure it is immutable, or, at least, don't expect changes to have any effect on the behavior of your rules.
A global must be declared in a rules file, and then it needs to be backed up with a Java object. global java.util.List list KIE 105 With the Knowledge Base now aware of the global identifier and its type, it is now possible to call ksession.setGlobal() with the global's name and an object, for any session, to associate the object with the global. Failure to declare the global type and identifier in DRL code will result in an exception being thrown from this call. List list = new ArrayList(); ksession.setGlobal("list", list); Make sure to set any global before it is used in the evaluation of a rule. Failure to do so results in a NullPointerException. 4.2.4.4. Event Model The event package provides means to be notified of rule engine events, including rules firing, objects being asserted, etc. This allows separation of logging and auditing activities from the main part of your application (and the rules). The KieRuntimeEventManager interface is implemented by the KieRuntime which provides two interfaces, RuleRuntimeEventManager and ProcessEventManager. We will only cover the RuleRuntimeEventManager here. Figure 4.18. KieRuntimeEventManager The RuleRuntimeEventManager allows for listeners to be added and removed, so that events for the working memory and the agenda can be listened to. KIE 106 Figure 4.19. RuleRuntimeEventManager The following code snippet shows how a simple agenda listener is declared and attached to a session. It will print matches after they have fired. Example 4.18. Adding an AgendaEventListener ksession.addEventListener( new DefaultAgendaEventListener() { public void afterMatchFired(AfterMatchFiredEvent event) { super.afterMatchFired( event ); System.out.println( event ); } }); Drools also provides DebugRuleRuntimeEventListener and DebugAgendaEventListener which implement each method with a debug print statement. To print all Working Memory events, you add a listener like this: Example 4.19. Adding a DebugRuleRuntimeEventListener ksession.addEventListener( new DebugRuleRuntimeEventListener() ); All emitted events implement the KieRuntimeEvent interface which can be used to retrieve the actual KnowlegeRuntime the event originated from. KIE 107 Figure 4.20. KieRuntimeEvent The events currently supported are: • MatchCreatedEvent • MatchCancelledEvent • BeforeMatchFiredEvent • AfterMatchFiredEvent • AgendaGroupPushedEvent • AgendaGroupPoppedEvent • ObjectInsertEvent • ObjectDeletedEvent • ObjectUpdatedEvent • ProcessCompletedEvent • ProcessNodeLeftEvent • ProcessNodeTriggeredEvent • ProcessStartEvent 4.2.4.5. KieRuntimeLogger The KieRuntimeLogger uses the comprehensive event system in Drools to create an audit log that can be used to log the execution of an application for later inspection, using tools such as the Eclipse audit viewer. KIE 108 Figure 4.21. KieLoggers Example 4.20. FileLogger KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newFileLogger(ksession, "logdir/mylogfile"); ... logger.close(); 4.2.4.6. Commands and the CommandExecutor KIE has the concept of stateful or stateless sessions. Stateful sessions have already been cov- ered, which use the standard KieRuntime, and can be worked with iteratively over time. Stateless is a one-off execution of a KieRuntime with a provided data set. It may return some results, with the session being disposed at the end, prohibiting further iterative interactions. You can think of stateless as treating an engine like a function call with optional return results. 
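As a concrete sketch of that difference, assuming a KieContainer that defines a default KieSession and a default StatelessKieSession, and reusing the Cheese fact type from the command examples that follow:

// Stateful: iterative interaction over time; dispose() must be called explicitly.
KieSession ksession = kContainer.newKieSession();
ksession.insert( new Cheese( "stilton" ) );
ksession.fireAllRules();
ksession.dispose();

// Stateless: a one-off execution; fireAllRules() and dispose() happen internally.
StatelessKieSession stateless = kContainer.newStatelessKieSession();
stateless.execute( new Cheese( "stilton" ) );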
The foundation for this is the CommandExecutor interface, which both the stateful and stateless interfaces extend. This returns an ExecutionResults: Figure 4.22. CommandExecutor KIE 109 Figure 4.23. ExecutionResults The CommandExecutor allows for commands to be executed on those sessions, the only difference being that the StatelessKieSession executes fireAllRules() at the end before disposing the session. The commands can be created using the CommandExecutor .The Javadocs provide the full list of the allowed comands using the CommandExecutor. setGlobal and getGlobal are two commands relevant to both Drools and jBPM. Set Global calls setGlobal underneath. The optional boolean indicates whether the command should return the global's value as part of the ExecutionResults. If true it uses the same name as the global name. A String can be used instead of the boolean, if an alternative name is desired. Example 4.21. Set Global Command StatelessKieSession ksession = kbase.newStatelessKieSession(); ExecutionResults bresults = ksession.execute( CommandFactory.newSetGlobal( "stilton", new Cheese( "stilton" ), true); Cheese stilton = bresults.getValue( "stilton" ); Allows an existing global to be returned. The second optional String argument allows for an alter- native return name. Example 4.22. Get Global Command StatelessKieSession ksession = kbase.newStatelessKieSession(); ExecutionResults bresults = ksession.execute( CommandFactory.getGlobal( "stilton" ); KIE 110 Cheese stilton = bresults.getValue( "stilton" ); All the above examples execute single commands. The BatchExecution represents a composite command, created from a list of commands. It will iterate over the list and execute each command in turn. This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(...) call, which is quite powerful. The StatelessKieSession will execute fireAllRules() automatically at the end. However the keen-eyed reader probably has already noticed the FireAllRules command and wondered how that works with a StatelessKieSession. The FireAllRules command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function. Any command, in the batch, that has an out identifier set will add its results to the returned Ex- ecutionResults instance. Let's look at a simple example to see how this works. The example presented includes command from the Drools and jBPM, for the sake of illustration. They are covered in more detail in the Drool and jBPM specific sections. Example 4.23. BatchExecution Command StatelessKieSession ksession = kbase.newStatelessKieSession(); List cmds = new ArrayList(); cmds.add( CommandFactory.newInsertObject( new Cheese( "stilton", 1), "stilton") ); cmds.add( CommandFactory.newStartProcess( "process cheeses" ) ); cmds.add( CommandFactory.newQuery( "cheeses" ) ); ExecutionResults bresults = ksession.execute( CommandFactory.newBatchExecution( cmds ) ); Cheese stilton = ( Cheese ) bresults.getValue( "stilton" ); QueryResults qresults = ( QueryResults ) bresults.getValue( "cheeses" ); In the above example multiple commands are executed, two of which populate the Execution- Results. The query command defaults to use the same identifier as the query name, but it can also be mapped to a different identifier. All commands support XML and jSON marshalling using XStream, as well as JAXB marshalling. This is covered in section Commands API. 4.2.4.7. 
StatelessKieSession The StatelessKieSession wraps the KieSession, instead of extending it. Its main focus is on the decision service type scenarios. It avoids the need to call dispose(). Stateless sessions do not support iterative insertions and the method call fireAllRules() from Java code; the act of calling execute() is a single-shot method that will internally instantiate a KieSession, add all the user data and execute user commands, call fireAllRules(), and then call dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that's required. The CommandExecutor and BatchExecution are talked about in detail in their own section. KIE 111 Figure 4.24. StatelessKieSession Our simple example shows a stateless session executing a given collection of Java objects using the convenience API. It will iterate the collection, inserting each element in turn. Example 4.24. Simple StatelessKieSession execution with a Collection StatelessKieSession ksession = kbase.newStatelessKieSession(); ksession.execute( collection ); If this was done as a single Command it would be as follows: Example 4.25. Simple StatelessKieSession execution with InsertElements Command ksession.execute( CommandFactory.newInsertElements( collection ) ); If you wanted to insert the collection itself, and the collection's individual elements, then CommandFactory.newInsert(collection) would do the job. Methods of the CommandFactory create the supported commands, all of which can be marshalled using XStream and the BatchExecutionHelper. BatchExecutionHelper provides details on the KIE 112 XML format as well as how to use Drools Pipeline to automate the marshalling of BatchExecution and ExecutionResults. StatelessKieSession supports globals, scoped in a number of ways. We cover the non-com- mand way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways. • The StatelessKieSession method getGlobals() returns a Globals instance which provides access to the session's globals. These are shared for all execution calls. Exercise caution re- garding mutable globals because execution calls can be executing simultaneously in different threads. Example 4.26. Session scoped global StatelessKieSession ksession = kbase.newStatelessKieSession(); // Set a global hbnSession, that can be used for DB interactions in the rules. ksession.setGlobal( "hbnSession", hibernateSession ); // Execute while being able to resolve the "hbnSession" identifier. ksession.execute( collection ); • Using a delegate is another way of global resolution. Assigning a value to a global (with setGlobal(String, Object)) results in the value being stored in an internal collection map- ping identifiers to values. Identifiers in this internal collection will have priority over any supplied delegate. Only if an identifier cannot be found in this internal collection, the delegate global (if any) will be used. • The third way of resolving globals is to have execution scoped globals. Here, a Command to set a global is passed to the CommandExecutor. The CommandExecutor interface also offers the ability to export data via "out" parameters. Inserted facts, globals and query results can all be returned. Example 4.27. 
Out identifiers // Set up a list of commands List cmds = new ArrayList(); cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) ); cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) ); cmds.add( CommandFactory.newQuery( "Get People" "getPeople" ); // Execute the list ExecutionResults results = ksession.execute( CommandFactory.newBatchExecution( cmds ) ); // Retrieve the ArrayList results.getValue( "list1" ); // Retrieve the inserted Person fact results.getValue( "person" ); // Retrieve the query as a QueryResults instance. KIE 113 results.getValue( "Get People" ); 4.2.4.8. Marshalling The KieMarshallers are used to marshal and unmarshal KieSessions. Figure 4.25. KieMarshallers An instance of the KieMarshallers can be retrieved from the KieServices. A simple example is shown below: Example 4.28. Simple Marshaller Example // ksession is the KieSession // kbase is the KieBase ByteArrayOutputStream baos = new ByteArrayOutputStream(); Marshaller marshaller = KieServices.Factory.get().getMarshallers().newMarshaller( kbase ); marshaller.marshall( baos, ksession ); baos.close(); However, with marshalling, you will need more flexibility when dealing with referenced user data. To achieve this use the ObjectMarshallingStrategy interface. Two implementations are provid- ed, but users can implement their own. The two supplied strategies are IdentityMarshallingS- trategy and SerializeMarshallingStrategy. SerializeMarshallingStrategy is the default, as shown in the example above, and it just calls the Serializable or Externalizable methods on a user instance. IdentityMarshallingStrategy creates an integer id for each user object and stores them in a Map, while the id is written to the stream. When unmarshalling it accesses the IdentityMarshallingStrategy map to retrieve the instance. This means that if you use the IdentityMarshallingStrategy, it is stateful for the life of the Marshaller instance and will create KIE 114 ids and keep references to all objects that it attempts to marshal. Below is the code to use an Identity Marshalling Strategy. Example 4.29. IdentityMarshallingStrategy ByteArrayOutputStream baos = new ByteArrayOutputStream(); KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers() ObjectMarshallingStrategy oms = kMarshallers.newIdentityMarshallingStrategy() Marshaller marshaller = kMarshallers.newMarshaller( kbase, new ObjectMarshallingStrategy[]{ oms } ); marshaller.marshall( baos, ksession ); baos.close(); Im most cases, a single strategy is insufficient. For added flexibility, the ObjectMarshallingS- trategyAcceptor interface can be used. This Marshaller has a chain of strategies, and while reading or writing a user object it iterates the strategies asking if they accept responsibility for marshalling the user object. One of the provided implementations is ClassFilterAcceptor. This allows strings and wild cards to be used to match class names. The default is "*.*", so in the above example the Identity Marshalling Strategy is used which has a default "*.*" acceptor. Assuming that we want to serialize all classes except for one given package, where we will use identity lookup, we could do the following: Example 4.30. 
IdentityMarshallingStrategy with Acceptor ByteArrayOutputStream baos = new ByteArrayOutputStream(); KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers() ObjectMarshallingStrategyAcceptor identityAcceptor = kMarshallers.newClassFilterAcceptor( new String[] { "org.domain.pkg1.*" } ); ObjectMarshallingStrategy identityStrategy = kMarshallers.newIdentityMarshallingStrategy( identityAcceptor ); ObjectMarshallingStrategy sms = kMarshallers.newSerializeMarshallingStrategy(); Marshaller marshaller = kMarshallers.newMarshaller( kbase, new ObjectMarshallingStrategy[]{ identityStrategy, sms } ); marshaller.marshall( baos, ksession ); baos.close(); Note that the acceptance checking order is in the natural order of the supplied elements. Also note that if you are using scheduled matches (i.e. some of your rules use timers or calendars) they are marshallable only if, before you use it, you configure your KieSession to use a trackable timer job factory manager as follows: Example 4.31. Configuring a trackable timer job factory manager KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration(); ksconf.setOption(TimerJobFactoryOption.get("trackable")); KIE 115 KSession ksession = kbase.newKieSession(ksconf, null); 4.2.4.9. Persistence and Transactions Longterm out of the box persistence with Java Persistence API (JPA) is possible with Drools. It is necessary to have some implementation of the Java Transaction API (JTA) installed. For development purposes the Bitronix Transaction Manager is suggested, as it's simple to set up and works embedded, but for production use JBoss Transactions is recommended. Example 4.32. Simple example using transactions KieServices kieServices = KieServices.Factory.get(); Environment env = kieServices.newEnvironment(); env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, Persistence.createEntityManagerFactory( "emf-name" ) ); env.set( EnvironmentName.TRANSACTION_MANAGER, TransactionManagerServices.getTransactionManager() ); // KieSessionConfiguration may be null, and a default will be used KieSession ksession = kieServices.getStoreServices().newKieSession( kbase, null, env ); int sessionId = ksession.getId(); UserTransaction ut = (UserTransaction) new InitialContext().lookup( "java:comp/UserTransaction" ); ut.begin(); ksession.insert( data1 ); ksession.insert( data2 ); ksession.startProcess( "process1" ); ut.commit(); To use a JPA, the Environment must be set with both the EntityManagerFactory and the Trans- actionManager. If rollback occurs the ksession state is also rolled back, hence it is possible to continue to use it after a rollback. To load a previously persisted KieSession you'll need the id, as shown below: Example 4.33. Loading a KieSession KieSession ksession = kieServices.getStoreServices().loadKieSession( sessionId, kbase, null, env ); To enable persistence several classes must be added to your persistence.xml, as in the example below: Example 4.34. Configuring JPA KIE 116 org.hibernate.ejb.HibernatePersistence jdbc/BitronixJTADataSource org.drools.persistence.info.SessionInfo org.drools.persistence.info.WorkItemInfo The jdbc JTA data source would have to be configured first. Bitronix provides a number of ways of doing this, and its documentation should be consulted for details. For a quick start, here is the programmatic approach: Example 4.35. 
Configuring JTA DataSource PoolingDataSource ds = new PoolingDataSource(); ds.setUniqueName( "jdbc/BitronixJTADataSource" ); ds.setClassName( "org.h2.jdbcx.JdbcDataSource" ); ds.setMaxPoolSize( 3 ); ds.setAllowLocalTransactions( true ); ds.getDriverProperties().put( "user", "sa" ); ds.getDriverProperties().put( "password", "sasa" ); ds.getDriverProperties().put( "URL", "jdbc:h2:mem:mydb" ); ds.init(); Bitronix also provides a simple embedded JNDI service, ideal for testing. To use it, add a jndi.properties file to your META-INF folder and add the following line to it: Example 4.36. JNDI properties java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory KIE 117 4.2.5. Installation and Deployment Cheat Sheets Figure 4.26. Installation Overview KIE 118 Figure 4.27. Deployment Overview 4.2.6. Build, Deploy and Utilize Examples The best way to learn the new build system is by example. The source project "drools-exam- ples-api" contains a number of examples, and can be found at GitHub: KIE 119 https://github.com/droolsjbpm/drools/tree/6.0.x/drools-examples-api Each example is described below, the order starts with the simplest (most of the options are defaulted) and working its way up to more complex use cases. The Deploy use cases shown below all involve mvn install. Remote deployment of JARs in Maven is well covered in Maven literature. Utilize refers to the initial act of loading the resources and providing access to the KIE runtimes. Where as Run refers to the act of interacting with those runtimes. 4.2.6.1. Default KieSession • Project: default-kesession. • Summary: Empty kmodule.xml KieModule on the classpath that includes all resources in a sin- gle default KieBase. The example shows the retrieval of the default KieSession from the class- path. An empty kmodule.xml will produce a single KieBase that includes all files found under resources path, be it DRL, BPMN2, XLS etc. That single KieBase is the default and also includes a single default KieSession. Default means they can be created without knowing their names. Example 4.37. Author - kmodule.xml Example 4.38. Build and Install - Maven mvn install ks.getKieClasspathContainer() returns the KieContainer that contains the KieBases deployed on- to the environment classpath. kContainer.newKieSession() creates the default KieSession. Notice that you no longer need to look up the KieBase, in order to create the KieSession. The KieSession knows which KieBase it's associated with, and use that, which in this case is the default KieBase. Example 4.39. Utilize and Run - Java KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.getKieClasspathContainer(); KieSession kSession = kContainer.newKieSession(); KIE 120 kSession.setGlobal("out", out); kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?")); kSession.fireAllRules(); 4.2.6.2. Named KieSession • Project: named-kiesession. • Summary: kmodule.xml that has one named KieBase and one named KieSession. The exam- ples shows the retrieval of the named KieSession from the classpath. kmodule.xml will produce a single named KieBase, 'kbase1' that includes all files found under re- sources path, be it DRL, BPMN2, XLS etc. KieSession 'ksession1' is associated with that KieBase and can be created by name. Example 4.40. Author - kmodule.xml Example 4.41. Build and Install - Maven mvn install ks.getKieClasspathContainer() returns the KieContainer that contains the KieBases deployed on- to the environment classpath. 
This time the KieSession uses the name 'ksession1'. You do not need to lookup the KieBase first, as it knows which KieBase 'ksession1' is assocaited with. Example 4.42. Utilize and Run - Java KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.getKieClasspathContainer(); KieSession kSession = kContainer.newKieSession("ksession1"); kSession.setGlobal("out", out); kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?")); kSession.fireAllRules(); KIE 121 4.2.6.3. KieBase Inheritence • Project: kiebase-inclusion. • Summary: 'kmodule.xml' demonstrates that one KieBase can include the resources from an- other KieBase, from another KieModule. In this case it inherits the named KieBase from the 'name-kiesession' example. The included KieBase can be from the current KieModule or any other KieModule that is in the pom.xml dependency list. kmodule.xml will produce a single named KieBase, 'kbase2' that includes all files found under resources path, be it DRL, BPMN2, XLS etc. Further it will include all the resources found from the KieBase 'kbase1', due to the use of the 'includes' attribute. KieSession 'ksession2' is associated with that KieBase and can be created by name. Example 4.43. Author - kmodule.xml This example requires that the previous example, 'named-kiesession', is built and installed to the local Maven repository first. Once installed it can be included as a dependency, using the standard Maven element. Example 4.44. Author - pom.xml 4.0.0 org.drools drools-examples-api 6.0.0/version> kiebase-inclusion Drools API examples - KieBase Inclusion org.drools drools-compiler org.drools named-kiesession 6.0.0 KIE 122 Once 'named-kiesession' is built and installed this example can be built and installed as normal. Again the act of installing, will force the unit tests to run, demonstrating the use case. Example 4.45. Build and Install - Maven mvn install ks.getKieClasspathContainer() returns the KieContainer that contains the KieBases deployed on- to the environment classpath. This time the KieSession uses the name 'ksession2'. You do not need to lookup the KieBase first, as it knows which KieBase 'ksession1' is assocaited with. No- tice two rules fire this time, showing that KieBase 'kbase2' has included the resources from the dependency KieBase 'kbase1'. Example 4.46. Utilize and Run - Java KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.getKieClasspathContainer(); KieSession kSession = kContainer.newKieSession("ksession2"); kSession.setGlobal("out", out); kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?")); kSession.fireAllRules(); kSession.insert(new Message("Dave", "Open the pod bay doors, HAL.")); kSession.fireAllRules(); 4.2.6.4. Multiple KieBases • Project: 'multiple-kbases. • Summary: Demonstrates that the 'kmodule.xml' can contain any number of KieBase or KieSes- sion declarations. Introduces the 'packages' attribute to select the folders for the resources to be included in the KieBase. kmodule.xml produces 6 different named KieBases. 'kbase1' includes all resources from the KieModule. The other KieBases include resources from other selected folders, via the 'packages' attribute. Note the use of wildcard '*', to select this package and all packages below it. Example 4.47. Author - kmodule.xml KIE 123 Example 4.48. 
Build and Install - Maven mvn install Only part of the example is included below, as there is a test method per KieSession, but each one is a repetition of the other, with different list expectations. Example 4.49. Utilize and Run - Java @Test public void testSimpleKieBase() { List list = useKieSession("ksession1"); // no packages imported means import everything assertEquals(4, list.size()); assertTrue( list.containsAll( asList(0, 1, 2, 3) ) ); } //.. other tests for ksession2 to ksession6 here private List useKieSession(String name) { KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.getKieClasspathContainer(); KieSession kSession = kContainer.newKieSession(name); List list = new ArrayList(); kSession.setGlobal("list", list); kSession.insert(1); KIE 124 kSession.fireAllRules(); return list; } 4.2.6.5. KieContainer from KieRepository • Project: kcontainer-from-repository • Summary: The project does not contain a kmodule.xml, nor does the pom.xml have any depen- dencies for other KieModules. Instead the Java code demonstrates the loading of a dynamic KieModule from a Maven repository. The pom.xml must include kie-ci as a depdency, to ensure Maven is available at runtime. As this uses Maven under the hood you can also use the standard Maven settings.xml file. Example 4.50. Author - pom.xml 4.0.0 org.drools drools-examples-api 6.0.0 kiecontainer-from-kierepo Drools API examples - KieContainer from KieRepo org.kie kie-ci Example 4.51. Build and Install - Maven mvn install In the previous examples the classpath KieContainer used. This example creates a dynamic KieContainer as specified by the ReleaseId. The ReleaseId uses Maven conventions for group id, artifact id and version. It also obeys LATEST and SNAPSHOT for versions. KIE 125 Example 4.52. Utilize and Run - Java KieServices ks = KieServices.Factory.get(); // Install example1 in the local Maven repo before to do this KieContainer kContainer = ks.newKieContainer(ks.newReleaseId("org.drools", "named- kiesession", "6.0.0-SNAPSHOT")); KieSession kSession = kContainer.newKieSession("ksession1"); kSession.setGlobal("out", out); Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?"); kSession.insert(msg1); kSession.fireAllRules(); 4.2.6.6. Default KieSession from File • Project: default-kiesession-from-file • Summary: Dynamic KieModules can also be loaded from any Resource location. The loaded KieModule provides default KieBase and KieSession definitions. No kmodue.xml file exists. The project 'default-kiesession' must be built first, so that the resulting JAR, in the target folder, can be referenced as a File. Example 4.53. Build and Install - Maven mvn install Any KieModule can be loaded from a Resource location and added to the KieRepository. Once deployed in the KieRepository it can be resolved via its ReleaseId. Note neither Maven or kie-ci are needed here. It will not set up a transitive dependency parent classloader. Example 4.54. Utilize and Run - Java KieServices ks = KieServices.Factory.get(); KieRepository kr = ks.getRepository(); KieModule kModule = kr.addKieModule(ks.getResources().newFileSystemResource(getFile("default- kiesession"))); KieContainer kContainer = ks.newKieContainer(kModule.getReleaseId()); KieSession kSession = kContainer.newKieSession(); kSession.setGlobal("out", out); Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?"); KIE 126 kSession.insert(msg1); kSession.fireAllRules(); 4.2.6.7. 
Named KieSession from File • Project: named-kiesession-from-file • Summary: Dynamic KieModules can also be loaded from any Resource location. The loaded KieModule provides named KieBase and KieSession definitions. No kmodue.xml file exists. The project 'named-kiesession' must be built first, so that the resulting JAR, in the target folder, can be referenced as a File. Example 4.55. Build and Install - Maven mvn install Any KieModule can be loaded from a Resource location and added to the KieRepository. Once in the KieRepository it can be resolved via its ReleaseId. Note neither Maven or kie-ci are needed here. It will not setup a transitive dependency parent classloader. Example 4.56. Utilize and Run - Java KieServices ks = KieServices.Factory.get(); KieRepository kr = ks.getRepository(); KieModule kModule = kr.addKieModule(ks.getResources().newFileSystemResource(getFile("named- kiesession"))); KieContainer kContainer = ks.newKieContainer(kModule.getReleaseId()); KieSession kSession = kContainer.newKieSession("ksession1"); kSession.setGlobal("out", out); Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?"); kSession.insert(msg1); kSession.fireAllRules(); 4.2.6.8. KieModule with Dependent KieModule • Project: kie-module-form-multiple-files • Summary: Programmatically provide the list of dependant KieModules, without using Maven to resolve anything. KIE 127 No kmodue.xml file exists. The projects 'named-kiesession' and 'kiebase-include' must be built first, so that the resulting JARs, in the target folders, can be referenced as Files. Example 4.57. Build and Install - Maven mvn install Creates two resources. One is for the main KieModule 'exRes1' the other is for the dependency 'exRes2'. Even though kie-ci is not present and thus Maven is not available to resolve the depen- dencies, this shows how you can manually specify the dependent KieModules, for the vararg. Example 4.58. Utilize and Run - Java KieServices ks = KieServices.Factory.get(); KieRepository kr = ks.getRepository(); Resource ex1Res = ks.getResources().newFileSystemResource(getFile("kiebase-inclusion")); Resource ex2Res = ks.getResources().newFileSystemResource(getFile("named-kiesession")); KieModule kModule = kr.addKieModule(ex1Res, ex2Res); KieContainer kContainer = ks.newKieContainer(kModule.getReleaseId()); KieSession kSession = kContainer.newKieSession("ksession2"); kSession.setGlobal("out", out); Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?"); kSession.insert(msg1); kSession.fireAllRules(); Object msg2 = createMessage(kContainer, "Dave", "Open the pod bay doors, HAL."); kSession.insert(msg2); kSession.fireAllRules(); 4.2.6.9. Programmaticaly build a Simple KieModule with Defaults • Project: kiemoduelmodel-example • Summary: Programmaticaly buid a KieModule from just a single file. The POM and models are all defaulted. This is the quickest out of the box approach, but should not be added to a Maven repository. Example 4.59. Build and Install - Maven mvn install This programmatically builds a KieModule. It populates the model that represents the ReleaseId and kmodule.xml, and it adds the relevant resources. A pom.xml is generated from the ReleaseId. KIE 128 Example 4.60. 
Utilize and Run - Java KieServices ks = KieServices.Factory.get(); KieRepository kr = ks.getRepository(); KieFileSystem kfs = ks.newKieFileSystem(); kfs.write("src/main/resources/org/kie/example5/HAL5.drl", getRule()); KieBuilder kb = ks.newKieBuilder(kfs); kb.buildAll(); // kieModule is automatically deployed to KieRepository if successfully built. if (kb.getResults().hasMessages(Level.ERROR)) { throw new RuntimeException("Build Errors:\n" + kb.getResults().toString()); } KieContainer kContainer = ks.newKieContainer(kr.getDefaultReleaseId()); KieSession kSession = kContainer.newKieSession(); kSession.setGlobal("out", out); kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?")); kSession.fireAllRules(); 4.2.6.10. Programmaticaly build a KieModule using Meta Models • Project: kiemoduelmodel-example • Summary: Programmaticaly build a KieModule, by creating its kmodule.xml meta model re- sources. Example 4.61. Build and Install - Maven mvn install This programmatically builds a KieModule. It populates the model that represents the ReleaseId and kmodule.xml, as well as add the relevant resources. A pom.xml is generated from the Re- leaseId. Example 4.62. Utilize and Run - Java KieServices ks = KieServices.Factory.get(); KieFileSystem kfs = ks.newKieFileSystem(); Resource ex1Res = ks.getResources().newFileSystemResource(getFile("named-kiesession")); Resource ex2Res = ks.getResources().newFileSystemResource(getFile("kiebase-inclusion")); ReleaseId rid = ks.newReleaseId("org.drools", "kiemodulemodel-example", "6.0.0-SNAPSHOT"); kfs.generateAndWritePomXML(rid); KIE 129 KieModuleModel kModuleModel = ks.newKieModuleModel(); kModuleModel.newKieBaseModel("kiemodulemodel") .addInclude("kiebase1") .addInclude("kiebase2") .newKieSessionModel("ksession6"); kfs.writeKModuleXML(kModuleModel.toXML()); kfs.write("src/main/resources/kiemodulemodel/HAL6.drl", getRule()); KieBuilder kb = ks.newKieBuilder(kfs); kb.setDependencies(ex1Res, ex2Res); kb.buildAll(); // kieModule is automatically deployed to KieRepository if successfully built. if (kb.getResults().hasMessages(Level.ERROR)) { throw new RuntimeException("Build Errors:\n" + kb.getResults().toString()); } KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession("ksession6"); kSession.setGlobal("out", out); Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?"); kSession.insert(msg1); kSession.fireAllRules(); Object msg2 = createMessage(kContainer, "Dave", "Open the pod bay doors, HAL."); kSession.insert(msg2); kSession.fireAllRules(); Object msg3 = createMessage(kContainer, "Dave", "What's the problem?"); kSession.insert(msg3); kSession.fireAllRules(); 4.3. Security 4.3.1. Security Manager The KIE engine is a platform for the modelling and execution of business behavior, using a mul- titude of declarative abstractions and metaphores, like rules, processes, decision tables and etc. Many times, the authoring of these metaphores is done by third party groups, be it a different group inside the same company, a group from a partner company, or even anonymous third parties on the internet. Rules and Processes are designed to execute arbitrary code in order to do their job, but in such cases it might be necessary to constrain what they can do. For instance, it is unlikely a rule should be allowed to create a classloader (what could open the system to an attack) and certainly it should not be allowed to make a call to System.exit(). 
The Java Platform provides a comprehensive and well defined security framework that allows users to define policies for what a system can do. The KIE platform leverages that framework and allows application developers to define a specific policy to be applied to any execution of user provided code, be it in rules, processes, work item handlers, etc.

4.3.1.1. How to define a KIE Policy

Rules and processes can run with very restricted permissions, but the engine itself needs to perform many complex operations in order to work. Examples are: it needs to create classloaders, read system properties, access the file system, etc. Once a security manager is installed, though, it will apply restrictions to all the code executing in the JVM according to the defined policy. For that reason, KIE allows the user to define two different policy files: one for the engine itself and one for the assets deployed into and executed by the engine. One easy way to set up the environment is to give the engine itself a very permissive policy, while providing a constrained policy for rules and processes.

Policy files follow the standard policy file syntax as described in the Java documentation. For more details, see:
http://docs.oracle.com/javase/6/docs/technotes/guides/security/PolicyFiles.html#FileSyntax

A permissive policy file for the engine can look like the following:

Example 4.63. A sample engine.policy file
grant {
    permission java.security.AllPermission;
}

An example security policy for rules could be:

Example 4.64. A sample rules.policy file
grant {
    permission java.util.PropertyPermission "*", "read";
    permission java.lang.RuntimePermission "accessDeclaredMembers";
}

Please note that depending on what the rules and processes are supposed to do, many more permissions might need to be granted, like accessing files in the filesystem, databases, etc.

In order to use these policy files, all that is necessary is to execute the application with these files as parameters to the JVM. Three parameters are required:

Table 4.3. Parameters
Parameter                     Meaning
-Djava.security.manager       Enables the security manager
-Djava.security.policy=       Defines the global policy file to be applied to the whole application, including the engine
-Dkie.security.policy=        Defines the policy file to be applied to rules and processes

For instance:

java -Djava.security.manager -Djava.security.policy=global.policy -Dkie.security.policy=rules.policy foo.bar.MyApp

Note: When executing the engine inside a container, use your container's documentation to find out how to configure the Security Manager and how to define the global security policy. Define the kie security policy as described above and set the kie.security.policy system property in order to configure the engine to use it.

Note: Unless a Security Manager is configured, the kie.security.policy will be ignored.

Note: A Security Manager has a high performance impact in the JVM. Applications with strict performance requirements are strongly discouraged from using a Security Manager. An alternative is the use of other security procedures, like the auditing of rules/processes before testing and deployment, to prevent malicious code from being deployed to the environment.

Part III. Drools Runtime and Language

Drools is a powerful Hybrid Reasoning System.

Chapter 5. Hybrid Reasoning

5.1. Artificial Intelligence

5.1.1.
A Little History

Over the last few decades artificial intelligence (AI) became an unpopular term, with the well-known "AI Winter" [http://en.wikipedia.org/wiki/AI_winter]. There were large boasts from scientists and engineers looking for funding, which never lived up to expectations, resulting in many failed projects. Thinking Machines Corporation [http://en.wikipedia.org/wiki/Thinking_Machines_Corporation] and the 5th Generation Computer [http://en.wikipedia.org/wiki/Fifth-generation_computer] (5GP) project probably best exemplify the problems of the time.

Thinking Machines Corporation was one of the leading AI firms in 1990, with sales of nearly $65 million. Here is a quote from its brochure:

"Some day we will build a thinking machine. It will be a truly intelligent machine. One that can see and hear and speak. A machine that will be proud of us."

Yet five years later it filed for bankruptcy protection under Chapter 11. The site inc.com has a fascinating article titled "The Rise and Fall of Thinking Machines" [http://www.inc.com/magazine/19950915/2622.html]. The article covers the growth of the industry and how a cosy relationship between Thinking Machines and DARPA [http://en.wikipedia.org/wiki/DARPA] over-heated the market, to the point of collapse. It explains how and why commerce moved away from AI and towards more practical number-crunching super computers.

The 5th Generation Computer project was a USD 400 million project in Japan to build a next generation computer. Valves (or tubes) were the first generation, transistors the second, integrated circuits the third and finally microprocessors the fourth. The fifth was intended to be a machine capable of effective Artificial Intelligence. The project spurred an "arms" race with the UK and USA that caused much of the AI bubble. The 5GP would provide massive multi-CPU parallel processing hardware along with powerful knowledge representation and reasoning software via Prolog, a type of expert system. By 1992 the project was considered a failure and cancelled. It was the largest and most visible commercial venture for Prolog, and many of the failures are pinned on the problems of trying to run a logic based programming language concurrently on multi-CPU hardware with effective results. Some believe that the failure of the 5GP project tainted Prolog and relegated it to academia; see "Whatever Happened to Prolog" [http://www.dvorak.org/blog/whatever-happened-to-prolog/] by John C. Dvorak.

However, while research funding dried up and the term AI became less used, many green shoots were planted and continued more quietly under discipline specific names: cognitive systems, machine learning, intelligent systems, knowledge representation and reasoning. Offshoots of these then made their way into commercial systems, such as expert systems in the Business Rules Management System (BRMS) market.

Imperative, system based languages such as C, C++, Java and C#/.Net have dominated the last 20 years, enabled by the practicality of the languages and their ability to run with good performance on commodity hardware. However, many believe there is a renaissance underway in the field of AI, spurred by advances in hardware capabilities and AI research. In 2005 Heather Havenstein authored "Spring comes to AI winter" [http://www.computerworld.com/s/article/99691/Spring_comes_to_AI_winter], which outlines a case for this resurgence.
Russell and Norvig dedicate several pages to the factors that allowed the industry to overcome its problems and the research that came about as a result:

Recent years have seen a revolution in both the content and the methodology of work in artificial intelligence. It is now more common to build on existing theories than to propose brand-new ones, to base claims on rigorous theorems or hard experimental evidence rather than on intuition, and to show relevance to real-world applications rather than toy examples.
—Artificial Intelligence: A Modern Approach

Computer vision, neural networks, machine learning and knowledge representation and reasoning (KRR) have made great strides towards becoming practical in commercial environments. For example, vision-based systems can now fully map out and navigate their environments with strong recognition skills. As a result we now have self-driving cars about to enter the commercial market. Ontological research, based around description logic, has provided very rich semantics to represent our world. Algorithms such as the tableaux algorithm have made it possible to use those rich semantics effectively in large complex ontologies. Early KRR systems, like Prolog in 5GP, were dogged by limited semantic capabilities and memory restrictions on the size of those ontologies.

5.1.2. Knowledge Representation and Reasoning

A Little History talks about AI as a broader subject and touches on Knowledge Representation and Reasoning (KRR) and also Expert Systems; I'll come back to Expert Systems later.

KRR is about how we represent our knowledge in symbolic form, i.e. how we describe something. Reasoning is about how we go about the act of thinking using this knowledge. System based object-oriented languages, like C++, Java and C#, have data definitions called classes for describing the composition and behaviour of modeled entities. In Java we call exemplars of these described things beans or instances. However, those classification systems are limited in order to ensure computational efficiency. Over the years researchers have developed increasingly sophisticated ways to represent our world. Many of you may already have heard of OWL (Web Ontology Language). There is always a gap between what can be theoretically represented and what can be used computationally in a practical, timely manner, which is why OWL has different sub-languages from Lite to Full. It is not believed that any reasoning system can support OWL Full. However, algorithmic advances continue to narrow that gap and improve the expressiveness available to reasoning engines.

There are also many approaches to how these systems go about thinking. You may have heard discussions comparing the merits of forward chaining, which is reactive and data driven, with backward chaining, which is passive and query driven. Many other types of reasoning techniques exist, each of which enlarges the scope of the problems we can tackle declaratively. To list just a few: imperfect reasoning (fuzzy logic, certainty factors), defeasible logic, belief systems, temporal reasoning and correlation. You don't need to understand all these terms to understand and use Drools. They are just there to give an idea of the range of scope of research topics, which is actually far more extensive, and continues to grow as researchers push new boundaries.

KRR is often referred to as the core of Artificial Intelligence.
Even when using biological approaches like neural networks, which model the brain and are more about pattern recognition than thinking, they still build on KRR theory. My first endeavours with Drools were engineering oriented, as I had no formal training or understanding of KRR. Learning KRR has allowed me to get a much wider theoretical background, allowing me to better understand both what I've done and where I'm going, as it underpins nearly all of the theoretical side of our Drools R&D. It really is a vast and fascinating subject that will pay dividends for those who take the time to learn. I know it did and still does for me. Brachman and Levesque have written a seminal piece of work, called "Knowledge Representation and Reasoning", that is a must read for anyone wanting to build strong foundations. I would also recommend the Russell and Norvig book "Artificial Intelligence, a modern approach", which also covers KRR.

5.1.3. Rule Engines and Production Rule Systems (PRS)

We've now covered a brief history of AI and learnt that the core of AI is formed around KRR. We've shown that KRR is a vast and fascinating subject which forms the bulk of the theory driving Drools R&D.

The rule engine is the computer program that delivers KRR functionality to the developer. At a high level it has three components:

• Ontology
• Rules
• Data

As previously mentioned, the ontology is the representation model we use for our "things". It could use records or Java classes or full-blown OWL based ontologies. The rules perform the reasoning, i.e., they facilitate "thinking". The distinction between rules and ontologies blurs a little with OWL based ontologies, whose richness is rule based.

The term "rules engine" is quite ambiguous in that it can be any system that uses rules, in any form, that can be applied to data to produce outcomes. This includes simple systems like form validation and dynamic expression engines. The book "How to Build a Business Rules Engine" (2004) by Malcolm Chisholm exemplifies this ambiguity. The book is actually about how to build and alter a database schema to hold validation rules. The book then shows how to generate Visual Basic code from those validation rules to validate data entry. While perfectly valid, this is very different to what we are talking about.

Drools started life as a specific type of rule engine called a Production Rule System (PRS) and was based around the Rete algorithm (usually pronounced as two syllables, e.g., REH-te or RAY-tay). The Rete algorithm, developed by Charles Forgy in 1974, forms the brain of a Production Rule System and is able to scale to a large number of rules and facts. A Production Rule is a two-part structure: the engine matches facts and data against Production Rules - also called Productions or just Rules - to infer conclusions which result in actions.

when
    <conditions>
then
    <actions>;

The process of matching the new or existing facts against Production Rules is called pattern matching, which is performed by the inference engine. Actions execute in response to changes in data, like a database trigger; we say this is a data driven approach to reasoning. The actions themselves can change data, which in turn could match against other rules causing them to fire; this is referred to as forward chaining.

Drools 5.x implements and extends the Rete algorithm. This extended Rete algorithm is named ReteOO, signifying that Drools has an enhanced and optimized implementation of the Rete algorithm for object oriented systems.
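To ground the when/then structure described above in concrete syntax, here is a minimal, purely illustrative DRL rule; the Customer fact and its fields are assumptions made for this sketch, not taken from the examples in this manual:

rule "Flag big spenders"                          // illustrative production rule
when
    $c : Customer( totalPurchases > 1000 )        // conditions: a pattern matched against inserted facts
then
    $c.setVip( true );                            // actions: executed when the conditions match
    update( $c );                                 // tell the engine the fact changed, enabling forward chaining
end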
Other Rete based engines also have marketing terms for their proprietary enhancements to Rete, like RetePlus and Rete III. The most common enhancements are covered in Robert B. Doorenbos' thesis "Production Matching for Large Learning Systems" (1995), which presents Rete/UL. Drools 6.x introduces a new lazy algorithm named PHREAK, which is covered in more detail in the PHREAK algorithm section.

The Rules are stored in the Production Memory and the facts that the Inference Engine matches against are kept in the Working Memory. Facts are asserted into the Working Memory where they may then be modified or retracted. A system with a large number of rules and facts may result in many rules being true for the same fact assertion; these rules are said to be in conflict. The Agenda manages the execution order of these conflicting rules using a Conflict Resolution strategy.

Figure 5.1. High-level View of a Production Rule System

5.1.4. Hybrid Reasoning Systems (HRS)

You may have read discussions comparing the merits of forward chaining (reactive and data driven) or backward chaining (passive query). Here is a quick explanation of these two main types of reasoning.

Forward chaining is "data-driven" and thus reactionary, with facts being asserted into working memory, which results in one or more rules being concurrently true and scheduled for execution by the Agenda. In short, we start with a fact, it propagates through the rules, and we end in a conclusion.

Figure 5.2. Forward Chaining

Backward chaining is "goal-driven", meaning that we start with a conclusion which the engine tries to satisfy. If it can't, then it searches for conclusions that it can satisfy. These are known as subgoals, which help satisfy some unknown part of the current goal. It continues this process until either the initial conclusion is proven or there are no more subgoals. Prolog is an example of a Backward Chaining engine. Drools can also do backward chaining, which we refer to as derivation queries (a small illustrative sketch is given at the end of this section).

Figure 5.3. Backward Chaining

Historically you would have to make a choice between systems like OPS5 (forward) or Prolog (backward). Nowadays many modern systems provide both types of reasoning capabilities. There are also many other types of reasoning techniques, each of which enlarges the scope of the problems we can tackle declaratively. To list just a few: imperfect reasoning (fuzzy logic, certainty factors), defeasible logic, belief systems, temporal reasoning and correlation. Modern systems are merging these capabilities, and others not listed, to create hybrid reasoning systems (HRS).

While Drools started out as a PRS, 5.x introduced Prolog style backward chaining reasoning as well as some functional programming styles. For this reason we now prefer the term Hybrid Reasoning System when describing Drools.

Drools currently provides crisp reasoning, but imperfect reasoning is almost ready. Initially this will be imperfect reasoning with fuzzy logic; later we'll add support for other types of uncertainty. Work is also under way to bring OWL based ontological reasoning, which will integrate with our traits system. We also continue to improve our functional programming capabilities.
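As promised above, here is a small illustrative sketch of a derivation query. The Location type, its fields and the query name follow the style of the Drools backward chaining examples but are assumptions here, not an excerpt from this section:

declare Location
    thing : String
    location : String
end

// Recursive query: x is contained in y either directly or through intermediate locations.
query isContainedIn( String x, String y )
    Location( x, y; )                                         // base case: direct containment
    or
    ( Location( z, y; ) and isContainedIn( x, z; ) )          // recursive subgoal
end

The engine answers such a query goal-first; from Java the results would typically be obtained with ksession.getQueryResults( "isContainedIn", "key", "office" ), where the argument values are again only illustrative.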
5.1.5. Expert Systems

You will often hear the term expert systems used to refer to production rule systems or Prolog-like systems. While this is normally acceptable, it's technically incorrect, as these are frameworks with which to build expert systems, rather than expert systems themselves. It becomes an expert system once there is an ontological model to represent the domain and there are facilities for knowledge acquisition and explanation. Mycin is the most famous expert system, built during the 70s. It is still heavily covered in academic literature, such as the recommended book "Expert Systems" by Peter Jackson.

Figure 5.4. Early History of Expert Systems

5.1.6. Recommended Reading

General AI, KRR and Expert System Books

For those wanting to get a strong theoretical background in KRR and expert systems, I'd strongly recommend the following books. "Artificial Intelligence: A Modern Approach" is a must have for anyone's bookshelf.

• Introduction to Expert Systems, by Peter Jackson
• Expert Systems: Principles and Programming, by Joseph C. Giarratano and Gary D. Riley
• Knowledge Representation and Reasoning, by Ronald J. Brachman and Hector J. Levesque
• Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig

Figure 5.5. Recommended Reading

Papers

Here are some recommended papers that cover interesting areas in rule engine research:

• Production Matching for Large Learning Systems: Rete/UL, by Robert B. Doorenbos (1995)
• Advances In Rete Pattern Matching, by Marshall Schor, Timothy P. Daly, Ho Soo Lee and Beth R. Tibbitts (AAAI 1986)
• Collection-Oriented Match, by Anurag Acharya and Milind Tambe (1993)
• The Leaps Algorithm, by Don Batory (1990)
• Gator: An Optimized Discrimination Network for Active Database Rule Condition Testing, by Eric Hanson and Mohammed S. Hasan (1993)

Drools Books

There are currently three Drools books, all from Packt Publishing.

• JBoss Drools Business Rules, by Paul Browne
• Drools JBoss Rules 5.0 Developers Guide, by Michal Bali
• Drools Developer's Cookbook, by Lucas Amador

Figure 5.6. Recommended Reading

5.2. Rete Algorithm

The Rete algorithm was invented by Dr. Charles Forgy and documented in his PhD thesis in 1978-79. A simplified version of the paper was published in 1982 (http://citeseer.ist.psu.edu/context/505087/0). The Latin word "rete" means "net" or "network". The Rete algorithm can be broken into two parts: rule compilation and runtime execution.

The compilation algorithm describes how the Rules in the Production Memory are processed to generate an efficient discrimination network. In non-technical terms, a discrimination network is used to filter data as it propagates through the network. The nodes at the top of the network would have many matches, and as we go down the network, there would be fewer matches. At the very bottom of the network are the terminal nodes. In Dr. Forgy's 1982 paper, he described 4 basic nodes: root, 1-input, 2-input and terminal.

Figure 5.7. Rete Nodes

The root node is where all objects enter the network. From there, each object immediately goes to the ObjectTypeNode. The purpose of the ObjectTypeNode is to make sure the engine doesn't do more work than it needs to. For example, say we have 2 objects: Account and Order. If the rule engine tried to evaluate every single node against every object, it would waste a lot of cycles. To make things efficient, the engine should only pass the object to the nodes that match the object type. The easiest way to do this is to create an ObjectTypeNode and have all 1-input and 2-input nodes descend from it.
This way, if an application asserts a new Account, it won't propagate to the nodes for the Order object. In Drools, when an object is asserted it retrieves a list of valid ObjectTypeNodes via a lookup in a HashMap from the object's Class; if this list doesn't exist it scans all the ObjectTypeNodes finding valid matches, which it caches in the list. This enables Drools to match against any Class type that matches with an instanceof check.

Figure 5.8. ObjectTypeNodes

ObjectTypeNodes can propagate to AlphaNodes, LeftInputAdapterNodes and BetaNodes. AlphaNodes are used to evaluate literal conditions. Although the 1982 paper only covers equality conditions, many RETE implementations support other operations. For example, Account.name == "Mr Trout" is a literal condition. When a rule has multiple literal conditions for a single object type, they are linked together. This means that if an application asserts an Account object, it must first satisfy the first literal condition before it can proceed to the next AlphaNode. In Dr. Forgy's paper, he refers to these as IntraElement conditions. The following diagram shows the AlphaNode combinations for Cheese( name == "cheddar", strength == "strong" ):

Figure 5.9. AlphaNodes

Drools extends Rete by optimizing the propagation from ObjectTypeNode to AlphaNode using hashing. Each time an AlphaNode is added to an ObjectTypeNode it adds the literal value as a key to the HashMap with the AlphaNode as the value. When a new instance enters the ObjectTypeNode, rather than propagating to each AlphaNode, it can instead retrieve the correct AlphaNode from the HashMap, thereby avoiding unnecessary literal checks.

There are two two-input nodes, JoinNode and NotNode, and both are types of BetaNodes. BetaNodes are used to compare 2 objects, and their fields, to each other. The objects may be the same or different types. By convention we refer to the two inputs as left and right. The left input for a BetaNode is generally a list of objects; in Drools this is a Tuple. The right input is a single object. Two Nodes can be used to implement 'exists' checks. BetaNodes also have memory. The left input is called the Beta Memory and remembers all incoming tuples. The right input is called the Alpha Memory and remembers all incoming objects. Drools extends Rete by performing indexing on the BetaNodes. For instance, if we know that a BetaNode is performing a check on a String field, as each object enters we can do a hash lookup on that String value. This means that when facts enter from the opposite side, instead of iterating over all the facts to find valid joins, we do a lookup returning potentially valid candidates. At any point a valid join is found the Tuple is joined with the Object, which is referred to as a partial match, and then propagated to the next node.

Figure 5.10. JoinNode

To enable the first Object, in the above case Cheese, to enter the network we use a LeftInputAdapterNode - this takes an Object as an input and propagates a single Object Tuple.

Terminal nodes are used to indicate a single rule having matched all its conditions; at this point we say the rule has a full match. A rule with an 'or' conditional disjunctive connective results in subrule generation for each possible logical branch; thus one rule can have multiple terminal nodes.
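Relating these node types back to DRL, the following purely illustrative rule reuses the Cheese and Person types from the figures but is not itself taken from the manual. Its two literal constraints become AlphaNodes, while the variable constraint becomes a JoinNode whose String join key can be hash-indexed:

rule "Strong cheddar lovers"                                     // illustrative only
when
    // literal constraints -> AlphaNodes hashed under the Cheese ObjectTypeNode
    Cheese( $cheddar : name == "cheddar", strength == "strong" )
    // variable constraint -> BetaNode (JoinNode); the String field can be indexed
    $person : Person( favouriteCheese == $cheddar )
then
    System.out.println( $person.getName() + " wants strong cheddar" );
end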
Drools also performs node sharing. Many rules repeat the same patterns, and node sharing allows us to collapse those patterns so that they don't have to be re-evaluated for every single instance. The following two rules share the first pattern, but not the last:

rule
when
    Cheese( $cheddar : name == "cheddar" )
    $person : Person( favouriteCheese == $cheddar )
then
    System.out.println( $person.getName() + " likes cheddar" );
end

rule
when
    Cheese( $cheddar : name == "cheddar" )
    $person : Person( favouriteCheese != $cheddar )
then
    System.out.println( $person.getName() + " does not like cheddar" );
end

As you can see below, the compiled Rete network shows that the alpha node is shared, but the beta nodes are not. Each beta node has its own TerminalNode. Had the second pattern been the same it would have also been shared.

Figure 5.11. Node Sharing

5.3. ReteOO Algorithm

ReteOO was developed throughout the 3, 4 and 5 series releases. It takes the RETE algorithm and applies well known enhancements, all of which are covered by existing academic literature:

Node sharing
• Sharing is applied to both the alpha and beta network. The beta network sharing is always from the root pattern.

Alpha indexing
• Alpha Nodes with many children use a hash lookup mechanism, to avoid testing each result.

Beta indexing
• Join, Not and Exists nodes index their memories using a hash. This reduces the join attempts for equality checks. Recently, range indexing was added to Not and Exists.

Tree based graphs
• Join matches did not contain any references to their parent or children matches. Deletions would have to recalculate all join matches again, which involves recreating all those join match objects, to be able to find the parts of the network where the tuples should be deleted. This is called symmetrical propagation. A tree graph provides parent and children references, so a deletion is just a matter of following those references. This is asymmetrical propagation. The result is faster, has less impact on the GC, and is more robust, because changes in values will not cause memory leaks if they happen without the engine being notified.

Modify-in-place
• Traditional RETE implements a modify as a delete + insert. This causes all join tuples to be GC'd, many of which are recreated again as part of the insert. Modify-in-place instead propagates as a single pass, and every node is inspected.

Property reactive
• Also called "new trigger condition". Allows more fine grained reactivity to updates. A Pattern can react to changes to specific properties and ignore others. This alleviates problems of recursion and also helps with performance (see the sketch after this list).

Sub-networks
• Not, Exists and Accumulate can each have nested conditional elements, which form sub-networks.

Backward Chaining
• Prolog style derivation trees for backward chaining are supported. The implementation is stack based, so it does not have method recursion issues for large graphs.

Lazy Truth Maintenance
• Truth maintenance has a runtime cost, which is incurred whether TMS is used or not. Lazy TMS only turns it on, on first use. Further, it's only turned on for that object type, so other object types do not incur the runtime cost.

Heap based agenda
• The agenda uses a binary heap queue to sort rule matches by salience, rather than any linear search or maintenance approach.

Dynamic Rules
• Rules can be added and removed at runtime, while the engine is still populated with data.
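As a rough sketch of the property reactive enhancement mentioned in the list above, consider the following DRL. The Order type, its fields and the rule are assumptions for illustration, based on the general Drools 6 @propertyReactive feature rather than on this manual's examples:

declare Order
    @propertyReactive        // patterns on Order react only to changes of the properties they constrain
    status   : String
    total    : double
    discount : double
end

rule "Apply discount to open orders"
when
    $o : Order( status == "OPEN", total > 100 )
then
    modify( $o ) { setDiscount( 10.0 ) }   // changing 'discount' does not re-activate this rule
end

Without property reactivity, the modify call would re-evaluate the pattern and could cause the rule to loop; with it, only changes to status or total trigger re-evaluation.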
5.4. PHREAK Algorithm

Drools 6 introduces a new algorithm that attempts to address some of the core issues of RETE. The algorithm is not a rewrite from scratch and incorporates all of the existing code from ReteOO, and all its enhancements. While PHREAK is an evolution of the RETE algorithm, it is no longer classified as a RETE implementation, in the same way that once an animal evolves beyond a certain point and key characteristics are changed, the animal becomes classified as a new species. There are two key RETE characteristics that strongly identify any derivative strains, regardless of optimizations: it is an eager, data oriented algorithm, where all work is done during the insert, update or delete actions, eagerly producing all partial matches for all rules. PHREAK, in contrast, is characterised as a lazy, goal oriented algorithm, where partial matching is aggressively delayed.

This eagerness of RETE can lead to a lot of churn in large systems, and much wasted work, where wasted work is classified as matching effort that does not result in a rule firing.

PHREAK was heavily inspired by a number of algorithms, including (but not limited to) LEAPS, RETE/UL and Collection-Oriented Match. PHREAK has all the enhancements listed in the ReteOO section. In addition it adds the following set of enhancements, which are explained in more detail in the following paragraphs.

• Three layers of contextual memory: Node, Segment and Rule memories.
• Rule, segment and node based linking.
• Lazy (delayed) rule evaluation.
• Isolated rule evaluation.
• Set oriented propagations.
• Stack based evaluations, with pause and resume.

When the PHREAK engine is started all rules are said to be unlinked; no rule evaluation can happen while rules are unlinked. The insert, update and delete actions are queued before entering the beta network. A simple heuristic, based on the rule most likely to result in firings, is used to select the next rule for evaluation; this delays the evaluation and firing of the other rules. Only once a rule has all right inputs populated will the rule be considered linked in, although no work is yet done. Instead a goal is created that represents the rule and is placed into a priority queue, which is ordered by salience. Each queue itself is associated with an AgendaGroup. Only the active AgendaGroup will inspect its queue, popping the goal for the rule with the highest salience and submitting it for evaluation. So the work done shifts from the insert, update, delete phase to the fireAllRules phase. Only the rule for which the goal was created is evaluated; other potential rule evaluations from those facts are delayed. While individual rules are evaluated, node sharing is still achieved through the process of segmentation, which is explained later.

Each successful join attempt in RETE produces a tuple (or token, or partial match) that will be propagated to the child nodes. For this reason it is characterised as a tuple oriented algorithm. For each child node that it reaches it will attempt to join with the other side of the node, and again each successful join attempt will be propagated straight away. This creates a descent recursion effect, thrashing the network of nodes as it ripples up and down, left and right, from the point of entry into the beta network to all the reachable leaf nodes. PHREAK propagation is set oriented (or collection-oriented), instead of tuple oriented.
For the rule being evaluated it will visit the first node and process all queued insert, update and delete actions. The results are added to a set and the set is propagated to the child node. In the child node all queued insert, update and delete actions are processed, adding the results to the same set. Once finished, that set is propagated to the next child node, and so on until the terminal node is reached. This creates a single pass, pipeline type effect that is isolated to the current rule being evaluated. This creates a batch process effect, which can provide performance advantages for certain rule constructs, such as sub-networks with accumulates. In the future it will lend itself to exploiting multi-core machines in a number of ways.

The Linking and Unlinking uses a layered bit mask system, based on network segmentation. When the rule network is built, segments are created for nodes that are shared by the same set of rules. A rule itself is made up from a path of segments, although if there is no sharing that will be a single segment. A bit-mask offset is assigned to each node in the segment. Also, another bit mask (the layering) is assigned to each segment in the rule's path. When there is at least one input (data propagation) the node's bit is set to on. When each node has its bit set to on, the segment's bit is also set to on. Conversely, if any node's bit is set to off, the segment is then also set to off. If each segment in the rule's path is set to on, the rule is said to be linked in and a goal is created to schedule the rule for evaluation. The same bit-mask technique is also used to track dirty nodes, segments and rules; this allows a rule that is already linked in to be scheduled for evaluation if it has been marked dirty since it was last evaluated.

This ensures that no rule will ever evaluate partial matches if it is impossible for them to result in rule instances, because one of the joins has no data. This is possible in RETE, which will merrily churn away producing partial match attempts for all nodes, even if the last join is empty. While the incremental rule evaluation always starts from the root node, the dirty bit masks are used to allow nodes and segments that are not dirty to be skipped.

Using the existence of at least one item of data per node is a fairly basic heuristic. Future work would attempt to delay the linking even further, using techniques such as arc consistency to determine whether or not matching will result in rule instance firings.

Whereas RETE has just a single unit of memory, the node memory, PHREAK has 3 levels of memory. This allows for much more contextual understanding during evaluation of a Rule.

Figure 5.12. PHREAK 3 Layered memory system

Example 1 shows a single rule, with three patterns: A, B and C. It forms a single segment, with bits 1, 2 and 4 for the nodes. The single segment has a bit offset of 1.

Figure 5.13. Example 1: Single rule, no sharing

Example 2 demonstrates what happens when another rule is added that shares the pattern A. A is placed in its own segment, resulting in two segments per rule. Those two segments form a path, for their respective rules. The first segment is shared by both paths. When A is linked the segment becomes linked; it then iterates over each path the segment is shared by, setting bit 1 to on. If B and C are later turned on, the second segment for path R1 is linked in; this causes bit 2 to be turned on for R1.
With bit 1 and bit 2 set to on for R1, the rule is now linked and a goal is created to schedule the rule for later evaluation and firing.

When a rule is evaluated it is the segments that allow the results of matching to be shared. Each segment has a staging memory to queue all inserts, updates and deletes for that segment. If R1 were to be evaluated it would process A and produce a set of tuples. The algorithm detects that there is a segmentation split and will create peered tuples for each insert, update and delete in the set and add them to R2's staging memory. Those tuples will be merged with any existing staged tuples and wait for R2 to eventually be evaluated.

Figure 5.14. Example 2: Two rules, with sharing

Example 3 adds a third rule and demonstrates what happens when A and B are shared. Only the bits for the segments are shown this time, demonstrating that R4 has 3 segments, R3 has 3 segments and R1 has 2 segments. A and B are shared by R1, R3 and R4, while D is shared by R3 and R4.

Figure 5.15. Example 3: Three rules, with sharing

Sub-networks are formed when a Not, Exists or Accumulate node contains more than one element. In Example 4, "B not( C )" forms the sub-network; note that "not(C)" is a single element and does not require a sub-network, so it is merged inside of the Not node. The sub-network gets its own segment. R1 still has a path of two segments. The sub-network forms another "inner" path. When the sub-network is linked in, it will link in the outer segment.

Figure 5.16. Example 4: Single rule, with sub-network and no sharing

Example 5 shows that the sub-network nodes can be shared by a rule that does not have a sub-network. This results in the sub-network segment being split into two.

Figure 5.17. Example 5: Two rules, one with a sub-network and sharing

Not nodes with constraints and accumulate nodes have special behaviour: they can never unlink a segment, and are always considered to have their bits on.

All rule evaluations are incremental, and will not waste work recomputing matches that have already been produced.

The evaluation algorithm is stack based, instead of using method recursion. Evaluation can be paused and resumed at any time, via the use of a StackEntry to represent the current node being evaluated.

When a rule evaluation reaches a sub-network, a StackEntry is created for the outer path segment and the sub-network segment. The sub-network segment is evaluated first, and when the set reaches the end of the sub-network path it is merged into a staging list for the outer node it feeds into. The previous StackEntry is then resumed, where it can process the results of the sub-network. This has the added benefit that all work is processed in a batch, before propagating to the child node, which is much more efficient for accumulate nodes.

The same stack system can be used for efficient backward chaining. When a rule evaluation reaches a query node it again pauses the current evaluation, by placing it on the stack. The query is then evaluated, which produces a result set that is saved in a memory location for the resumed StackEntry to pick up and propagate to the child node. If the query itself called other queries the process would repeat, with the current query being paused and a new evaluation set up for the current query node.

One final point on performance. One single rule in general will not evaluate any faster with PHREAK than it does with RETE.
For a given rule and the same data set, when using a root context object to enable and disable matching, both algorithms attempt the same number of matches, produce the same number of rule instances, and take roughly the same time, except for the use case with sub-networks and accumulates. PHREAK can, however, be considered more forgiving than RETE for poorly written rule bases, and it degrades more gracefully as the number of rules and their complexity increases. RETE will also churn away producing partial matches for rules that do not have data in all the joins, whereas PHREAK will avoid this. So it's not that PHREAK is faster than RETE, it just won't slow down as much as your system grows :)

AgendaGroups did not help RETE performance, as all rules were evaluated at all times, regardless of the group. The same is true for salience. This is why root context objects are often used, to limit matching attempts. PHREAK only evaluates rules for the active AgendaGroup, and within that group will attempt to avoid evaluation of rules (via salience) that do not result in rule instance firings.

With PHREAK, AgendaGroups and salience now become useful performance tools. Root context objects are no longer needed and are potentially counter productive to performance, as they force the flushing and recreation of matches for rules.

Chapter 6. User Guide

6.1. The Basics

6.1.1. Stateless Knowledge Session

So where do we get started? There are so many use cases and so much functionality in a rule engine such as Drools that it becomes beguiling. Have no fear my intrepid adventurer, the complexity is layered and you can ease yourself in with simple use cases.

A stateless session, not utilising inference, forms the simplest use case. A stateless session can be called like a function, passing it some data and then receiving some results back. Some common use cases for stateless sessions are, but are not limited to:

• Validation
  • Is this person eligible for a mortgage?
• Calculation
  • Compute a mortgage premium.
• Routing and Filtering
  • Filter incoming messages, such as emails, into folders.
  • Send incoming messages to a destination.

So let's start with a very simple example using a driving license application.

public class Applicant {
    private String name;
    private int age;
    private boolean valid;
    // getter and setter methods here
}

Now that we have our data model we can write our first rule. We assume that the application uses rules to reject invalid applications. As this is a simple validation use case we will add a single rule to disqualify any applicant younger than 18.

package com.company.license

rule "Is of valid age"
when
    $a : Applicant( age < 18 )
then
    $a.setValid( false );
end

To make the engine aware of data, so it can be processed against the rules, we have to insert the data, much like with a database. When the Applicant instance is inserted into the engine it is evaluated against the constraints of the rules, in this case just two constraints for one rule. We say two because the type Applicant is the first object type constraint, and age < 18 is the second field constraint. An object type constraint plus its zero or more field constraints is referred to as a pattern. When an inserted instance satisfies both the object type constraint and all the field constraints, it is said to be matched.
The $a is a binding variable which permits us to reference the matched object in the consequence. There its properties can be updated. The dollar character ('$') is optional, but it helps to differentiate variable names from field names. The process of matching patterns against the inserted data is, not surprisingly, often referred to as pattern matching.

To use this rule it is necessary to put it in a Drools file, just a plain text file with a .drl extension, short for "Drools Rule Language". Let's call this file licenseApplication.drl, and store it in a Kie Project. A Kie Project has the structure of a normal Maven project with an additional file (kmodule.xml) defining the KieBases and KieSessions that can be created. This file has to be placed in the resources/META-INF folder of the Maven project, while all the other Drools artifacts, such as the licenseApplication.drl containing the former rule, must be stored in the resources folder or in any other subfolder under it. Since meaningful defaults have been provided for all configuration aspects, the simplest kmodule.xml file can contain just an empty kmodule tag like the following:

<kmodule xmlns="http://www.drools.org/xsd/kmodule"/>

At this point it is possible to create a KieContainer that reads the files to be built from the classpath.

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();

The above code snippet compiles all the DRL files found on the classpath and puts the result of this compilation, a KieModule, in the KieContainer. If there are no errors, we are now ready to create our session from the KieContainer and execute against some data:

StatelessKieSession kSession = kContainer.newStatelessKieSession();
Applicant applicant = new Applicant( "Mr John Smith", 16 );
assertTrue( applicant.isValid() );
kSession.execute( applicant );
assertFalse( applicant.isValid() );

The preceding code executes the data against the rules. Since the applicant is under the age of 18, the application is marked as invalid.

So far we've only used a single instance, but what if we want to use more than one? We can execute against any object implementing Iterable, such as a collection. Let's add another class called Application, which has the date of the application, and we'll also move the boolean valid field to the Application class.

public class Applicant {
    private String name;
    private int age;
    // getter and setter methods here
}

public class Application {
    private Date dateApplied;
    private boolean valid;
    // getter and setter methods here
}

We will also add another rule to validate that the application was made within a period of time.

package com.company.license

rule "Is of valid age"
when
    Applicant( age < 18 )
    $a : Application()
then
    $a.setValid( false );
end

rule "Application was made this year"
when
    $a : Application( dateApplied > "01-jan-2009" )
then
    $a.setValid( false );
end

Unfortunately a Java array does not implement the Iterable interface, so we have to use the JDK converter method Arrays.asList(...). The code shown below executes against an iterable list, where all collection elements are inserted before any matched rules are fired.
StatelessKieSession kSession = kContainer.newStatelessKieSession();
Applicant applicant = new Applicant( "Mr John Smith", 16 );
Application application = new Application();
assertTrue( application.isValid() );
kSession.execute( Arrays.asList( new Object[] { application, applicant } ) );
assertFalse( application.isValid() );

The two execute methods execute(Object object) and execute(Iterable objects) are actually convenience methods for the interface BatchExecutor's method execute(Command command). The KieCommands commands factory, obtainable from the KieServices like all other factories of the KIE API, is used to create commands, so that the following is equivalent to execute(Iterable it):

kSession.execute( kieServices.getCommands().newInsertElements( Arrays.asList( new Object[] { application, applicant } ) ) );

Batch Executor and Command Factory are particularly useful when working with multiple Commands and with output identifiers for obtaining results.

KieCommands kieCommands = kieServices.getCommands();
List<Command> cmds = new ArrayList<Command>();
cmds.add( kieCommands.newInsert( new Person( "Mr John Smith" ), "mrSmith", true, null ) );
cmds.add( kieCommands.newInsert( new Person( "Mr John Doe" ), "mrDoe", true, null ) );
BatchExecutionResults results = kSession.execute( kieCommands.newBatchExecution( cmds ) );
assertEquals( new Person( "Mr John Smith" ), results.getValue( "mrSmith" ) );

CommandFactory supports many other Commands that can be used in the BatchExecutor, like StartProcess, Query, and SetGlobal.

6.1.2. Stateful Knowledge Session

Stateful Sessions are long lived and allow iterative changes over time. Some common use cases for Stateful Sessions are, but are not limited to:

• Monitoring
  • Stock market monitoring and analysis for semi-automatic buying.
• Diagnostics
  • Fault finding, medical diagnostics
• Logistics
  • Parcel tracking and delivery provisioning
• Compliance
  • Validation of legality for market trades.

In contrast to a Stateless Session, the dispose() method must be called afterwards to ensure there are no memory leaks, as the KieBase contains references to Stateful Knowledge Sessions when they are created. Since the Stateful Knowledge Session is the most commonly used session type it is just named KieSession in the KIE API. KieSession also supports the BatchExecutor interface, like StatelessKieSession, the only difference being that the FireAllRules command is not automatically called at the end for a Stateful Session.

We illustrate the monitoring use case with an example for raising a fire alarm. Using just four classes, we represent rooms in a house, each of which has one sprinkler. If a fire starts in a room, we represent that with a single Fire instance.

public class Room {
    private String name;
    // getter and setter methods here
}

public class Sprinkler {
    private Room room;
    private boolean on;
    // getter and setter methods here
}

public class Fire {
    private Room room;
    // getter and setter methods here
}

public class Alarm {
}

In the previous section on Stateless Sessions the concepts of inserting and matching against data were introduced. That example assumed that only a single instance of each object type was ever inserted and thus only used literal constraints. However, a house has many rooms, so rules must express relationships between objects, such as a sprinkler being in a certain room. This is best done by using a binding variable as a constraint in a pattern.
This "join" process results in what is called cross products, which are covered in the next section. When a fire occurs an instance of the Fire class is created, for that room, and inserted into the session. The rule uses a binding on the room field of the Fire object to constrain matching to the sprinkler for that room, which is currently off. When this rule fires and the consequence is executed the sprinkler is turned on. rule "When there is a fire turn on the sprinkler"when Fire($room : room) $sprinkler : Sprinkler( room == $room, on == false )then modify( $sprinkler ) { setOn( true ) }; System.out.println( "Turn on the sprinkler for room " + $room.getName() );end kler" when Fire($room : room) $sprinkler : Sprinkler( room == $room, on == false ) then modify( $sprinkler ) { setOn( true ) User Guide 166 }; System.out.println( "Turn on the sprinkler for room " + $room.getName() ); Whereas the Stateless Session uses standard Java syntax to modify a field, in the above rule we use the modify statement, which acts as a sort of "with" statement. It may contain a series of comma separated Java expressions, i.e., calls to setters of the object selected by the modify statement's control expression. This modifies the data, and makes the engine aware of those changes so it can reason over them once more. This process is called inference, and it's essential for the working of a Stateful Session. Stateless Sessions typically do not use inference, so the engine does not need to be aware of changes to data. Inference can also be turned off explicitly by using the sequential mode. So far we have rules that tell us when matching data exists, but what about when it does not exist? How do we determine that a fire has been extinguished, i.e., that there isn't a Fire object any more? Previously the constraints have been sentences according to Propositional Logic, where the engine is constraining against individual instances. Drools also has support for First Order Logic that allows you to look at sets of data. A pattern under the keyword not matches when something does not exist. The rule given below turns the sprinkler off as soon as the fire in that room has disappeared. rule "When the fire is gone turn off the sprinkler"when $room : Room( ) $sprinkler : Sprinkler( room == $room, on == true ) not Fire( room == $room )then modify( $sprinkler ) { setOn( false ) }; System.out.println( "Turn off the sprinkler for room " + $room.getName() );end kler" when $room : Room( ) $sprinkler : Sprinkler( room == $room, on == true ) not Fire( room == $room ) then modify( $sprinkler ) { setOn( false ) }; System.out.println( "Turn off the sprinkler for room " + $room.getName() ); While there is one sprinkler per room, there is just a single alarm for the building. An Alarm object is created when a fire occurs, but only one Alarm is needed for the entire building, no matter how many fires occur. Previously not was introduced to match the absence of a fact; now we use its complement exists which matches for one or more instances of some category. rule "Raise the alarm when we have one or more fires" when exists Fire() then insert( new Alarm() ); System.out.println( "Raise the alarm" ); end User Guide 167 Likewise, when there are no fires we want to remove the alarm, so the not keyword can be used again. 
rule "Cancel the alarm when all the fires have gone"when not Fire() $alarm : Alarm()then delete( $alarm ); System.out.println( "Cancel the alarm" );end gone" when not Fire() $alarm : Alarm() then delete( $alarm ); System.out.println( "Cancel the alarm" ); Finally there is a general health status message that is printed when the application first starts and after the alarm is removed and all sprinklers have been turned off. rule "Status output when things are ok"when not Alarm() not Sprinkler( on == true ) then System.out.println( "Everything is ok" );end ok"when not Alarm() not Sprinkler( on == true ) then System.out.println( "Everything is ok" As we did in the Stateless Session example, the above rules should be placed in a single DRL file and saved into the resouces folder of your Maven project or any of its subfolder. As before, we can then obtain a KieSession from the KieContainer. The only difference is that this time we create a Stateful Session, whereas before we created a Stateless Session. KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer(); KieSession ksession = kContainer.newKieSession(); With the session created it is now possible to iteratively work with it over time. Four Room objects are created and inserted, as well as one Sprinkler object for each room. At this point the engine has done all of its matching, but no rules have fired yet. Calling ksession.fireAllRules() allows the matched rules to fire, but without a fire that will just produce the health message. String[] names = new String[]{"kitchen", "bedroom", "office", "livingroom"}; Map name2room = new HashMap(); for( String name: names ){ Room room = new Room( name ); name2room.put( name, room ); ksession.insert( room ); Sprinkler sprinkler = new Sprinkler( room ); User Guide 168 ksession.insert( sprinkler ); } ksession.fireAllRules(); > Everything is ok We now create two fires and insert them; this time a reference is kept for the returned FactHandle. A Fact Handle is an internal engine reference to the inserted instance and allows instances to be retracted or modified at a later point in time. With the fires now in the engine, once fireAllRules() is called, the alarm is raised and the respective sprinklers are turned on. Fire kitchenFire = new Fire( name2room.get( "kitchen" ) ); Fire officeFire = new Fire( name2room.get( "office" ) ); FactHandle kitchenFireHandle = ksession.insert( kitchenFire ); FactHandle officeFireHandle = ksession.insert( officeFire ); ksession.fireAllRules(); > Raise the alarm > Turn on the sprinkler for room kitchen > Turn on the sprinkler for room office After a while the fires will be put out and the Fire instances are retracted. This results in the sprinklers being turned off, the alarm being cancelled, and eventually the health message is printed again. ksession.delete( kitchenFireHandle ); ksession.delete( officeFireHandle ); ksession.fireAllRules(); > Cancel the alarm> Turn off the sprinkler for room office> Turn off the sprinkler for room kitchen> Everything is ok alarm> Turn off the sprinkler for room office> Turn off the sprinkler for room kitchen> Everything is Everyone still with me? That wasn't so hard and already I'm hoping you can start to see the value and power of a declarative rule system. User Guide 169 6.1.3. Methods versus Rules People often confuse methods and rules, and new rule users often ask, "How do I call a rule?" 
After the last section, you are now feeling like a rule expert and the answer to that is obvious, but let's summarize the differences nonetheless.

public void helloWorld(Person person) {
    if ( person.getName().equals( "Chuck" ) ) {
        System.out.println( "Hello Chuck" );
    }
}

• Methods are called directly.
• Specific instances are passed.
• One call results in a single execution.

rule "Hello World"
when
    Person( name == "Chuck" )
then
    System.out.println( "Hello Chuck" );
end

• Rules execute by matching against any data as long as it is inserted into the engine.
• Rules can never be called directly.
• Specific instances cannot be passed to a rule.
• Depending on the matches, a rule may fire once or several times, or not at all.

6.1.4. Cross Products

Earlier the term "cross product" was mentioned, which is the result of a join. Imagine for a moment that the data from the fire alarm example were used in combination with the following rule, where there are no field constraints:

rule "Show Sprinklers"
when
    $room : Room()
    $sprinkler : Sprinkler()
then
    System.out.println( "room:" + $room.getName() + " sprinkler:" + $sprinkler.getRoom().getName() );
end

In SQL terms this would be like doing select * from Room, Sprinkler and every row in the Room table would be joined with every row in the Sprinkler table, resulting in the following output:

room:office sprinkler:office
room:office sprinkler:kitchen
room:office sprinkler:livingroom
room:office sprinkler:bedroom
room:kitchen sprinkler:office
room:kitchen sprinkler:kitchen
room:kitchen sprinkler:livingroom
room:kitchen sprinkler:bedroom
room:livingroom sprinkler:office
room:livingroom sprinkler:kitchen
room:livingroom sprinkler:livingroom
room:livingroom sprinkler:bedroom
room:bedroom sprinkler:office
room:bedroom sprinkler:kitchen
room:bedroom sprinkler:livingroom
room:bedroom sprinkler:bedroom

These cross products can obviously become huge, and they may very well contain spurious data. The size of cross products is often the source of performance problems for new rule authors. From this it can be seen that it's always desirable to constrain the cross products, which is done with the variable constraint.

rule
when
    $room : Room()
    $sprinkler : Sprinkler( room == $room )
then
    System.out.println( "room:" + $room.getName() + " sprinkler:" + $sprinkler.getRoom().getName() );
end

This results in just four rows of data, with the correct Sprinkler for each Room. In SQL (actually HQL) the corresponding query would be select * from Room, Sprinkler where Room == Sprinkler.room.

room:office sprinkler:office
room:kitchen sprinkler:kitchen
room:livingroom sprinkler:livingroom
room:bedroom sprinkler:bedroom

6.2. Execution Control

6.2.1. Agenda

The Agenda is a Rete feature. It maintains the set of rules that are able to execute; its job is to schedule that execution in a deterministic order.

During actions on the RuleRuntime, rules may become fully matched and eligible for execution; a single Rule Runtime Action can result in multiple eligible rules. When a rule is fully matched a Rule Match is created, referencing the rule and the matched facts, and placed onto the Agenda. The Agenda controls the execution order of these Matches using a Conflict Resolution strategy.

The engine cycles repeatedly through two phases:

1. Rule Runtime Actions.
6.2. Execution Control

6.2.1. Agenda

The Agenda is a Rete feature. It maintains the set of rules that are able to execute; its job is to schedule that execution in a deterministic order. During actions on the RuleRuntime, rules may become fully matched and eligible for execution; a single Rule Runtime Action can result in multiple eligible rules. When a rule is fully matched a Rule Match is created, referencing the rule and the matched facts, and placed onto the Agenda. The Agenda controls the execution order of these Matches using a Conflict Resolution strategy. The engine cycles repeatedly through two phases:

1. Rule Runtime Actions. This is where most of the work takes place, either in the Consequence (the RHS itself) or the main Java application process. Once the Consequence has finished, or the main Java application process calls fireAllRules(), the engine switches to the Agenda Evaluation phase.

2. Agenda Evaluation. This attempts to select a rule to fire. If no rule is found it exits; otherwise it fires the found rule, switching the phase back to Rule Runtime Actions.

Figure 6.1. Two Phase Execution

The process repeats until the agenda is clear, in which case control returns to the calling application. When Rule Runtime Actions are taking place, no rules are being fired.

6.2.2. Rule Matches and Conflict Sets

6.2.2.1. Cashflow Example

So far the data and the matching process have been simple and small. To mix things up a bit, a new example will be explored that handles cashflow calculations over date periods. The state of the engine will be shown at key stages to help you get a better understanding of what is actually going on under the hood. Three classes will be used, as shown below. This will help us grow our understanding of pattern matching and joins further. We will then use this to illustrate different techniques for execution control.

public class CashFlow {
    private Date   date;
    private double amount;
    private int    type;
    long           accountNo;
    // getter and setter methods here
}

public class Account {
    private long   accountNo;
    private double balance;
    // getter and setter methods here
}

public class AccountPeriod {
    private Date start;
    private Date end;
    // getter and setter methods here
}

By now you already know how to create KieBases and how to instantiate facts to populate the KieSession, so tables will be used to show the state of the inserted data, as it makes things clearer for illustration purposes. The tables below show that a single fact was inserted for the Account. Also inserted are a series of debits and credits as CashFlow objects for that account, extending over two quarters.

Figure 6.2. CashFlows and Account

Two rules can be used to determine the debit and credit for that quarter and update the Account balance. The two rules below constrain the cashflows for an account to a given time period. Notice the "&&", which uses shortcut syntax to avoid repeating the field name twice.

rule "increase balance for credits"
when
    ap : AccountPeriod()
    acc : Account( $accountNo : accountNo )
    CashFlow( type == CREDIT,
              accountNo == $accountNo,
              date >= ap.start && <= ap.end,
              $amount : amount )
then
    acc.balance += $amount;
end

rule "decrease balance for debits"
when
    ap : AccountPeriod()
    acc : Account( $accountNo : accountNo )
    CashFlow( type == DEBIT,
              accountNo == $accountNo,
              date >= ap.start && <= ap.end,
              $amount : amount )
then
    acc.balance -= $amount;
end
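The following minimal sketch shows how the session for this example might be populated from Java before calling fireAllRules(). The no-arg constructors, the setter names and the CREDIT/DEBIT constants are assumptions inferred from the field lists above; the documentation itself only shows the fields and notes that getters and setters exist.

import java.text.SimpleDateFormat;

SimpleDateFormat df = new SimpleDateFormat( "yyyy-MM-dd" );

Account acc = new Account();                 // setters implied by "getter and setter methods here"
acc.setAccountNo( 1 );
acc.setBalance( 0 );
ksession.insert( acc );

CashFlow credit = new CashFlow();
credit.setDate( df.parse( "2015-01-12" ) );
credit.setAmount( 100 );
credit.setType( CashFlow.CREDIT );           // CREDIT/DEBIT constants are assumed
credit.setAccountNo( 1 );
ksession.insert( credit );

AccountPeriod q1 = new AccountPeriod();      // constrain the rules to the first quarter
q1.setStart( df.parse( "2015-01-01" ) );
q1.setEnd( df.parse( "2015-03-31" ) );
ksession.insert( q1 );

ksession.fireAllRules();                     // the credit rule now updates the account balance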
Earlier we showed how rules would equate to SQL, which can often help people with an SQL background to understand rules. The two rules above can be represented with two views and a trigger for each view, as below:

Table 6.1.

select * from Account acc,
              Cashflow cf,
              AccountPeriod ap
where acc.accountNo == cf.accountNo and
      cf.type == CREDIT and
      cf.date >= ap.start and
      cf.date <= ap.end

trigger : acc.balance += cf.amount

select * from Account acc,
              Cashflow cf,
              AccountPeriod ap
where acc.accountNo == cf.accountNo and
      cf.type == DEBIT and
      cf.date >= ap.start and
      cf.date <= ap.end

trigger : acc.balance -= cf.amount

If the AccountPeriod is set to the first quarter, we constrain the rule "increase balance for credits" to fire on two rows of data and "decrease balance for debits" to act on one row of data.

Figure 6.3. AccountingPeriod, CashFlows and Account

The two cashflow tables above represent the matched data for the two rules. The data is matched during the insertion stage and, as you discovered in the previous chapter, does not fire straight away, but only after fireAllRules() is called. Meanwhile, the rule plus its matched data is placed on the Agenda and referred to as a Rule Match or Rule Instance. The Agenda is a table of Rule Matches that are able to fire and have their consequences executed, as soon as fireAllRules() is called. Rule Matches on the Agenda are referred to as a conflict set, and their execution is determined by a conflict resolution strategy. Notice that the order of execution so far is considered arbitrary.

Figure 6.4. CashFlows and Account

After all of the above activations are fired, the account has a balance of -25.

Figure 6.5. CashFlows and Account

If the AccountPeriod is updated to the second quarter, we have just a single matched row of data, and thus just a single Rule Match on the Agenda. The firing of that Activation results in a balance of 25.

Figure 6.6. CashFlows and Account

Figure 6.7. CashFlows and Account

6.2.2.2. Conflict Resolution

What if you don't want the order of rule execution to be arbitrary? When there is one or more Rule Match on the Agenda they are said to be in conflict, and a conflict resolution strategy is used to determine the order of execution. The Drools strategy is very simple and based around a salience value, which assigns a priority to a rule. Each rule has a default value of 0; the higher the value, the higher the priority. As a general rule, it is a good idea not to count on rules firing in any particular order, and to author the rules without worrying about a "flow". However, when a flow is needed a number of possibilities exist beyond salience: agenda groups, rule flow groups, activation groups and control/semaphore facts. As of Drools 6.0, rule definition order in the source file is used to set priority after salience.
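If you want to observe the order in which matches actually fire, for example while experimenting with conflict resolution and salience, you can attach an agenda event listener to the session. The following is a minimal sketch using the org.kie.api event API; the listener only logs rule names and is not part of the cashflow example itself.

import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;

ksession.addEventListener( new DefaultAgendaEventListener() {
    @Override
    public void afterMatchFired( AfterMatchFiredEvent event ) {
        // Print the name of each rule as its consequence completes.
        System.out.println( "fired: " + event.getMatch().getRule().getName() );
    }
} );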
6.2.2.3. Salience

To illustrate salience we add a rule to print the account balance, where we want this rule to be executed after all the debits and credits have been applied for all accounts. We achieve this by assigning a negative salience to this rule so that it fires after all rules with the default salience 0.

Table 6.2.

rule "Print balance for AccountPeriod"
    salience -50
when
    ap : AccountPeriod()
    acc : Account()
then
    System.out.println( acc.accountNo + " : " + acc.balance );
end

The table below depicts the resulting Agenda. The three debit and credit rules are shown to be in arbitrary order, while the print rule is ranked last, to execute afterwards.

Figure 6.8. CashFlows and Account

6.2.2.4. Agenda Groups

Agenda groups allow you to place rules into groups, and to place those groups onto a stack. The stack has push/pop behaviour. Calling "setFocus" places the group onto the stack:

ksession.getAgenda().getAgendaGroup( "Group A" ).setFocus();

The agenda always evaluates the top of the stack. When all the rules have fired for a group, it is popped from the stack and the next group is evaluated.

Table 6.3.

rule "increase balance for credits"
    agenda-group "calculation"
when
    ap : AccountPeriod()
    acc : Account( $accountNo : accountNo )
    CashFlow( type == CREDIT,
              accountNo == $accountNo,
              date >= ap.start && <= ap.end,
              $amount : amount )
then
    acc.balance += $amount;
end

rule "Print balance for AccountPeriod"
    agenda-group "report"
when
    ap : AccountPeriod()
    acc : Account()
then
    System.out.println( acc.accountNo + " : " + acc.balance );
end

First set the focus to the "report" group; then, by placing the focus on "calculation", we ensure that that group is evaluated first.

Agenda agenda = ksession.getAgenda();
agenda.getAgendaGroup( "report" ).setFocus();
agenda.getAgendaGroup( "calculation" ).setFocus();
ksession.fireAllRules();

6.2.2.5. Rule Flow

Drools also features the ruleflow-group attribute, which allows workflow diagrams to declaratively specify when rules are allowed to fire. The screenshot below is taken from Eclipse using the Drools plugin. It has two ruleflow-group nodes, which ensures that the calculation rules are executed before the reporting rules. The use of the ruleflow-group attribute in a rule is shown below.

Table 6.4.

rule "increase balance for credits"
    ruleflow-group "calculation"
when
    ap : AccountPeriod()
    acc : Account( $accountNo : accountNo )
    CashFlow( type == CREDIT,
              accountNo == $accountNo,
              date >= ap.start && <= ap.end,
              $amount : amount )
then
    acc.balance += $amount;
end

rule "Print balance for AccountPeriod"
    ruleflow-group "report"
when
    ap : AccountPeriod()
    acc : Account()
then
    System.out.println( acc.accountNo + " : " + acc.balance );
end
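Rules in a ruleflow-group only become eligible to fire once the corresponding node in the workflow is reached, so the process has to be started on the session. A minimal sketch follows; the process id "org.drools.examples.cashflow" is purely illustrative and would need to match the id of the rule flow definition deployed in your project.

// Start the workflow that activates the "calculation" and then the "report" ruleflow-group.
ksession.startProcess( "org.drools.examples.cashflow" );   // illustrative process id
ksession.fireAllRules();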
6.2.3. Declarative Agenda

Warning: Declarative Agenda is experimental, and all aspects are highly likely to change in the future. @Eager and @Direct are temporary annotations to control the behaviour of rules, which will also change as Declarative Agenda evolves. Annotations instead of attributes were chosen to reflect their experimental nature.

The declarative agenda allows you to use rules to control which other rules can fire and when. While this adds more overhead than the simple use of salience, the advantage is that it is declarative and thus more readable and maintainable, and it should allow more use cases to be achieved in a simpler fashion. This feature is off by default and must be explicitly enabled, because it is considered highly experimental for the moment and will be subject to change. It can be activated on a given KieBase by adding the declarativeAgenda='enabled' attribute in the corresponding kbase tag of the kmodule.xml file, as in the following example (the kbase and ksession names are illustrative).

Example 6.1. Enabling the Declarative Agenda

<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
  <kbase name="DeclarativeKBase" declarativeAgenda="enabled">
    <ksession name="KSession"/>
  </kbase>
</kmodule>

The basic idea is:

• All rule Matches are inserted into the WorkingMemory as facts, so you can now do pattern matching against a Match. The rule's metadata and declarations are available as fields on the Match object.
• You can use kcontext.blockMatch( Match match ) in the current rule to block the selected match. Only when that rule becomes false will the match be eligible for firing. If it is already eligible for firing and is later blocked, it will be removed from the agenda until it is unblocked.
• A match may have multiple blockers and a count is kept. All blockers must become false for the counter to reach zero, enabling the Match to be eligible for firing.
• kcontext.unblockAllMatches( Match match ) is an override that removes all blockers regardless.
• An activation may also be cancelled, so it never fires, with cancelMatch.
• An unblocked Match is added to the Agenda and obeys normal salience, agenda groups, ruleflow groups etc.
• The @Direct annotation allows a rule to fire as soon as it is matched; this is to be used for rules that block/unblock matches. It is not desirable for these rules to have side effects that impact elsewhere.

Example 6.2. New RuleContext methods

void blockMatch(Match match);
void unblockAllMatches(Match match);
void cancelMatch(Match match);

Here is a basic example that will block all matches from rules that have the metadata @department('sales'). They will stay blocked until the blockerAllSalesRules rule becomes false, i.e. "go2" is retracted.

Example 6.3. Block rules based on rule metadata

rule rule1 @Eager @department('sales')
when
    $s : String( this == 'go1' )
then
    list.add( kcontext.rule.name + ':' + $s );
end

rule rule2 @Eager @department('sales')
when
    $s : String( this == 'go1' )
then
    list.add( kcontext.rule.name + ':' + $s );
end

rule blockerAllSalesRules @Direct @Eager
when
    $s : String( this == 'go2' )
    $i : Match( department == 'sales' )
then
    list.add( $i.rule.name + ':' + $s );
    kcontext.blockMatch( $i );
end

Warning: As well as annotating the blocking rule with @Direct, it is also necessary to annotate all the rules that could potentially be blocked by it with @Eager. This is because, since the Match has to be evaluated by the pattern matching of the blocking rule, the potentially blocked rules cannot be evaluated lazily; otherwise there would not be any Match to evaluate.
This example shows how you can use the active property to count the number of active or inactive (already fired) matches.

Example 6.4. Count the number of active/inactive Matches

rule rule1 @Eager @department('sales')
when
    $s : String( this == 'go1' )
then
    list.add( kcontext.rule.name + ':' + $s );
end

rule rule2 @Eager @department('sales')
when
    $s : String( this == 'go1' )
then
    list.add( kcontext.rule.name + ':' + $s );
end

rule rule3 @Eager @department('sales')
when
    $s : String( this == 'go1' )
then
    list.add( kcontext.rule.name + ':' + $s );
end

rule countActivateInActive @Direct @Eager
when
    $s : String( this == 'go2' )
    $active : Number( this == 1 ) from accumulate(
        $a : Match( department == 'sales', active == true ), count( $a ) )
    $inActive : Number( this == 2 ) from accumulate(
        $a : Match( department == 'sales', active == false ), count( $a ) )
then
    kcontext.halt();
end

6.3. Inference

6.3.1. Bus Pass Example

Inference has a bad name these days, as something not relevant to business use cases and just too complicated to be useful. It is true that contrived and complicated examples occur with inference, but that should not detract from the fact that simple and useful ones exist too. More than this, correct use of inference can create more agile and less error-prone business rules, which are easier to maintain.

So what is inference? Something is inferred when we gain knowledge of something from using previous knowledge. For example, given a Person fact with an age field and a rule that provides age policy control, we can infer whether a Person is an adult or a child and act on this.

rule "Infer Adult"
when
    $p : Person( age >= 18 )
then
    insert( new IsAdult( $p ) )
end

Due to the preceding rule, every Person who is 18 or over will have an instance of IsAdult inserted for them. This fact is special in that it is known as a relation. We can use this inferred relation in any rule:

$p : Person()
IsAdult( person == $p )
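The IsAdult relation used in the pattern above is just an ordinary fact class; the documentation does not show its source, so the following is a minimal sketch of what it might look like. The person field matches the constraint IsAdult( person == $p ), and equals/hashCode are worth overriding if you later combine such relations with logical insertion, as discussed in the truth maintenance section.

public class IsAdult {
    private final Person person;

    public IsAdult( Person person ) {
        this.person = person;
    }

    public Person getPerson() {
        return person;
    }

    @Override
    public boolean equals( Object o ) {
        if ( this == o ) return true;
        if ( !( o instanceof IsAdult ) ) return false;
        return person.equals( ( (IsAdult) o ).person );
    }

    @Override
    public int hashCode() {
        return person.hashCode();
    }
}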
So now we know what inference is, and we have a basic example. How does this facilitate good rule design and maintenance?

Let's take a government department that is responsible for issuing ID cards when children become adults, henceforth referred to as the ID department. They might have a decision table that includes logic like this, which says when an adult living in London is 18 or over, issue the card:

However, the ID department does not set the policy on who an adult is. That's done at a central government level. If the central government were to change that age to 21, this would initiate a change management process. Someone would have to liaise with the ID department and make sure their systems are updated in time for the law going live.

This change management process and communication between departments is not ideal for an agile environment, and change becomes costly and error prone. Also, the ID department is managing more information than it needs to be aware of with its "monolithic" approach to rules management, which is "leaking" information better placed elsewhere. By this I mean that it doesn't care what explicit "age >= 18" information determines whether someone is an adult, only that they are an adult.

In contrast to this, let's pursue an approach where we split (de-couple) the authoring responsibilities, so that both the central government and the ID department maintain their own rules. It's the central government's job to determine who is an adult. If they change the law they just update their central repository with the new rules, which others use:

The IsAdult fact, as discussed previously, is inferred from the policy rules. It encapsulates the seemingly arbitrary piece of logic "age >= 18" and provides a semantic abstraction for its meaning. Now if anyone uses the above rules, they no longer need to be aware of the explicit information that determines whether someone is an adult or not. They can just use the inferred fact:

While the example is very minimal and trivial, it illustrates some important points. We started with a monolithic and leaky approach to our knowledge engineering. We created a single decision table that had all possible information in it and that leaked information from central government that the ID department did not care about and did not want to manage. We first de-coupled the knowledge process so each department was responsible for only what it needed to know. We then encapsulated this leaky knowledge using an inferred fact IsAdult. The use of the term IsAdult also gave a semantic abstraction to the previously arbitrary logic "age >= 18".

So a general rule of thumb when doing your knowledge engineering is:

• Bad
  • Monolithic
  • Leaky
• Good
  • De-couple knowledge responsibilities
  • Encapsulate knowledge
  • Provide semantic abstractions for those encapsulations

6.4. Truth Maintenance with Logical Objects

6.4.1. Overview

After regular inserts you have to retract facts explicitly. With logical assertions, the fact that was asserted will be automatically retracted when the conditions that asserted it in the first place are no longer true. Actually, it's even cleverer than that, because it will be retracted only if there isn't any single condition that supports the logical assertion.

Normal insertions are said to be stated, just like the intuitive meaning of "stating a fact" implies. Using a HashMap and a counter, we track how many times a particular equality is stated; this means we count how many different instances are equal.

When we logically insert an object during a RHS execution we are said to justify it, and it is considered to be justified by the firing rule. For each logical insertion there can only be one equal object, and each subsequent equal logical insertion increases the justification counter for this logical assertion. A justification is removed when the LHS of the creating rule becomes untrue, and the counter is decreased accordingly. As soon as we have no more justifications the logical object is automatically retracted. If we try to logically insert an object when there is an equal stated object, this will fail and return null.
If we state an object that has an existing equal object that is justified, we override the fact. How this override works depends on the configuration setting WM_BEHAVIOR_PRESERVE. When the property is set to discard, we use the existing handle and replace the existing instance with the new object, which is the default behaviour; otherwise the fact is overridden to stated, but a new FactHandle is created. This can be confusing on a first read, so hopefully the flow charts below help. Where they say that a new FactHandle is returned, this also indicates that the object was propagated through the network.

Figure 6.9. Stated Insertion

Figure 6.10. Logical Insertion

6.4.1.1. Bus Pass Example With Inference and TMS

The previous example was issuing ID cards to over 18s; in this example we now issue bus passes, either a child or an adult pass.

rule "Issue Child Bus Pass"
when
    $p : Person( age < 16 )
then
    insert( new ChildBusPass( $p ) );
end

rule "Issue Adult Bus Pass"
when
    $p : Person( age >= 16 )
then
    insert( new AdultBusPass( $p ) );
end

As before, the above example is considered monolithic, leaky and providing poor separation of concerns. As before, we can provide a more robust application with a separation of concerns using inference. Notice this time we don't just insert the inferred object, we use "insertLogical":

rule "Infer Child"
when
    $p : Person( age < 16 )
then
    insertLogical( new IsChild( $p ) )
end

rule "Infer Adult"
when
    $p : Person( age >= 16 )
then
    insertLogical( new IsAdult( $p ) )
end

"insertLogical" is part of the Drools Truth Maintenance System (TMS). When a fact is logically inserted, this fact is dependent on the truth of the "when" clause. This means that when the rule becomes false the fact is automatically retracted. This works particularly well as the two rules are mutually exclusive. So in the above rules, if the person is under 16 an IsChild fact is inserted; once the person is 16 or over the IsChild fact is automatically retracted and the IsAdult fact inserted.

Returning to the code to issue bus passes, these two rules can logically insert the ChildBusPass and AdultBusPass facts, as the TMS supports chaining of logical insertions for a cascading set of retracts.

rule "Issue Child Bus Pass"
when
    $p : Person( )
    IsChild( person == $p )
then
    insertLogical( new ChildBusPass( $p ) );
end

rule "Issue Adult Bus Pass"
when
    $p : Person( age >= 16 )
    IsAdult( person == $p )
then
    insertLogical( new AdultBusPass( $p ) );
end
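A short sketch of how the cascade behaves from the Java side is shown below. The Person constructor and setter are assumptions for illustration; insert(), update() and fireAllRules() are the standard KieSession calls described in the Running chapter.

Person p = new Person( "Scott", 15 );          // assumed constructor
FactHandle ph = ksession.insert( p );
ksession.fireAllRules();    // IsChild and ChildBusPass are logically inserted

p.setAge( 16 );             // assumed setter
ksession.update( ph, p );
ksession.fireAllRules();    // IsChild and ChildBusPass are automatically retracted;
                            // IsAdult and AdultBusPass are logically inserted instead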
Now when a person changes from being 15 to 16, not only is the IsChild fact automatically retracted, so is the person's ChildBusPass fact. For bonus points we can combine this with the 'not' conditional element to handle notifications, in this situation a request for the return of the pass. So when the TMS automatically retracts the ChildBusPass object, this rule triggers and sends a request to the person:

rule "Return ChildBusPass Request"
when
    $p : Person( )
    not( ChildBusPass( person == $p ) )
then
    requestChildBusPass( $p );
end

6.4.1.2. Important note: Equality for Java objects

It is important to note that for Truth Maintenance (and logical assertions) to work at all, your fact objects (which may be JavaBeans) must override the equals and hashCode methods (from java.lang.Object) correctly. As the truth maintenance system needs to know when two different physical objects are equal in value, both equals and hashCode must be overridden correctly, as per the Java standard. Two objects are equal if and only if their equals methods return true for each other and their hashCode methods return the same values. See the Java API for more details (but do keep in mind you MUST override both equals and hashCode). TMS behaviour is not affected by the runtime configuration of Identity vs Equality; TMS is always equality.

6.5. Decision Tables in Spreadsheets

Decision tables are a "precise yet compact" (ref. Wikipedia) way of representing conditional logic, and are well suited to business level rules. Drools supports managing rules in a spreadsheet format. Supported formats are Excel (XLS) and CSV, which means that a variety of spreadsheet programs (such as Microsoft Excel and OpenOffice.org Calc, amongst others) can be utilized. It is expected that web based decision table editors will be included in a near future release.

Decision tables are an old concept (in software terms) but have proven useful over the years. Very briefly speaking, in Drools decision tables are a way to generate rules driven from the data entered into a spreadsheet. All the usual features of a spreadsheet for data capture and manipulation can be taken advantage of.

6.5.1. When to Use Decision Tables

Consider decision tables as a course of action if rules exist that can be expressed as rule templates and data: each row of a decision table provides data that is combined with a template to generate a rule. Many businesses already use spreadsheets for managing data, calculations, etc. If you are happy to continue this way, you can also manage your business rules this way. This also assumes you are happy to manage packages of rules in .xls or .csv files. Decision tables are not recommended for rules that do not follow a set of templates, or where there are a small number of rules (or if there is a dislike towards software like Excel or OpenOffice.org). They are ideal in the sense that there can be control over what parameters of rules can be edited, without exposing the rules directly. Decision tables also provide a degree of insulation from the underlying object model.

6.5.2. Overview

Here are some examples of real world decision tables (slightly edited to protect the innocent).

Figure 6.11. Using Excel to edit a decision table

Figure 6.12. Multiple actions for a rule row

Figure 6.13. Using OpenOffice.org

In the above examples, the technical aspects of the decision table have been collapsed away (using a standard spreadsheet feature). The rules start from row 17, with each row resulting in a rule. The conditions are in columns C, D, E, etc., the actions being off-screen. The values in the cells are quite simple, and their meaning is indicated by the headers in Row 16.
Column B is just a description. It is customary to use colour to make it obvious what the different areas of the table mean.

Note: Although the decision tables look like they process top down, this is not necessarily the case. Ideally, rules are authored without regard for the order of rows, simply because this makes maintenance easier, as rows will not need to be shifted around all the time. As each row is a rule, the same principles apply. As the rule engine processes the facts, any rules that match may fire. (Some people are confused by this. It is possible to clear the agenda when a rule fires and simulate a very simple decision table where only the first match effects an action.) Also note that you can have multiple tables on one spreadsheet. This way, rules can be grouped where they share common templates, yet at the end of the day they are all combined into one rule package. Decision tables are essentially a tool to generate DRL rules automatically.

Figure 6.14. A real world example using multiple tables for grouping like rules

6.5.3. How Decision Tables Work

The key point to keep in mind is that in a decision table each row is a rule, and each column in that row is either a condition or an action for that rule.

Figure 6.15. Rows and columns

The spreadsheet looks for the RuleTable keyword to indicate the start of a rule table (both the starting row and column). Other keywords are also used to define other package level attributes (covered later). It is important to keep the keywords in one column. By convention the second column ("B") is used for this, but it can be any column (the convention is to leave a margin on the left for notes). In the following diagram, C is actually the column where it starts. Everything to the left of this is ignored.

If we expand the hidden sections, it starts to make more sense how it works; note the keywords in column C.

Figure 6.16. Expanded for rule templates

Now the hidden magic which makes it work can be seen. The RuleSet keyword indicates the name to be used in the rule package that will encompass all the rules. This name is optional, using a default, but it must have the RuleSet keyword in the cell immediately to the right. The other keywords visible in Column C are Import and Sequential, which will be covered later. The RuleTable keyword is important as it indicates that a chunk of rules will follow, based on some rule templates. After the RuleTable keyword there is a name, used to prefix the names of the generated rules. The sheet name and row numbers are appended to guarantee unique rule names.

Warning: The RuleTable name combined with the sheet name must be unique across all spreadsheet files in the same KieBase. If that is not the case, some rules might have the same name and only one of them will be applied. To show such ignored rules, raise the severity of such rule name conflicts.

The column of the RuleTable keyword indicates the column in which the rules start; columns to the left are ignored.

Note: In general the keywords make up name-value pairs.

Referring to row 14 (the row immediately after RuleTable), the keywords CONDITION and ACTION indicate that the data in the columns below are for either the LHS or the RHS parts of a rule. There are other attributes on the rule which can also be optionally set this way. Row 15 contains declarations of ObjectTypes.
The content in this row is optional, but if this option is not in use, the row must be left blank; however this option is usually found to be quite useful. When using this row, the values in the cells below (row 16) become constraints on that object type. In the above case, it generates Person(age=="42") and Cheese(type=="stilton"), where 42 and "stilton" come from row 18. In the above example, the "==" is implicit; if just a field name is given the translator assumes that it is to generate an exact match.

Note: An ObjectType declaration can span columns (via merged cells), meaning that all columns below the merged range are to be combined into one set of constraints within a single pattern matching a single fact at a time, as opposed to non-merged cells containing the same ObjectType but resulting in different patterns, potentially matching different or identical facts.

Row 16 contains the rule templates themselves. They can use the "$param" placeholder to indicate where data from the cells below should be interpolated. (For multiple insertions, use "$1", "$2", etc., indicating parameters from a comma-separated list in a cell below.) Row 17 is ignored; it may contain textual descriptions of the column's purpose.

Rows 18 and 19 show data, which will be combined (interpolated) with the templates in row 16, to generate rules. If a cell contains no data, then its template is ignored. (This would mean that some condition or action does not apply for that rule row.) Rule rows are read until there is a blank row. Multiple RuleTables can exist in a sheet.

Row 20 contains another keyword and a value. The row positions of keywords like this do not matter (most people put them at the top), but their column should be the same one where the RuleTable or RuleSet keywords appear. In our case column C has been chosen to be significant, but any other column could be used instead.

In the above example, rules would be rendered like the following (as it uses the "ObjectType" row):

//row 18
rule "Cheese_fans_18"
when
    Person(age=="42")
    Cheese(type=="stilton")
then
    list.add("Old man stilton");
end

Note: The constraints age=="42" and type=="stilton" are interpreted as single constraints, to be added to the respective ObjectType in the cell above. If the cells above were spanned, then there could be multiple constraints on one "column".

Warning: Very large decision tables may have very large memory requirements.

6.5.4. Spreadsheet Syntax

6.5.4.1. Spreadsheet Structure

There are two types of rectangular areas defining data that is used for generating a DRL file. One, marked by a cell labelled RuleSet, defines all DRL items except rules. The other one may occur repeatedly and is to the right of and below a cell whose contents begin with RuleTable. These areas represent the actual decision tables, each area resulting in a set of rules of similar structure.

A Rule Set area may contain cell pairs, one below the RuleSet cell and containing a keyword designating the kind of value contained in the other one that follows in the same row.

The columns of a Rule Table area define patterns and constraints for the left hand sides of the rules derived from it, actions for the consequences of the rules, and the values of individual rule attributes. Thus, a Rule Table area should contain one or more columns, both for conditions and actions, and an arbitrary selection of columns for rule attributes, at most one column for each of these.
The first four rows following the row with the cell marked with RuleTable are earmarked as the header area, mostly used for the definition of code to construct the rules. Any additional row below these four header rows spawns another rule, with its data providing for variations in the code defined in the Rule Table header. All keywords are case insensitive. Only the first worksheet is examined for decision tables.

6.5.4.2. Rule Set Entries

Entries in a Rule Set area may define DRL constructs (except rules) and specify rule attributes. While entries for constructs may be used repeatedly, each rule attribute may be given at most once, and it applies to all rules unless it is overruled by the same attribute being defined within the Rule Table area. Entries must be given in a vertically stacked sequence of cell pairs. The first one contains a keyword and the one to its right the value, as shown in the table below. This sequence of cell pairs may be interrupted by blank rows or even a Rule Table, as long as the column marked by RuleSet is upheld as the one containing the keyword.

Table 6.5. Entries in the Rule Set area

RuleSet
    Value: The package name for the generated DRL file. Optional, the default is rule_table.
    Usage: Must be the first entry.
Sequential
    Value: "true" or "false". If "true", then salience is used to ensure that rules fire from the top down.
    Usage: Optional, at most once. If omitted, no firing order is imposed.
EscapeQuotes
    Value: "true" or "false". If "true", then quotation marks are escaped so that they appear literally in the DRL.
    Usage: Optional, at most once. If omitted, quotation marks are escaped.
Import
    Value: A comma-separated list of Java classes to import.
    Usage: Optional, may be used repeatedly.
Variables
    Value: Declarations of DRL globals, i.e., a type followed by a variable name. Multiple global definitions must be separated with a comma.
    Usage: Optional, may be used repeatedly.
Functions
    Value: One or more function definitions, according to DRL syntax.
    Usage: Optional, may be used repeatedly.
Queries
    Value: One or more query definitions, according to DRL syntax.
    Usage: Optional, may be used repeatedly.
Declare
    Value: One or more declarative types, according to DRL syntax.
    Usage: Optional, may be used repeatedly.

Warning: In some locales, MS Office, LibreOffice and OpenOffice will encode a double quote " differently, which will cause a compilation error. The difference is often hard to see. For example: “A” will fail, but "A" will work.

For defining rule attributes that apply to all rules in the generated DRL file you can use any of the entries in the following table. Notice, however, that the proper keyword must be used. Also, each of these attributes may be used only once.

Important: Rule attributes specified in a Rule Set area will affect all rule assets in the same package (not only in the spreadsheet). Unless you are sure that the spreadsheet is the only rule asset in the package, the recommendation is to specify rule attributes not in a Rule Set area but in Rule Table columns for each rule instead.

Table 6.6. Rule attribute entries in the Rule Set area

PRIORITY (initial: P): An integer defining the "salience" value for the rule. Overridden by the "Sequential" flag.
DURATION (initial: D): A long integer value defining the "duration" value for the rule.
TIMER (initial: T): A timer definition. See "Timers and Calendars".
ENABLED (initial: B): A Boolean value. "true" enables the rule; "false" disables the rule.
CALENDARS (initial: E): A calendars definition. See "Timers and Calendars".
NO-LOOP (initial: U): A Boolean value. "true" inhibits looping of rules due to changes made by its consequence.
LOCK-ON-ACTIVE (initial: L): A Boolean value. "true" inhibits additional activations of all rules with this flag set within the same ruleflow or agenda group.
AUTO-FOCUS (initial: F): A Boolean value. "true" for a rule within an agenda group causes activations of the rule to automatically give the focus to the group.
ACTIVATION-GROUP (initial: X): A string identifying an activation (or XOR) group. Only one rule within an activation group will fire, i.e., the first one to fire cancels any existing activations of other rules within the same group.
AGENDA-GROUP (initial: G): A string identifying an agenda group, which has to be activated by giving it the "focus", which is one way of controlling the flow between groups of rules.
RULEFLOW-GROUP (initial: R): A string identifying a rule-flow group.

6.5.4.3. Rule Tables

All Rule Tables begin with a cell containing "RuleTable", optionally followed by a string within the same cell. The string is used as the initial part of the name for all rules derived from this Rule Table, with the row number appended for distinction. (This automatic naming can be overridden by using a NAME column.) All other cells defining rules of this Rule Table are below and to the right of this cell.

The next row defines the column type, with each column resulting in a part of the condition or the consequence, or providing some rule attribute, the rule name or a comment. The table below shows which column headers are available; additional columns may be used according to the table showing rule attribute entries given in the preceding section. Note that each attribute column may be used at most once. For a column header, either use the keyword or any other word beginning with the letter given in the "Initial" column of these tables.

Table 6.7. Column Headers in the Rule Table

NAME (initial: N): Provides the name for the rule generated from that row. The default is constructed from the text following the RuleTable tag and the row number. At most one column.
DESCRIPTION (initial: I): A text, resulting in a comment within the generated rule. At most one column.
CONDITION (initial: C): Code snippet and interpolated values for constructing a constraint within a pattern in a condition. At least one per rule table.
ACTION (initial: A): Code snippet and interpolated values for constructing an action for the consequence of the rule. At least one per rule table.
METADATA (initial: @): Code snippet and interpolated values for constructing a metadata entry for the rule. Optional, any number of columns.

Given a column headed CONDITION, the cells in successive lines result in a conditional element.

• Text in the first cell below CONDITION develops into a pattern for the rule condition, with the snippet in the next line becoming a constraint. If the cell is merged with one or more neighbours, a single pattern with multiple constraints is formed: all constraints are combined into a parenthesized list and appended to the text in this cell. The cell may be left blank, which means that the code snippet in the next row must result in a valid conditional element on its own. To include a pattern without constraints, you can write the pattern in front of the text for another pattern. The pattern may be written with or without an empty pair of parentheses. A "from" clause may be appended to the pattern.
If the pattern ends with "eval", code snippets are supposed to produce boolean expressions for inclusion into a pair of parentheses after "eval".

• Text in the second cell below CONDITION is processed in two steps.

1. The code snippet in this cell is modified by interpolating values from cells farther down in the column. If you want to create a constraint consisting of a comparison using "==" with the value from the cells below, the field selector alone is sufficient. Any other comparison operator must be specified as the last item within the snippet, and the value from the cells below is appended. For all other constraint forms, you must mark the position for including the contents of a cell with the symbol $param. Multiple insertions are possible by using the symbols $1, $2, etc., and a comma-separated list of values in the cells below.

A text according to the pattern forall(delimiter){snippet} is expanded by repeating the snippet once for each of the values of the comma-separated list in each of the cells below, inserting the value in place of the symbol $ and joining these expansions with the given delimiter. Note that the forall construct may be surrounded by other text.

2. If the cell in the preceding row is not empty, the completed code snippet is added to the conditional element from that cell. A pair of parentheses is provided automatically, as well as a separating comma if multiple constraints are added to a pattern in a merged cell. If the cell above is empty, the interpolated result is used as is.

• Text in the third cell below CONDITION is for documentation only. It should be used to indicate the column's purpose to a human reader.

• From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the conditional element or constraint for this rule.

Given a column headed ACTION, the cells in successive lines result in an action statement.

• Text in the first cell below ACTION is optional. If present, it is interpreted as an object reference.

• Text in the second cell below ACTION is processed in two steps.

1. The code snippet in this cell is modified by interpolating values from cells farther down in the column. For a singular insertion, mark the position for including the contents of a cell with the symbol $param. Multiple insertions are possible by using the symbols $1, $2, etc., and a comma-separated list of values in the cells below. A method call without interpolation can be achieved by a text without any marker symbols. In this case, use any non-blank entry in a row below to include the statement. The forall construct is available here, too.

2. If the first cell is not empty, its text, followed by a period, the text in the second cell and a terminating semicolon are strung together, resulting in a method call which is added as an action statement for the consequence. If the cell above is empty, the interpolated result is used as is.

• Text in the third cell below ACTION is for documentation only. It should be used to indicate the column's purpose to a human reader.

• From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the action statement for this rule.

Note: Using $1 instead of $param works in most cases, but it will fail if the replacement text contains a comma: then, only the part preceding the first comma is inserted. Use this "abbreviation" judiciously.
Given a column headed METADATA, the cells in successive lines result in a metadata annotation for the generated rules.

• Text in the first cell below METADATA is ignored.

• Text in the second cell below METADATA is subject to interpolation, as described above, using values from the cells in the rule rows. The metadata marker character @ is prefixed automatically, and thus it should not be included in the text for this cell.

• Text in the third cell below METADATA is for documentation only. It should be used to indicate the column's purpose to a human reader.

• From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the metadata annotation for this rule.

6.5.4.4. Examples

The various interpolations are illustrated in the following example.

Example 6.5. Interpolating cell data

If the template is Foo(bar == $param) and the cell is 42, then the result is Foo(bar == 42). If the template is Foo(bar < $1, baz == $2) and the cell contains 42,43, the result will be Foo(bar < 42, baz == 43). The template forall(&&){bar != $} with a cell containing 42,43 results in bar != 42 && bar != 43.

The next example demonstrates the joint effect of a cell defining the pattern type and the code snippet below it. This spreadsheet section shows how the Person type declaration spans 2 columns, and thus both constraints will appear as Person(age == ..., type == ...). Since only the field names are present in the snippet, they imply an equality test.

In the following example the marker symbol $param is used. The result of this column is the pattern Person(age == "42"). You may have noticed that the marker and the operator "==" are redundant.

The next example illustrates that a trailing insertion marker can be omitted. Here, appending the value from the cell is implied, resulting in Person(age < "42").

You can provide the definition of a binding variable, as in the example below. Here, the result is c: Cheese(type == "stilton"). Note that the quotes are provided automatically. Actually, anything can be placed in the object type row. Apart from the definition of a binding variable, it could also be an additional pattern that is to be inserted literally.

A simple construction of an action statement with the insertion of a single value is shown below. The cell below the ACTION header is left blank. Using this style, anything can be placed in the consequence, not just a single method call. (The same technique is applicable within a CONDITION column as well.)

Below is a comprehensive example, showing the use of various column headers. It is not an error to have no value below a column header (as in the NO-LOOP column): here, the attribute will not be applied in any of the rules.

Figure 6.17. Example usage of keywords for imports, headers, etc.

And, finally, here is an example of Import, Variables and Functions.

Figure 6.18. Example usage of keywords for functions, etc.

Multiple package names within the same cell must be separated by a comma. Also, the pairs of type and variable names must be comma-separated. Functions, however, must be written as they appear in a DRL file. This should appear in the same column as the "RuleSet" keyword; it could be above, between or below all the rule rows.

Note: It may be more convenient to use Import, Variables, Functions and Queries repeatedly rather than packing several definitions into a single cell.

6.5.5. Creating and integrating Spreadsheet based Decision Tables

The API to use spreadsheet based decision tables is in the drools-decisiontables module. There is really only one class to look at: SpreadsheetCompiler. This class will take spreadsheets in various formats and generate rules in DRL (which you can then use in the normal way). The SpreadsheetCompiler can also be used to generate partial rule files, if wished, which can be assembled into a complete rule package after the fact (this allows the separation of technical and non-technical aspects of the rules if needed).
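As a quick illustration, the following sketch compiles an XLS decision table from the classpath into a DRL string. The resource path is hypothetical; in a real project you would point it at your own spreadsheet, or simply let the KIE build machinery load the XLS directly as shown later.

import org.drools.decisiontable.InputType;
import org.drools.decisiontable.SpreadsheetCompiler;

SpreadsheetCompiler compiler = new SpreadsheetCompiler();
String drl = compiler.compile( getClass().getResourceAsStream( "/data/ExampleCheese.xls" ),  // hypothetical path
                               InputType.XLS );
System.out.println( drl );   // inspect the generated rules before packaging them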
To get started, a sample spreadsheet can be used as a base. Alternatively, if the plug-in is being used (Rule Workbench IDE), the wizard can generate a spreadsheet from a template (to edit it, an XLS compatible spreadsheet editor will need to be used).

Figure 6.19. Wizard in the IDE

6.5.6. Managing Business Rules in Decision Tables

6.5.6.1. Workflow and Collaboration

Spreadsheets are well established business tools (in use for over 25 years). Decision tables lend themselves to close collaboration between IT and domain experts, while making the business rules clear to business analysts; it is an ideal separation of concerns. Typically, the whole process of authoring rules (coming up with a new decision table) would be something like:

1. A business analyst takes a template decision table (from a repository, or from IT).
2. Decision table business language descriptions are entered in the table(s).
3. Decision table rules (rows) are entered (roughly).
4. The decision table is handed to a technical resource, who maps the business language (descriptions) to scripts (this may involve software development, of course, if it is a new application or data model).
5. The technical person hands it back and reviews the modifications with the business analyst.
6. The business analyst can continue editing the rule rows as needed (moving columns around is also fine, etc.).
7. In parallel, the technical person can develop test cases for the rules (liaising with business analysts), as these test cases can be used to verify rules and rule changes once the system is running.

6.5.6.2. Using spreadsheet features

Features of applications like Excel can be used to provide assistance in entering data into spreadsheets, such as validating fields. Lists that are stored in other worksheets can be used to provide valid lists of values for cells, as in the following diagram.

Figure 6.20.

Some applications provide a limited ability to keep a history of changes, but it is recommended to use an alternative means of revision control. When changes are being made to rules over time, older versions are archived (many open source solutions exist for this, such as Subversion or Git).

6.5.7. Rule Templates

Related to decision tables (but not necessarily requiring a spreadsheet) are "Rule Templates" (in the drools-templates module). These use any tabular data source as a source of rule data, populating a template to generate many rules. This can allow both for more flexible spreadsheets and for rules driven from existing databases, for instance (at the cost of developing the template up front to generate the rules). With Rule Templates the data is separated from the rule and there are no restrictions on which part of the rule is data-driven.
So whilst you can do everything you could do in decision tables, you can also do the following:

• store your data in a database (or any other format)
• conditionally generate rules based on the values in the data
• use data for any part of your rules (e.g. condition operator, class name, property name)
• run different templates over the same data

As an example, a more classic decision table is shown, but without any hidden rows for the rule metadata (so the spreadsheet only contains the raw data to generate the rules).

Figure 6.21. Template data

See the ExampleCheese.xls in the examples download for the above spreadsheet. If this were a regular decision table there would be hidden rows before row 1 and between rows 1 and 2 containing rule metadata. With rule templates the data is completely separate from the rules. This has two handy consequences: you can apply multiple rule templates to the same data, and your data is not tied to your rules at all. So what does the template look like?

1  template header
2  age
3  type
4  log
5
6  package org.drools.examples.templates;
7
8  global java.util.List list;
9
10 template "cheesefans"
11
12 rule "Cheese fans_@{row.rowNumber}"
13 when
14     Person(age == @{age})
15     Cheese(type == "@{type}")
16 then
17     list.add("@{log}");
18 end
19
20 end template

Annotations to the preceding program listing:

• Line 1: All rule templates start with template header.
• Lines 2-4: Following the header is the list of columns in the order they appear in the data. In this case we are calling the first column age, the second type and the third log.
• Line 5: An empty line signifies the end of the column definitions.
• Lines 6-9: Standard rule header text. This is standard rule DRL and will appear at the top of the generated DRL. Put the package statement and any imports, globals and function definitions into this section.
• Line 10: The keyword template signals the start of a rule template. There can be more than one template in a template file, but each template should have a unique name.
• Lines 11-18: The rule template; see below for details.
• Line 20: The keywords end template signify the end of the template.

The rule templates rely on MVEL to do substitution using the syntax @{token_name}. There is currently one built-in expression, @{row.rowNumber}, which gives a unique number for each row of data and enables you to generate unique rule names. For each row of data a rule will be generated with the values in the data substituted for the tokens in the template. With the example data above the following rule file would be generated:

package org.drools.examples.templates;

global java.util.List list;

rule "Cheese fans_1"
when
    Person(age == 42)
    Cheese(type == "stilton")
then
    list.add("Old man stilton");
end

rule "Cheese fans_2"
when
    Person(age == 21)
    Cheese(type == "cheddar")
then
    list.add("Young man cheddar");
end

The code to run this is simple:

DecisionTableConfiguration dtableconfiguration =
    KnowledgeBuilderFactory.newDecisionTableConfiguration();
dtableconfiguration.setInputType( DecisionTableInputType.XLS );

KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add( ResourceFactory.newClassPathResource( getSpreadsheetName(), getClass() ),
              ResourceType.DTABLE,
              dtableconfiguration );
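The snippet above only compiles the spreadsheet; to actually run the generated rules you still need to check for compilation errors and build a session. The following is a minimal sketch using the same legacy knowledge-api classes as the snippet above; the fact values mirror the generated rules and the Person and Cheese constructors are assumptions for illustration.

if ( kbuilder.hasErrors() ) {
    throw new IllegalStateException( kbuilder.getErrors().toString() );
}

KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );

StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
List<String> list = new ArrayList<String>();
ksession.setGlobal( "list", list );

ksession.insert( new Person( "Michael", 42 ) );   // assumed constructor
ksession.insert( new Cheese( "stilton" ) );       // assumed constructor
ksession.fireAllRules();
// list now contains "Old man stilton"
ksession.dispose();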
6.6. Logging

One way to illuminate the black box that is a rule engine is to play with the logging level. Everything is logged to SLF4J [http://www.slf4j.org/], which is a simple logging facade that can delegate any log to Logback, Apache Commons Logging, Log4j or java.util.logging. Add a dependency on the logging adaptor for your logging framework of choice. If you're not using any logging framework yet, you can use Logback by adding this Maven dependency:

<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.x</version>
</dependency>

Note: If you're developing for an ultra light environment, use slf4j-nop or slf4j-simple instead.

Configure the logging level on the package org.drools. In Logback, configure it in your logback.xml file; in Log4J, configure it in your log4j.xml file.

Chapter 7. Running

This section extends the KIE Running section, which should be read first, with specifics for the Drools runtime.

7.1. KieRuntime

7.1.1. EntryPoint

The EntryPoint provides the methods around inserting, updating and deleting facts. The term "entry point" is related to the fact that we have multiple partitions in a Working Memory and you can choose which one you are inserting into. The use of multiple entry points is more common in event processing use cases, but they can be used by pure rule applications as well.

The KieRuntime interface provides the main interaction with the engine. It is available in rule consequences and process actions. In this manual the focus is on the methods and interfaces related to rules, and the methods pertaining to processes will be ignored for now. But you'll notice that the KieRuntime inherits methods from both the WorkingMemory and the ProcessRuntime, thereby providing a unified API to work with processes and rules. When working with rules, three interfaces form the KieRuntime: EntryPoint, WorkingMemory and the KieRuntime itself.

Figure 7.1. EntryPoint

7.1.1.1. Insert

In order for a fact to be evaluated against the rules in a KieBase, it has to be inserted into the session. This is done by calling the method insert(yourObject). When a fact is inserted into the session, some of its properties might be immediately evaluated (eager evaluation) and some might be deferred for later evaluation (lazy evaluation). The exact behaviour depends on the rules engine algorithm being used.

Note: Expert systems typically use the term assert or assertion to refer to facts made available to the system. However, due to "assert" being a keyword in most languages, we have decided to use the insert keyword; in this manual, the two terms are used interchangeably.

When an Object is inserted it returns a FactHandle. This FactHandle is the token used to represent your inserted object within the WorkingMemory. It is also used for interactions with the WorkingMemory when you wish to delete or modify an object.

Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = ksession.insert( stilton );

As mentioned in the KieBase section, a Working Memory may operate in two assertion modes: either equality or identity. Identity is the default. Identity means that the Working Memory uses an IdentityHashMap to store all asserted objects. New instance assertions always result in the return of a new FactHandle, but if an instance is asserted again then it returns the original fact handle, i.e., it ignores repeated insertions of the same object. Equality means that the Working Memory uses a HashMap to store all asserted objects. An object instance assertion will only return a new FactHandle if the inserted object is not equal (according to its equals()/hashCode() methods) to an already existing fact.
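The assertion mode is a KieBase-level option. A minimal sketch for switching a programmatically built KieBase to equality mode is shown below; the kContainer variable is assumed to be an existing KieContainer, and identity remains the default if you do not set the option.

import org.kie.api.KieBase;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices;
import org.kie.api.conf.EqualityBehaviorOption;

KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kbConf = ks.newKieBaseConfiguration();
kbConf.setOption( EqualityBehaviorOption.EQUALITY );   // the default is EqualityBehaviorOption.IDENTITY

KieBase kbase = kContainer.newKieBase( kbConf );        // kContainer is an existing KieContainer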
7.1.1.2. Delete

In order to remove a fact from the session, the method delete() is used. When a fact is deleted, any matches that are active and depend on that fact will be cancelled. Note that it is possible to have rules that depend on the nonexistence of a fact, in which case deleting a fact may cause a rule to activate. (See the not and exists keywords.)

Note
Expert systems typically use the term retract or retraction to refer to the operation of removing facts from the Working Memory. Drools prefers the keyword delete for symmetry with the keyword insert; Drools also supports the keyword retract, but it was deprecated in favor of delete. In this manual, the two terms are used interchangeably.

Retraction may be done using the FactHandle that was returned by the insert call. On the right hand side of a rule the delete statement is used, which works with a simple object reference.

Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = ksession.insert( stilton );
....
ksession.delete( stiltonHandle );

7.1.1.3. Update

The Rule Engine must be notified of modified facts, so that they can be reprocessed. You must use the update() method to notify the WorkingMemory of changed objects for those objects that are not able to notify the WorkingMemory themselves. Notice that update() always takes the modified object as a second parameter, which allows you to specify new instances for immutable objects. On the right hand side of a rule the modify statement is recommended, as it makes the changes and notifies the engine in a single statement. Alternatively, after changing a fact object's field values through calls of setter methods you must invoke update immediately, even before changing another fact, or you will cause problems with the indexing within the rule engine. The modify statement avoids this problem.

Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = workingMemory.insert( stilton );
...
stilton.setPrice( 100 );
workingMemory.update( stiltonHandle, stilton );

7.1.2. RuleRuntime

The RuleRuntime provides access to the Agenda, permits query executions, and lets you access named Entry Points.

Figure 7.2. RuleRuntime

7.1.2.1. Query

Queries are used to retrieve fact sets based on patterns, as they are used in rules. Patterns may make use of optional parameters. Queries can be defined in the Knowledge Base, from where they are called up to return the matching results. While iterating over the result collection, any identifier bound in the query can be used to access the corresponding fact or fact field by calling the get method with the binding variable's name as its argument. If the binding refers to a fact object, its FactHandle can be retrieved by calling getFactHandle, again with the variable's name as the parameter.

Figure 7.3. QueryResults

Figure 7.4. QueryResultsRow

Example 7.1. Simple Query Example

QueryResults results = ksession.getQueryResults( "my query", new Object[] { "string" } );
for ( QueryResultsRow row : results ) {
    System.out.println( row.get( "varName" ) );
}

7.1.2.2. Live Queries

Invoking queries and processing the results by iterating over the returned set is not a good way to monitor changes over time. To alleviate this, Drools provides Live Queries, which have a listener attached instead of returning an iterable result set. These live queries stay open by creating a view and publishing change events for the contents of this view. To activate, you start your query with parameters and listen to changes in the resulting view.
The dispose method terminates the query and discontinues this reactive scenario. Example 7.2. Implementing ViewChangedEventListener final List updated = new ArrayList(); final List removed = new ArrayList(); final List added = new ArrayList(); ViewChangedEventListener listener = new ViewChangedEventListener() { public void rowUpdated(Row row) { updated.add( row.get( "$price" ) ); } public void rowRemoved(Row row) { removed.add( row.get( "$price" ) ); } public void rowAdded(Row row) { added.add( row.get( "$price" ) ); } }; // Open the LiveQuery LiveQuery query = ksession.openLiveQuery( "cheeses", new Object[] { "cheddar", "stilton" }, listener ); ... ... query.dispose() // calling dispose to terminate the live query A Drools blog article contains an example of Glazed Lists integration for live queries: http://blog.athico.com/2010/07/glazed-lists-examples-for-drools-live.html 7.1.3. StatefulRuleSession The StatefulRuleSession is inherited by the KieSession and provides the rule related methods that are relevant from outside of the engine. Running 214 Figure 7.5. StatefulRuleSession 7.1.3.1. Agenda Filters Figure 7.6. AgendaFilters AgendaFilter objects are optional implementations of the filter interface which are used to allow or deny the firing of a match. What you filter on is entirely up to the implementation. Drools 4.0 used to supply some out of the box filters, which have not be exposed in drools 5.0 knowledge-api, but they are simple to implement and the Drools 4.0 code base can be referred to. To use a filter specify it while calling fireAllRules(). The following example permits only rules ending in the string "Test". All others will be filtered out. ksession.fireAllRules( new RuleNameEndsWithAgendaFilter( "Test" ) ); 7.2. Agenda The Agenda is a Rete feature. During actions on the WorkingMemory, rules may become fully matched and eligible for execution; a single Working Memory Action can result in multiple eligible rules. When a rule is fully matched a Match is created, referencing the rule and the matched facts, and placed onto the Agenda. The Agenda controls the execution order of these Matches using a Conflict Resolution strategy. The engine cycles repeatedly through two phases: 1. Working Memory Actions. This is where most of the work takes place, either in the Conse- quence (the RHS itself) or the main Java application process. Once the Consequence has fin- ished or the main Java application process calls fireAllRules() the engine switches to the Agenda Evaluation phase. 2. Agenda Evaluation. This attempts to select a rule to fire. If no rule is found it exits, otherwise it fires the found rule, switching the phase back to Working Memory Actions. Figure 7.7. Two Phase Execution The process repeats until the agenda is clear, in which case control returns to the calling applica- tion. When Working Memory Actions are taking place, no rules are being fired. Running 215 Figure 7.8. Agenda 7.2.1. Conflict Resolution Conflict resolution is required when there are multiple rules on the agenda. (The basics to this are covered in chapter "Quick Start".) As firing a rule may have side effects on the working memory, the rule engine needs to know in what order the rules should fire (for instance, firing ruleA may cause ruleB to be removed from the agenda). The default conflict resolution strategies employed by Drools are: Salience and LIFO (last in, first out). 
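Before looking at those strategies in more detail, it is worth noting that the agenda filters described in Section 7.1.3.1 complement conflict resolution: a filter decides whether a given match may fire at all, while conflict resolution decides the order among the matches that are allowed to fire. Since only a few filters ship out of the box, a custom one is usually written by hand. The following is a minimal sketch; only AgendaFilter and Match are Drools API, the class name and filtering criterion are illustrative.

import org.kie.api.runtime.rule.AgendaFilter;
import org.kie.api.runtime.rule.Match;

// Allows only rules whose name starts with a given prefix to fire.
public class RuleNamePrefixAgendaFilter implements AgendaFilter {

    private final String prefix;

    public RuleNamePrefixAgendaFilter(String prefix) {
        this.prefix = prefix;
    }

    public boolean accept(Match match) {
        // Return true to let the match fire, false to filter it out.
        return match.getRule().getName().startsWith( prefix );
    }
}

It would be used just like the built-in filter shown earlier: ksession.fireAllRules( new RuleNamePrefixAgendaFilter( "Validate" ) );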
The most visible one is salience (or priority), in which case a user can specify that a certain rule has a higher priority (by giving it a higher number) than other rules. In that case, the rule with higher salience will be preferred. LIFO priorities are based on the assigned Working Memory Action counter value, with all rules created during the same action receiving the same value. The execution order of a set of firings with the same priority value is arbitrary. As a general rule, it is a good idea not to count on rules firing in any particular order, and to author the rules without worrying about a "flow". However when a flow is needed a number of possibilities exist, including but not limited to: agenda groups, rule flow groups, activation groups, control/semaphore facts. These are discussed in later sections. Drools 4.0 supported custom conflict resolution strategies; while this capability still exists in Drools it has not yet been exposed to the end user via knowledge-api in Drools 5.0. 7.2.2. AgendaGroup Figure 7.9. AgendaGroup Agenda groups are a way to partition rules (matches, actually) on the agenda. At any one time, only one group has "focus" which means that matches for rules in that group only will take effect. You can also have rules with "auto focus" which means that the focus is taken for its agenda group when that rule's conditions are true. Agenda groups are known as "modules" in CLIPS terminology. While it best to design rules that do not need control flow, this is not always possible. Agenda groups provide a handy way to create a "flow" between grouped rules. You can switch the group which has focus either from within the rule engine, or via the API. If your rules have a clear need for multiple "phases" or "sequences" of processing, consider using agenda-groups for this purpose. Each time setFocus() is called it pushes that Agenda Group onto a stack. When the focus group is empty it is popped from the stack and the focus group that is now on top evaluates. An Agenda Running 216 Group can appear in multiple locations on the stack. The default Agenda Group is "MAIN", with all rules which do not specify an Agenda Group being in this group. It is also always the first group on the stack, given focus initially, by default. ksession.getAgenda().getAgendaGroup( "Group A" ).setFocus(); The clear() method can be used to cancel all the activations generated by the rules belonging to a given Agenda Group before one has had a chance to fire. ksession.getAgenda().getAgendaGroup( "Group A" ).clear(); Note that, due to the lazy nature of the phreak algorithm used by Drools, the activations are by default materialized only at firing time, but it is possible to anticipate the evaluation and then the activation of a given rule at the moment when a fact is inserted into the session by annotating it with @Propagation(IMMEDIATE) as explained in the Propagation modes section. 7.2.3. ActivationGroup Figure 7.10. ActivationGroup An activation group is a set of rules bound together by the same "activation-group" rule attribute. In this group only one rule can fire, and after that rule has fired all the other rules are cancelled from the agenda. The clear() method can be called at any time, which cancels all of the activations before one has had a chance to fire. ksession.getAgenda().getActivationGroup( "Group B" ).clear(); 7.2.4. RuleFlowGroup Figure 7.11. RuleFlowGroup A rule flow group is a group of rules associated by the "ruleflow-group" rule attribute. 
These rules can only fire when the group is active, and the group itself can only become active when the elaboration of the ruleflow diagram reaches the node representing the group. Here too, the clear() method can be called at any time to cancel all matches still remaining on the Agenda.

ksession.getAgenda().getRuleFlowGroup( "Group C" ).clear();

7.3. Event Model

The event package provides means to be notified of rule engine events, including rules firing, objects being asserted, etc. This allows you, for instance, to separate logging and auditing activities from the main part of your application (and the rules). The WorkingMemoryEventManager allows for listeners to be added and removed, so that events for the working memory and the agenda can be listened to.

Figure 7.12. WorkingMemoryEventManager

The following code snippet shows how a simple agenda listener is declared and attached to a session. It will print matches after they have fired.

Example 7.3. Adding an AgendaEventListener

ksession.addEventListener( new DefaultAgendaEventListener() {
    public void afterMatchFired(AfterMatchFiredEvent event) {
        super.afterMatchFired( event );
        System.out.println( event );
    }
});

Drools also provides DebugRuleRuntimeEventListener and DebugAgendaEventListener which implement each method with a debug print statement. To print all Working Memory events, you add a listener like this:

Example 7.4. Adding a DebugRuleRuntimeEventListener

ksession.addEventListener( new DebugRuleRuntimeEventListener() );

The events currently supported are:
• MatchCreatedEvent
• MatchCancelledEvent
• BeforeMatchFiredEvent
• AfterMatchFiredEvent
• AgendaGroupPushedEvent
• AgendaGroupPoppedEvent
• ObjectInsertedEvent
• ObjectDeletedEvent
• ObjectUpdatedEvent
• ProcessCompletedEvent
• ProcessNodeLeftEvent
• ProcessNodeTriggeredEvent
• ProcessStartedEvent

7.4. StatelessKieSession

The StatelessKieSession wraps the KieSession, instead of extending it. Its main focus is on decision service type scenarios. It avoids the need to call dispose(). Stateless sessions do not support iterative insertions or calling fireAllRules() from Java code; calling execute() is a single-shot operation that will internally instantiate a KieSession, add all the user data, execute the user commands, call fireAllRules(), and then call dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that's required. The CommandExecutor and BatchExecution are discussed in detail in their own section.

Figure 7.13. StatelessKieSession

Our simple example shows a stateless session executing a given collection of Java objects using the convenience API. It will iterate the collection, inserting each element in turn.

Example 7.5. Simple StatelessKieSession execution with a Collection

StatelessKieSession ksession = kbase.newStatelessKieSession();
ksession.execute( collection );

If this was done as a single Command it would be as follows:

Example 7.6. Simple StatelessKieSession execution with InsertElements Command

ksession.execute( CommandFactory.newInsertElements( collection ) );

If you wanted to insert the collection itself, rather than its individual elements, then CommandFactory.newInsert(collection) would do the job; the short sketch below contrasts the two.
Methods of the CommandFactory create the supported commands, all of which can be marshalled using XStream and the BatchExecutionHelper. BatchExecutionHelper provides details on the XML format as well as how to use Drools Pipeline to automate the marshalling of BatchExecution and ExecutionResults.

StatelessKieSession supports globals, scoped in a number of ways. I'll cover the non-command way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways.

• The StatelessKieSession method getGlobals() returns a Globals instance which provides access to the session's globals. These are shared for all execution calls. Exercise caution regarding mutable globals because execution calls can be executing simultaneously in different threads.

Example 7.7. Session scoped global

StatelessKieSession ksession = kbase.newStatelessKieSession();
// Set a global hbnSession, that can be used for DB interactions in the rules.
ksession.setGlobal( "hbnSession", hibernateSession );
// Execute while being able to resolve the "hbnSession" identifier.
ksession.execute( collection );

• Using a delegate is another way of global resolution. Assigning a value to a global (with setGlobal(String, Object)) results in the value being stored in an internal collection mapping identifiers to values. Identifiers in this internal collection will have priority over any supplied delegate. Only if an identifier cannot be found in this internal collection will the delegate global (if any) be used.

• The third way of resolving globals is to have execution scoped globals. Here, a Command to set a global is passed to the CommandExecutor.

The CommandExecutor interface also offers the ability to export data via "out" parameters. Inserted facts, globals and query results can all be returned.

Example 7.8. Out identifiers

// Set up a list of commands
List cmds = new ArrayList();
cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) );
cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) );
cmds.add( CommandFactory.newQuery( "Get People", "getPeople" ) );

// Execute the list
ExecutionResults results = ksession.execute( CommandFactory.newBatchExecution( cmds ) );

// Retrieve the ArrayList
results.getValue( "list1" );
// Retrieve the inserted Person fact
results.getValue( "person" );
// Retrieve the query as a QueryResults instance.
results.getValue( "Get People" );

7.4.1. Sequential Mode

With Rete you have a stateful session where objects can be asserted and modified over time, and where rules can also be added and removed. Now what happens if we assume a stateless session, where after the initial data set no more data can be asserted or modified and rules cannot be added or removed? Certainly it won't be necessary to re-evaluate rules, and the engine will be able to operate in a simplified way.

1. Order the Rules by salience and position in the ruleset (by setting a sequence attribute on the rule terminal node).
2. Create an array, with one element for each possible rule match; the element position indicates the firing order.
3. Turn off all node memories, except the right-input Object memory.
4. Disconnect the Left Input Adapter Node propagation, and let the Object plus the Node be referenced in a Command object, which is added to a list on the Working Memory for later execution.
5. Assert all objects, and, when all assertions are finished and thus right-input node memories are populated, check the Command list and execute each in turn.
6.
All resulting Matches should be placed in the array, based upon the determined sequence number of the Rule. Record the first and last populated positions, to reduce the iteration range.
7. Iterate the array of Matches, executing each populated element in turn.
8. If we have a maximum number of allowed rule executions, we can exit our network evaluations early to fire all the rules in the array.

The LeftInputAdapterNode no longer creates a Tuple, adds the Object and then propagates the Tuple; instead a Command object is created and added to a list in the Working Memory. This Command object holds a reference to the LeftInputAdapterNode and the propagated object. This stops any left-input propagations at insertion time, so that we know that a right-input propagation will never need to attempt a join with the left-inputs (removing the need for left-input memory).

All nodes have their memory turned off, including the left-input Tuple memory but excluding the right-input object memory, which means that the only node remembering an insertion propagation is the right-input object memory. Once all the assertions are finished and all right-input memories populated, we can then iterate the list of LeftInputAdapterNode Command objects, calling each in turn. They will propagate down the network attempting to join with the right-input objects, but they won't be remembered in the left input as we know there will be no further object assertions and thus propagations into the right-input memory.

There is no longer an Agenda with a priority queue to schedule the Tuples; instead, there is simply an array with one element per rule. The sequence number of the RuleTerminalNode indicates the element within the array where to place the Match. Once all Command objects have finished we can iterate our array, checking each element in turn, and firing the Matches if they exist. To improve performance, we remember the first and the last populated cell in the array.

The network is constructed, with each RuleTerminalNode being given a sequence number based on a salience number and its order of being added to the network.

Typically the right-input node memories are Hash Maps, for fast object deletion; here, as we know there will be no object deletions, we can use a list when the values of the object are not indexed. For larger numbers of objects indexed Hash Maps provide a performance increase; if we know an object type has only a few instances, indexing is probably not advantageous, and a list can be used.

Sequential mode can only be used with a Stateless Session and is off by default. To turn it on, either call RuleBaseConfiguration.setSequential(true), or set the rulebase configuration property drools.sequential to true. Sequential mode can fall back to a dynamic agenda by calling setSequentialAgenda with SequentialAgenda.DYNAMIC. You may also set the "drools.sequential.agenda" property to "sequential" or "dynamic".

7.5. Propagation modes

The introduction of PHREAK as the default algorithm for the Drools engine made the rules' evaluation lazy. This new lazy behavior allowed a relevant performance boost but, in some very specific cases, breaks the semantics of a few Drools features. More precisely, in some circumstances it is necessary to propagate the insertion of a new fact into the session immediately. For instance, Drools allows a query to be executed in pull only (or passive) mode by prepending a '?' symbol to its invocation as in the following example:

Example 7.9.
A passive query query Q (Integer i) String( this == i.toString() ) end rule R when $i : Integer() ?Q( $i; ) then System.out.println( $i ); end In this case, since the query is passive, it shouldn't react to the insertion of a String matching the join condition in the query itself. In other words this sequence of commands KieSession ksession = ... ksession.insert(1); ksession.insert("1"); ksession.fireAllRules(); Running 222 shouldn't cause the rule R to fire because the String satisfying the query condition has been inserted after the Integer and the passive query shouldn't react to this insertion. Conversely the rule should fire if the insertion sequence is inverted because the insertion of the Integer, when the passive query can be satisfied by the presence of an already existing String, will trigger it. Unfortunately the lazy nature of PHREAK doesn't allow the engine to make any distinction regard- ing the insertion sequence of the two facts, so the rule will fire in both cases. In circumstances like this it is necessary to evaluate the rule eagerly as done by the old RETEOO-based engine. In other cases it is required that the propagation is eager, meaning that it is not immedate, but anyway has to happen before the engine/agenda starts scheduled evaluations. For instance this is necessary when a rule has the no-loop or the lock-on-active attribute and in fact when this happens this propagation mode is automatically enforced by the engine. To cover these use cases, and in all other situations where an immediate or eager rule eval- uation is required, it is possible to declaratively specify so by annotating the rule itself with @Propagation(Propagation.Type), where Propagation.Type is an enumeration with 3 possible values: • IMMEDIATE means that the propagation is performed immediately. • EAGER means that the propagation is performed lazily but eagerly evaluated before scheduled evaluations. • LAZY means that the propagation is totally lazy and this is default PHREAK behaviour This means that the following drl: Example 7.10. A data-driven rule using a passive query query Q (Integer i) String( this == i.toString() ) end rule R @Propagation(IMMEDIATE) when $i : Integer() ?Q( $i; ) then System.out.println( $i ); end will make the rule R to fire if and only if the Integer is inserted after the String, thus behaving in accordance with the semantic of the passive query. 7.6. Commands and the CommandExecutor The CommandFactory allows for commands to be executed on those sessions, the only difference being that the Stateless Knowledge Session executes fireAllRules() at the end before dispos- ing the session. The currently supported commands are: Running 223 • FireAllRules • GetGlobal • SetGlobal • InsertObject • InsertElements • Query • StartProcess • BatchExecution InsertObject will insert a single object, with an optional "out" identifier. InsertElements will iterate an Iterable, inserting each of the elements. What this means is that a Stateless Knowledge Session is no longer limited to just inserting objects, it can now start processes or execute queries, and do this in any order. Example 7.11. 
Insert Command

StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults = ksession.execute( CommandFactory.newInsert( new Cheese( "stilton" ), "stilton_id" ) );
Cheese stilton = (Cheese) bresults.getValue( "stilton_id" );

The execute method always returns an ExecutionResults instance, which allows access to any command results if they specify an out identifier such as the "stilton_id" above.

Example 7.12. InsertElements Command

StatelessKieSession ksession = kbase.newStatelessKieSession();
Command cmd = CommandFactory.newInsertElements( Arrays.asList( new Object[] {
    new Cheese( "stilton" ),
    new Cheese( "brie" ),
    new Cheese( "cheddar" )
} ) );
ExecutionResults bresults = ksession.execute( cmd );

The execute method only allows for a single command. That's where BatchExecution comes in, which represents a composite command, created from a list of commands. Now, execute will iterate over the list and execute each command in turn. This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(...) call, which is quite powerful.

As mentioned previously, the StatelessKieSession will execute fireAllRules() automatically at the end. However the keen-eyed reader probably has already noticed the FireAllRules command and wondered how that works with a StatelessKieSession. The FireAllRules command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function.

A custom XStream marshaller can be used with the Drools Pipeline to achieve XML scripting, which is perfect for services. Here are two simple XML samples, one for the BatchExecution and one for the ExecutionResults.

Example 7.13. Simple BatchExecution XML
stilton 25 0

Example 7.14. Simple ExecutionResults XML
stilton 25 30

Spring and Camel, covered in the integrations book, facilitate declarative services.

Example 7.15. BatchExecution Marshalled to XML
stilton 1 0 stilton cheddar

The CommandExecutor returns an ExecutionResults, and this is handled by the pipeline code snippet as well. A similar output for the XML sample above would be:

Example 7.16. ExecutionResults Marshalled to XML
stilton 2 cheese cheddar 2 0 cheddar 1 0

The BatchExecutionHelper provides a configured XStream instance to support the marshalling of Batch Executions, where the resulting XML can be used as a message format, as shown above. Configured converters only exist for the commands supported via the Command Factory. The user may add other converters for their user objects. This is very useful for scripting stateless or stateful knowledge sessions, especially when services are involved. There is currently no XML schema to support schema validation. The basic format is outlined here, and the drools-pipeline module has an illustrative unit test, XStreamBatchExecutionTest. The root element is <batch-execution> and it can contain zero or more command elements.

Example 7.17. Root XML element
<batch-execution>
...
</batch-execution>

This contains a list of elements that represent commands; the supported commands are limited to those provided by the Command Factory. The most basic of these is the <insert> element, which inserts objects. The contents of the insert element is the user object, as dictated by XStream.

Example 7.18. Insert
...

The insert element features an "out-identifier" attribute, demanding that the inserted object will also be returned as part of the result payload.

Example 7.19. Insert with Out Identifier Command
...
It's also possible to insert a collection of objects using the element. This com- mand does not support an out-identifier. The org.domain.UserClass is just an illustrative user object that XStream would serialize. Example 7.20. Insert Elements command ... ... ... Running 227 While the out attribute is useful in returning specific instances as a result payload, we often wish to run actual queries. Both parameter and parameterless queries are supported. The name attribute is the name of the query to be called, and the out-identifier is the identifier to be used for the query results in the payload. Example 7.21. Query Command stilton cheddar 228 Chapter 8. Rule Language Reference 8.1. Overview Drools has a "native" rule language. This format is very light in terms of punctuation, and supports natural and domain specific languages via "expanders" that allow the language to morph to your problem domain. This chapter is mostly concerted with this native rule format. The diagrams used to present the syntax are known as "railroad" diagrams, and they are basically flow charts for the language terms. The technically very keen may also refer to DRL.g which is the Antlr3 grammar for the rule language. If you use the Rule Workbench, a lot of the rule structure is done for you with content assistance, for example, type "ru" and press ctrl+space, and it will build the rule structure for you. 8.1.1. A rule file A rule file is typically a file with a .drl extension. In a DRL file you can have multiple rules, queries and functions, as well as some resource declarations like imports, globals and attributes that are assigned and used by your rules and queries. However, you are also able to spread your rules across multiple rule files (in that case, the extension .rule is suggested, but not required) - spreading rules across files can help with managing large numbers of rules. A DRL file is simply a text file. The overall structure of a rule file is: Example 8.1. Rules file package package-name imports globals functions queries rules The order in which the elements are declared is not important, except for the package name that, if declared, must be the first element in the rules file. All elements are optional, so you will use only those you need. We will discuss each of them in the following sections. Rule Language Reference 229 8.1.2. What makes a rule For the impatient, just as an early view, a rule has the following rough structure: rule "name" attributes when LHS then RHS end It's really that simple. Mostly punctuation is not needed, even the double quotes for "name" are optional, as are newlines. Attributes are simple (always optional) hints to how the rule should behave. LHS is the conditional parts of the rule, which follows a certain syntax which is covered below. RHS is basically a block that allows dialect specific semantic code to be executed. It is important to note that white space is not important, except in the case of domain specific languages, where lines are processed one by one and spaces may be significant to the domain language. 8.2. Keywords Drools 5 introduces the concept of hard and soft keywords. Hard keywords are reserved, you cannot use any hard keyword when naming your domain objects, properties, methods, functions and other elements that are used in the rule text. 
Here is the list of hard keywords that must be avoided as identifiers when writing rules: • true • false • null Soft keywords are just recognized in their context, enabling you to use these words in any other place if you wish, although, it is still recommended to avoid them, to avoid confusions, if possible. Here is a list of the soft keywords: • lock-on-active • date-effective • date-expires • no-loop Rule Language Reference 230 • auto-focus • activation-group • agenda-group • ruleflow-group • entry-point • duration • package • import • dialect • salience • enabled • attributes • rule • extend • when • then • template • query • declare • function • global • eval • not • in • or • and • exists Rule Language Reference 231 • forall • accumulate • collect • from • action • reverse • result • end • over • init Of course, you can have these (hard and soft) words as part of a method name in camel case, like notSomething() or accumulateSomething() - there are no issues with that scenario. Although the 3 hard keywords above are unlikely to be used in your existing domain models, if you absolutely need to use them as identifiers instead of keywords, the DRL language provides the ability to escape hard keywords on rule text. To escape a word, simply enclose it in grave accents, like this: Holiday( `true` == "yes" ) // please note that Drools will resolve that reference to the method Holiday.isTrue() 8.3. Comments Comments are sections of text that are ignored by the rule engine. They are stripped out when they are encountered, except inside semantic code blocks, like the RHS of a rule. 8.3.1. Single line comment To create single line comments, you can use '//'. The parser will ignore anything in the line after the comment symbol. Example: rule "Testing Comments"when // this is a single line comment eval( true ) // this is a comment in the same line of a patternthen // this is a comment inside a semantic code blockend Com ments"when // this is a single line comment eval( true ) // this is a comment in the same line of a patternthen // this is a comment inside a semantic code Rule Language Reference 232 Warning '#' for comments has been removed. 8.3.2. Multi-line comment Figure 8.1. Multi-line comment Multi-line comments are used to comment blocks of text, both in and outside semantic code blocks. Example: rule "Test Multi-line Comments"when /* this is a multi-line comment in the left hand side of a rule */ eval( true )then /* and this is a multi-line comment in the right hand side of a rule */end Com ments"when /* this is a multi-line comment in the left hand side of a rule */ eval( true )then /* and this is a multi-line comment in the right hand side of a rule */ 8.4. Error Messages Drools 5 introduces standardized error messages. This standardization aims to help users to find and resolve problems in a easier and faster way. In this section you will learn how to identify and interpret those error messages, and you will also receive some tips on how to solve the problems associated with them. 8.4.1. Message format The standardization includes the error message format and to better explain this format, let's use the following example: Figure 8.2. Error Message Format 1st Block: This area identifies the error code. 2nd Block: Line and column information. Rule Language Reference 233 3rd Block: Some text describing the problem. 4th Block: This is the first context. Usually indicates the rule, function, template or query where the error occurred. This block is not mandatory. 
5th Block: Identifies the pattern where the error occurred. This block is not mandatory. 8.4.2. Error Messages Description 8.4.2.1. 101: No viable alternative Indicates the most common errors, where the parser came to a decision point but couldn't identify an alternative. Here are some examples: Example 8.2. 1: rule one 2: when 3: exists Foo() 4: exits Bar() 5: then 6: end The above example generates this message: •[ERR 101] Line 4:4 no viable alternative at input 'exits' in rule one At first glance this seems to be valid syntax, but it is not (exits != exists). Let's take a look at next example: Example 8.3. 1: package org.drools.examples;2: rule3: when4: Object()5: then6: System.out.println("A RHS");7: end org.drools.examples;2: rule3: when4: Object()5: then6: System.out.println("A RHS");7: Now the above code generates this message: •[ERR 101] Line 3:2 no viable alternative at input 'WHEN' This message means that the parser encountered the token WHEN, actually a hard keyword, but it's in the wrong place since the the rule name is missing. Rule Language Reference 234 The error "no viable alternative" also occurs when you make a simple lexical mistake. Here is a sample of a lexical problem: Example 8.4. 1: rule simple_rule 2: when 3: Student( name == "Andy ) 4: then 5: end The above code misses to close the quotes and because of this the parser generates this error message: •[ERR 101] Line 0:-1 no viable alternative at input '' in rule simple_rule in pattern Student Note Usually the Line and Column information are accurate, but in some cases (like unclosed quotes), the parser generates a 0:-1 position. In this case you should check whether you didn't forget to close quotes, apostrophes or parentheses. 8.4.2.2. 102: Mismatched input This error indicates that the parser was looking for a particular symbol that it didn't #nd at the current input position. Here are some samples: Example 8.5. 1: rule simple_rule 2: when 3: foo3 : Bar( The above example generates this message: •[ERR 102] Line 0:-1 mismatched input '' expecting ')' in rule simple_rule in pattern Bar To fix this problem, it is necessary to complete the rule statement. Note Usually when you get a 0:-1 position, it means that parser reached the end of source. Rule Language Reference 235 The following code generates more than one error message: Example 8.6. 1: package org.drools.examples;2:3: rule "Avoid NPE on wrong syntax"4: when5: not( Cheese( ( type == "stilton", price == 10 ) || ( type == "brie", price == 15 ) ) from $cheeseList )6: then7: System.out.println("OK");8: end org.drools.examples; 2:3: rule "Avoid NPE on wrong syntax"4: when5: not( Cheese( ( type == "stilton", price == 10 ) || ( type == "brie", price == 15 ) ) from $cheeseList )6: then7: System.out.println("OK");8: These are the errors associated with this source: •[ERR 102] Line 5:36 mismatched input ',' expecting ')' in rule "Avoid NPE on wrong syntax" in pattern Cheese •[ERR 101] Line 5:57 no viable alternative at input 'type' in rule "Avoid NPE on wrong syntax" •[ERR 102] Line 5:106 mismatched input ')' expecting 'then' in rule "Avoid NPE on wrong syntax" Note that the second problem is related to the first. To fix it, just replace the commas (',') by AND operator ('&&'). Note In some situations you can get more than one error message. Try to fix one by one, starting at the first one. Some error messages are generated merely as con- sequences of other errors. 8.4.2.3. 103: Failed predicate A validating semantic predicate evaluated to false. 
Usually these semantic predicates are used to identify soft keywords. This sample shows exactly this situation: Example 8.7. 1: package nesting; 2: dialect "mvel" 3: 4: import org.drools.compiler.Person 5: import org.drools.compiler.Address 6: 7: nesting; 2: dialect "mvel" 3: 4: import org.drools.compiler.Person 5: import Rule Language Reference 236 org.drools.compiler.Address 6: fdsfdsfds 8: 9: rule "test something" 10: when 11: p: Person( name=="Michael" ) 12: then 13: p.name = "other"; 14: System.out.println(p.name); 15: end With this sample, we get this error message: •[ERR 103] Line 7:0 rule 'rule_key' failed predicate: {(validateIdentifierKey(DroolsSoftKeywords.RULE))}? in rule The fdsfdsfds text is invalid and the parser couldn't identify it as the soft keyword rule. Note This error is very similar to 102: Mismatched input, but usually involves soft key- words. 8.4.2.4. 104: Trailing semi-colon not allowed This error is associated with the eval clause, where its expression may not be terminated with a semicolon. Check this example: Example 8.8. 1: rule simple_rule 2: when 3: eval(abc();) 4: then 5: end Due to the trailing semicolon within eval, we get this error message: •[ERR 104] Line 3:4 trailing semi-colon not allowed in rule simple_rule This problem is simple to fix: just remove the semi-colon. 8.4.2.5. 105: Early Exit The recognizer came to a subrule in the grammar that must match an alternative at least once, but the subrule did not match anything. Simply put: the parser has entered a branch from where there is no way out. This example illustrates it: Rule Language Reference 237 Example 8.9. 1: template test_error2: aa s 11;3: end test_error2: aa s 11;3: This is the message associated to the above sample: •[ERR 105] Line 2:2 required (...)+ loop did not match anything at input 'aa' in template test_error To fix this problem it is necessary to remove the numeric value as it is neither a valid data type which might begin a new template slot nor a possible start for any other rule file construct. 8.4.3. Other Messages Any other message means that something bad has happened, so please contact the development team. 8.5. Package A package is a collection of rules and other related constructs, such as imports and globals. The package members are typically related to each other - perhaps HR rules, for instance. A package represents a namespace, which ideally is kept unique for a given grouping of rules. The package name itself is the namespace, and is not related to files or folders in any way. It is possible to assemble rules from multiple rule sources, and have one top level package config- uration that all the rules are kept under (when the rules are assembled). Although, it is not possible to merge into the same package resources declared under different names. A single Rulebase may, however, contain multiple packages built on it. A common structure is to have all the rules for a package in the same file as the package declaration (so that is it entirely self-contained). The following railroad diagram shows all the components that may make up a package. Note that a package must have a namespace and be declared using standard Java conventions for package names; i.e., no spaces, unlike rule names which allow spaces. In terms of the order of elements, they can appear in any order in the rule file, with the exception of the package statement, which must be at the top of the file. In all cases, the semicolons are optional. Rule Language Reference 238 Figure 8.3. 
package Notice that any rule attribute (as described the section Rule Attributes) may also be written at package level, superseding the attribute's default value. The modified default may still be replaced by an attribute setting within a rule. 8.5.1. import Figure 8.4. import Import statements work like import statements in Java. You need to specify the fully qualified paths and type names for any objects you want to use in the rules. Drools automatically imports classes from the Java package of the same name, and also from the package java.lang. 8.5.2. global Figure 8.5. global Rule Language Reference 239 With global you define global variables. They are used to make application objects available to the rules. Typically, they are used to provide data or services that the rules use, especially application services used in rule consequences, and to return data from the rules, like logs or values added in rule consequences, or for the rules to interact with the application, doing callbacks. Globals are not inserted into the Working Memory, and therefore a global should never be used to establish conditions in rules except when it has a constant immutable value. The engine cannot be notified about value changes of globals and does not track their changes. Incorrect use of globals in constraints may yield surprising results - surprising in a bad way. If multiple packages declare globals with the same identifier they must be of the same type and all of them will reference the same global value. In order to use globals you must: 1. Declare your global variable in your rules file and use it in rules. Example: global java.util.List myGlobalList;rule "Using a global"when eval( true )then myGlobalList.add( "Hello World" );end myGlobalList;rule "Using a global"when eval( true )then myGlobalList.add( "Hello 2. Set the global value on your working memory. It is a best practice to set all global values before asserting any fact to the working memory. Example: List list = new ArrayList(); KieSession kieSession = kiebase.newKieSession(); kieSession.setGlobal( "myGlobalList", list ); Note that these are just named instances of objects that you pass in from your application to the working memory. This means you can pass in any object you want: you could pass in a service locator, or perhaps a service itself. With the new from element it is now common to pass a Hibernate session as a global, to allow from to pull data from a named Hibernate query. One example may be an instance of a Email service. In your integration code that is calling the rule engine, you obtain your emailService object, and then set it in the working memory. In the DRL, you declare that you have a global of type EmailService, and give it the name "email". Then in your rule consequences, you can use things like email.sendSMS(number, message). Globals are not designed to share data between rules and they should never be used for that purpose. Rules always reason and react to the working memory state, so if you want to pass data from rule to rule, assert the data as facts into the working memory. Rule Language Reference 240 Care must be taken when changing data held by globals because the rule engine is not aware of those changes, hence cannot react to them. 8.6. Function Figure 8.6. function Functions are a way to put semantic code in your rule source file, as opposed to in normal Java classes. They can't do anything more than what you can do with helper classes. 
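Putting the two steps together, a minimal end-to-end sketch might look as follows. It assumes a classpath kjar with a default KieSession defined in its kmodule.xml and a DRL that declares global java.util.List myGlobalList, as in the example above; those packaging details are assumptions, not part of the API being demonstrated.

import java.util.ArrayList;
import java.util.List;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class GlobalSetupExample {

    public static void main(String[] args) {
        // Assumes a kmodule.xml that defines a default KieSession.
        KieContainer kContainer = KieServices.Factory.get().getKieClasspathContainer();
        KieSession kieSession = kContainer.newKieSession();

        // Best practice: set all globals before asserting any fact.
        List<String> myGlobalList = new ArrayList<>();
        kieSession.setGlobal( "myGlobalList", myGlobalList );

        kieSession.fireAllRules();

        // Rules such as "Using a global" above may have appended entries to the list.
        System.out.println( myGlobalList );

        kieSession.dispose();
    }
}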
(In fact, the compiler generates the helper class for you behind the scenes.) The main advantage of using functions in a rule is that you can keep the logic all in one place, and you can change the functions as needed (which can be a good or a bad thing). Functions are most useful for invoking actions on the consequence (then) part of a rule, especially if that particular action is used over and over again, perhaps with only differing parameters for each rule. A typical function declaration looks like: function String hello(String name) { return "Hello "+name+"!";} { return "Hello "+name +"!"; Note that the function keyword is used, even though its not really part of Java. Parameters to the function are defined as for a method, and you don't have to have parameters if they are not needed. The return type is defined just like in a regular method. Alternatively, you could use a static method in a helper class, e.g., Foo.hello(). Drools supports the use of function imports, so all you would need to do is: import function my.package.Foo.hello Rule Language Reference 241 Irrespective of the way the function is defined or imported, you use a function by calling it by its name, in the consequence or inside a semantic code block. Example: rule "using a static function"when eval( true )then System.out.println( hello( "Bob" ) );end ic function"when eval( true )then System.out.println( hello( "Bob" 8.7. Type Declaration Figure 8.7. meta_data Rule Language Reference 242 Figure 8.8. type_declaration Type declarations have two main goals in the rules engine: to allow the declaration of new types, and to allow the declaration of metadata for types. • Declaring new types: Drools works out of the box with plain Java objects as facts. Sometimes, however, users may want to define the model directly to the rules engine, without worrying about creating models in a lower level language like Java. At other times, there is a domain model already built, but eventually the user wants or needs to complement this model with additional entities that are used mainly during the reasoning process. • Declaring metadata: facts may have meta information associated to them. Examples of meta information include any kind of data that is not represented by the fact attributes and is consistent among all instances of that fact type. This meta information may be queried at runtime by the engine and used in the reasoning process. 8.7.1. Declaring New Types To declare a new type, all you need to do is use the keyword declare, followed by the list of fields, and the keyword end. A new fact must have a list of fields, otherwise the engine will look for an existing fact class in the classpath and raise an error if not found. Rule Language Reference 243 Example 8.10. Declaring a new fact type: Address declare Address number : int streetName : String city : String dress number : int streetName : String city : end The previous example declares a new fact type called Address. This fact type will have three attributes: number, streetName and city. Each attribute has a type that can be any valid Java type, including any other class created by the user or even other fact types previously declared. For instance, we may want to declare another fact type Person: Example 8.11. 
declaring a new fact type: Person declare Person name : String dateOfBirth : java.util.Date address : Address son name : String dateOfBirth : java.util.Date address : end As we can see on the previous example, dateOfBirth is of type java.util.Date, from the Java API, while address is of the previously defined fact type Address. You may avoid having to write the fully qualified name of a class every time you write it by using the import clause, as previously discussed. Example 8.12. Avoiding the need to use fully qualified class names by using import import java.util.Date declare Person name : String dateOfBirth : Date address : Address end When you declare a new fact type, Drools will, at compile time, generate bytecode that implements a Java class representing the fact type. The generated Java class will be a one-to-one Java Bean mapping of the type definition. So, for the previous example, the generated Java class would be: Rule Language Reference 244 Example 8.13. generated Java class for the previous Person fact type declaration public class Person implements Serializable { private String name; private java.util.Date dateOfBirth; private Address address; // empty constructor public Person() {...} // constructor with all fields public Person( String name, Date dateOfBirth, Address address ) {...} // if keys are defined, constructor with keys public Person( ...keys... ) {...} // getters and setters // equals/hashCode // toString } Since the generated class is a simple Java class, it can be used transparently in the rules, like any other fact. Example 8.14. Using the declared types in rules rule "Using a declared Type" when $p : Person( name == "Bob" ) then // Insert Mark, who is Bob's mate. Person mark = new Person(); mark.setName("Mark"); insert( mark ); end 8.7.1.1. Declaring enumerative types DRL also supports the declaration of enumerative types. Such type declarations require the ad- ditional keyword enum, followed by a comma separated list of admissible values terminated by a semicolon. Example 8.15. declare enum DaysOfWeek SUN,MON,TUE,WED,THU,FRI,SAT; Rule Language Reference 245 end The compiler will generate a valid Java enum, with static methods valueOf() and values(), as well as instance methods ordinal(), compareTo() and name(). Complex enums are also partially supported, declaring the internal fields similarly to a regular type declaration. Notice that as of version 6.x, enum fields do NOT support other declared types or enums Example 8.16. declare enum DaysOfWeek SUN("Sunday"),MON("Monday"),TUE("Tuesday"),WED("Wednesday"),THU("Thursday"),FRI("Friday"),SAT("Saturday"); fullName : String end Enumeratives can then be used in rules Example 8.17. Using declarative enumerations in rules rule "Using a declared Enum" when $p : Employee( dayOff == DaysOfWeek.MONDAY ) then ... end 8.7.2. Declaring Metadata Metadata may be assigned to several different constructions in Drools: fact types, fact attributes and rules. Drools uses the at sign ('@') to introduce metadata, and it always uses the form: @metadata_key( metadata_value ) The parenthesized metadata_value is optional. For instance, if you want to declare a metadata attribute like author, whose value is Bob, you could simply write: Rule Language Reference 246 Example 8.18. Declaring a metadata attribute @author( Bob ) Drools allows the declaration of any arbitrary metadata attribute, but some will have special mean- ing to the engine, while others are simply available for querying at runtime. 
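As a closing note on declared types before continuing with metadata: because the class for a declared type is generated when the knowledge base is compiled, application Java code does not usually reference it directly but goes through the FactType API instead. A minimal sketch follows, assuming the Person type declared earlier lives in a package named org.drools.examples (the package name is illustrative).

import org.kie.api.KieBase;
import org.kie.api.definition.type.FactType;
import org.kie.api.runtime.KieSession;

public class DeclaredTypeFromJava {

    public static void insertDeclaredPerson(KieBase kbase, KieSession ksession) throws Exception {
        // Look up the type generated from the DRL declaration.
        FactType personType = kbase.getFactType( "org.drools.examples", "Person" );

        // Instantiate it and set fields reflectively, by name.
        Object bob = personType.newInstance();
        personType.set( bob, "name", "Bob" );

        ksession.insert( bob );
        ksession.fireAllRules();
    }
}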
Drools allows the declaration of metadata both for fact types and for fact attributes. Any metadata that is declared before the attributes of a fact type is assigned to the fact type, while metadata declared after an attribute is assigned to that particular attribute.

Example 8.19. Declaring metadata attributes for fact types and attributes

import java.util.Date
declare Person
    @author( Bob )
    @dateOfCreation( 01-Feb-2009 )
    name : String @key @maxLength( 30 )
    dateOfBirth : Date
    address : Address
end

In the previous example, there are two metadata items declared for the fact type (@author and @dateOfCreation) and two more defined for the name attribute (@key and @maxLength). Please note that the @key metadata has no required value, and so the parentheses and the value were omitted.

8.7.2.1. Predefined class level annotations

Some annotations have predefined semantics that are interpreted by the engine. The following is a list of some of these predefined annotations and their meaning.

8.7.2.1.1. @role( )

The @role annotation defines how the engine should handle instances of that type: either as regular facts or as events. It accepts two possible values:
• fact : this is the default, declares that the type is to be handled as a regular fact.
• event : declares that the type is to be handled as an event.

The following example declares that the fact type StockTick in a stock broker application is to be handled as an event.

Example 8.20. declaring a fact type as an event

import some.package.StockTick
declare StockTick
    @role( event )
end

The same applies to facts declared inline. If StockTick was a fact type declared in the DRL itself, instead of a previously existing class, the code would be:

Example 8.21. declaring a fact type and assigning it the event role

declare StockTick
    @role( event )
    datetime : java.util.Date
    symbol : String
    price : double
end

8.7.2.1.2. @typesafe( )

By default all type declarations are compiled with type safety enabled; @typesafe( false ) provides a means to override this behaviour by permitting a fall-back to type-unsafe evaluation, where all constraints are generated as MVEL constraints and executed dynamically. This can be important when dealing with collections that do not have any generics, or with mixed type collections.

8.7.2.1.3. @timestamp( )

Every event has an associated timestamp assigned to it. By default, the timestamp for a given event is read from the Session Clock and assigned to the event at the time the event is inserted into the working memory. Sometimes, however, the event has the timestamp as one of its own attributes. In this case, the user may tell the engine to use the timestamp from the event's attribute instead of reading it from the Session Clock.

@timestamp( <attributeName> )

To tell the engine what attribute to use as the source of the event's timestamp, just list the attribute name as a parameter to the @timestamp tag.

Example 8.22. declaring the VoiceCall timestamp attribute

declare VoiceCall
    @role( event )
    @timestamp( callDateTime )
end

8.7.2.1.4. @duration( )

Drools supports both event semantics: point-in-time events and interval-based events. A point-in-time event is represented as an interval-based event whose duration is zero. By default, all events have duration zero. The user may assign a different duration to an event by declaring which attribute in the event type contains the duration of the event.

@duration( <attributeName> )

So, for our VoiceCall fact type, the declaration would be:

Example 8.23.
declaring the VoiceCall duration attribute

declare VoiceCall
    @role( event )
    @timestamp( callDateTime )
    @duration( callDuration )
end

8.7.2.1.5. @expires(