Changeset 7802:72d408a32b16 in orange
 Timestamp:
 04/02/11 22:45:14
 Branch:
 default
 Convert:
 10db50c800fcbb808366aeba541cb8d52895feaa
 Location:
 orange
 Files:

 4 edited
Legend:
 Unmodified
 Added
 Removed

orange/Orange/classification/rules.py
r7690 r7802 13 13 and rule-based classification methods. First, there is an implementation of the classic 14 14 `CN2 induction algorithm <http://www.springerlink.com/content/k6q2v76736w5039r/>`_. 15 The implementation of CN2 is modular, providing the op ortunity to change, specialize15 The implementation of CN2 is modular, providing the opportunity to change, specialize 16 16 and improve the algorithm. The implementation is thus based on the rule induction 17 17 framework that we describe below. … … 100 100 <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.1700>`_. In 101 101 Machine Learning - EWSL-91. Proceedings of the European Working Session on 102 Learning ., pages 151-163, Porto, Portugal, March 1991.102 Learning, pp 151-163, Porto, Portugal, March 1991. 103 103 * Lavrac, Kavsek, Flach, Todorovski: `Subgroup Discovery with CN2-SD 104 104 <http://jmlr.csail.mit.edu/papers/volume5/lavrac04a/lavrac04a.pdf>`_. Journal 105 105 of Machine Learning Research 5: 153-188, 2004. 106 107 108 Argument based CN2 109 ================== 110 111 Orange also supports argument-based CN2 learning. 112 113 .. autoclass:: Orange.classification.rules.ABCN2 114 :members: 115 :show-inheritance: 116 117 This class has many more undocumented methods; see the source code for 118 reference. 119 120 .. autoclass:: Orange.classification.rules.ABCN2Ordered 121 :members: 122 :show-inheritance: 123 124 .. autoclass:: Orange.classification.rules.ABCN2M 125 :members: 126 :show-inheritance: 127 128 This module has many more undocumented argument-based learning related classes; 129 see the source code for reference. 130 131 References 132 ---------- 133 134 * Bratko, Mozina, Zabkar. `Argument-Based Machine Learning 135 <http://www.springerlink.com/content/f41g17t1259006k4/>`_. Lecture Notes in 136 Computer Science: vol. 4203/2006, 11-17, 2006. 
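As an editorial aside, the separate-and-conquer covering strategy that this rule induction framework is built on can be sketched in self-contained plain Python. This is an illustrative sketch, not Orange's API: `separate_and_conquer`, `find_rule`, `stop_data` and `stop_rule` are hypothetical stand-ins for the framework's `rule_finder`, `data_stopping` and `rule_stopping` components described later in this file.

```python
# Illustrative sketch (not Orange code) of separate-and-conquer rule
# induction: repeatedly find the best rule, then remove the instances
# it covers, until a data- or rule-stopping criterion fires.

def separate_and_conquer(instances, find_rule, stop_data=None, stop_rule=None):
    """Return a list of rules (predicates) covering the instances."""
    rules = []
    remaining = list(instances)
    while remaining:
        if stop_data is not None and stop_data(remaining):
            break  # data stopping: nothing left worth covering
        rule = find_rule(remaining)
        if rule is None or (stop_rule is not None and stop_rule(rule, remaining)):
            break  # rule stopping: the last rule is not worth keeping
        covered = [x for x in remaining if rule(x)]
        if not covered:
            break  # guard: a rule covering nothing would loop forever
        rules.append(rule)
        # cover-and-remove: drop the instances the new rule covers
        remaining = [x for x in remaining if not rule(x)]
    return rules

# Toy usage: "rules" are parity predicates over integers; the finder
# proposes whichever parity covers most of the remaining data.
def find_rule(rest):
    odd = [x for x in rest if x % 2]
    if len(odd) >= len(rest) - len(odd):
        return lambda x: x % 2 == 1
    return lambda x: x % 2 == 0

rules = separate_and_conquer([1, 2, 3, 4, 5, 7, 9], find_rule)
print(len(rules))  # 2: one rule for the odds, then one for the evens
```

Because each iteration only sees the instances earlier rules left uncovered, later rules specialize on the remainder, which is why the resulting rule list is read in order (first matching rule wins).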
106 137 107 138 … … 135 166 IF TRUE THEN survived=yes<0.000, 5.000> 136 167 137 Notice that we first need to set the rule Finder component, because the default168 Notice that we first need to set the rule_finder component, because the default 138 169 components are not constructed when the learner is constructed, but only when 139 170 we run it on data. At that time, the algorithm checks which components are … … 166 197 each rule can be used as a classical Orange like 167 198 classifier. Must be of type :class:`Orange.classification.Classifier`. 168 By default, an instance of :class:`Orange.c ore.DefaultClassifier` is used.199 By default, an instance of :class:`Orange.classification.ConstantClassifier` is used. 169 200 170 201 .. attribute:: learner 171 202 172 203 learner to be used for making a classifier. Must be of type 173 :class:`Orange.c ore.learner`. By default,174 :class:`Orange.c ore.MajorityLearner` is used.175 176 .. attribute:: class Distribution204 :class:`Orange.classification.Learner`. By default, 205 :class:`Orange.classification.majority.MajorityLearner` is used. 206 207 .. attribute:: class_distribution 177 208 178 209 distribution of class in data instances covered by this rule 179 (:class:`Orange. core.Distribution`).210 (:class:`Orange.statistics.distribution.Distribution`). 180 211 181 212 .. attribute:: examples … … 183 214 data instances covered by this rule (:class:`Orange.data.Table`). 184 215 185 .. attribute:: weight ID216 .. attribute:: weight_id 186 217 187 218 ID of the weight meta-attribute for the stored data instances (int). … … 199 230 but, obviously, any other measure can be applied. 200 231 201 .. method:: filterAndStore(instances, weight ID=0, targetClass=-1)232 .. method:: filterAndStore(instances, weight_id=0, target_class=-1) 202 233 203 234 Filter passed data instances and store them in the attribute 'examples'. 
204 Also, compute class Distribution, set weight of stored examples and create235 Also, compute class_distribution, set weight of stored examples and create 205 236 a new classifier using 'learner' attribute. 206 237 207 :param weight ID: ID of the weight meta-attribute.208 :type weight ID: int209 :param target Class: index of target class; -1 for all.210 :type target Class: int238 :param weight_id: ID of the weight meta-attribute. 239 :type weight_id: int 240 :param target_class: index of target class; -1 for all. 241 :type target_class: int 211 242 212 243 Objects of this class can be invoked: 213 244 214 .. method:: __call__(instance, instances, weight ID=0, targetClass=-1)245 .. method:: __call__(instance, instances, weight_id=0, target_class=-1) 215 246 216 247 There are two ways of invoking this method. One way is only passing the … … 232 263 :type negate: bool 233 264 234 .. py:class:: Orange.classification.rules.RuleLearner(store Instances = true, targetClass = -1, baseRules = Orange.classification.rules.RuleList())235 236 Bases: :class:`Orange.c ore.Learner`265 .. py:class:: Orange.classification.rules.RuleLearner(store_instances = true, target_class = -1, base_rules = Orange.classification.rules.RuleList()) 266 267 Bases: :class:`Orange.classification.Learner` 237 268 238 269 A base rule induction learner. The algorithm follows separate-and-conquer 239 270 strategy, which has its origins in the AQ family of algorithms … … 249 280 .. 
parsed-literal:: 250 281 251 def \_\_call\_\_(self, instances, weight ID=0):252 rule List = Orange.classification.rules.RuleList()253 all Instances = Orange.data.Table(instances)254 while not self.\ **data Stopping**\ (instances, weightID, self.targetClass):255 new Rule = self.\ **ruleFinder**\ (instances, weightID, self.targetClass,256 self.base Rules)257 if self.\ **rule Stopping**\ (ruleList, newRule, instances, weightID):282 def \_\_call\_\_(self, instances, weight_id=0): 283 rule_list = Orange.classification.rules.RuleList() 284 all_instances = Orange.data.Table(instances) 285 while not self.\ **data_stopping**\ (instances, weight_id, self.target_class): 286 new_rule = self.\ **rule_finder**\ (instances, weight_id, self.target_class, 287 self.base_rules) 288 if self.\ **rule_stopping**\ (rule_list, new_rule, instances, weight_id): 258 289 break 259 instances, weight ID = self.\ **coverAndRemove**\ (newRule, instances,260 weight ID, self.targetClass)261 rule List.append(newRule)290 instances, weight_id = self.\ **cover_and_remove**\ (new_rule, instances, 291 weight_id, self.target_class) 292 rule_list.append(new_rule) 262 293 return Orange.classification.rules.RuleClassifier_FirstRule( 263 rules=rule List, instances=allInstances)294 rules=rule_list, instances=all_instances) 264 295 265 The four customizable components here are the invoked data Stopping,266 rule Finder, coverAndRemove and ruleStopping objects. By default, components296 The four customizable components here are the invoked data_stopping, 297 rule_finder, cover_and_remove and rule_stopping objects. By default, components 267 298 of the original CN2 algorithm will be used, but this can be changed by 268 299 modifying those attributes: 269 300 270 .. attribute:: data Stopping301 .. attribute:: data_stopping 271 302 272 303 an object of class … 278 309 returns True if there are no more instances of given class. 279 310 280 .. 
attribute:: rule_stopping 281 312 282 313 an object of class … 284 315 that decides from the last rule learned if it is worthwhile to use the 285 316 rule and learn more rules. By default, no rule stopping criteria is 286 used (rule Stopping==None), thus accepting all rules.317 used (rule_stopping==None), thus accepting all rules. 287 318 288 .. attribute:: cover AndRemove319 .. attribute:: cover_and_remove 289 320 290 321 an object of class … 294 325 (:class:`Orange.classification.rules.RuleCovererAndRemover_Default`) 295 326 only removes the instances that belong to given target class, except if 296 it is not given (ie. target Class==-1).297 298 .. attribute:: rule Finder327 it is not given (ie. target_class==-1). 328 329 .. attribute:: rule_finder 299 330 300 331 an object of class … 305 336 Constructor can be given the following parameters: 306 337 307 :param store Instances: if set to True, the rules will have data instances338 :param store_instances: if set to True, the rules will have data instances 308 339 stored. 309 :type store Instances: bool310 311 :param target Class: index of a specific class being learned; -1 for all.312 :type target Class: int313 314 :param base Rules: Rules that we would like to use in ruleFinder to340 :type store_instances: bool 341 342 :param target_class: index of a specific class being learned; -1 for all. 343 :type target_class: int 344 345 :param base_rules: Rules that we would like to use in rule_finder to 315 346 constrain the learning space. If not set, it will be set to a set 316 347 containing only an empty rule. 317 :type base Rules: :class:`Orange.classification.rules.RuleList`348 :type base_rules: :class:`Orange.classification.rules.RuleList` 318 349 319 350 Rule finders … 327 358 Rule finders are invokable in the following manner: 328 359 329 .. method:: __call__(table, weight ID, targetClass, baseRules)360 .. 
method:: __call__(table, weight_id, target_class, base_rules) 330 361 331 362 Return a new rule, induced from instances in the given table. … … 334 365 :param table: data instances to learn from. 335 366 :type table: :class:`Orange.data.Table` 336 :param weight ID: ID of the weight meta-attribute for the stored data367 :param weight_id: ID of the weight meta-attribute for the stored data 337 368 instances. 338 :type weight ID: int369 :type weight_id: int 339 370 340 :param target Class: index of a specific class being learned; -1 for all.341 :type target Class: int371 :param target_class: index of a specific class being learned; -1 for all. 372 :type target_class: int 342 373 343 :param base Rules: Rules that we would like to use in ruleFinder to374 :param base_rules: Rules that we would like to use in rule_finder to 344 375 constrain the learning space. If not set, it will be set to a set 345 376 containing only an empty rule. 346 :type base Rules: :class:`Orange.classification.rules.RuleList`377 :type base_rules: :class:`Orange.classification.rules.RuleList` 347 378 348 379 .. class:: Orange.classification.rules.RuleBeamFinder … … 355 386 .. parsed-literal:: 356 387 357 def \_\_call\_\_(self, table, weight ID, targetClass, baseRules):358 prior = orange.Distribution(table.domain.classVar, table, weightID)359 rules Star, bestRule = self.\ **initializer**\ (table, weightID, targetClass, baseRules, self.evaluator, prior)360 \# compute quality of rules in rules Star and bestRule388 def \_\_call\_\_(self, table, weight_id, target_class, base_rules): 389 prior = Orange.statistics.distribution.Distribution(table.domain.class_var, table, weight_id) 390 rules_star, best_rule = self.\ **initializer**\ (table, weight_id, target_class, base_rules, self.evaluator, prior) 391 \# compute quality of rules in rules_star and best_rule 361 392 ...
362 while len(rules Star) \> 0:363 candidates, rules Star = self.\ **candidateSelector**\ (rulesStar, table, weightID)393 while len(rules_star) \> 0: 394 candidates, rules_star = self.\ **candidate_selector**\ (rules_star, table, weight_id) 364 395 for cand in candidates: 365 new Rules = self.\ **refiner**\ (cand, table, weightID, targetClass)366 for new Rule in newRules:367 if self.\ **rule StoppingValidator**\ (newRule, table, weightID, targetClass, cand.classDistribution):368 new Rule.quality = self.\ **evaluator**\ (newRule, table, weightID, targetClass, prior)369 rules Star.append(newRule)370 if self.\ **validator**\ (new Rule, table, weightID, targetClass, prior) and371 new Rule.quality \> bestRule.quality:372 best Rule = newRule373 rules Star = self.\ **ruleFilter**\ (rulesStar, table, weightID)374 return best Rule396 new_rules = self.\ **refiner**\ (cand, table, weight_id, target_class) 397 for new_rule in new_rules: 398 if self.\ **rule_stopping_validator**\ (new_rule, table, weight_id, target_class, cand.class_distribution): 399 new_rule.quality = self.\ **evaluator**\ (new_rule, table, weight_id, target_class, prior) 400 rules_star.append(new_rule) 401 if self.\ **validator**\ (new_rule, table, weight_id, target_class, prior) and 402 new_rule.quality \> best_rule.quality: 403 best_rule = new_rule 404 rules_star = self.\ **rule_filter**\ (rules_star, table, weight_id) 405 return best_rule 375 406 376 407 Bolded in the pseudocode are several exchangeable components, exposed as … … 381 412 an object of class 382 413 :class:`Orange.classification.rules.RuleBeamInitializer` 383 used to initialize rules Star and for selecting the414 used to initialize rules_star and for selecting the 384 415 initial best rule. By default 385 416 (:class:`Orange.classification.rules.RuleBeamInitializer_Default`), 386 base Rules are returned as starting rulesSet and the best from baseRules387 is set as best Rule. 
If baseRules are not set, this class will return388 rules Star with rule that covers all instances (has no selectors) and389 this rule will be also used as best Rule.390 391 .. attribute:: candidate Selector417 base_rules are returned as starting rulesSet and the best from base_rules 418 is set as best_rule. If base_rules are not set, this class will return 419 rules_star with rule that covers all instances (has no selectors) and 420 this rule will be also used as best_rule. 421 422 .. attribute:: candidate_selector 392 423 393 424 an object of class 394 425 :class:`Orange.classification.rules.RuleBeamCandidateSelector` 395 426 used to separate a subset from the current 396 rules Star and return it. These rules will be used in the next427 rules_star and return it. These rules will be used in the next 397 428 specification step. Default component (an instance of 398 429 :class:`Orange.classification.rules.RuleBeamCandidateSelector_TakeAll`) 399 takes all rules in rules Star430 takes all rules in rules_star 400 431 401 432 .. attribute:: refiner … 408 439 a conjunctive selector to selectors present in the rule. 409 440 410 .. attribute:: rule Filter441 .. attribute:: rule_filter 411 442 412 443 an object of class … 416 447 :class:`Orange.classification.rules.RuleBeamFilter_Width`\ *(m=5)*\ . 417 448 418 .. method:: __call__(data, weight ID, targetClass, baseRules)449 .. method:: __call__(data, weight_id, target_class, base_rules) 419 450 420 451 Determines the next best rule to cover the remaining data instances. … 423 454 :type data: :class:`Orange.data.Table` 424 455 425 :param weight ID: index of the weight meta-attribute.426 :type weight ID: int427 428 :param target Class: index of the target class.429 :type target Class: int430 431 :param base Rules: existing rules.432 :type base Rules: :class:`Orange.classification.rules.RuleList`456 :param weight_id: index of the weight meta-attribute. 
457 :type weight_id: int 458 459 :param target_class: index of the target class. 460 :type target_class: int 461 462 :param base_rules: existing rules. 463 :type base_rules: :class:`Orange.classification.rules.RuleList` 433 464 434 465 Rule evaluators … … 441 472 following manner: 442 473 443 .. method:: __call__(rule, instances, weight ID, targetClass, prior)474 .. method:: __call__(rule, instances, weight_id, target_class, prior) 444 475 445 476 Calculates a non-negative rule quality. … … 451 482 :type instances: :class:`Orange.data.Table` 452 483 453 :param weight ID: index of the weight meta-attribute.454 :type weight ID: int484 :param weight_id: index of the weight meta-attribute. 485 :type weight_id: int 455 486 456 :param target Class: index of target class of this rule.457 :type target Class: int487 :param target_class: index of target class of this rule. 488 :type target_class: int 458 489 459 490 :param prior: prior class distribution. 460 :type prior: :class:`Orange. core.Distribution`491 :type prior: :class:`Orange.statistics.distribution.Distribution` 461 492 462 493 .. autoclass:: Orange.classification.rules.LaplaceEvaluator … … 492 523 instances covered by the rule and return remaining instances. 493 524 494 .. method:: __call__(rule, instances, weights, target Class)525 .. method:: __call__(rule, instances, weights, target_class) 495 526 496 527 Calculates a non-negative rule quality. … … 505 536 :type weights: int 506 537 507 :param target Class: index of target class of this rule.508 :type target Class: int538 :param target_class: index of target class of this rule. 539 :type target_class: int 509 540 510 541 .. autoclass:: CovererAndRemover_MultWeights … … 515 546  516 547 517 .. automethod:: Orange.classification.rules.rule ToString548 .. automethod:: Orange.classification.rules.rule_to_string 518 549 519 550 .. 
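As an editorial aside, the rule quality measures behind the evaluator classes defined further down (Laplace, m-estimate, weighted relative accuracy) can be illustrated with self-contained plain Python. These are the common textbook forms of the formulas, not Orange code; the implementations below work on per-class distributions and differ in detail, and all names here are illustrative.

```python
# Textbook forms of the rule quality measures (illustrative, not the
# Orange API).  n = covered instances of the target class, N = all
# covered instances, prior = prior probability of the target class,
# total = size of the whole data set.

def laplace(n, N, classes=2):
    """Laplace's rule of succession: (n + 1) / (N + classes)."""
    return (n + 1.0) / (N + classes)

def m_estimate(n, N, prior, m=2):
    """m-estimate of probability: (n + m * prior) / (N + m)."""
    return (n + m * prior) / (N + m)

def wracc(n, N, total, prior):
    """Weighted relative accuracy: coverage * (precision - prior)."""
    return (N / float(total)) * (n / float(N) - prior)

# A rule covering 8 instances, 7 of them of the target class, in a
# data set of 40 instances with a 0.5 prior for the target class:
print(laplace(7, 8))                     # 0.8
print(m_estimate(7, 8, 0.5))             # 0.8
print(round(wracc(7, 8, 40, 0.5), 6))    # 0.075
```

The estimates shrink the raw precision 7/8 towards an uninformed value (1/classes for Laplace, the prior for the m-estimate), which penalizes rules that cover very few instances; WRACC additionally weights the gain over the prior by the rule's coverage.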
… … 528 559 """ 529 560 561 import random 562 import math 563 import operator 564 import numpy 565 566 import Orange 567 import Orange.core 530 568 from Orange.core import \ 531 569 AssociationClassifier, \ … … 561 599 RuleValidator, \ 562 600 RuleValidator_LRS 563 564 import Orange.core 565 import random 566 import math 567 568 601 from Orange.misc import deprecated_keywords 602 from Orange.misc import deprecated_members 603 604 from orngABML import * 605 606 607 @deprecated_members({"weightID": "weight_id", "targetClass": "target_class"}) 569 608 class LaplaceEvaluator(RuleEvaluator): 570 609 """ 571 610 Laplace's rule of succession. 572 611 """ 573 def __call__(self, rule, data, weight ID, targetClass, apriori):574 if not rule.class Distribution:612 def __call__(self, rule, data, weight_id, target_class, apriori): 613 if not rule.class_distribution: 575 614 return 0. 576 sumDist = rule.class Distribution.cases577 if not sumDist or (target Class>-1 and not rule.classDistribution[targetClass]):615 sumDist = rule.class_distribution.cases 616 if not sumDist or (target_class>-1 and not rule.class_distribution[target_class]): 578 617 return 0. 579 618 # get distribution 580 if target Class>-1:581 return (rule.class Distribution[targetClass]+1)/(sumDist+2)619 if target_class>-1: 620 return (rule.class_distribution[target_class]+1)/(sumDist+2) 582 621 else: 583 return (max(rule.classDistribution)+1)/(sumDist+len(data.domain.classVar.values)) 584 585 622 return (max(rule.class_distribution)+1)/(sumDist+len(data.domain.class_var.values)) 623 624 625 @deprecated_members({"weightID": "weight_id", "targetClass": "target_class"}) 586 626 class WRACCEvaluator(RuleEvaluator): 587 627 """ 588 628 Weighted relative accuracy. 589 629 """ 590 def __call__(self, rule, data, weight ID, targetClass, apriori):591 if not rule.class Distribution:630 def __call__(self, rule, data, weight_id, target_class, apriori): 631 if not rule.class_distribution: 592 632 return 0. 
593 sumDist = rule.class Distribution.cases594 if not sumDist or (target Class>-1 and not rule.classDistribution[targetClass]):633 sumDist = rule.class_distribution.cases 634 if not sumDist or (target_class>-1 and not rule.class_distribution[target_class]): 595 635 return 0. 596 636 # get distribution 597 if target Class>-1:598 pRule = rule.class Distribution[targetClass]/apriori[targetClass]599 pTruePositive = rule.class Distribution[targetClass]/sumDist600 pClass = apriori[target Class]/apriori.cases637 if target_class>-1: 638 pRule = rule.class_distribution[target_class]/apriori[target_class] 639 pTruePositive = rule.class_distribution[target_class]/sumDist 640 pClass = apriori[target_class]/apriori.cases 601 641 else: 602 642 pRule = sumDist/apriori.cases 603 pTruePositive = max(rule.class Distribution)/sumDist604 pClass = apriori[rule.class Distribution.modus()]/sum(apriori)643 pTruePositive = max(rule.class_distribution)/sumDist 644 pClass = apriori[rule.class_distribution.modus()]/sum(apriori) 605 645 if pTruePositive>pClass: 606 646 return pRule*(pTruePositive-pClass) … 608 648 609 649 650 @deprecated_members({"weightID": "weight_id", "targetClass": "target_class"}) 610 651 class MEstimateEvaluator(RuleEvaluator): 611 652 """ … 618 659 def __init__(self, m=2): 619 660 self.m = m 620 def __call__(self, rule, data, weight ID, targetClass, apriori):621 if not rule.class Distribution:661 def __call__(self, rule, data, weight_id, target_class, apriori): 662 if not rule.class_distribution: 622 663 return 0. 623 sumDist = rule.class Distribution.abs664 sumDist = rule.class_distribution.abs 624 665 if self.m == 0 and not sumDist: 625 666 return 0. 
626 667 # get distribution 627 if target Class>-1:628 p = rule.class Distribution[targetClass]+self.m*apriori[targetClass]/apriori.abs629 p = p / (rule.class Distribution.abs + self.m)668 if target_class>-1: 669 p = rule.class_distribution[target_class]+self.m*apriori[target_class]/apriori.abs 670 p = p / (rule.class_distribution.abs + self.m) 630 671 else: 631 p = max(rule.class Distribution)+self.m*apriori[rule.\632 class Distribution.modus()]/apriori.abs633 p = p / (rule.class Distribution.abs + self.m)672 p = max(rule.class_distribution)+self.m*apriori[rule.\673 class_distribution.modus()]/apriori.abs 674 p = p / (rule.class_distribution.abs + self.m) 634 675 return p 635 676 636 677 678 @deprecated_members({"beamWidth": "beam_width", 679 "ruleFinder": "rule_finder", 680 "ruleStopping": "rule_stopping", 681 "dataStopping": "data_stopping", 682 "coverAndRemove": "cover_and_remove", 683 "ruleFinder": "rule_finder", 684 "storeInstances": "store_instances", 685 "targetClass": "target_class", 686 "baseRules": "base_rules", 687 "weightID": "weight_id"}) 637 688 class CN2Learner(RuleLearner): 638 689 """ … … 649 700 By default, entropy is used as a measure. 650 701 :type evaluator: :class:`Orange.classification.rules.RuleEvaluator` 651 :param beam Width: width of the search beam.652 :type beam Width: int702 :param beam_width: width of the search beam. 703 :type beam_width: int 653 704 :param alpha: significance level of the statistical test to determine 654 705 whether rule is good enough to be returned by rulefinder. 
Likelihood … … 659 710 """ 660 711 661 def __new__(cls, instances=None, weight ID=0, **kwargs):712 self = RuleLearner.__new__(cls, **kwargs) 663 714 if instances is not None: 664 715 self.__init__(**kwargs) 665 return self.__call__(instances, weight ID)716 return self.__call__(instances, weight_id) 666 717 else: 667 718 return self 668 719 669 def __init__(self, evaluator = RuleEvaluator_Entropy(), beam Width = 5,720 def __init__(self, evaluator = RuleEvaluator_Entropy(), beam_width = 5, 670 721 alpha = 1.0, **kwds): 671 722 self.__dict__.update(kwds) 672 self.rule Finder = RuleBeamFinder()673 self.rule Finder.ruleFilter = RuleBeamFilter_Width(width = beamWidth)674 self.rule Finder.evaluator = evaluator675 self.rule Finder.validator = RuleValidator_LRS(alpha = alpha)723 self.rule_finder = RuleBeamFinder() 724 self.rule_finder.ruleFilter = RuleBeamFilter_Width(width = beam_width) 725 self.rule_finder.evaluator = evaluator 726 self.rule_finder.validator = RuleValidator_LRS(alpha = alpha) 676 727 677 728 def __call__(self, instances, weight=0): … … 683 734 684 735 736 @deprecated_members({"resultType": "result_type", "beamWidth": "beam_width"}) 685 737 class CN2Classifier(RuleClassifier): 686 738 """ … … 699 751 :type instances: :class:`Orange.data.Table` 700 752 701 :param weightID: ID of the weight meta-attribute. 702 :type weightID: int 703 704 """ 705 706 def __init__(self, rules=None, instances=None, weightID = 0, **argkw): 753 :param weight_id: ID of the weight meta-attribute. 
754 :type weight_id: int 755 756 """ 757 758 @deprecated_keywords({"examples": "instances"}) 759 def __init__(self, rules=None, instances=None, weight_id = 0, **argkw): 707 760 self.rules = rules 708 761 self.examples = instances 709 self.weight ID = weightID710 self.class Var = None if instances is None else instances.domain.classVar762 self.weight_id = weight_id 763 self.class_var = None if instances is None else instances.domain.class_var 711 764 self.__dict__.update(argkw) 712 765 if instances is not None: 713 self.prior = Orange. core.Distribution(instances.domain.classVar,instances)766 self.prior = Orange.statistics.distribution.Distribution(instances.domain.class_var,instances) 714 767 715 768 def __call__(self, instance, result_type=Orange.classification.Classifier.GetValue): … … 730 783 if r(instance) and r.classifier: 731 784 classifier = r.classifier 732 classifier.defaultDistribution = r.class Distribution785 classifier.defaultDistribution = r.class_distribution 733 786 break 734 787 if not classifier: 735 classifier = Orange.c ore.DefaultClassifier(instance.domain.classVar,\788 classifier = Orange.classification.ConstantClassifier(instance.domain.class_var,\ 736 789 self.prior.modus()) 737 790 classifier.defaultDistribution = self.prior … … 740 793 return classifier(instance) 741 794 if result_type == Orange.classification.Classifier.GetProbabilities: 742 return classifier.default Distribution743 return (classifier(instance),classifier.default Distribution)795 return classifier.default_distribution 796 return (classifier(instance),classifier.default_distribution) 744 797 745 798 def __str__(self): 746 ret Str = ruleToString(self.rules[0])+" "+str(self.rules[0].\747 class Distribution)+"\n"799 ret_str = rule_to_string(self.rules[0])+" "+str(self.rules[0].\ 800 class_distribution)+"\n" 748 801 for r in self.rules[1:]: 749 retStr += "ELSE "+ruleToString(r)+" "+str(r.classDistribution)+"\n" 750 return retStr 751 752 802 ret_str += "ELSE 
"+rule_to_string(r)+" "+str(r.class_distribution)+"\n" 803 return ret_str 804 805 806 @deprecated_members({"beamWidth": "beam_width", 807 "ruleFinder": "rule_finder", 808 "ruleStopping": "rule_stopping", 809 "dataStopping": "data_stopping", 810 "coverAndRemove": "cover_and_remove", 811 "ruleFinder": "rule_finder", 812 "storeInstances": "store_instances", 813 "targetClass": "target_class", 814 "baseRules": "base_rules", 815 "weightID": "weight_id"}) 753 816 class CN2UnorderedLearner(RuleLearner): 754 817 """ … … 767 830 By default, Laplace's rule of succession is used as a measure. 768 831 :type evaluator: :class:`Orange.classification.rules.RuleEvaluator` 769 :param beam Width: width of the search beam.770 :type beam Width: int832 :param beam_width: width of the search beam. 833 :type beam_width: int 771 834 :param alpha: significance level of the statistical test to determine 772 835 whether rule is good enough to be returned by rulefinder. Likelihood … … 775 838 :type alpha: float 776 839 """ 777 def __new__(cls, instances=None, weight ID=0, **kwargs):840 def __new__(cls, instances=None, weight_id=0, **kwargs): 778 841 self = RuleLearner.__new__(cls, **kwargs) 779 842 if instances is not None: 780 843 self.__init__(**kwargs) 781 return self.__call__(instances, weight ID)844 return self.__call__(instances, weight_id) 782 845 else: 783 846 return self 784 847 785 def __init__(self, evaluator = RuleEvaluator_Laplace(), beam Width = 5,848 def __init__(self, evaluator = RuleEvaluator_Laplace(), beam_width = 5, 786 849 alpha = 1.0, **kwds): 787 850 self.__dict__.update(kwds) 788 self.ruleFinder = RuleBeamFinder() 789 self.ruleFinder.ruleFilter = RuleBeamFilter_Width(width = beamWidth) 790 self.ruleFinder.evaluator = evaluator 791 self.ruleFinder.validator = RuleValidator_LRS(alpha = alpha) 792 self.ruleFinder.ruleStoppingValidator = RuleValidator_LRS(alpha = 1.0) 793 self.ruleStopping = RuleStopping_Apriori() 794 self.dataStopping = 
RuleDataStoppingCriteria_NoPositives() 795 796 def __call__(self, instances, weight=0): 851 self.rule_finder = RuleBeamFinder() 852 self.rule_finder.ruleFilter = RuleBeamFilter_Width(width = beam_width) 853 self.rule_finder.evaluator = evaluator 854 self.rule_finder.validator = RuleValidator_LRS(alpha = alpha) 855 self.rule_finder.rule_stoppingValidator = RuleValidator_LRS(alpha = 1.0) 856 self.rule_stopping = RuleStopping_Apriori() 857 self.data_stopping = RuleDataStoppingCriteria_NoPositives() 858 859 @deprecated_keywords({"weight": "weight_id"}) 860 def __call__(self, instances, weight_id=0): 797 861 supervisedClassCheck(instances) 798 862 799 863 rules = RuleList() 800 self.rule Stopping.apriori = Orange.core.Distribution(instances.\801 domain.classVar,instances)864 self.rule_stopping.apriori = Orange.statistics.distribution.Distribution( 865 instances.domain.class_var,instances) 802 866 progress=getattr(self,"progressCallback",None) 803 867 if progress: 804 868 progress.start = 0.0 805 869 progress.end = 0.0 806 distrib = Orange. 
core.Distribution(instances.domain.classVar,\807 instances , weight)870 distrib = Orange.statistics.distribution.Distribution( 871 instances.domain.class_var, instances, weight_id) 808 872 distrib.normalize() 809 for target Class in instances.domain.classVar:873 for target_class in instances.domain.class_var: 810 874 if progress: 811 875 progress.start = progress.end 812 progress.end += distrib[target Class]813 self.target Class = targetClass814 cl = RuleLearner.__call__(self,instances,weight )876 progress.end += distrib[target_class] 877 self.target_class = target_class 878 cl = RuleLearner.__call__(self,instances,weight_id) 815 879 for r in cl.rules: 816 880 rules.append(r) 817 881 if progress: 818 882 progress(1.0,None) 819 return CN2UnorderedClassifier(rules, instances, weight )883 return CN2UnorderedClassifier(rules, instances, weight_id) 820 884 821 885 … … 836 900 :type instances: :class:`Orange.data.Table` 837 901 838 :param weightID: ID of the weight meta-attribute. 839 :type weightID: int 840 841 """ 842 def __init__(self, rules = None, instances = None, weightID = 0, **argkw): 902 :param weight_id: ID of the weight meta-attribute. 
+    :type weight_id: int
+
+    """
+
+    @deprecated_keywords({"examples": "instances"})
+    def __init__(self, rules = None, instances = None, weight_id = 0, **argkw):
         self.rules = rules
         self.examples = instances
-        self.weightID = weightID
-        self.classVar = instances.domain.classVar if instances is not None else None
+        self.weight_id = weight_id
+        self.class_var = instances.domain.class_var if instances is not None else None
         self.__dict__.update(argkw)
         if instances is not None:
-            self.prior = Orange.core.Distribution(instances.domain.classVar, instances)
-
-    def __call__(self, instance, result_type=Orange.core.GetValue, retRules = False):
+            self.prior = Orange.statistics.distribution.Distribution(
+                instances.domain.class_var, instances)
+
+    def __call__(self, instance, result_type=Orange.classification.Classifier.GetValue, retRules = False):
         """
         :param instance: instance to be classified.
…
         """
         def add(disc1, disc2, sumd):
-            disc = Orange.core.DiscDistribution(disc1)
+            disc = Orange.statistics.distribution.Discrete(disc1)
             sumdisc = sumd
             for i,d in enumerate(disc):
…
         # create empty distribution
-        retDist = Orange.core.DiscDistribution(self.examples.domain.classVar)
+        retDist = Orange.statistics.distribution.Discrete(self.examples.domain.class_var)
         covRules = RuleList()
         # iterate through instances - add distributions
         sumdisc = 0.
         for r in self.rules:
-            if r(instance) and r.classDistribution:
-                retDist, sumdisc = add(retDist, r.classDistribution, sumdisc)
+            if r(instance) and r.class_distribution:
+                retDist, sumdisc = add(retDist, r.class_distribution, sumdisc)
                 covRules.append(r)
         if not sumdisc:
…
         if sumdisc > 0.0:
-            for c in self.examples.domain.classVar:
+            for c in self.examples.domain.class_var:
                 retDist[c] /= sumdisc
         else:
…
         retStr = ""
         for r in self.rules:
-            retStr += ruleToString(r)+" "+str(r.classDistribution)+"\n"
+            retStr += rule_to_string(r)+" "+str(r.class_distribution)+"\n"
         return retStr
…
    By default, weighted relative accuracy is used.
    :type evaluator: :class:`Orange.classification.rules.RuleEvaluator`
-    :param beamWidth: width of the search beam.
-    :type beamWidth: int
+    :param beam_width: width of the search beam.
+    :type beam_width: int
    :param alpha: significance level of the statistical test to determine
    whether rule is good enough to be returned by rulefinder. Likelihood
…
    :type mult: float
    """
-    def __new__(cls, instances=None, weightID=0, **kwargs):
+    def __new__(cls, instances=None, weight_id=0, **kwargs):
         self = CN2UnorderedLearner.__new__(cls, **kwargs)
         if instances is not None:
             self.__init__(**kwargs)
-            return self.__call__(instances, weightID)
+            return self.__call__(instances, weight_id)
         else:
             return self
 
-    def __init__(self, evaluator = WRACCEvaluator(), beamWidth = 5,
+    def __init__(self, evaluator = WRACCEvaluator(), beam_width = 5,
                  alpha = 0.05, mult=0.7, **kwds):
         CN2UnorderedLearner.__init__(self, evaluator = evaluator,
-                                     beamWidth = beamWidth, alpha = alpha, **kwds)
-        self.coverAndRemove = CovererAndRemover_MultWeights(mult=mult)
+                                     beam_width = beam_width, alpha = alpha, **kwds)
+        self.cover_and_remove = CovererAndRemover_MultWeights(mult=mult)
 
     def __call__(self, instances, weight=0):
…
         classifier = CN2UnorderedLearner.__call__(self,instances,weight)
         for r in classifier.rules:
-            r.filterAndStore(oldInstances,weight,r.classifier.defaultVal)
+            r.filterAndStore(oldInstances,weight,r.classifier.default_val)
         return classifier
 
 
-
-# Main ABCN2 class
-class ABCN2(Orange.core.RuleLearner):
-    """COPIED&PASTED FROM orngABCN2 - REFACTOR AND DOCUMENT ASAP!
-    This is implementation of ABCN2 + EVC as evaluation + LRC classification.
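The CN2-SD learner above swaps hard example removal for `CovererAndRemover_MultWeights(mult=mult)`: instead of deleting covered examples, their weights are multiplied by `mult` (default 0.7), so overlapping rules can still be found. A minimal plain-Python sketch of that weighted-covering step, independent of the Orange API (all names here are illustrative):

```python
def mult_weight_covering(examples, weights, rule, mult=0.7):
    """Weighted covering as in CN2-SD: decay the weight of every example
    covered by `rule` instead of removing it (sketch, not the Orange API)."""
    return [w * mult if rule(e) else w for e, w in zip(examples, weights)]

# toy data: an "example" is a dict, a "rule" is a predicate
examples = [{"x": 1}, {"x": 2}, {"x": 3}]
weights = [1.0, 1.0, 1.0]
rule = lambda e: e["x"] >= 2
weights = mult_weight_covering(examples, weights, rule)
# covered examples keep a reduced weight and can still support later rules
```

Repeating the step on the same rule decays its examples geometrically (0.7, 0.49, ...), which is what eventually stops the learner from rediscovering it.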
-    """
-
-    def __init__(self, argumentID=0, width=5, m=2, opt_reduction=2, nsampling=100, max_rule_complexity=5,
+@deprecated_members({"beamWidth": "beam_width",
+                     "ruleFinder": "rule_finder",
+                     "ruleStopping": "rule_stopping",
+                     "dataStopping": "data_stopping",
+                     "coverAndRemove": "cover_and_remove",
+                     "storeInstances": "store_instances",
+                     "targetClass": "target_class",
+                     "baseRules": "base_rules",
+                     "weightID": "weight_id",
+                     "argumentID": "argument_id"})
+class ABCN2(RuleLearner):
+    """
+    This is an implementation of argument-based CN2 using EVC as evaluation
+    and LRC classification.
+
+    Rule learning parameters that can be passed to the constructor:
+
+    :param width: beam width (default 5).
+    :type width: int
+    :param learn_for_class: class for which to learn; None (default) if all
+        classes are to be learnt.
+    :param learn_one_rule: decides whether to learn one rule only (default
+        False).
+    :type learn_one_rule: boolean
+    :param analyse_argument: index of argument to analyse; -1 to learn normally
+        (default).
+    :type analyse_argument: int
+
+    The following evaluator related arguments are supported:
+
+    :param m: m for m-estimate to be corrected with EVC (default 2).
+    :type m: int
+    :param opt_reduction: type of EVC correction: 0=no correction,
+        1=pessimistic, 2=normal (default 2).
+    :type opt_reduction: int
+    :param nsampling: number of samples in estimating extreme value
+        distribution for EVC (default 100).
+    :type nsampling: int
+    :param evd: pre-given extreme value distributions.
+    :param evd_arguments: pre-given extreme value distributions for arguments.
+
+    These parameters control rule validation:
+
+    :param rule_sig: minimal rule significance (default 1.0).
+    :type rule_sig: float
+    :param att_sig: minimal attribute significance in rule (default 1.0).
+    :type att_sig: float
+    :param max_rule_complexity: maximum number of conditions in rule (default 5).
+    :type max_rule_complexity: int
+    :param min_coverage: minimal number of covered instances (default 5).
+    :type min_coverage: int
+
+    Probabilistic covering can be controlled using:
+
+    :param min_improved: minimal number of instances improved in probabilistic covering (default 1).
+    :type min_improved: int
+    :param min_improved_perc: minimal percentage of covered instances that need to be improved (default 0.0).
+    :type min_improved_perc: float
+
+    Finally, LRC (classifier) related parameters are:
+
+    :param add_sub_rules: decides whether to add sub-rules.
+    :type add_sub_rules: boolean
+    :param min_cl_sig: minimal significance of beta in classifier (default 0.5).
+    :type min_cl_sig: float
+    :param min_beta: minimal beta value (default 0.0).
+    :type min_beta: float
+    :param set_prefix_rules: decides whether ordered prefix rules should be
+        added (default False).
+    :type set_prefix_rules: boolean
+    :param alternative_learner: use rule-learner as a correction method for
+        other machine learning methods (default None).
+    """
+
+    def __init__(self, argument_id=0, width=5, m=2, opt_reduction=2, nsampling=100, max_rule_complexity=5,
                  rule_sig=1.0, att_sig=1.0, postpruning=None, min_quality=0., min_coverage=1, min_improved=1, min_improved_perc=0.0,
                  learn_for_class = None, learn_one_rule = False, evd=None, evd_arguments=None, prune_arguments=False, analyse_argument=-1,
                  alternative_learner = None, min_cl_sig = 0.5, min_beta = 0.0, set_prefix_rules = False, add_sub_rules = False,
                  **kwds):
-        """
-        Parameters:
-            General rule learning:
-                width ... beam width (default 5)
-                learn_for_class ... learn rules for one class? otherwise None
-                learn_one_rule ... learn one rule only?
-                analyse_argument ... learner only analyses argument with this index; if set to -1, then it learns normally
-
-            Evaluator related:
-                m ... m-estimate to be corrected with EVC (default 2)
-                opt_reduction ... types of EVC correction; 0=no correction, 1=pessimistic, 2=normal (default 2)
-                nsampling ... number of samples in estimating extreme value distribution (for EVC) (default 100)
-                evd ... pre-given extreme value distributions
-                evd_arguments ... pre-given extreme value distributions for arguments
-
-            Rule validation:
-                rule_sig ... minimal rule significance (default 1.0)
-                att_sig ... minimal attribute significance in rule (default 1.0)
-                max_rule_complexity ... maximum number of conditions in rule (default 5)
-                min_coverage ... minimal number of covered examples (default 5)
-
-            Probabilistic covering:
-                min_improved ... minimal number of examples improved in probabilistic covering (default 1)
-                min_improved_perc ... minimal percentage of covered examples that need to be improved (default 0.0)
-
-            Classifier (LRC) related:
-                add_sub_rules ... add sub-rules? (default False)
-                min_cl_sig ... minimal significance of beta in classifier (default 0.5)
-                min_beta ... minimal beta value (default 0.0)
-                set_prefix_rules ... should ordered prefix rules be added? (default False)
-                alternative_learner ... use rule-learner as a correction method for other machine learning methods (default None)
-        """
-
         # argument ID which is passed to abcn2 learner
-        self.argumentID = argumentID
+        self.argument_id = argument_id
         # learn for specific class only?
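The `m` parameter documented above is the m of the classic m-estimate of rule accuracy, which the mEVC evaluator then corrects for optimism. As a reference point, the uncorrected m-estimate blends observed accuracy with the class prior using m pseudo-examples; a small sketch (not the EVC-corrected Orange implementation):

```python
def m_estimate(covered_positive, covered_total, prior, m=2.0):
    """Plain m-estimate of rule accuracy: (p + m*prior) / (P + m).
    With m=0 this is the raw accuracy; large m pulls it toward the prior.
    (Sketch only; ABCN2 further corrects this value with EVC.)"""
    return (covered_positive + m * prior) / (covered_total + m)

# a rule covering 8 examples, 6 from the target class, class prior 0.5:
q = m_estimate(6, 8, 0.5, m=2.0)  # (6 + 1) / (8 + 2) = 0.7
```

The correction matters most for small rules: a rule covering a single positive example has raw accuracy 1.0 but m-estimate (1 + 1) / (1 + 2) ≈ 0.67 with m=2 and prior 0.5.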
         self.learn_for_class = learn_for_class
…
         self.postpruning = postpruning
         # rule finder
-        self.ruleFinder = Orange.core.RuleBeamFinder()
-        self.ruleFilter = Orange.core.RuleBeamFilter_Width(width=width)
+        self.rule_finder = RuleBeamFinder()
+        self.ruleFilter = RuleBeamFilter_Width(width=width)
         self.ruleFilter_arguments = ABBeamFilter(width=width)
         if max_rule_complexity - 1 < 0:
             max_rule_complexity = 10
-        self.ruleFinder.ruleStoppingValidator = Orange.core.RuleValidator_LRS(alpha = 1.0, min_quality = 0., max_rule_complexity = max_rule_complexity - 1, min_coverage=min_coverage)
-        self.refiner = Orange.core.RuleBeamRefiner_Selector()
-        self.refiner_arguments = SelectorAdder(discretizer = Orange.core.EntropyDiscretization(forceAttribute = 1,
+        self.rule_finder.rule_stoppingValidator = RuleValidator_LRS(alpha = 1.0, min_quality = 0., max_rule_complexity = max_rule_complexity - 1, min_coverage=min_coverage)
+        self.refiner = RuleBeamRefiner_Selector()
+        self.refiner_arguments = SelectorAdder(discretizer = Orange.feature.discretization.EntropyDiscretization(forceAttribute = 1,
                                                            maxNumberOfIntervals = 2))
         self.prune_arguments = prune_arguments
         # evc evaluator
         evdGet = Orange.core.EVDistGetter_Standard()
-        self.ruleFinder.evaluator = Orange.core.RuleEvaluator_mEVC(m=m, evDistGetter = evdGet, min_improved = min_improved, min_improved_perc = min_improved_perc)
-        self.ruleFinder.evaluator.returnExpectedProb = True
-        self.ruleFinder.evaluator.optimismReduction = opt_reduction
-        self.ruleFinder.evaluator.ruleAlpha = rule_sig
-        self.ruleFinder.evaluator.attributeAlpha = att_sig
-        self.ruleFinder.evaluator.validator = Orange.core.RuleValidator_LRS(alpha = 1.0, min_quality = min_quality, min_coverage=min_coverage, max_rule_complexity = max_rule_complexity - 1)
+        self.rule_finder.evaluator = RuleEvaluator_mEVC(m=m, evDistGetter = evdGet,
+                min_improved = min_improved, min_improved_perc = min_improved_perc)
+        self.rule_finder.evaluator.returnExpectedProb = True
+        self.rule_finder.evaluator.optimismReduction = opt_reduction
+        self.rule_finder.evaluator.ruleAlpha = rule_sig
+        self.rule_finder.evaluator.attributeAlpha = att_sig
+        self.rule_finder.evaluator.validator = RuleValidator_LRS(alpha = 1.0, min_quality = min_quality, min_coverage=min_coverage, max_rule_complexity = max_rule_complexity - 1)
 
         # learn stopping criteria
-        self.ruleStopping = None
-        self.dataStopping = Orange.core.RuleDataStoppingCriteria_NoPositives()
+        self.rule_stopping = None
+        self.data_stopping = RuleDataStoppingCriteria_NoPositives()
         # evd fitting
         self.evd_creator = EVDFitter(self,n=nsampling)
…
 
-    def __call__(self, examples, weightID=0):
+    def __call__(self, examples, weight_id=0):
         # initialize progress bar
         progress=getattr(self,"progressCallback",None)
…
             progress.start = 0.0
             progress.end = 0.0
-        distrib = Orange.core.Distribution(examples.domain.classVar, examples, weightID)
+        distrib = Orange.statistics.distribution.Distribution(
+            examples.domain.class_var, examples, weight_id)
         distrib.normalize()
 
         # we begin with an empty set of rules
-        all_rules = Orange.core.RuleList()
+        all_rules = RuleList()
 
         # then, iterate through all classes and learn rules for each class separately
-        for cl_i,cl in enumerate(examples.domain.classVar):
+        for cl_i,cl in enumerate(examples.domain.class_var):
             if progress:
                 step = distrib[cl] / 2.
…
 
             # rules for this class only
-            rules, arg_rules = Orange.core.RuleList(), Orange.core.RuleList()
+            rules, arg_rules = RuleList(), RuleList()
 
             # create dichotomous class
…
 
             # preparation of the learner (covering, evd, etc.)
-            self.prepare_settings(dich_data, weightID, cl_i, progress)
+            self.prepare_settings(dich_data, weight_id, cl_i, progress)
 
             # learn argumented rules first ...
-            self.turn_ABML_mode(dich_data, weightID, cl_i)
+            self.turn_ABML_mode(dich_data, weight_id, cl_i)
             # first specialize all unspecialized arguments
-            # dich_data = self.specialise_arguments(dich_data, weightID)
+            # dich_data = self.specialise_arguments(dich_data, weight_id)
             # comment: specialisation of arguments is within learning of an argumented rule;
             # this is now different from the published algorithm
…
                     continue
                 ae = aes[0]
-                rule = self.learn_argumented_rule(ae, dich_data, weightID) # target class is always first class (0)
+                rule = self.learn_argumented_rule(ae, dich_data, weight_id) # target class is always first class (0)
                 if not progress:
-                    print "learned rule", Orange.classification.rules.ruleToString(rule)
+                    print "learned rule", Orange.classification.rules.rule_to_string(rule)
                 if rule:
                     arg_rules.append(rule)
…
             # remove all examples covered by rules
 ##            for rule in rules:
-##                dich_data = self.remove_covered_examples(rule, dich_data, weightID)
+##                dich_data = self.remove_covered_examples(rule, dich_data, weight_id)
 ##            if progress:
 ##                progress(self.remaining_probability(dich_data),None)
…
             # learn normal rules on remaining examples
             if self.analyse_argument == -1:
-                self.turn_normal_mode(dich_data, weightID, cl_i)
+                self.turn_normal_mode(dich_data, weight_id, cl_i)
                 while dich_data:
                     # learn a rule
-                    rule = self.learn_normal_rule(dich_data, weightID, self.apriori)
+                    rule = self.learn_normal_rule(dich_data, weight_id, self.apriori)
                     if not rule:
                         break
                     if not progress:
-                        print "rule learned: ", Orange.classification.rules.ruleToString(rule), rule.quality
-                    dich_data = self.remove_covered_examples(rule, dich_data, weightID)
+                        print "rule learned: ", Orange.classification.rules.rule_to_string(rule), rule.quality
+                    dich_data = self.remove_covered_examples(rule, dich_data, weight_id)
                     if progress:
                         progress(self.remaining_probability(dich_data),None)
…
 
             for r in arg_rules:
-                dich_data = self.remove_covered_examples(r, dich_data, weightID)
+                dich_data = self.remove_covered_examples(r, dich_data, weight_id)
                 rules.append(r)
 
             # prune unnecessary rules
-            rules = self.prune_unnecessary_rules(rules, dich_data, weightID)
+            rules = self.prune_unnecessary_rules(rules, dich_data, weight_id)
 
             if self.add_sub_rules:
-                rules = self.add_sub_rules_call(rules, dich_data, weightID)
+                rules = self.add_sub_rules_call(rules, dich_data, weight_id)
 
             # restore domain and class in rules, add them to all_rules
             for r in rules:
-                all_rules.append(self.change_domain(r, cl, examples, weightID))
+                all_rules.append(self.change_domain(r, cl, examples, weight_id))
 
         if progress:
             progress(1.0,None)
         # create a classifier from all rules
-        return self.create_classifier(all_rules, examples, weightID)
-
-    def learn_argumented_rule(self, ae, examples, weightID):
+        return self.create_classifier(all_rules, examples, weight_id)
+
+    def learn_argumented_rule(self, ae, examples, weight_id):
         # prepare roots of rules from arguments
-        positive_args = self.init_pos_args(ae, examples, weightID)
+        positive_args = self.init_pos_args(ae, examples, weight_id)
         if not positive_args: # something wrong
             raise Exception("There is a problem with argumented example %s" % str(ae))
-        negative_args = self.init_neg_args(ae, examples, weightID)
+        negative_args = self.init_neg_args(ae, examples, weight_id)
 
         # set negative arguments in refiner
-        self.ruleFinder.refiner.notAllowedSelectors = negative_args
-        self.ruleFinder.refiner.example = ae
+        self.rule_finder.refiner.notAllowedSelectors = negative_args
+        self.rule_finder.refiner.example = ae
         # set arguments to filter
-        self.ruleFinder.ruleFilter.setArguments(examples.domain,positive_args)
+        self.rule_finder.ruleFilter.setArguments(examples.domain,positive_args)
 
         # learn a rule
-        self.ruleFinder.evaluator.bestRule = None
-        self.ruleFinder.evaluator.returnBestFuture = True
-        self.ruleFinder(examples,weightID,0,positive_args)
-##        self.ruleFinder.evaluator.bestRule.quality = 0.8
+        self.rule_finder.evaluator.bestRule = None
+        self.rule_finder.evaluator.returnBestFuture = True
+        self.rule_finder(examples,weight_id,0,positive_args)
+##        self.rule_finder.evaluator.bestRule.quality = 0.8
 
         # return best rule
-        return self.ruleFinder.evaluator.bestRule
+        return self.rule_finder.evaluator.bestRule
 
-    def prepare_settings(self, examples, weightID, cl_i, progress):
+    def prepare_settings(self, examples, weight_id, cl_i, progress):
         # apriori distribution
-        self.apriori = Orange.core.Distribution(examples.domain.classVar,examples,weightID)
+        self.apriori = Orange.statistics.distribution.Distribution(
+            examples.domain.class_var,examples,weight_id)
 
         # prepare covering mechanism
-        self.coverAndRemove = CovererAndRemover_Prob(examples, weightID, 0, self.apriori)
-        self.ruleFinder.evaluator.probVar = examples.domain.getmeta(self.coverAndRemove.probAttribute)
+        self.cover_and_remove = CovererAndRemover_Prob(examples, weight_id, 0, self.apriori)
+        self.rule_finder.evaluator.probVar = examples.domain.getmeta(self.cover_and_remove.probAttribute)
 
         # compute extreme distributions
         # TODO: why evd and evd_this????
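`prepare_settings` above pre-computes extreme value distributions (EVD) for the mEVC evaluator via `computeEVD`. The idea behind EVC (a hedged sketch of the principle, not the actual `EVDFitter` implementation) is to estimate how good a rule can look purely by chance: repeat the rule search `nsampling` times on data with randomly permuted class labels and record the best quality found in each run; the empirical distribution of these maxima is then used to deflate optimistic estimates. In illustrative plain Python, with `evaluate_best_rule` standing in for a full rule search:

```python
import random

def sample_extreme_qualities(evaluate_best_rule, labels, nsampling=100, seed=0):
    """Estimate an extreme-value distribution of rule quality under the null
    hypothesis: shuffle class labels, record the best score the search reaches
    on the shuffled data, repeat. (Illustrative sketch of the EVC idea only.)"""
    rng = random.Random(seed)
    extremes = []
    for _ in range(nsampling):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        extremes.append(evaluate_best_rule(shuffled))
    return extremes

# toy stand-in "search": quality = positives among the first two covered labels
extremes = sample_extreme_qualities(lambda ls: sum(ls[:2]), [0, 0, 1, 1],
                                    nsampling=50)
```

A rule's observed quality is then judged against this null distribution rather than taken at face value.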
1187 if self.rule Finder.evaluator.optimismReduction > 0 and not self.evd:1188 self.evd_this = self.evd_creator.computeEVD(examples, weight ID, target_class=0, progress = progress)1290 if self.rule_finder.evaluator.optimismReduction > 0 and not self.evd: 1291 self.evd_this = self.evd_creator.computeEVD(examples, weight_id, target_class=0, progress = progress) 1189 1292 if self.evd: 1190 1293 self.evd_this = self.evd[cl_i] 1191 1294 1192 def turn_ABML_mode(self, examples, weight ID, cl_i):1295 def turn_ABML_mode(self, examples, weight_id, cl_i): 1193 1296 # evaluator 1194 if self.rule Finder.evaluator.optimismReduction > 0 and self.argumentID:1297 if self.rule_finder.evaluator.optimismReduction > 0 and self.argument_id: 1195 1298 if self.evd_arguments: 1196 self.rule Finder.evaluator.evDistGetter.dists = self.evd_arguments[cl_i]1299 self.rule_finder.evaluator.evDistGetter.dists = self.evd_arguments[cl_i] 1197 1300 else: 1198 self.rule Finder.evaluator.evDistGetter.dists = self.evd_this # self.evd_creator.computeEVD_example(examples, weightID, target_class=0)1301 self.rule_finder.evaluator.evDistGetter.dists = self.evd_this # self.evd_creator.computeEVD_example(examples, weight_id, target_class=0) 1199 1302 # rule refiner 1200 self.rule Finder.refiner = self.refiner_arguments1201 self.rule Finder.refiner.argumentID = self.argumentID1202 self.rule Finder.ruleFilter = self.ruleFilter_arguments1303 self.rule_finder.refiner = self.refiner_arguments 1304 self.rule_finder.refiner.argument_id = self.argument_id 1305 self.rule_finder.ruleFilter = self.ruleFilter_arguments 1203 1306 1204 1307 def create_dich_class(self, examples, cl): 1205 """ create dichotomous class. """ 1206 (newDomain, targetVal) = createDichotomousClass(examples.domain, examples.domain.classVar, str(cl), negate=0) 1308 """ 1309 Create dichotomous class. 
1310 """ 1311 (newDomain, targetVal) = createDichotomousClass(examples.domain, examples.domain.class_var, str(cl), negate=0) 1207 1312 newDomainmetas = newDomain.getmetas() 1208 newDomain.addmeta(Orange.core.newmetaid(), examples.domain.class Var) # old class as meta1313 newDomain.addmeta(Orange.core.newmetaid(), examples.domain.class_var) # old class as meta 1209 1314 dichData = examples.select(newDomain) 1210 if self.argument ID:1315 if self.argument_id: 1211 1316 for d in dichData: # remove arguments given to other classes 1212 1317 if not d.getclass() == targetVal: 1213 d[self.argument ID] = "?"1318 d[self.argument_id] = "?" 1214 1319 return dichData 1215 1320 1216 1321 def get_argumented_examples(self, examples): 1217 if not self.argument ID:1322 if not self.argument_id: 1218 1323 return None 1219 1324 1220 1325 # get argumentated examples 1221 return ArgumentFilter_hasSpecial()(examples, self.argument ID, targetClass = 0)1326 return ArgumentFilter_hasSpecial()(examples, self.argument_id, target_class = 0) 1222 1327 1223 1328 def sort_arguments(self, arg_examples, examples): 1224 if not self.argument ID:1329 if not self.argument_id: 1225 1330 return None 1226 evaluateAndSortArguments(examples, self.argument ID)1331 evaluateAndSortArguments(examples, self.argument_id) 1227 1332 if len(arg_examples)>0: 1228 1333 # sort examples by their arguments quality (using first argument as it has already been sorted) 1229 1334 sorted = arg_examples.native() 1230 sorted.sort(lambda x,y: cmp(x[self.argument ID].value.positiveArguments[0].quality,1231 y[self.argument ID].value.positiveArguments[0].quality))1232 return Orange. 
core.ExampleTable(examples.domain, sorted)1335 sorted.sort(lambda x,y: cmp(x[self.argument_id].value.positiveArguments[0].quality, 1336 y[self.argument_id].value.positiveArguments[0].quality)) 1337 return Orange.data.Table(examples.domain, sorted) 1233 1338 else: 1234 1339 return None 1235 1340 1236 def turn_normal_mode(self, examples, weight ID, cl_i):1341 def turn_normal_mode(self, examples, weight_id, cl_i): 1237 1342 # evaluator 1238 if self.rule Finder.evaluator.optimismReduction > 0:1343 if self.rule_finder.evaluator.optimismReduction > 0: 1239 1344 if self.evd: 1240 self.rule Finder.evaluator.evDistGetter.dists = self.evd[cl_i]1345 self.rule_finder.evaluator.evDistGetter.dists = self.evd[cl_i] 1241 1346 else: 1242 self.rule Finder.evaluator.evDistGetter.dists = self.evd_this # self.evd_creator.computeEVD(examples, weightID, target_class=0)1347 self.rule_finder.evaluator.evDistGetter.dists = self.evd_this # self.evd_creator.computeEVD(examples, weight_id, target_class=0) 1243 1348 # rule refiner 1244 self.rule Finder.refiner = self.refiner1245 self.rule Finder.ruleFilter = self.ruleFilter1349 self.rule_finder.refiner = self.refiner 1350 self.rule_finder.ruleFilter = self.ruleFilter 1246 1351 1247 def learn_normal_rule(self, examples, weight ID, apriori):1248 if hasattr(self.rule Finder.evaluator, "bestRule"):1249 self.rule Finder.evaluator.bestRule = None1250 rule = self.rule Finder(examples,weightID,0,Orange.core.RuleList())1251 if hasattr(self.rule Finder.evaluator, "bestRule") and self.ruleFinder.evaluator.returnExpectedProb:1252 rule = self.rule Finder.evaluator.bestRule1253 self.rule Finder.evaluator.bestRule = None1352 def learn_normal_rule(self, examples, weight_id, apriori): 1353 if hasattr(self.rule_finder.evaluator, "bestRule"): 1354 self.rule_finder.evaluator.bestRule = None 1355 rule = self.rule_finder(examples,weight_id,0,RuleList()) 1356 if hasattr(self.rule_finder.evaluator, "bestRule") and self.rule_finder.evaluator.returnExpectedProb: 1357 
rule = self.rule_finder.evaluator.bestRule 1358 self.rule_finder.evaluator.bestRule = None 1254 1359 if self.postpruning: 1255 rule = self.postpruning(rule,examples,weight ID,0, aprior)1360 rule = self.postpruning(rule,examples,weight_id,0, aprior) 1256 1361 return rule 1257 1362 1258 def remove_covered_examples(self, rule, examples, weight ID):1259 nexamples, nweight = self.cover AndRemove(rule,examples,weightID,0)1363 def remove_covered_examples(self, rule, examples, weight_id): 1364 nexamples, nweight = self.cover_and_remove(rule,examples,weight_id,0) 1260 1365 return nexamples 1261 1366 1262 1367 1263 def prune_unnecessary_rules(self, rules, examples, weight ID):1264 return self.cover AndRemove.getBestRules(rules,examples,weightID)1265 1266 def change_domain(self, rule, cl, examples, weight ID):1368 def prune_unnecessary_rules(self, rules, examples, weight_id): 1369 return self.cover_and_remove.getBestRules(rules,examples,weight_id) 1370 1371 def change_domain(self, rule, cl, examples, weight_id): 1267 1372 rule.examples = rule.examples.select(examples.domain) 1268 rule.classDistribution = Orange.core.Distribution(rule.examples.domain.classVar,rule.examples,weightID) # adapt distribution 1269 rule.classifier = Orange.core.DefaultClassifier(cl) # adapt classifier 1373 rule.class_distribution = Orange.statistics.distribution.Distribution( 1374 rule.examples.domain.class_var,rule.examples,weight_id) # adapt distribution 1375 rule.classifier = Orange.classification.ConstantClassifier(cl) # adapt classifier 1270 1376 rule.filter = Orange.core.Filter_values(domain = examples.domain, 1271 1377 conditions = rule.filter.conditions) 1272 1378 if hasattr(rule, "learner") and hasattr(rule.learner, "arg_example"): 1273 rule.learner.arg_example = Orange.core.Example(examples.domain, rule.learner.arg_example) 1379 rule.learner.arg_example = Orange.data.Instance( 1380 examples.domain, rule.learner.arg_example) 1274 1381 return rule 1275 1382 1276 def create_classifier(self, 
rules, examples, weightID): 1277 return self.classifier(rules, examples, weightID) 1278 1279 def add_sub_rules_call(self, rules, examples, weightID): 1280 apriori = Orange.core.Distribution(examples.domain.classVar,examples,weightID) 1281 newRules = Orange.core.RuleList() 1383 def create_classifier(self, rules, examples, weight_id): 1384 return self.classifier(rules, examples, weight_id) 1385 1386 def add_sub_rules_call(self, rules, examples, weight_id): 1387 apriori = Orange.statistics.distribution.Distribution( 1388 examples.domain.class_var,examples,weight_id) 1389 new_rules = RuleList() 1282 1390 for r in rules: 1283 new Rules.append(r)1391 new_rules.append(r) 1284 1392 1285 1393 # loop through rules 1286 1394 for r in rules: 1287 tmpList = Orange.core.RuleList()1395 tmpList = RuleList() 1288 1396 tmpRle = r.clone() 1289 1397 tmpRle.filter.conditions = r.filter.conditions[:r.requiredConditions] # do not split argument 1290 1398 tmpRle.parentRule = None 1291 tmpRle.filterAndStore(examples,weight ID,r.classifier.defaultVal)1399 tmpRle.filterAndStore(examples,weight_id,r.classifier.default_val) 1292 1400 tmpRle.complexity = 0 1293 1401 tmpList.append(tmpRle) 1294 1402 while tmpList and len(tmpList[0].filter.conditions) <= len(r.filter.conditions): 1295 tmpList2 = Orange.core.RuleList()1403 tmpList2 = RuleList() 1296 1404 for tmpRule in tmpList: 1297 1405 # evaluate tmpRule 1298 oldREP = self.rule Finder.evaluator.returnExpectedProb1299 self.rule Finder.evaluator.returnExpectedProb = False1300 tmpRule.quality = self.rule Finder.evaluator(tmpRule,examples,weightID,r.classifier.defaultVal,apriori)1301 self.rule Finder.evaluator.returnExpectedProb = oldREP1406 oldREP = self.rule_finder.evaluator.returnExpectedProb 1407 self.rule_finder.evaluator.returnExpectedProb = False 1408 tmpRule.quality = self.rule_finder.evaluator(tmpRule,examples,weight_id,r.classifier.default_val,apriori) 1409 self.rule_finder.evaluator.returnExpectedProb = oldREP 1302 1410 # if rule not in 
rules already, add it to the list 1303 if not True in [Orange.classification.rules.rules_equal(ri,tmpRule) for ri in new Rules] and len(tmpRule.filter.conditions)>0 and tmpRule.quality > apriori[r.classifier.defaultVal]/apriori.abs:1304 new Rules.append(tmpRule)1411 if not True in [Orange.classification.rules.rules_equal(ri,tmpRule) for ri in new_rules] and len(tmpRule.filter.conditions)>0 and tmpRule.quality > apriori[r.classifier.default_val]/apriori.abs: 1412 new_rules.append(tmpRule) 1305 1413 # create new tmpRules, set parent Rule, append them to tmpList2 1306 if not True in [Orange.classification.rules.rules_equal(ri,tmpRule) for ri in new Rules]:1414 if not True in [Orange.classification.rules.rules_equal(ri,tmpRule) for ri in new_rules]: 1307 1415 for c in r.filter.conditions: 1308 1416 tmpRule2 = tmpRule.clone() 1309 1417 tmpRule2.parentRule = tmpRule 1310 1418 tmpRule2.filter.conditions.append(c) 1311 tmpRule2.filterAndStore(examples,weight ID,r.classifier.defaultVal)1419 tmpRule2.filterAndStore(examples,weight_id,r.classifier.default_val) 1312 1420 tmpRule2.complexity += 1 1313 if tmpRule2.class Distribution.abs < tmpRule.classDistribution.abs:1421 if tmpRule2.class_distribution.abs < tmprule.class_distribution.abs: 1314 1422 tmpList2.append(tmpRule2) 1315 1423 tmpList = tmpList2 1316 return new Rules1317 1318 1319 def init_pos_args(self, ae, examples, weight ID):1320 pos_args = Orange.core.RuleList()1424 return new_rules 1425 1426 1427 def init_pos_args(self, ae, examples, weight_id): 1428 pos_args = RuleList() 1321 1429 # prepare arguments 1322 for p in ae[self.argument ID].value.positiveArguments:1323 new_arg = Orange.core.Rule(filter=ArgFilter(argumentID = self.argumentID,1430 for p in ae[self.argument_id].value.positiveArguments: 1431 new_arg = Rule(filter=ArgFilter(argument_id = self.argument_id, 1324 1432 filter = self.newFilter_values(p.filter)), 1325 1433 complexity = 0) … … 1328 1436 1329 1437 1330 if hasattr(self.rule Finder.evaluator, 
"returnExpectedProb"):1331 old_exp = self.rule Finder.evaluator.returnExpectedProb1332 self.rule Finder.evaluator.returnExpectedProb = False1438 if hasattr(self.rule_finder.evaluator, "returnExpectedProb"): 1439 old_exp = self.rule_finder.evaluator.returnExpectedProb 1440 self.rule_finder.evaluator.returnExpectedProb = False 1333 1441 1334 1442 # argument pruning (all or just unfinished arguments) 1335 1443 # if pruning is chosen, then prune arguments if possible 1336 1444 for p in pos_args: 1337 p.filterAndStore(examples, weight ID, 0)1445 p.filterAndStore(examples, weight_id, 0) 1338 1446 # pruning on: we check on all conditions and take only best 1339 1447 if self.prune_arguments: 1340 1448 allowed_conditions = [c for c in p.filter.conditions] 1341 pruned_conditions = self.prune_arg_conditions(ae, allowed_conditions, examples, weight ID)1449 pruned_conditions = self.prune_arg_conditions(ae, allowed_conditions, examples, weight_id) 1342 1450 p.filter.conditions = pruned_conditions 1343 1451 else: # prune only unspecified conditions … … 1346 1454 # let rule cover now all examples filtered by specified conditions 1347 1455 p.filter.conditions = spec_conditions 1348 p.filterAndStore(examples, weight ID, 0)1349 pruned_conditions = self.prune_arg_conditions(ae, unspec_conditions, p.examples, p.weight ID)1456 p.filterAndStore(examples, weight_id, 0) 1457 pruned_conditions = self.prune_arg_conditions(ae, unspec_conditions, p.examples, p.weight_id) 1350 1458 p.filter.conditions.extend(pruned_conditions) 1351 1459 p.filter.filter.conditions.extend(pruned_conditions) … … 1362 1470 # set parameters to arguments 1363 1471 for p_i,p in enumerate(pos_args): 1364 p.filterAndStore(examples,weight ID,0)1472 p.filterAndStore(examples,weight_id,0) 1365 1473 p.filter.domain = examples.domain 1366 1474 if not p.learner: 1367 p.learner = DefaultLearner(default Value=ae.getclass())1368 p.classifier = p.learner(p.examples, p.weight ID)1369 p.baseDist = p.class Distribution1475 p.learner 
= DefaultLearner(default_value=ae.getclass()) 1476 p.classifier = p.learner(p.examples, p.weight_id) 1477 p.baseDist = p.class_distribution 1370 1478 p.requiredConditions = len(p.filter.conditions) 1371 1479 p.learner.setattr("arg_length", len(p.filter.conditions)) … … 1373 1481 p.complexity = len(p.filter.conditions) 1374 1482 1375 if hasattr(self.rule Finder.evaluator, "returnExpectedProb"):1376 self.rule Finder.evaluator.returnExpectedProb = old_exp1483 if hasattr(self.rule_finder.evaluator, "returnExpectedProb"): 1484 self.rule_finder.evaluator.returnExpectedProb = old_exp 1377 1485 1378 1486 return pos_args … … 1386 1494 return newFilter 1387 1495 1388 def init_neg_args(self, ae, examples, weight ID):1389 return ae[self.argument ID].value.negativeArguments1496 def init_neg_args(self, ae, examples, weight_id): 1497 return ae[self.argument_id].value.negativeArguments 1390 1498 1391 1499 def remaining_probability(self, examples): 1392 return self.cover AndRemove.covered_percentage(examples)1393 1394 def prune_arg_conditions(self, crit_example, allowed_conditions, examples, weight ID):1500 return self.cover_and_remove.covered_percentage(examples) 1501 1502 def prune_arg_conditions(self, crit_example, allowed_conditions, examples, weight_id): 1395 1503 if not allowed_conditions: 1396 1504 return [] 1397 1505 cn2_learner = Orange.classification.rules.CN2UnorderedLearner() 1398 cn2_learner.rule Finder = Orange.core.RuleBeamFinder()1399 cn2_learner.rule Finder.refiner = SelectorArgConditions(crit_example, allowed_conditions)1400 cn2_learner.rule Finder.evaluator = Orange.classification.rules.MEstimate(self.ruleFinder.evaluator.m)1401 rule = cn2_learner.rule Finder(examples,weightID,0,Orange.core.RuleList())1506 cn2_learner.rule_finder = RuleBeamFinder() 1507 cn2_learner.rule_finder.refiner = SelectorArgConditions(crit_example, allowed_conditions) 1508 cn2_learner.rule_finder.evaluator = Orange.classification.rules.MEstimate(self.rule_finder.evaluator.m) 1509 rule = 
cn2_learner.rule_finder(examples,weight_id,0,RuleList()) 1402 1510 return rule.filter.conditions 1403 1511 … … 1418 1526 By default, weighted relative accuracy is used. 1419 1527 :type evaluator: :class:`Orange.classification.rules.RuleEvaluator` 1420 :param beam Width: width of the search beam.1421 :type beam Width: int1528 :param beam_width: width of the search beam. 1529 :type beam_width: int 1422 1530 :param alpha: significance level of the statistical test to determine 1423 1531 whether rule is good enough to be returned by rulefinder. Likelihood … … 1442 1550 if not self.apriori: 1443 1551 return False 1444 if not type(rule.classifier) == Orange.c ore.DefaultClassifier:1552 if not type(rule.classifier) == Orange.classification.ConstantClassifier: 1445 1553 return False 1446 ruleAcc = rule.class Distribution[rule.classifier.defaultVal]/rule.classDistribution.abs1447 aprioriAcc = self.apriori[rule.classifier.default Val]/self.apriori.abs1554 ruleAcc = rule.class_distribution[rule.classifier.default_val]/rule.class_distribution.abs 1555 aprioriAcc = self.apriori[rule.classifier.default_val]/self.apriori.abs 1448 1556 if ruleAcc>aprioriAcc: 1449 1557 return False … … 1453 1561 class RuleStopping_SetRules(RuleStoppingCriteria): 1454 1562 def __init__(self,validator): 1455 self.rule Stopping = RuleStoppingCriteria_NegativeDistribution()1563 self.rule_stopping = RuleStoppingCriteria_NegativeDistribution() 1456 1564 self.validator = validator 1457 1565 1458 1566 def __call__(self,rules,rule,instances,data): 1459 ru_st = self.rule Stopping(rules,rule,instances,data)1567 ru_st = self.rule_stopping(rules,rule,instances,data) 1460 1568 if not ru_st: 1461 1569 self.validator.rules.append(rule) … … 1468 1576 self.length = length 1469 1577 1470 def __call__(self, rule, data, weight ID, targetClass, apriori):1578 def __call__(self, rule, data, weight_id, target_class, apriori): 1471 1579 if self.length >= 0: 1472 1580 return len(rule.filter.conditions) <= self.length … … 
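The pruning step above plugs an `MEstimate` evaluator into the rule finder. As a pure-Python illustration of the quantity such an evaluator computes, here is a minimal sketch; the function name and signature are invented for this example and are not part of the Orange API:

```python
def m_estimate(positives, covered, prior, m=2.0):
    """M-estimate of rule accuracy: the observed frequency of the
    target class among covered examples, shrunk toward the prior
    class probability; larger m pulls the estimate harder toward
    the prior, penalizing rules that cover few examples."""
    return (positives + m * prior) / (covered + m)
```

With `m=0` this reduces to the plain relative frequency, which is why small `m` favors specific rules and large `m` favors well-supported ones.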
1480 1588 min_coverage=min_coverage,max_rule_length=max_rule_length) 1481 1589 1482 def __call__(self, rule, data, weight ID, targetClass, apriori):1590 def __call__(self, rule, data, weight_id, target_class, apriori): 1483 1591 if rule_in_set(rule,self.rules): 1484 1592 return False 1485 return bool(self.validator(rule,data,weight ID,targetClass,apriori))1593 return bool(self.validator(rule,data,weight_id,target_class,apriori)) 1486 1594 1487 1595 1488 1596 1489 1597 class RuleClassifier_BestRule(RuleClassifier): 1490 def __init__(self, rules, instances, weight ID= 0, **argkw):1598 def __init__(self, rules, instances, weight_id = 0, **argkw): 1491 1599 self.rules = rules 1492 1600 self.examples = instances 1493 self.class Var = instances.domain.classVar1601 self.class_var = instances.domain.class_var 1494 1602 self.__dict__.update(argkw) 1495 self.prior = Orange.core.Distribution(instances.domain.classVar, instances) 1603 self.prior = Orange.statistics.distribution.Distribution( 1604 instances.domain.class_var, instances) 1496 1605 1497 1606 def __call__(self, instance, result_type=Orange.classification.Classifier.GetValue): 1498 retDist = Orange. 
core.Distribution(instance.domain.classVar)1607 retDist = Orange.statistics.distribution.Distribution(instance.domain.class_var) 1499 1608 bestRule = None 1500 1609 for r in self.rules: 1501 1610 if r(instance) and (not bestRule or r.quality>bestRule.quality): 1502 for v_i,v in enumerate(instance.domain.class Var):1503 retDist[v_i] = r.class Distribution[v_i]1611 for v_i,v in enumerate(instance.domain.class_var): 1612 retDist[v_i] = r.class_distribution[v_i] 1504 1613 bestRule = r 1505 1614 if not bestRule: … … 1509 1618 sumdist = sum(retDist) 1510 1619 if sumdist > 0.0: 1511 for c in self.examples.domain.class Var:1620 for c in self.examples.domain.class_var: 1512 1621 retDist[c] /= sumdist 1513 1622 else: … … 1523 1632 retStr = "" 1524 1633 for r in self.rules: 1525 retStr += rule ToString(r)+" "+str(r.classDistribution)+"\n"1634 retStr += rule_to_string(r)+" "+str(r.class_distribution)+"\n" 1526 1635 return retStr 1527 1636 … … 1537 1646 def __init__(self, mult = 0.7): 1538 1647 self.mult = mult 1539 def __call__(self, rule, instances, weights, target Class):1648 def __call__(self, rule, instances, weights, target_class): 1540 1649 if not weights: 1541 1650 weights = Orange.core.newmetaid() … … 1562 1671 """ 1563 1672 1564 def __call__(self, rule, instances, weights, target Class):1673 def __call__(self, rule, instances, weights, target_class): 1565 1674 if not weights: 1566 1675 weights = Orange.core.newmetaid() … … 1598 1707 self.bestRule = [] 1599 1708 1600 def initialize(self, instances, weight ID, targetClass, apriori):1709 def initialize(self, instances, weight_id, target_class, apriori): 1601 1710 self.bestRule = [None]*len(instances) 1602 1711 self.probAttribute = Orange.core.newmetaid() … … 1605 1714 Orange.data.variable.Continuous("Probs")) 1606 1715 for instance in instances: 1607 ## if target Class<0 or 
(instance.getclass() == target_class): 1717 instance[self.probAttribute] = apriori[target_class]/apriori.abs 1609 1718 return instances 1610 1719 1611 def getBestRules(self, currentRules, instances, weight ID):1612 best Rules = RuleList()1720 def getBestRules(self, currentRules, instances, weight_id): 1721 best_rules = RuleList() 1613 1722 for r in currentRules: 1614 if hasattr(r.learner, "argumentRule") and not orngCN2.rule_in_set(r,best Rules):1615 best Rules.append(r)1723 if hasattr(r.learner, "argumentRule") and not orngCN2.rule_in_set(r,best_rules): 1724 best_rules.append(r) 1616 1725 for r_i,r in enumerate(self.bestRule): 1617 if r and not rule_in_set(r,best Rules) and instances[r_i].\1618 getclass()==r.classifier.default Value:1619 best Rules.append(r)1620 return best Rules1621 1622 def remainingInstancesP(self, instances, target Class):1726 if r and not rule_in_set(r,best_rules) and instances[r_i].\ 1727 getclass()==r.classifier.default_value: 1728 best_rules.append(r) 1729 return best_rules 1730 1731 def remainingInstancesP(self, instances, target_class): 1623 1732 pSum, pAll = 0.0, 0.0 1624 1733 for ex in instances: 1625 if ex.getclass() == target Class:1734 if ex.getclass() == target_class: 1626 1735 pSum += ex[self.probAttribute] 1627 1736 pAll += 1.0 1628 1737 return pSum/pAll 1629 1738 1630 def __call__(self, rule, instances, weights, target Class):1631 if target Class<0:1739 def __call__(self, rule, instances, weights, target_class): 1740 if target_class<0: 1632 1741 for instance_i, instance in enumerate(instances): 1633 1742 if rule(instance) and rule.quality>instance[self.probAttribute]0.01: … … 1635 1744 self.bestRule[instance_i]=rule 1636 1745 else: 1637 for instance_i, instance in enumerate(instances): #rule.classifier.default Val == instance.getclass() and1746 for instance_i, instance in enumerate(instances): #rule.classifier.default_val == instance.getclass() and 1638 1747 if rule(instance) and rule.quality>instance[self.probAttribute]: 1639 
1748 instance[self.probAttribute] = rule.quality+0.001 1640 1749 self.bestRule[instance_i]=rule 1641 ## if rule.classifier.default Val == instance.getclass():1750 ## if rule.classifier.default_val == instance.getclass(): 1642 1751 ## print instance[self.probAttribute] 1643 1752 # compute factor … … 1645 1754 1646 1755 1647 def ruleToString(rule, showDistribution = True): 1756 @deprecated_keywords({"showDistribution": "show_distribution"}) 1757 def rule_to_string(rule, show_distribution = True): 1648 1758 """ 1649 1759 Write a string presentation of rule in human readable format. … … 1652 1762 :type rule: :class:`Orange.classification.rules.Rule` 1653 1763 1654 :param show Distribution: determines whether presentation should also1764 :param show_distribution: determines whether presentation should also 1655 1765 contain the distribution of covered instances 1656 :type show Distribution: bool1766 :type show_distribution: bool 1657 1767 1658 1768 """ … … 1685 1795 elif type(c) == Orange.core.ValueFilter_continuous: 1686 1796 ret += domain[c.position].name + selectSign(c.oper) + str(c.ref) 1687 if rule.classifier and type(rule.classifier) == Orange.c ore.DefaultClassifier\1688 and rule.classifier.default Val:1689 ret = ret + " THEN "+domain.class Var.name+"="+\1690 str(rule.classifier.default Value)1691 if show Distribution:1692 ret += str(rule.class Distribution)1693 elif rule.classifier and type(rule.classifier) == Orange.c ore.DefaultClassifier\1694 and type(domain.class Var) == Orange.core.EnumVariable:1695 ret = ret + " THEN "+domain.class Var.name+"="+\1696 str(rule.class Distribution.modus())1697 if show Distribution:1698 ret += str(rule.class Distribution)1797 if rule.classifier and type(rule.classifier) == Orange.classification.ConstantClassifier\ 1798 and rule.classifier.default_val: 1799 ret = ret + " THEN "+domain.class_var.name+"="+\ 1800 str(rule.classifier.default_value) 1801 if show_distribution: 1802 ret += str(rule.class_distribution) 1803 elif 
rule.classifier and type(rule.classifier) == Orange.classification.ConstantClassifier\ 1804 and type(domain.class_var) == Orange.core.EnumVariable: 1805 ret = ret + " THEN "+domain.class_var.name+"="+\ 1806 str(rule.class_distribution.modus()) 1807 if show_distribution: 1808 ret += str(rule.class_distribution) 1699 1809 return ret 1700 1810 1701 1811 def supervisedClassCheck(instances): 1702 if not instances.domain.class Var:1812 if not instances.domain.class_var: 1703 1813 raise Exception("Class variable is required!") 1704 if instances.domain.class Var.varType == Orange.core.VarTypes.Continuous:1814 if instances.domain.class_var.varType == Orange.core.VarTypes.Continuous: 1705 1815 raise Exception("CN2 requires a discrete class!") 1706 1816 … … 1767 1877 cl_num = newData.toNumeric("C") 1768 1878 random.shuffle(cl_num[0][:,0]) 1769 clData = Orange.data.Table(Orange.data.Domain([newData.domain.class Var]),cl_num[0])1879 clData = Orange.data.Table(Orange.data.Domain([newData.domain.class_var]),cl_num[0]) 1770 1880 for d_i,d in enumerate(newData): 1771 d[newData.domain.class Var] = clData[d_i][newData.domain.classVar]1881 d[newData.domain.class_var] = clData[d_i][newData.domain.class_var] 1772 1882 return newData 1773 1883 … … 1785 1895 return mi, beta, percs 1786 1896 1787 def computeDists(data, weight=0, target Class=0, N=100, learner=None):1897 def computeDists(data, weight=0, target_class=0, N=100, learner=None): 1788 1898 """ Compute distributions of likelihood ratio statistics of extreme (best) rules.""" 1789 1899 if not learner: … … 1793 1903 ## Learner preparation ## 1794 1904 ######################### 1795 oldStopper = learner.rule Finder.ruleStoppingValidator1796 evaluator = learner.rule Finder.evaluator1797 learner.rule Finder.evaluator = RuleEvaluator_LRS()1798 learner.rule Finder.evaluator.storeRules = True1799 learner.rule Finder.ruleStoppingValidator = RuleValidator_LRS(alpha=1.0)1800 learner.rule Finder.ruleStoppingValidator.max_rule_complexity = 
01905 oldStopper = learner.rule_finder.rule_stoppingValidator 1906 evaluator = learner.rule_finder.evaluator 1907 learner.rule_finder.evaluator = RuleEvaluator_LRS() 1908 learner.rule_finder.evaluator.storeRules = True 1909 learner.rule_finder.rule_stoppingValidator = RuleValidator_LRS(alpha=1.0) 1910 learner.rule_finder.rule_stoppingValidator.max_rule_complexity = 0 1801 1911 1802 1912 # loop through N (sampling repetitions) … … 1805 1915 # create data set (remove and randomize) 1806 1916 tempData = createRandomDataSet(data) 1807 learner.rule Finder.evaluator.rules = RuleList()1917 learner.rule_finder.evaluator.rules = RuleList() 1808 1918 # Next, learn a rule 1809 bestRule = learner.rule Finder(tempData,weight,targetClass,RuleList())1919 bestRule = learner.rule_finder(tempData,weight,target_class,RuleList()) 1810 1920 maxVals.append(bestRule.quality) 1811 extreme Dists=[compParameters(maxVals,1.0,1.0)]1921 extreme_dists=[compParameters(maxVals,1.0,1.0)] 1812 1922 1813 1923 ##################### 1814 1924 ## Restore learner ## 1815 1925 ##################### 1816 learner.rule Finder.evaluator = evaluator1817 learner.rule Finder.ruleStoppingValidator = oldStopper1818 return extreme Dists1926 learner.rule_finder.evaluator = evaluator 1927 learner.rule_finder.rule_stoppingValidator = oldStopper 1928 return extreme_dists 1819 1929 1820 1930 def createEVDistList(evdList): … … 1825 1935 1826 1936 def add_sub_rules(rules, instances, weight, learner, dists): 1827 apriori = Orange.core.Distribution(instances.domain.class Var,instances,weight)1828 new Rules = RuleList()1937 apriori = Orange.core.Distribution(instances.domain.class_var,instances,weight) 1938 new_rules = RuleList() 1829 1939 for r in rules: 1830 new Rules.append(r)1940 new_rules.append(r) 1831 1941 1832 1942 # loop through rules … … 1836 1946 tmpRle.filter.conditions = [] 1837 1947 tmpRle.parentRule = None 1838 tmpRle.filterAndStore(instances,weight,r.classifier.default Val)1948 
tmpRle.filterAndStore(instances,weight,r.classifier.default_val) 1839 1949 tmpList.append(tmpRle) 1840 1950 while tmpList and len(tmpList[0].filter.conditions) <= len(r.filter.conditions): … … 1842 1952 for tmpRule in tmpList: 1843 1953 # evaluate tmpRule 1844 oldREP = learner.rule Finder.evaluator.returnExpectedProb1845 learner.rule Finder.evaluator.returnExpectedProb = False1846 learner.rule Finder.evaluator.evDistGetter.dists = createEVDistList(\1847 dists[int(r.classifier.default Val)])1848 tmpRule.quality = learner.rule Finder.evaluator(tmpRule,1849 instances,weight,r.classifier.default Val,apriori)1850 learner.rule Finder.evaluator.returnExpectedProb = oldREP1954 oldREP = learner.rule_finder.evaluator.returnExpectedProb 1955 learner.rule_finder.evaluator.returnExpectedProb = False 1956 learner.rule_finder.evaluator.evDistGetter.dists = createEVDistList(\ 1957 dists[int(r.classifier.default_val)]) 1958 tmpRule.quality = learner.rule_finder.evaluator(tmpRule, 1959 instances,weight,r.classifier.default_val,apriori) 1960 learner.rule_finder.evaluator.returnExpectedProb = oldREP 1851 1961 # if rule not in rules already, add it to the list 1852 if not True in [rules_equal(ri,tmpRule) for ri in new Rules] and\1962 if not True in [rules_equal(ri,tmpRule) for ri in new_rules] and\ 1853 1963 len(tmpRule.filter.conditions)>0 and tmpRule.quality >\ 1854 apriori[r.classifier.default Val]/apriori.abs:1855 new Rules.append(tmpRule)1964 apriori[r.classifier.default_val]/apriori.abs: 1965 new_rules.append(tmpRule) 1856 1966 # create new tmpRules, set parent Rule, append them to tmpList2 1857 if not True in [rules_equal(ri,tmpRule) for ri in new Rules]:1967 if not True in [rules_equal(ri,tmpRule) for ri in new_rules]: 1858 1968 for c in r.filter.conditions: 1859 1969 tmpRule2 = tmpRule.clone() 1860 1970 tmpRule2.parentRule = tmpRule 1861 1971 tmpRule2.filter.conditions.append(c) 1862 tmpRule2.filterAndStore(instances,weight,r.classifier.default Val)1863 if tmpRule2.class 
Distribution.abs < tmpRule.classDistribution.abs:1972 tmpRule2.filterAndStore(instances,weight,r.classifier.default_val) 1973 if tmpRule2.class_distribution.abs < tmpRule.class_distribution.abs: 1864 1974 tmpList2.append(tmpRule2) 1865 1975 tmpList = tmpList2 1866 for cl in instances.domain.class Var:1976 for cl in instances.domain.class_var: 1867 1977 tmpRle = Rule() 1868 1978 tmpRle.filter = Orange.core.Filter_values(domain = instances.domain) 1869 1979 tmpRle.parentRule = None 1870 1980 tmpRle.filterAndStore(instances,weight,int(cl)) 1871 tmpRle.quality = tmpRle.classDistribution[int(cl)]/tmpRle.classDistribution.abs 1872 newRules.append(tmpRle) 1873 return newRules 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 ################################################################################ 1884 ################################################################################ 1885 ## This has been copied&pasted from orngABCN2.py and not yet appropriately ## 1886 ## refactored and documented. ## 1887 ################################################################################ 1888 ################################################################################ 1889 1890 1891 """ This module implements argument based rule learning. The main learner class is ABCN2. The first few classes are some variants of ABCN2 with reasonable settings. 
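The `add_sub_rules` routine above extends a learned rule set with generalizations built from subsets of each rule's conditions, keeping only those that still score better than the apriori accuracy. A rough standalone sketch of that enumeration follows; the data layout and names are invented for illustration and do not mirror the Orange rule objects:

```python
from itertools import combinations

def sub_rules(conditions, data, target, min_quality):
    """Enumerate proper, non-empty subsets of a rule's conditions and
    keep those whose accuracy on the data still beats min_quality.
    `data` is a list of (example, klass) pairs; each condition is a
    predicate over an example."""
    kept = []
    for k in range(1, len(conditions)):
        for subset in combinations(conditions, k):
            covered = [klass for ex, klass in data
                       if all(cond(ex) for cond in subset)]
            if covered:
                accuracy = covered.count(target) / float(len(covered))
                if accuracy > min_quality:
                    kept.append((subset, accuracy))
    return kept
```

The real implementation grows sub-rules incrementally from the empty rule (tracking `parentRule`) rather than enumerating all subsets, but the acceptance criterion is the same.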
""" 1893 1894 1895 import operator 1896 import random 1897 import numpy 1898 import math 1899 1900 from orngABML import * 1901 1902 # Default learner  returns # 1903 # default classifier with pre # 1904 # defined output class # 1981 tmpRle.quality = tmpRle.class_distribution[int(cl)]/tmpRle.class_distribution.abs 1982 new_rules.append(tmpRle) 1983 return new_rules 1984 1985 1905 1986 class DefaultLearner(Orange.core.Learner): 1906 def __init__(self,defaultValue = None): 1907 self.defaultValue = defaultValue 1908 def __call__(self,examples,weightID=0): 1909 return Orange.core.DefaultClassifier(self.defaultValue,defaultDistribution = Orange.core.Distribution(examples.domain.classVar,examples,weightID)) 1987 """ 1988 Default lerner  returns default classifier with predefined output class. 1989 """ 1990 def __init__(self,default_value = None): 1991 self.default_value = default_value 1992 def __call__(self,examples,weight_id=0): 1993 return Orange.classification.majority.ConstantClassifier(self.default_value,defaultDistribution = Orange.core.Distribution(examples.domain.class_var,examples,weight_id)) 1910 1994 1911 1995 class ABCN2Ordered(ABCN2): 1912 """ Rules learned by ABCN2 are ordered and used as a decision list. """ 1913 def __init__(self, argumentID=0, **kwds): 1914 ABCN2.__init__(self, argumentID=argumentID, **kwds) 1996 """ 1997 Rules learned by ABCN2 are ordered and used as a decision list. 1998 """ 1999 def __init__(self, argument_id=0, **kwds): 2000 ABCN2.__init__(self, argument_id=argument_id, **kwds) 1915 2001 self.classifier.set_prefix_rules = True 1916 2002 self.classifier.optimize_betas = False 1917 2003 1918 2004 class ABCN2M(ABCN2): 1919 """ Argument based rule learning with mestimate as evaluation function. """ 1920 def __init__(self, argumentID=0, **kwds): 1921 ABCN2.__init__(self, argumentID=argumentID, **kwds) 2005 """ 2006 Argument based rule learning with mestimate as evaluation function. 
2007 """ 2008 def __init__(self, argument_id=0, **kwds): 2009 ABCN2.__init__(self, argument_id=argument_id, **kwds) 1922 2010 self.opt_reduction = 0 1923 2011 1924 2012 1925 # *********************** #1926 # Argument based covering #1927 # *********************** #1928 1929 2013 class ABBeamFilter(Orange.core.RuleBeamFilter): 1930 """ ABBeamFilter: Filters beam; 2014 """ 2015 ABBeamFilter: Filters beam; 1931 2016  leaves first N rules (by quality) 1932  leaves first N rules that have only of arguments in condition part 2017  leaves first N rules that have only of arguments in condition part 1933 2018 """ 1934 2019 def __init__(self,width=5): … … 1936 2021 self.pArgs=None 1937 2022 1938 def __call__(self,rulesStar,examples,weight ID):2023 def __call__(self,rulesStar,examples,weight_id): 1939 2024 newStar=Orange.core.RuleList() 1940 2025 rulesStar.sort(lambda x,y: cmp(x.quality,y.quality)) … … 1968 2053 1969 2054 class ruleCoversArguments: 1970 """ Class determines if rule covers one out of a set of arguments. """ 2055 """ 2056 Class determines if rule covers one out of a set of arguments. 2057 """ 1971 2058 def __init__(self, arguments): 1972 2059 self.arguments = arguments … … 2028 2115 at,type=r_i,3 2029 2116 return at,type 2030 oneSelectorToCover = staticmethod(oneSelectorToCover) 2031 2117 oneSelectorToCover = staticmethod(oneSelectorToCover) 2118 2119 2120 @deprecated_members({"notAllowedSelectors": "not_allowed_selectors", 2121 "argumentID": "argument_id"}) 2032 2122 class SelectorAdder(Orange.core.RuleBeamRefiner): 2033 """ Selector adder, this function is a refiner function: 2034  refined rules are not consistent with any of negative arguments. """ 2035 def __init__(self, example=None, notAllowedSelectors=[], argumentID = None, 2123 """ 2124 Selector adder, this function is a refiner function: 2125  refined rules are not consistent with any of negative arguments. 
2126 """ 2127 def __init__(self, example=None, not_allowed_selectors=[], argument_id = None, 2036 2128 discretizer = Orange.core.EntropyDiscretization(forceAttribute=True)): 2037 2129 # required values  needed values of attributes 2038 2130 self.example = example 2039 self.argument ID = argumentID2040 self.not AllowedSelectors = notAllowedSelectors2131 self.argument_id = argument_id 2132 self.not_allowed_selectors = not_allowed_selectors 2041 2133 self.discretizer = discretizer 2042 2134 2043 def __call__(self, oldRule, data, weight ID, targetClass=1):2044 inNotAllowedSelectors = ruleCoversArguments(self.not AllowedSelectors)2045 new Rules = Orange.core.RuleList()2135 def __call__(self, oldRule, data, weight_id, target_class=1): 2136 inNotAllowedSelectors = ruleCoversArguments(self.not_allowed_selectors) 2137 new_rules = Orange.core.RuleList() 2046 2138 2047 2139 # get positive indices (selectors already in the rule) … … 2052 2144 2053 2145 # get negative indices (selectors that should not be in the rule) 2054 negative Indices = [0]*len(data.domain.attributes)2055 for nA in self.not AllowedSelectors:2146 negative_indices = [0]*len(data.domain.attributes) 2147 for nA in self.not_allowed_selectors: 2056 2148 #print indices, nA.filter.indices 2057 2149 at_i,type_na = ruleCoversArguments.oneSelectorToCover(indices, nA.filter.indices) 2058 2150 if at_i>1: 2059 negative Indices[at_i] = operator.or_(negativeIndices[at_i],type_na)2151 negative_indices[at_i] = operator.or_(negative_indices[at_i],type_na) 2060 2152 2061 2153 #iterate through indices = attributes … … 2065 2157 if ind == 1: 2066 2158 continue 2067 if data.domain[i].varType == Orange.core.VarTypes.Discrete and not negative Indices[i]==1: # DISCRETE attribute2159 if data.domain[i].varType == Orange.core.VarTypes.Discrete and not negative_indices[i]==1: # DISCRETE attribute 2068 2160 if self.example: 2069 2161 values = [self.example[i]] … … 2077 2169 tempRule.complexity += 1 2078 2170 tempRule.filter.indices[i] = 
1 # 1 stands for discrete attribute (see ruleCoversArguments.conditionIndex) 2079 tempRule.filterAndStore(oldRule.examples, oldRule.weight ID, targetClass)2171 tempRule.filterAndStore(oldRule.examples, oldRule.weight_id, target_class) 2080 2172 if len(tempRule.examples)<len(oldRule.examples): 2081 new Rules.append(tempRule)2082 elif data.domain[i].varType == Orange.core.VarTypes.Continuous and not negative Indices[i]==7: # CONTINUOUS attribute2173 new_rules.append(tempRule) 2174 elif data.domain[i].varType == Orange.core.VarTypes.Continuous and not negative_indices[i]==7: # CONTINUOUS attribute 2083 2175 try: 2084 2176 at = data.domain[i] … … 2090 2182 for p in at_d.getValueFrom.transformer.points: 2091 2183 #LESS 2092 if not negative Indices[i]==3:2093 tempRule = self.getTempRule(oldRule,i,Orange.core.ValueFilter_continuous.LessEqual,p,target Class,3)2184 if not negative_indices[i]==3: 2185 tempRule = self.getTempRule(oldRule,i,Orange.core.ValueFilter_continuous.LessEqual,p,target_class,3) 2094 2186 if len(tempRule.examples)<len(oldRule.examples) and self.example[i]<=p:# and not inNotAllowedSelectors(tempRule): 2095 new Rules.append(tempRule)2187 new_rules.append(tempRule) 2096 2188 #GREATER 2097 if not negative Indices[i]==5:2098 tempRule = self.getTempRule(oldRule,i,Orange.core.ValueFilter_continuous.Greater,p,target Class,5)2189 if not negative_indices[i]==5: 2190 tempRule = self.getTempRule(oldRule,i,Orange.core.ValueFilter_continuous.Greater,p,target_class,5) 2099 2191 if len(tempRule.examples)<len(oldRule.examples) and self.example[i]>p:# and not inNotAllowedSelectors(tempRule): 2100 new Rules.append(tempRule)2101 for r in new Rules:2192 new_rules.append(tempRule) 2193 for r in new_rules: 2102 2194 r.parentRule = oldRule 2103 2195 r.valuesFilter = r.filter.filter 2104 return new Rules2105 2106 def getTempRule(self,oldRule,pos,oper,ref,target Class,atIndex):2196 return new_rules 2197 2198 def getTempRule(self,oldRule,pos,oper,ref,target_class,atIndex): 2107 
2199 tempRule = oldRule.clone() 2108 2200 … … 2113 2205 tempRule.complexity += 1 2114 2206 tempRule.filter.indices[pos] = operator.or_(tempRule.filter.indices[pos],atIndex) # from ruleCoversArguments.conditionIndex 2115 tempRule.filterAndStore(oldRule.examples,tempRule.weight ID,targetClass)2207 tempRule.filterAndStore(oldRule.examples,tempRule.weight_id,target_class) 2116 2208 return tempRule 2117 2209 2118 def setCondition(self, oldRule, target Class, ci, condition):2210 def setCondition(self, oldRule, target_class, ci, condition): 2119 2211 tempRule = oldRule.clone() 2120 2212 tempRule.filter.conditions[ci] = condition 2121 2213 tempRule.filter.conditions[ci].setattr("specialized",1) 2122 tempRule.filterAndStore(oldRule.examples,oldRule.weight ID,targetClass)2214 tempRule.filterAndStore(oldRule.examples,oldRule.weight_id,target_class) 2123 2215 return tempRule 2124 2216 … … 2126 2218 # This filter is the ugliest code ever! Problem is with Orange, I had some problems with inheriting deepCopy 2127 2219 # I should take another look at it. 2220 @deprecated_members({"argumentID": "argument_id"}) 2128 2221 class ArgFilter(Orange.core.Filter): 2129 """ This class implements ABcovering principle. """ 2130 def __init__(self, argumentID=None, filter = Orange.core.Filter_values()): 2222 """ 2223 This class implements ABcovering principle. 
2224 """ 2225 def __init__(self, argument_id=None, filter = Orange.core.Filter_values()): 2131 2226 self.filter = filter 2132 2227 self.indices = getattr(filter,"indices",[]) 2133 2228 if not self.indices and len(filter.conditions)>0: 2134 2229 self.indices = ruleCoversArguments.filterIndices(filter) 2135 self.argument ID = argumentID2230 self.argument_id = argument_id 2136 2231 self.debug = 0 2137 2232 self.domain = self.filter.domain … … 2149 2244 if self.filter(example): 2150 2245 try: 2151 if example[self.argument ID].value and len(example[self.argumentID].value.positiveArguments)>0: # example has positive arguments2246 if example[self.argument_id].value and len(example[self.argument_id].value.positiveArguments)>0: # example has positive arguments 2152 2247 # conditions should cover at least one of the positive arguments 2153 2248 oneArgCovered = False 2154 for pA in example[self.argument ID].value.positiveArguments:2249 for pA in example[self.argument_id].value.positiveArguments: 2155 2250 argCovered = [self.condIn(c) for c in pA.filter.conditions] 2156 2251 oneArgCovered = oneArgCovered or len(argCovered) == sum(argCovered) #argCovered … … 2159 2254 if not oneArgCovered: 2160 2255 return False 2161 if example[self.argument ID].value and len(example[self.argumentID].value.negativeArguments)>0: # example has negative arguments2256 if example[self.argument_id].value and len(example[self.argument_id].value.negativeArguments)>0: # example has negative arguments 2162 2257 # condition should not cover neither of negative arguments 2163 for pN in example[self.argument ID].value.negativeArguments:2258 for pN in example[self.argument_id].value.negativeArguments: 2164 2259 argCovered = [self.condIn(c) for c in pN.filter.conditions] 2165 2260 if len(argCovered)==sum(argCovered): … … 2176 2271 2177 2272 def deepCopy(self): 2178 newFilter = ArgFilter(argument ID=self.argumentID)2273 newFilter = ArgFilter(argument_id=self.argument_id) 2179 2274 newFilter.filter = 
Orange.core.Filter_values() #self.filter.deepCopy() 2180 2275 newFilter.filter.conditions = self.filter.conditions[:] … … 2191 2286 2192 2287 class SelectorArgConditions(Orange.core.RuleBeamRefiner): 2193 """ Selector adder, this function is a refiner function: 2194  refined rules are not consistent with any of negative arguments. """ 2288 """ 2289 Selector adder, this function is a refiner function: 2290  refined rules are not consistent with any of negative arguments. 2291 """ 2195 2292 def __init__(self, example, allowed_selectors): 2196 2293 # required values  needed values of attributes … … 2198 2295 self.allowed_selectors = allowed_selectors 2199 2296 2200 def __call__(self, oldRule, data, weight ID, targetClass=1):2297 def __call__(self, oldRule, data, weight_id, target_class=1): 2201 2298 if len(oldRule.filter.conditions) >= len(self.allowed_selectors): 2202 2299 return Orange.core.RuleList() 2203 new Rules = Orange.core.RuleList()2300 new_rules = Orange.core.RuleList() 2204 2301 for c in self.allowed_selectors: 2205 2302 # normal condition … … 2207 2304 tempRule = oldRule.clone() 2208 2305 tempRule.filter.conditions.append(c) 2209 tempRule.filterAndStore(oldRule.examples, oldRule.weight ID, targetClass)2306 tempRule.filterAndStore(oldRule.examples, oldRule.weight_id, target_class) 2210 2307 if len(tempRule.examples)<len(oldRule.examples): 2211 new Rules.append(tempRule)2308 new_rules.append(tempRule) 2212 2309 # unspecified condition 2213 2310 else: … … 2226 2323 acceptSpecial=0)) 2227 2324 if tempRule(self.example): 2228 tempRule.filterAndStore(oldRule.examples, oldRule.weight ID, targetClass)2325 tempRule.filterAndStore(oldRule.examples, oldRule.weight_id, target_class) 2229 2326 if len(tempRule.examples)<len(oldRule.examples): 2230 new Rules.append(tempRule)2327 new_rules.append(tempRule) 2231 2328 ## print " NEW RULES " 2232 ## for r in new Rules:2233 ## print Orange.classification.rules.rule ToString(r)2234 for r in new Rules:2329 ## for r in 
new_rules: 2330 ## print Orange.classification.rules.rule_to_string(r) 2331 for r in new_rules: 2235 2332 r.parentRule = oldRule 2236 ## print Orange.classification.rules.ruleToString(r) 2237 return newRules 2238 2239 2240 # ********************** # 2241 # Probabilistic covering # 2242 # ********************** # 2333 ## print Orange.classification.rules.rule_to_string(r) 2334 return new_rules 2335 2243 2336 2244 2337 class CovererAndRemover_Prob(Orange.core.RuleCovererAndRemover): 2245 """ This class impements probabilistic covering. """ 2246 2247 def __init__(self, examples, weightID, targetClass, apriori): 2338 """ 2339 This class impements probabilistic covering. 2340 """ 2341 def __init__(self, examples, weight_id, target_class, apriori): 2248 2342 self.bestRule = [None]*len(examples) 2249 2343 self.probAttribute = Orange.core.newmetaid() 2250 self.apriori Prob = apriori[targetClass]/apriori.abs2251 examples.addMetaAttribute(self.probAttribute, self.apriori Prob)2344 self.apriori_prob = apriori[target_class]/apriori.abs 2345 examples.addMetaAttribute(self.probAttribute, self.apriori_prob) 2252 2346 examples.domain.addmeta(self.probAttribute, Orange.core.FloatVariable("Probs")) 2253 2347 2254 def getBestRules(self, currentRules, examples, weight ID):2255 best Rules = Orange.core.RuleList()2348 def getBestRules(self, currentRules, examples, weight_id): 2349 best_rules = Orange.core.RuleList() 2256 2350 ## for r in currentRules: 2257 ## if hasattr(r.learner, "argumentRule") and not Orange.classification.rules.rule_in_set(r,best Rules):2258 ## best Rules.append(r)2351 ## if hasattr(r.learner, "argumentRule") and not Orange.classification.rules.rule_in_set(r,best_rules): 2352 ## best_rules.append(r) 2259 2353 for r_i,r in enumerate(self.bestRule): 2260 if r and not Orange.classification.rules.rule_in_set(r,best Rules) and int(examples[r_i].getclass())==int(r.classifier.defaultValue):2261 best Rules.append(r)2262 return best Rules2263 2264 def __call__(self, rule, 
examples, weights, target Class):2354 if r and not Orange.classification.rules.rule_in_set(r,best_rules) and int(examples[r_i].getclass())==int(r.classifier.default_value): 2355 best_rules.append(r) 2356 return best_rules 2357 2358 def __call__(self, rule, examples, weights, target_class): 2265 2359 if hasattr(rule, "learner") and hasattr(rule.learner, "arg_example"): 2266 2360 example = rule.learner.arg_example … … 2286 2380 p = 0.0 2287 2381 for ei, e in enumerate(examples): 2288 p += (e[self.probAttribute]  self.apriori Prob)/(1.0self.aprioriProb)2382 p += (e[self.probAttribute]  self.apriori_prob)/(1.0self.apriori_prob) 2289 2383 return p/len(examples) 2290 2384 2291 2292 # **************************************** #2293 # Estimation of extreme value distribution #2294 # **************************************** #2295 2296 # Miscellaneous  utility functions2297 def avg(l):2298 return sum(l)/len(l) if l else 0.2299 2300 def var(l):2301 if len(l)<2:2302 return 0.2303 av = avg(l)2304 return sum([math.pow(liav,2) for li in l])/(len(l)1)2305 2306 def perc(l,p):2307 l.sort()2308 return l[int(math.floor(p*len(l)))]2309 2310 2385 class EVDFitter: 2311 """ Randomizes a dataset and fits an extreme value distribution onto it. """ 2312 2386 """ 2387 Randomizes a dataset and fits an extreme value distribution onto it. 
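`EVDFitter` above estimates how good the best rule can look on data where the class carries no real signal: it repeatedly shuffles the class column, records the best rule quality found on each randomized copy, and later fits an extreme value distribution to those maxima. The resampling core of that idea can be sketched in plain Python; the function name and the scoring callback are illustrative stand-ins, not the Orange API:

```python
import random

def permutation_null_maxima(classes, score_best_rule, n=200, seed=100):
    """Collect best-rule scores on n label-shuffled copies of the data.
    Shuffling the class column destroys any attribute-class dependence,
    so the returned maxima approximate the null distribution of the
    optimistic 'best rule' score."""
    rng = random.Random(seed)  # fixed seed keeps experiments repeatable
    maxima = []
    for _ in range(n):
        shuffled = list(classes)
        rng.shuffle(shuffled)
        maxima.append(score_best_rule(shuffled))
    return maxima
```

Seeding the generator mirrors `EVDFitter.__init__`'s `randomseed` parameter, which exists so that repeated runs produce the same fitted distribution.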
2388 """ 2313 2389 def __init__(self, learner, n=200, randomseed=100): 2314 2390 self.learner = learner … … 2321 2397 cl_num = newData.toNumpy("C") 2322 2398 random.shuffle(cl_num[0][:,0]) 2323 clData = Orange.core.ExampleTable(Orange.core.Domain([newData.domain.class Var]),cl_num[0])2399 clData = Orange.core.ExampleTable(Orange.core.Domain([newData.domain.class_var]),cl_num[0]) 2324 2400 for d_i,d in enumerate(newData): 2325 d[newData.domain.class Var] = clData[d_i][newData.domain.classVar]2401 d[newData.domain.class_var] = clData[d_i][newData.domain.class_var] 2326 2402 return newData 2327 2403 … … 2345 2421 2346 2422 def prepare_learner(self): 2347 self.oldStopper = self.learner.rule Finder.ruleStoppingValidator2348 self.evaluator = self.learner.rule Finder.evaluator2349 self.refiner = self.learner.rule Finder.refiner2350 self.validator = self.learner.rule Finder.validator2351 self.ruleFilter = self.learner.rule Finder.ruleFilter2352 self.learner.rule Finder.validator = None2353 self.learner.rule Finder.evaluator = Orange.core.RuleEvaluator_LRS()2354 self.learner.rule Finder.evaluator.storeRules = True2355 self.learner.rule Finder.ruleStoppingValidator = Orange.core.RuleValidator_LRS(alpha=1.0)2356 self.learner.rule Finder.ruleStoppingValidator.max_rule_complexity = 02357 self.learner.rule Finder.refiner = Orange.core.RuleBeamRefiner_Selector()2358 self.learner.rule Finder.ruleFilter = Orange.core.RuleBeamFilter_Width(width = 1)2423 self.oldStopper = self.learner.rule_finder.rule_stoppingValidator 2424 self.evaluator = self.learner.rule_finder.evaluator 2425 self.refiner = self.learner.rule_finder.refiner 2426 self.validator = self.learner.rule_finder.validator 2427 self.ruleFilter = self.learner.rule_finder.ruleFilter 2428 self.learner.rule_finder.validator = None 2429 self.learner.rule_finder.evaluator = Orange.core.RuleEvaluator_LRS() 2430 self.learner.rule_finder.evaluator.storeRules = True 2431 self.learner.rule_finder.rule_stoppingValidator = 
Orange.core.RuleValidator_LRS(alpha=1.0) 2432 self.learner.rule_finder.rule_stoppingValidator.max_rule_complexity = 0 2433 self.learner.rule_finder.refiner = Orange.core.RuleBeamRefiner_Selector() 2434 self.learner.rule_finder.ruleFilter = Orange.core.RuleBeamFilter_Width(width = 1) 2359 2435 2360 2436 2361 2437 def restore_learner(self): 2362 self.learner.rule Finder.evaluator = self.evaluator2363 self.learner.rule Finder.ruleStoppingValidator = self.oldStopper2364 self.learner.rule Finder.refiner = self.refiner2365 self.learner.rule Finder.validator = self.validator2366 self.learner.rule Finder.ruleFilter = self.ruleFilter2367 2368 def computeEVD(self, data, weight ID=0, target_class=0, progress=None):2438 self.learner.rule_finder.evaluator = self.evaluator 2439 self.learner.rule_finder.rule_stoppingValidator = self.oldStopper 2440 self.learner.rule_finder.refiner = self.refiner 2441 self.learner.rule_finder.validator = self.validator 2442 self.learner.rule_finder.ruleFilter = self.ruleFilter 2443 2444 def computeEVD(self, data, weight_id=0, target_class=0, progress=None): 2369 2445 # initialize random seed to make experiments repeatable 2370 2446 random.seed(self.randomseed) … … 2374 2450 2375 2451 # loop through N (sampling repetitions) 2376 extreme Dists=[(0, 1, [])]2377 self.learner.rule Finder.ruleStoppingValidator.max_rule_complexity = self.oldStopper.max_rule_complexity2452 extreme_dists=[(0, 1, [])] 2453 self.learner.rule_finder.rule_stoppingValidator.max_rule_complexity = self.oldStopper.max_rule_complexity 2378 2454 maxVals = [[] for l in range(self.oldStopper.max_rule_complexity)] 2379 2455 for d_i in range(self.n): … … 2384 2460 # create data set (remove and randomize) 2385 2461 tempData = self.createRandomDataSet(data) 2386 self.learner.rule Finder.evaluator.rules = Orange.core.RuleList()2462 self.learner.rule_finder.evaluator.rules = Orange.core.RuleList() 2387 2463 # Next, learn a rule 2388 self.learner.rule Finder(tempData,weightID,target_class, 
Orange.core.RuleList())2464 self.learner.rule_finder(tempData,weight_id,target_class, Orange.core.RuleList()) 2389 2465 for l in range(self.oldStopper.max_rule_complexity): 2390 qs = [r.quality for r in self.learner.rule Finder.evaluator.rules if r.complexity == l+1]2466 qs = [r.quality for r in self.learner.rule_finder.evaluator.rules if r.complexity == l+1] 2391 2467 if qs: 2392 2468 maxVals[l].append(max(qs)) … … 2397 2473 for mi,m in enumerate(maxVals): 2398 2474 mu, beta, perc = self.compParameters(m,mu,beta) 2399 extreme Dists.append((mu, beta, perc))2400 extreme Dists.extend([(0,1,[])]*(mi))2475 extreme_dists.append((mu, beta, perc)) 2476 extreme_dists.extend([(0,1,[])]*(mi)) 2401 2477 2402 2478 self.restore_learner() 2403 return self.createEVDistList(extremeDists) 2404 2405 # ************************* # 2406 # Rule based classification # 2407 # ************************* # 2479 return self.createEVDistList(extreme_dists) 2408 2480 2409 2481 class CrossValidation: 2410 def __init__(self, folds=5, random Generator = 150):2482 def __init__(self, folds=5, random_generator = 150): 2411 2483 self.folds = folds 2412 self.random Generator = randomGenerator2484 self.random_generator = random_generator 2413 2485 2414 2486 def __call__(self, learner, examples, weight): 2415 res = orngTest.crossValidation([learner], (examples, weight), folds = self.folds, random Generator = self.randomGenerator)2487 res = orngTest.crossValidation([learner], (examples, weight), folds = self.folds, random_generator = self.random_generator) 2416 2488 return self.get_prob_from_res(res, examples) 2417 2489 2418 2490 def get_prob_from_res(self, res, examples): 2419 prob Dist = Orange.core.DistributionList()2491 prob_dist = Orange.core.DistributionList() 2420 2492 for tex in res.results: 2421 d = Orange.core.Distribution(examples.domain.class Var)2493 d = Orange.core.Distribution(examples.domain.class_var) 2422 2494 for di in range(len(d)): 2423 2495 d[di] = tex.probabilities[0][di] 2424 
probDist.append(d) 2425 return probDist 2426 2496 prob_dist.append(d) 2497 return prob_dist 2498 2499 @deprecated_members({"sortRules": "sort_rules"}) 2427 2500 class PILAR: 2428 """ PILAR (Probabilistic improvement of learning algorithms with rules) """ 2501 """ 2502 PILAR (Probabilistic improvement of learning algorithms with rules). 2503 """ 2429 2504 def __init__(self, alternative_learner = None, min_cl_sig = 0.5, min_beta = 0.0, set_prefix_rules = False, optimize_betas = True): 2430 2505 self.alternative_learner = alternative_learner … … 2438 2513 rules = self.add_null_rule(rules, examples, weight) 2439 2514 if self.alternative_learner: 2440 prob Dist = self.selected_evaluation(self.alternative_learner, examples, weight)2515 prob_dist = self.selected_evaluation(self.alternative_learner, examples, weight) 2441 2516 classifier = self.alternative_learner(examples,weight) 2442 ## prob Dist = Orange.core.DistributionList()2517 ## prob_dist = Orange.core.DistributionList() 2443 2518 ## for e in examples: 2444 ## prob Dist.append(classifier(e,Orange.core.GetProbabilities))2445 cl = Orange.core.RuleClassifier_logit(rules, self.min_cl_sig, self.min_beta, examples, weight, self.set_prefix_rules, self.optimize_betas, classifier, prob Dist)2519 ## prob_dist.append(classifier(e,Orange.core.GetProbabilities)) 2520 cl = Orange.core.RuleClassifier_logit(rules, self.min_cl_sig, self.min_beta, examples, weight, self.set_prefix_rules, self.optimize_betas, classifier, prob_dist) 2446 2521 else: 2447 2522 cl = Orange.core.RuleClassifier_logit(rules, self.min_cl_sig, self.min_beta, examples, weight, self.set_prefix_rules, self.optimize_betas) … … 2451 2526 cl.rules[ri].setattr("beta",cl.ruleBetas[ri]) 2452 2527 ## if cl.ruleBetas[ri] > 0: 2453 ## print Orange.classification.rules.rule ToString(r), r.quality, cl.ruleBetas[ri]2528 ## print Orange.classification.rules.rule_to_string(r), r.quality, cl.ruleBetas[ri] 2454 2529 cl.all_rules = cl.rules 2455 cl.rules = self.sort 
Rules(cl.rules)2530 cl.rules = self.sort_rules(cl.rules) 2456 2531 cl.ruleBetas = [r.beta for r in cl.rules] 2457 2532 cl.setattr("data", examples) … … 2459 2534 2460 2535 def add_null_rule(self, rules, examples, weight): 2461 for cl in examples.domain.class Var:2536 for cl in examples.domain.class_var: 2462 2537 tmpRle = Orange.core.Rule() 2463 2538 tmpRle.filter = Orange.core.Filter_values(domain = examples.domain) 2464 2539 tmpRle.parentRule = None 2465 2540 tmpRle.filterAndStore(examples,weight,int(cl)) 2466 tmpRle.quality = tmpRle.class Distribution[int(cl)]/tmpRle.classDistribution.abs2541 tmpRle.quality = tmpRle.class_distribution[int(cl)]/tmpRle.class_distribution.abs 2467 2542 rules.append(tmpRle) 2468 2543 return rules 2469 2544 2470 def sort Rules(self, rules):2471 new Rules = Orange.core.RuleList()2545 def sort_rules(self, rules): 2546 new_rules = Orange.core.RuleList() 2472 2547 foundRule = True 2473 2548 while foundRule: … … 2475 2550 bestRule = None 2476 2551 for r in rules: 2477 if r in new Rules:2552 if r in new_rules: 2478 2553 continue 2479 2554 if r.beta < 0.01 and r.beta > 0.01: … … 2492 2567 continue 2493 2568 if bestRule: 2494 newRules.append(bestRule) 2495 return newRules 2496 2497 2498 class CN2UnorderedClassifier(Orange.core.RuleClassifier): 2499 """ Classification from rules as in CN2. """ 2500 def __init__(self, rules, examples, weightID = 0, **argkw): 2569 new_rules.append(bestRule) 2570 return new_rules 2571 2572 2573 @deprecated_members({"defaultClassIndex": "default_class_index"}) 2574 class RuleClassifier_bestRule(Orange.core.RuleClassifier): 2575 """ 2576 A very simple classifier, it takes the best rule of each class and 2577 normalizes probabilities. 
2578 """ 2579 def __init__(self, rules, examples, weight_id = 0, **argkw): 2501 2580 self.rules = rules 2502 2581 self.examples = examples 2503 self.weightID = weightID 2504 self.prior = Orange.core.Distribution(examples.domain.classVar, examples, weightID) 2582 self.apriori = Orange.core.Distribution(examples.domain.class_var,examples,weight_id) 2583 self.apriori_prob = [a/self.apriori.abs for a in self.apriori] 2584 self.weight_id = weight_id 2505 2585 self.__dict__.update(argkw) 2506 2507 def __call__(self, example, result_type=Orange.core.GetValue, retRules = False): 2508 # iterate through the set of induced rules: self.rules and sum their distributions 2509 ret_dist = self.sum_distributions([r for r in self.rules if r(example)]) 2510 # normalize 2511 a = sum(ret_dist) 2512 for ri, r in enumerate(ret_dist): 2513 ret_dist[ri] = ret_dist[ri]/a 2514 ## ret_dist.normalize() 2515 # return value 2516 if result_type == Orange.core.GetValue: 2517 return ret_dist.modus() 2518 if result_type == Orange.core.GetProbabilities: 2519 return ret_dist 2520 return (ret_dist.modus(),ret_dist) 2521 2522 def sum_distributions(self, rules): 2523 if not rules: 2524 return self.prior 2525 empty_disc = Orange.core.Distribution(rules[0].examples.domain.classVar) 2526 for r in rules: 2527 for i,d in enumerate(r.classDistribution): 2528 empty_disc[i] = empty_disc[i] + d 2529 return empty_disc 2530 2531 def __str__(self): 2532 retStr = "" 2586 self.default_class_index = 1 2587 2588 def __call__(self, example, result_type=Orange.classification.Classifier.GetValue, retRules = False): 2589 example = Orange.core.Example(self.examples.domain,example) 2590 tempDist = Orange.core.Distribution(example.domain.class_var) 2591 best_rules = [None]*len(example.domain.class_var.values) 2592 2533 2593 for r in self.rules: 2534 retStr += Orange.classification.rules.ruleToString(r)+" "+str(r.classDistribution)+"\n" 2535 return retStr 2536 2537 2538 class RuleClassifier_bestRule(Orange.core.RuleClassifier): 
2539 """ A very simple classifier, it takes the best rule of each class and normalizes probabilities. """ 2540 def __init__(self, rules, examples, weightID = 0, **argkw): 2541 self.rules = rules 2542 self.examples = examples 2543 self.apriori = Orange.core.Distribution(examples.domain.classVar,examples,weightID) 2544 self.aprioriProb = [a/self.apriori.abs for a in self.apriori] 2545 self.weightID = weightID 2546 self.__dict__.update(argkw) 2547 self.defaultClassIndex = 1 2548 2549 def __call__(self, example, result_type=Orange.core.GetValue, retRules = False): 2550 example = Orange.core.Example(self.examples.domain,example) 2551 tempDist = Orange.core.Distribution(example.domain.classVar) 2552 bestRules = [None]*len(example.domain.classVar.values) 2553 2554 for r in self.rules: 2555 if r(example) and not self.defaultClassIndex == int(r.classifier.defaultVal) and \ 2556 (not bestRules[int(r.classifier.defaultVal)] or r.quality>tempDist[r.classifier.defaultVal]): 2557 tempDist[r.classifier.defaultVal] = r.quality 2558 bestRules[int(r.classifier.defaultVal)] = r 2559 for b in bestRules: 2594 if r(example) and not self.default_class_index == int(r.classifier.default_val) and \ 2595 (not best_rules[int(r.classifier.default_val)] or r.quality>tempDist[r.classifier.default_val]): 2596 tempDist[r.classifier.default_val] = r.quality 2597 best_rules[int(r.classifier.default_val)] = r 2598 for b in best_rules: 2560 2599 if b: 2561 2600 used = getattr(b,"used",0.0) 2562 2601 b.setattr("used",used+1) 2563 nonCovPriorSum = sum([tempDist[i] == 0. and self.apriori Prob[i] or 0. for i in range(len(self.aprioriProb))])2602 nonCovPriorSum = sum([tempDist[i] == 0. and self.apriori_prob[i] or 0. for i in range(len(self.apriori_prob))]) 2564 2603 if tempDist.abs < 1.: 2565 2604 residue = 1.  
tempDist.abs 2566 for a_i,a in enumerate(self.apriori Prob):2605 for a_i,a in enumerate(self.apriori_prob): 2567 2606 if tempDist[a_i] == 0.: 2568 tempDist[a_i]=self.apriori Prob[a_i]*residue/nonCovPriorSum2569 final Dist = tempDist #Orange.core.Distribution(example.domain.classVar)2607 tempDist[a_i]=self.apriori_prob[a_i]*residue/nonCovPriorSum 2608 final_dist = tempDist #Orange.core.Distribution(example.domain.class_var) 2570 2609 else: 2571 2610 tempDist.normalize() # prior probability 2572 tmp Examples = Orange.core.ExampleTable(self.examples)2573 for r in best Rules:2611 tmp_examples = Orange.core.ExampleTable(self.examples) 2612 for r in best_rules: 2574 2613 if r: 2575 tmp Examples = r.filter(tmpExamples)2576 tmpDist = Orange.core.Distribution(tmp Examples.domain.classVar,tmpExamples,self.weightID)2614 tmp_examples = r.filter(tmp_examples) 2615 tmpDist = Orange.core.Distribution(tmp_examples.domain.class_var,tmp_examples,self.weight_id) 2577 2616 tmpDist.normalize() 2578 probs = [0.]*len(self.examples.domain.class Var.values)2579 for i in range(len(self.examples.domain.class Var.values)):2617 probs = [0.]*len(self.examples.domain.class_var.values) 2618 for i in range(len(self.examples.domain.class_var.values)): 2580 2619 probs[i] = tmpDist[i]+tempDist[i]*2 2581 final Dist = Orange.core.Distribution(self.examples.domain.classVar)2582 for cl_i,cl in enumerate(self.examples.domain.class Var):2583 final Dist[cl] = probs[cl_i]2584 final Dist.normalize()2620 final_dist = Orange.core.Distribution(self.examples.domain.class_var) 2621 for cl_i,cl in enumerate(self.examples.domain.class_var): 2622 final_dist[cl] = probs[cl_i] 2623 final_dist.normalize() 2585 2624 2586 2625 if retRules: # Do you want to return rules with classification? 
2587 if result_type == Orange.c ore.GetValue:2588 return (final Dist.modus(),bestRules)2626 if result_type == Orange.classification.Classifier.GetValue: 2627 return (final_dist.modus(),best_rules) 2589 2628 if result_type == Orange.core.GetProbabilities: 2590 return (final Dist, bestRules)2591 return (final Dist.modus(),finalDist, bestRules)2592 if result_type == Orange.c ore.GetValue:2593 return final Dist.modus()2629 return (final_dist, best_rules) 2630 return (final_dist.modus(),final_dist, best_rules) 2631 if result_type == Orange.classification.Classifier.GetValue: 2632 return final_dist.modus() 2594 2633 if result_type == Orange.core.GetProbabilities: 2595 return final Dist2596 return (final Dist.modus(),finalDist)2634 return final_dist 2635 return (final_dist.modus(),final_dist) 
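The EVDFitter in this changeset shuffles the class labels of a dataset, reruns the rule finder, and fits an extreme value (Gumbel) distribution to the best rule qualities found, so that a rule's significance can be judged against chance. The fitting step itself can be sketched in plain Python, independent of Orange; `fit_gumbel` and the simulated scores below are illustrative stand-ins, not the changeset's `compParameters`:

```python
import math
import random

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def fit_gumbel(sample_maxima):
    """Method-of-moments fit of a Gumbel distribution:
    beta = std * sqrt(6) / pi,  mu = mean - gamma * beta."""
    n = len(sample_maxima)
    mean = sum(sample_maxima) / n
    var = sum((x - mean) ** 2 for x in sample_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

# Simulate what EVDFitter measures: the best rule quality on
# label-shuffled data is a maximum over many chance evaluations,
# so its distribution is approximately Gumbel.
random.seed(100)
maxima = [max(random.random() for _ in range(50)) for _ in range(200)]
mu, beta = fit_gumbel(maxima)
```

With 200 repetitions of the maximum of 50 uniform draws, `mu` lands near 0.97 and `beta` near 0.015, which is why a rule quality far above `mu` on real data is unlikely to be a chance artifact.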
orange/doc/Orange/rst/code/rules-cn2.py
r7366 r7802
18 18  # All rule-base classifiers can have their rules printed out like this:
19 19  for r in cn2_classifier.rules:
20         print Orange.classification.rules.ruleToString(r)
20         print Orange.classification.rules.rule_to_string(r)
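The printed form produced by `rule_to_string` reads roughly as "IF conditions THEN class". A standalone sketch of such a formatter, using a hypothetical `(attribute, value)`-pair rule representation (Orange's own `Rule` objects instead carry a filter and an embedded classifier), with the class attribute name `survived` borrowed from the titanic example:

```python
def rule_to_string(conditions, predicted_class):
    """Render a propositional rule the way CN2 output is usually read:
    IF cond1 AND cond2 THEN class. (Illustrative sketch only.)"""
    if not conditions:
        body = "TRUE"  # the unconditional (default) rule
    else:
        body = " AND ".join("%s=%s" % (attr, val) for attr, val in conditions)
    return "IF %s THEN survived=%s" % (body, predicted_class)

print(rule_to_string([("sex", "female"), ("status", "first")], "yes"))
# prints: IF sex=female AND status=first THEN survived=yes
```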
orange/doc/Orange/rst/code/rules-customized.py
r7366 r7802
8 8
9 9    learner = Orange.classification.rules.RuleLearner()
10         learner.ruleFinder = Orange.classification.rules.RuleBeamFinder()
11         learner.ruleFinder.evaluator = Orange.classification.rules.MEstimateEvaluator(m=50)
10         learner.rule_finder = Orange.classification.rules.RuleBeamFinder()
11         learner.rule_finder.evaluator = Orange.classification.rules.MEstimateEvaluator(m=50)
12 12
13 13   table = Orange.data.Table("titanic")
… …
15 15
16 16   for r in classifier.rules:
17         print Orange.classification.rules.ruleToString(r)
17         print Orange.classification.rules.rule_to_string(r)
18 18
19         learner.ruleFinder.ruleStoppingValidator = \
19         learner.rule_finder.rule_stopping_validator = \
20 20       Orange.classification.rules.RuleValidator_LRS(alpha=0.01,
21 21           min_coverage=10, max_rule_complexity = 2)
22         learner.ruleFinder.ruleFilter = \
22         learner.rule_finder.rule_filter = \
23 23       Orange.classification.rules.RuleBeamFilter_Width(width = 50)
… …
26 26
27 27   for r in classifier.rules:
28         print Orange.classification.rules.ruleToString(r)
28         print Orange.classification.rules.rule_to_string(r)
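The `MEstimateEvaluator(m=50)` used in this example scores a rule by the m-estimate of its accuracy, (p + m·prior)/(n + m), which shrinks the quality of low-coverage rules toward the class prior. A minimal sketch of the formula (function name and numbers are illustrative, not Orange API):

```python
def m_estimate(positives, covered, prior, m=50):
    """m-estimate of rule accuracy: (p + m*prior) / (n + m).
    Large m pulls low-coverage rules toward the class prior."""
    return (positives + m * prior) / float(covered + m)

# A pure rule covering only 10 examples is shrunk heavily toward
# the 0.5 prior; a pure rule covering 1000 examples barely moves.
q_small = m_estimate(10, 10, 0.5)      # (10 + 25) / 60  ~ 0.583
q_large = m_estimate(1000, 1000, 0.5)  # (1000 + 25) / 1050 ~ 0.976
```

This is why raising `m` favours general rules over narrow, possibly overfitted ones.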
orange/orngCN2.py
r7367 r7802
1      from Orange.classification.rules import ruleToString
1      from Orange.classification.rules import rule_to_string as ruleToString
2 2    from Orange.classification.rules import LaplaceEvaluator
3 3    from Orange.classification.rules import WRACCEvaluator
… …
23 23  from Orange.classification.rules import perc
24 24  from Orange.classification.rules import createRandomDataSet
25     from Orange.classification.rules import compParameters
25     from Orange.classification.rules import compParameters
26 26  from Orange.classification.rules import computeDists
27 27  from Orange.classification.rules import createEVDistList
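The `orngCN2` shim keeps the old camelCase names importable by aliasing them to the renamed functions (`rule_to_string as ruleToString`). The actual module uses plain `import ... as` re-exports; a warning-emitting variant of the same idea might look like this (the decorator and the stand-in function are hypothetical):

```python
import warnings

def rule_to_string(rule):
    # Stand-in for the real implementation; illustrative only.
    return "IF ... THEN ..."

def _deprecated_alias(new_func, old_name):
    """Call through to new_func, warning that old_name is deprecated."""
    def wrapper(*args, **kwargs):
        warnings.warn("%s is deprecated; use %s instead"
                      % (old_name, new_func.__name__),
                      DeprecationWarning, stacklevel=2)
        return new_func(*args, **kwargs)
    wrapper.__name__ = old_name
    return wrapper

# Old camelCase name stays importable, like orngCN2's re-exports.
ruleToString = _deprecated_alias(rule_to_string, "ruleToString")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    out = ruleToString(None)
```

The plain-alias approach in the changeset is silent by design; the wrapper above would let callers discover the rename at runtime.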