Changeset 7802:72d408a32b16 in orange


Ignore:
Timestamp:
04/02/11 22:45:14 (3 years ago)
Author:
matija <matija.polajnar@…>
Branch:
default
Convert:
10db50c800fcbb808366aeba541cb8d52895feaa
Message:

Orange.classification.rules: Completion (maybe?) of transition to Orange 2.5.

Location:
orange
Files:
4 edited

  • orange/Orange/classification/rules.py

    r7690 r7802  
    1313and rule-based classification methods. First, there is an implementation of the classic  
    1414`CN2 induction algorithm <http://www.springerlink.com/content/k6q2v76736w5039r/>`_.  
    15 The implementation of CN2 is modular, providing the oportunity to change, specialize 
     15The implementation of CN2 is modular, providing the opportunity to change, specialize 
    1616and improve the algorithm. The implementation is thus based on the rule induction  
    1717framework that we describe below. 
     
    100100  <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.1700>`_. In 
    101101  Machine Learning - EWSL-91. Proceedings of the European Working Session on 
    102   Learning., pages 151--163, Porto, Portugal, March 1991. 
     102  Learning, pp 151--163, Porto, Portugal, March 1991. 
    103103* Lavrac, Kavsek, Flach, Todorovski: `Subgroup Discovery with CN2-SD 
    104104  <http://jmlr.csail.mit.edu/papers/volume5/lavrac04a/lavrac04a.pdf>`_. Journal 
    105105  of Machine Learning Research 5: 153-188, 2004. 
     106 
     107 
     108Argument based CN2 
     109================== 
     110 
     111Orange also supports argument-based CN2 learning. 
     112 
     113.. autoclass:: Orange.classification.rules.ABCN2 
     114   :members: 
     115   :show-inheritance: 
     116    
     117   This class has many more undocumented methods; see the source code for 
     118   reference. 
     119    
     120.. autoclass:: Orange.classification.rules.ABCN2Ordered 
     121   :members: 
     122   :show-inheritance: 
     123    
     124.. autoclass:: Orange.classification.rules.ABCN2M 
     125   :members: 
     126   :show-inheritance: 
     127 
     128This module has many more undocumented classes related to argument-based learning; 
     129see the source code for reference. 
     130 
     131References 
     132---------- 
     133 
     134* Bratko, Mozina, Zabkar. `Argument-Based Machine Learning 
     135  <http://www.springerlink.com/content/f41g17t1259006k4/>`_. Lecture Notes in 
     136  Computer Science: vol. 4203/2006, 11-17, 2006. 
    106137 
    107138 
     
    135166    IF TRUE THEN survived=yes<0.000, 5.000> 
    136167 
    137 Notice that we first need to set the ruleFinder component, because the default 
     168Notice that we first need to set the rule_finder component, because the default 
    138169components are not constructed when the learner is constructed, but only when 
    139170we run it on data. At that time, the algorithm checks which components are 
     
    166197      each rule can be used as a classical Orange-like 
    167198      classifier. Must be of type :class:`Orange.classification.Classifier`. 
    168       By default, an instance of :class:`Orange.core.DefaultClassifier` is used. 
     199      By default, an instance of :class:`Orange.classification.ConstantClassifier` is used. 
    169200    
    170201   .. attribute:: learner 
    171202       
    172203      learner to be used for making a classifier. Must be of type 
    173       :class:`Orange.core.learner`. By default, 
    174       :class:`Orange.core.MajorityLearner` is used. 
    175     
    176    .. attribute:: classDistribution 
     204      :class:`Orange.classification.Learner`. By default, 
     205      :class:`Orange.classification.majority.MajorityLearner` is used. 
     206    
     207   .. attribute:: class_distribution 
    177208       
    178209      distribution of class in data instances covered by this rule 
    179       (:class:`Orange.core.Distribution`). 
     210      (:class:`Orange.statistics.distribution.Distribution`). 
    180211    
    181212   .. attribute:: examples 
     
    183214      data instances covered by this rule (:class:`Orange.data.Table`). 
    184215    
    185    .. attribute:: weightID 
     216   .. attribute:: weight_id 
    186217    
    187218      ID of the weight meta-attribute for the stored data instances (int). 
     
    199230      but, obviously, any other measure can be applied. 
    200231    
    201    .. method:: filterAndStore(instances, weightID=0, targetClass=-1) 
     232   .. method:: filterAndStore(instances, weight_id=0, target_class=-1) 
    202233    
    203234      Filter passed data instances and store them in the attribute 'examples'. 
    204       Also, compute classDistribution, set weight of stored examples and create 
     235      Also, compute class_distribution, set weight of stored examples and create 
    205236      a new classifier using 'learner' attribute. 
    206237       
    207       :param weightID: ID of the weight meta-attribute. 
    208       :type weightID: int 
    209       :param targetClass: index of target class; -1 for all. 
    210       :type targetClass: int 
     238      :param weight_id: ID of the weight meta-attribute. 
     239      :type weight_id: int 
     240      :param target_class: index of target class; -1 for all. 
     241      :type target_class: int 
    211242    
    212243   Objects of this class can be invoked: 
    213244 
    214    .. method:: __call__(instance, instances, weightID=0, targetClass=-1) 
     245   .. method:: __call__(instance, instances, weight_id=0, target_class=-1) 
    215246    
    216247      There are two ways of invoking this method. One way is only passing the 
     
    232263      :type negate: bool 
    233264 
    234 .. py:class:: Orange.classification.rules.RuleLearner(storeInstances = true, targetClass = -1, baseRules = Orange.classification.rules.RuleList()) 
    235     
    236    Bases: :class:`Orange.core.Learner` 
     265.. py:class:: Orange.classification.rules.RuleLearner(store_instances = True, target_class = -1, base_rules = Orange.classification.rules.RuleList()) 
     266    
     267   Bases: :class:`Orange.classification.Learner` 
    237268    
    238269   A base rule induction learner. The algorithm follows separate-and-conquer 
     
    249280   .. parsed-literal:: 
    250281 
    251       def \_\_call\_\_(self, instances, weightID=0): 
    252           ruleList = Orange.classification.rules.RuleList() 
    253           allInstances = Orange.data.Table(instances) 
    254           while not self.\ **dataStopping**\ (instances, weightID, self.targetClass): 
    255               newRule = self.\ **ruleFinder**\ (instances, weightID, self.targetClass, 
    256                                         self.baseRules) 
    257               if self.\ **ruleStopping**\ (ruleList, newRule, instances, weightID): 
     282      def \_\_call\_\_(self, instances, weight_id=0): 
     283          rule_list = Orange.classification.rules.RuleList() 
     284          all_instances = Orange.data.Table(instances) 
     285          while not self.\ **data_stopping**\ (instances, weight_id, self.target_class): 
     286              new_rule = self.\ **rule_finder**\ (instances, weight_id, self.target_class, 
     287                                        self.base_rules) 
     288              if self.\ **rule_stopping**\ (rule_list, new_rule, instances, weight_id): 
    258289                  break 
    259               instances, weightID = self.\ **coverAndRemove**\ (newRule, instances, 
    260                                                       weightID, self.targetClass) 
    261               ruleList.append(newRule) 
     290              instances, weight_id = self.\ **cover_and_remove**\ (new_rule, instances, 
     291                                                      weight_id, self.target_class) 
     292              rule_list.append(new_rule) 
    262293          return Orange.classification.rules.RuleClassifier_FirstRule( 
    263               rules=ruleList, instances=allInstances) 
     294              rules=rule_list, instances=all_instances) 
    264295                 
    265    The four customizable components here are the invoked dataStopping, 
    266    ruleFinder, coverAndRemove and ruleStopping objects. By default, components 
     296   The four customizable components here are the invoked data_stopping, 
     297   rule_finder, cover_and_remove and rule_stopping objects. By default, components 
    267298   of the original CN2 algorithm will be used, but this can be changed by 
    268299   modifying those attributes: 
    269300    
    270    .. attribute:: dataStopping 
     301   .. attribute:: data_stopping 
    271302    
    272303      an object of class 
     
    278309      returns True if there are no more instances of given class.  
    279310    
    280    .. attribute:: ruleStopping 
     311   .. attribute:: rule_stopping 
    281312       
    282313      an object of class  
     
    284315      that decides from the last rule learned if it is worthwhile to use the 
    285316      rule and learn more rules. By default, no rule stopping criterion is 
    286       used (ruleStopping==None), thus accepting all rules. 
     317      used (rule_stopping==None), thus accepting all rules. 
    287318        
    288    .. attribute:: coverAndRemove 
     319   .. attribute:: cover_and_remove 
    289320        
    290321      an object of class 
     
    294325      (:class:`Orange.classification.rules.RuleCovererAndRemover_Default`) 
    295326      only removes the instances that belong to given target class, except if 
    296       it is not given (ie. targetClass==-1). 
    297      
    298    .. attribute:: ruleFinder 
     327      it is not given (i.e. target_class==-1). 
     328     
     329   .. attribute:: rule_finder 
    299330       
    300331      an object of class 
     
    305336   Constructor can be given the following parameters: 
    306337     
    307    :param storeInstances: if set to True, the rules will have data instances 
     338   :param store_instances: if set to True, the rules will have data instances 
    308339       stored. 
    309    :type storeInstances: bool 
    310      
    311    :param targetClass: index of a specific class being learned; -1 for all. 
    312    :type targetClass: int 
    313     
    314    :param baseRules: Rules that we would like to use in ruleFinder to 
     340   :type store_instances: bool 
     341     
     342   :param target_class: index of a specific class being learned; -1 for all. 
     343   :type target_class: int 
     344    
     345   :param base_rules: Rules that we would like to use in rule_finder to 
    315346       constrain the learning space. If not set, it will be set to a set 
    316347       containing only an empty rule. 
    317    :type baseRules: :class:`Orange.classification.rules.RuleList` 
     348   :type base_rules: :class:`Orange.classification.rules.RuleList` 
    318349 
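The covering loop in the pseudo-code above can be exercised as a toy, self-contained Python sketch (illustrative only: `find_best_rule`, the `(attribute_index, value)` rules, and the list-of-pairs data format are hypothetical stand-ins for the rule_finder, cover_and_remove, and Table components — not the Orange API):

```python
# Toy separate-and-conquer loop mirroring the RuleLearner pseudo-code.
# Instances are (attributes, label) pairs; a "rule" is one
# (attribute_index, value) test. All names are illustrative stand-ins.

def find_best_rule(instances, target):
    # rule_finder stand-in: pick the single test that covers the most
    # target-class instances.
    best, best_cov = None, 0
    for attrs, _ in instances:
        for i, v in enumerate(attrs):
            cov = sum(1 for a, l in instances if a[i] == v and l == target)
            if cov > best_cov:
                best, best_cov = (i, v), cov
    return best

def separate_and_conquer(instances, target):
    rule_list = []
    # data_stopping stand-in: stop when no target-class instances remain.
    while any(label == target for _, label in instances):
        rule = find_best_rule(instances, target)
        if rule is None:          # rule_stopping stand-in
            break
        i, v = rule
        rule_list.append(rule)
        # cover_and_remove stand-in: drop covered target-class instances.
        instances = [(a, l) for a, l in instances
                     if not (a[i] == v and l == target)]
    return rule_list

data = [(("sunny", "hot"), "yes"), (("sunny", "mild"), "yes"),
        (("rainy", "hot"), "no")]
rules = separate_and_conquer(data, "yes")
```

Here a single rule (attribute 0 equals "sunny") covers both target instances, so the loop stops after one iteration.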
    319350Rule finders 
     
    327358   Rule finders are invokable in the following manner: 
    328359    
    329    .. method:: __call__(table, weightID, targetClass, baseRules) 
     360   .. method:: __call__(table, weight_id, target_class, base_rules) 
    330361    
    331362      Return a new rule, induced from instances in the given table. 
     
    334365      :type table: :class:`Orange.data.Table` 
    335366       
    336       :param weightID: ID of the weight meta-attribute for the stored data 
     367      :param weight_id: ID of the weight meta-attribute for the stored data 
    337368          instances. 
    338       :type weightID: int 
     369      :type weight_id: int 
    339370       
    340       :param targetClass: index of a specific class being learned; -1 for all. 
    341       :type targetClass: int  
     371      :param target_class: index of a specific class being learned; -1 for all. 
     372      :type target_class: int  
    342373       
    343       :param baseRules: Rules that we would like to use in ruleFinder to 
     374      :param base_rules: Rules that we would like to use in rule_finder to 
    344375          constrain the learning space. If not set, it will be set to a set 
    345376          containing only an empty rule. 
    346       :type baseRules: :class:`Orange.classification.rules.RuleList` 
     377      :type base_rules: :class:`Orange.classification.rules.RuleList` 
    347378 
    348379.. class:: Orange.classification.rules.RuleBeamFinder 
     
    355386   .. parsed-literal:: 
    356387 
    357       def \_\_call\_\_(self, table, weightID, targetClass, baseRules): 
    358           prior = orange.Distribution(table.domain.classVar, table, weightID) 
    359           rulesStar, bestRule = self.\ **initializer**\ (table, weightID, targetClass, baseRules, self.evaluator, prior) 
    360           \# compute quality of rules in rulesStar and bestRule 
     388      def \_\_call\_\_(self, table, weight_id, target_class, base_rules): 
     389          prior = Orange.statistics.distribution.Distribution(table.domain.class_var, table, weight_id) 
     390          rules_star, best_rule = self.\ **initializer**\ (table, weight_id, target_class, base_rules, self.evaluator, prior) 
     391          \# compute quality of rules in rules_star and best_rule 
    361392          ... 
    362           while len(rulesStar) \> 0: 
    363               candidates, rulesStar = self.\ **candidateSelector**\ (rulesStar, table, weightID) 
     393          while len(rules_star) \> 0: 
     394              candidates, rules_star = self.\ **candidate_selector**\ (rules_star, table, weight_id) 
    364395              for cand in candidates: 
    365                   newRules = self.\ **refiner**\ (cand, table, weightID, targetClass) 
    366                   for newRule in newRules: 
    367                       if self.\ **ruleStoppingValidator**\ (newRule, table, weightID, targetClass, cand.classDistribution): 
    368                           newRule.quality = self.\ **evaluator**\ (newRule, table, weightID, targetClass, prior) 
    369                           rulesStar.append(newRule) 
    370                           if self.\ **validator**\ (newRule, table, weightID, targetClass, prior) and 
    371                               newRule.quality \> bestRule.quality: 
    372                               bestRule = newRule 
    373               rulesStar = self.\ **ruleFilter**\ (rulesStar, table, weightID) 
    374           return bestRule 
     396                  new_rules = self.\ **refiner**\ (cand, table, weight_id, target_class) 
     397                  for new_rule in new_rules: 
     398                      if self.\ **rule_stopping_validator**\ (new_rule, table, weight_id, target_class, cand.class_distribution): 
     399                          new_rule.quality = self.\ **evaluator**\ (new_rule, table, weight_id, target_class, prior) 
     400                          rules_star.append(new_rule) 
     401                          if self.\ **validator**\ (new_rule, table, weight_id, target_class, prior) and 
     402                              new_rule.quality \> best_rule.quality: 
     403                              best_rule = new_rule 
     404              rules_star = self.\ **rule_filter**\ (rules_star, table, weight_id) 
     405          return best_rule 
    375406 
    376407   Bolded in the pseudo-code are several exchangeable components, exposed as 
     
    381412      an object of class 
    382413      :class:`Orange.classification.rules.RuleBeamInitializer` 
    383       used to initialize rulesStar and for selecting the 
     414      used to initialize rules_star and for selecting the 
    384415      initial best rule. By default 
    385416      (:class:`Orange.classification.rules.RuleBeamInitializer_Default`), 
    386       baseRules are returned as starting rulesSet and the best from baseRules 
    387       is set as bestRule. If baseRules are not set, this class will return 
    388       rulesStar with rule that covers all instances (has no selectors) and 
    389       this rule will be also used as bestRule. 
    390     
    391    .. attribute:: candidateSelector 
     417      base_rules are returned as the starting rule set and the best of 
     418      base_rules is set as best_rule. If base_rules are not set, this class 
     419      returns rules_star with a rule that covers all instances (has no 
     420      selectors), and this rule is also used as best_rule. 
     421    
     422   .. attribute:: candidate_selector 
    392423    
    393424      an object of class 
    394425      :class:`Orange.classification.rules.RuleBeamCandidateSelector` 
    395426      used to separate a subset from the current 
    396       rulesStar and return it. These rules will be used in the next 
     427      rules_star and return it. These rules will be used in the next 
    397428      specification step. Default component (an instance of 
    398429      :class:`Orange.classification.rules.RuleBeamCandidateSelector_TakeAll`) 
    399       takes all rules in rulesStar 
     430      takes all rules in rules_star. 
    400431     
    401432   .. attribute:: refiner 
     
    408439      a conjunctive selector to selectors present in the rule. 
    409440     
    410    .. attribute:: ruleFilter 
     441   .. attribute:: rule_filter 
    411442    
    412443      an object of class 
     
    416447      :class:`Orange.classification.rules.RuleBeamFilter_Width`\ *(m=5)*\ . 
    417448 
    418    .. method:: __call__(data, weightID, targetClass, baseRules) 
     449   .. method:: __call__(data, weight_id, target_class, base_rules) 
    419450 
    420451   Determines the next best rule to cover the remaining data instances. 
     
    423454   :type data: :class:`Orange.data.Table` 
    424455    
    425    :param weightID: index of the weight meta-attribute. 
    426    :type weightID: int 
    427     
    428    :param targetClass: index of the target class. 
    429    :type targetClass: int 
    430     
    431    :param baseRules: existing rules. 
    432    :type baseRules: :class:`Orange.classification.rules.RuleList` 
     456   :param weight_id: index of the weight meta-attribute. 
     457   :type weight_id: int 
     458    
     459   :param target_class: index of the target class. 
     460   :type target_class: int 
     461    
     462   :param base_rules: existing rules. 
     463   :type base_rules: :class:`Orange.classification.rules.RuleList` 
    433464 
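The beam search sketched in the pseudo-code above can be illustrated standalone: keep at most `width` candidate rules, refine each, and track the best candidate by quality (a hypothetical toy; `refine` and `evaluate` stand in for the refiner and evaluator components, and "rules" are plain strings rather than Orange rules):

```python
def beam_search(initial, refine, evaluate, width=5):
    # Minimal beam search mirroring the RuleBeamFinder loop above:
    # rules_star is the beam, best the overall best candidate so far.
    rules_star = list(initial)
    best = max(rules_star, key=evaluate)
    while rules_star:
        candidates, rules_star = rules_star, []
        for cand in candidates:
            for new_rule in refine(cand):
                rules_star.append(new_rule)
                if evaluate(new_rule) > evaluate(best):
                    best = new_rule
        # rule_filter stand-in: keep only the `width` best rules.
        rules_star = sorted(rules_star, key=evaluate, reverse=True)[:width]
    return best

# Toy problem: "rules" are bit-strings, refinement appends a bit, and
# quality counts ones; depth is capped at 3 so the search terminates.
refine = lambda r: [r + "1", r + "0"] if len(r) < 3 else []
best = beam_search([""], refine, evaluate=lambda r: r.count("1"), width=2)
```

With a beam of width 2 the search still reaches the all-ones string, since the best partial rule always survives the filter here.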
    434465Rule evaluators 
     
    441472   following manner: 
    442473    
    443    .. method:: __call__(rule, instances, weightID, targetClass, prior) 
     474   .. method:: __call__(rule, instances, weight_id, target_class, prior) 
    444475    
    445476      Calculates a non-negative rule quality. 
     
    451482      :type instances: :class:`Orange.data.Table` 
    452483       
    453       :param weightID: index of the weight meta-attribute. 
    454       :type weightID: int 
     484      :param weight_id: index of the weight meta-attribute. 
     485      :type weight_id: int 
    455486       
    456       :param targetClass: index of target class of this rule. 
    457       :type targetClass: int 
     487      :param target_class: index of target class of this rule. 
     488      :type target_class: int 
    458489       
    459490      :param prior: prior class distribution. 
    460       :type prior: :class:`Orange.core.Distribution` 
     491      :type prior: :class:`Orange.statistics.distribution.Distribution` 
    461492 
    462493.. autoclass:: Orange.classification.rules.LaplaceEvaluator 
     
    492523   instances covered by the rule and return remaining instances. 
    493524 
    494    .. method:: __call__(rule, instances, weights, targetClass) 
     525   .. method:: __call__(rule, instances, weights, target_class) 
    495526    
    496527      Removes the instances covered by the rule and returns the rest. 
     
    505536      :type weights: int 
    506537       
    507       :param targetClass: index of target class of this rule. 
    508       :type targetClass: int 
     538      :param target_class: index of target class of this rule. 
     539      :type target_class: int 
    509540 
    510541.. autoclass:: CovererAndRemover_MultWeights 
     
    515546----------------------- 
    516547 
    517 .. automethod:: Orange.classification.rules.ruleToString 
     548.. automethod:: Orange.classification.rules.rule_to_string 
    518549 
    519550.. 
     
    528559""" 
    529560 
     561import random 
     562import math 
     563import operator 
     564import numpy 
     565 
     566import Orange 
     567import Orange.core 
    530568from Orange.core import \ 
    531569    AssociationClassifier, \ 
     
    561599    RuleValidator, \ 
    562600    RuleValidator_LRS 
    563  
    564 import Orange.core 
    565 import random 
    566 import math 
    567  
    568  
     601from Orange.misc import deprecated_keywords 
     602from Orange.misc import deprecated_members 
     603 
     604from orngABML import * 
     605 
     606 
     607@deprecated_members({"weightID": "weight_id", "targetClass": "target_class"}) 
    569608class LaplaceEvaluator(RuleEvaluator): 
    570609    """ 
    571610    Laplace's rule of succession. 
    572611    """ 
    573     def __call__(self, rule, data, weightID, targetClass, apriori): 
    574         if not rule.classDistribution: 
     612    def __call__(self, rule, data, weight_id, target_class, apriori): 
     613        if not rule.class_distribution: 
    575614            return 0. 
    576         sumDist = rule.classDistribution.cases 
    577         if not sumDist or (targetClass>-1 and not rule.classDistribution[targetClass]): 
     615        sumDist = rule.class_distribution.cases 
     616        if not sumDist or (target_class>-1 and not rule.class_distribution[target_class]): 
    578617            return 0. 
    579618        # get distribution 
    580         if targetClass>-1: 
    581             return (rule.classDistribution[targetClass]+1)/(sumDist+2) 
     619        if target_class>-1: 
     620            return (rule.class_distribution[target_class]+1)/(sumDist+2) 
    582621        else: 
    583             return (max(rule.classDistribution)+1)/(sumDist+len(data.domain.classVar.values)) 
    584  
    585  
     622            return (max(rule.class_distribution)+1)/(sumDist+len(data.domain.class_var.values)) 
     623 
     624 
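The Laplace estimate implemented above can be checked by hand: for a rule covering n instances of which s belong to the target class, the target-class branch computes (s + 1) / (n + 2). A standalone sketch (toy code, not the Orange API):

```python
def laplace(target_count, covered_total, n_classes=2):
    # Laplace's rule of succession, as in the target_class branch of
    # LaplaceEvaluator above: add one success, and one extra count per class.
    return (target_count + 1) / (covered_total + n_classes)

# A rule covering 5 instances, 4 of them in the target class:
q = laplace(4, 5)   # (4 + 1) / (5 + 2)
```

With no covered instances the estimate falls back to the uniform prior 1/2 rather than an undefined relative frequency, which is the point of the correction.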
     625@deprecated_members({"weightID": "weight_id", "targetClass": "target_class"}) 
    586626class WRACCEvaluator(RuleEvaluator): 
    587627    """ 
    588628    Weighted relative accuracy. 
    589629    """ 
    590     def __call__(self, rule, data, weightID, targetClass, apriori): 
    591         if not rule.classDistribution: 
     630    def __call__(self, rule, data, weight_id, target_class, apriori): 
     631        if not rule.class_distribution: 
    592632            return 0. 
    593         sumDist = rule.classDistribution.cases 
    594         if not sumDist or (targetClass>-1 and not rule.classDistribution[targetClass]): 
     633        sumDist = rule.class_distribution.cases 
     634        if not sumDist or (target_class>-1 and not rule.class_distribution[target_class]): 
    595635            return 0. 
    596636        # get distribution 
    597         if targetClass>-1: 
    598             pRule = rule.classDistribution[targetClass]/apriori[targetClass] 
    599             pTruePositive = rule.classDistribution[targetClass]/sumDist 
    600             pClass = apriori[targetClass]/apriori.cases 
     637        if target_class>-1: 
     638            pRule = rule.class_distribution[target_class]/apriori[target_class] 
     639            pTruePositive = rule.class_distribution[target_class]/sumDist 
     640            pClass = apriori[target_class]/apriori.cases 
    601641        else: 
    602642            pRule = sumDist/apriori.cases 
    603             pTruePositive = max(rule.classDistribution)/sumDist 
    604             pClass = apriori[rule.classDistribution.modus()]/sum(apriori) 
     643            pTruePositive = max(rule.class_distribution)/sumDist 
     644            pClass = apriori[rule.class_distribution.modus()]/sum(apriori) 
    605645        if pTruePositive>pClass: 
    606646            return pRule*(pTruePositive-pClass) 
     
    608648 
    609649 
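The target-class branch of the computation above can be exercised standalone; in this sketch `rule_dist` plays the role of the rule's class distribution over covered instances and `apriori` the prior distribution (toy code mirroring the formula above, not the Orange API):

```python
def wracc_target(rule_dist, apriori, target):
    # Mirrors the target_class > -1 branch of WRACCEvaluator above.
    covered = sum(rule_dist)
    p_rule = rule_dist[target] / apriori[target]
    p_true_positive = rule_dist[target] / covered
    p_class = apriori[target] / sum(apriori)
    if p_true_positive > p_class:
        return p_rule * (p_true_positive - p_class)
    return 0.0

# 10 covered instances, 8 of the target class; class priors 20 vs. 30:
q = wracc_target([8, 2], [20, 30], target=0)   # 0.4 * (0.8 - 0.4)
```

A rule whose covered-set class frequency does not beat the prior scores zero, so such rules are never preferred.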
     650@deprecated_members({"weightID": "weight_id", "targetClass": "target_class"}) 
    610651class MEstimateEvaluator(RuleEvaluator): 
    611652    """ 
     
    618659    def __init__(self, m=2): 
    619660        self.m = m 
    620     def __call__(self, rule, data, weightID, targetClass, apriori): 
    621         if not rule.classDistribution: 
     661    def __call__(self, rule, data, weight_id, target_class, apriori): 
     662        if not rule.class_distribution: 
    622663            return 0. 
    623         sumDist = rule.classDistribution.abs 
     664        sumDist = rule.class_distribution.abs 
    624665        if self.m == 0 and not sumDist: 
    625666            return 0. 
    626667        # get distribution 
    627         if targetClass>-1: 
    628             p = rule.classDistribution[targetClass]+self.m*apriori[targetClass]/apriori.abs 
    629             p = p / (rule.classDistribution.abs + self.m) 
     668        if target_class>-1: 
     669            p = rule.class_distribution[target_class]+self.m*apriori[target_class]/apriori.abs 
     670            p = p / (rule.class_distribution.abs + self.m) 
    630671        else: 
    631             p = max(rule.classDistribution)+self.m*apriori[rule.\ 
    632                 classDistribution.modus()]/apriori.abs 
    633             p = p / (rule.classDistribution.abs + self.m)       
     672            p = max(rule.class_distribution)+self.m*apriori[rule.\ 
     673                class_distribution.modus()]/apriori.abs 
     674            p = p / (rule.class_distribution.abs + self.m)       
    634675        return p 
    635676 
    636677 
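The m-estimate above reduces to the relative frequency at m = 0 and pulls the estimate toward the prior as m grows; a quick standalone check of the target-class branch (toy code, not the Orange API):

```python
def m_estimate(target_count, covered_total, prior_target, prior_total, m=2):
    # Mirrors the target_class > -1 branch of MEstimateEvaluator above:
    # blend the observed count with m virtual instances drawn from the prior.
    p = target_count + m * prior_target / prior_total
    return p / (covered_total + m)

# 4 of 5 covered instances in the target class, prior 50 of 100, m = 2:
q = m_estimate(4, 5, 50, 100, m=2)   # (4 + 2 * 0.5) / (5 + 2)
```

With a uniform two-class prior and m = 2 this coincides with the Laplace estimate, which is the usual sanity check for the formula.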
     678@deprecated_members({"beamWidth": "beam_width", 
     679                     "ruleFinder": "rule_finder", 
     680                     "ruleStopping": "rule_stopping", 
     681                     "dataStopping": "data_stopping", 
     682                     "coverAndRemove": "cover_and_remove", 
     684                     "storeInstances": "store_instances", 
     685                     "targetClass": "target_class", 
     686                     "baseRules": "base_rules", 
     687                     "weightID": "weight_id"}) 
    637688class CN2Learner(RuleLearner): 
    638689    """ 
     
    649700        By default, entropy is used as a measure.  
    650701    :type evaluator: :class:`Orange.classification.rules.RuleEvaluator` 
    651     :param beamWidth: width of the search beam. 
    652     :type beamWidth: int 
     702    :param beam_width: width of the search beam. 
     703    :type beam_width: int 
    653704    :param alpha: significance level of the statistical test to determine 
    654705        whether rule is good enough to be returned by rulefinder. Likelihood 
     
    659710    """ 
    660711     
    661     def __new__(cls, instances=None, weightID=0, **kwargs): 
     712    def __new__(cls, instances=None, weight_id=0, **kwargs): 
    662713        self = RuleLearner.__new__(cls, **kwargs) 
    663714        if instances is not None: 
    664715            self.__init__(**kwargs) 
    665             return self.__call__(instances, weightID) 
     716            return self.__call__(instances, weight_id) 
    666717        else: 
    667718            return self 
    668719         
    669     def __init__(self, evaluator = RuleEvaluator_Entropy(), beamWidth = 5, 
     720    def __init__(self, evaluator = RuleEvaluator_Entropy(), beam_width = 5, 
    670721        alpha = 1.0, **kwds): 
    671722        self.__dict__.update(kwds) 
    672         self.ruleFinder = RuleBeamFinder() 
    673         self.ruleFinder.ruleFilter = RuleBeamFilter_Width(width = beamWidth) 
    674         self.ruleFinder.evaluator = evaluator 
    675         self.ruleFinder.validator = RuleValidator_LRS(alpha = alpha) 
     723        self.rule_finder = RuleBeamFinder() 
     724        self.rule_finder.rule_filter = RuleBeamFilter_Width(width = beam_width) 
     725        self.rule_finder.evaluator = evaluator 
     726        self.rule_finder.validator = RuleValidator_LRS(alpha = alpha) 
    676727         
    677728    def __call__(self, instances, weight=0): 
     
    683734 
    684735 
     736@deprecated_members({"resultType": "result_type", "beamWidth": "beam_width"}) 
    685737class CN2Classifier(RuleClassifier): 
    686738    """ 
     
    699751    :type instances: :class:`Orange.data.Table` 
    700752     
    701     :param weightID: ID of the weight meta-attribute. 
    702     :type weightID: int 
    703  
    704     """ 
    705      
    706     def __init__(self, rules=None, instances=None, weightID = 0, **argkw): 
     753    :param weight_id: ID of the weight meta-attribute. 
     754    :type weight_id: int 
     755 
     756    """ 
     757     
     758    @deprecated_keywords({"examples": "instances"}) 
     759    def __init__(self, rules=None, instances=None, weight_id = 0, **argkw): 
    707760        self.rules = rules 
    708761        self.examples = instances 
    709         self.weightID = weightID 
    710         self.classVar = None if instances is None else instances.domain.classVar 
     762        self.weight_id = weight_id 
     763        self.class_var = None if instances is None else instances.domain.class_var 
    711764        self.__dict__.update(argkw) 
    712765        if instances is not None: 
    713             self.prior = Orange.core.Distribution(instances.domain.classVar,instances) 
     766            self.prior = Orange.statistics.distribution.Distribution(instances.domain.class_var,instances) 
    714767 
    715768    def __call__(self, instance, result_type=Orange.classification.Classifier.GetValue): 
     
    730783            if r(instance) and r.classifier: 
    731784                classifier = r.classifier 
    732                 classifier.defaultDistribution = r.classDistribution 
     785                classifier.defaultDistribution = r.class_distribution 
    733786                break 
    734787        if not classifier: 
    735             classifier = Orange.core.DefaultClassifier(instance.domain.classVar,\ 
     788            classifier = Orange.classification.ConstantClassifier(instance.domain.class_var,\ 
    736789                self.prior.modus()) 
    737790            classifier.defaultDistribution = self.prior 
     
    740793          return classifier(instance) 
    741794        if result_type == Orange.classification.Classifier.GetProbabilities: 
    742           return classifier.defaultDistribution 
    743         return (classifier(instance),classifier.defaultDistribution) 
     795          return classifier.default_distribution 
     796        return (classifier(instance),classifier.default_distribution) 
    744797 
    745798    def __str__(self): 
    746         retStr = ruleToString(self.rules[0])+" "+str(self.rules[0].\ 
    747             classDistribution)+"\n" 
     799        ret_str = rule_to_string(self.rules[0])+" "+str(self.rules[0].\ 
     800            class_distribution)+"\n" 
    748801        for r in self.rules[1:]: 
    749             retStr += "ELSE "+ruleToString(r)+" "+str(r.classDistribution)+"\n" 
    750         return retStr 
    751  
    752  
     802            ret_str += "ELSE "+rule_to_string(r)+" "+str(r.class_distribution)+"\n" 
     803        return ret_str 
     804 
     805 
     806@deprecated_members({"beamWidth": "beam_width", 
     807                     "ruleFinder": "rule_finder", 
     808                     "ruleStopping": "rule_stopping", 
     809                     "dataStopping": "data_stopping", 
     810                     "coverAndRemove": "cover_and_remove", 
     812                     "storeInstances": "store_instances", 
     813                     "targetClass": "target_class", 
     814                     "baseRules": "base_rules", 
     815                     "weightID": "weight_id"}) 
    753816class CN2UnorderedLearner(RuleLearner): 
    754817    """ 
     
    767830        By default, Laplace's rule of succession is used as a measure.  
    768831    :type evaluator: :class:`Orange.classification.rules.RuleEvaluator` 
    769     :param beamWidth: width of the search beam. 
    770     :type beamWidth: int 
     832    :param beam_width: width of the search beam. 
     833    :type beam_width: int 
    771834    :param alpha: significance level of the statistical test to determine 
    772835        whether rule is good enough to be returned by rulefinder. Likelihood 
     
    775838    :type alpha: float 
    776839    """ 
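The default evaluator mentioned in the docstring above is Laplace's rule of succession. As a minimal, self-contained sketch (plain Python, not the Orange `RuleEvaluator_Laplace` class; the function and argument names here are illustrative only):

```python
def laplace_quality(n_target, n_covered, n_classes):
    """Laplace rule-of-succession estimate of a rule's accuracy:
    (covered target-class examples + 1) / (all covered examples + k classes)."""
    return (n_target + 1.0) / (n_covered + n_classes)

# A rule covering 8 target-class examples out of 10 covered, in a
# two-class problem, is scored (8 + 1) / (10 + 2) = 0.75.
# An empty rule degrades gracefully to the uniform prior 1 / k.
```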
    777     def __new__(cls, instances=None, weightID=0, **kwargs): 
     840    def __new__(cls, instances=None, weight_id=0, **kwargs): 
    778841        self = RuleLearner.__new__(cls, **kwargs) 
    779842        if instances is not None: 
    780843            self.__init__(**kwargs) 
    781             return self.__call__(instances, weightID) 
     844            return self.__call__(instances, weight_id) 
    782845        else: 
    783846            return self 
    784847             
    785     def __init__(self, evaluator = RuleEvaluator_Laplace(), beamWidth = 5, 
     848    def __init__(self, evaluator = RuleEvaluator_Laplace(), beam_width = 5, 
    786849        alpha = 1.0, **kwds): 
    787850        self.__dict__.update(kwds) 
    788         self.ruleFinder = RuleBeamFinder() 
    789         self.ruleFinder.ruleFilter = RuleBeamFilter_Width(width = beamWidth) 
    790         self.ruleFinder.evaluator = evaluator 
    791         self.ruleFinder.validator = RuleValidator_LRS(alpha = alpha) 
    792         self.ruleFinder.ruleStoppingValidator = RuleValidator_LRS(alpha = 1.0) 
    793         self.ruleStopping = RuleStopping_Apriori() 
    794         self.dataStopping = RuleDataStoppingCriteria_NoPositives() 
    795          
    796     def __call__(self, instances, weight=0): 
     851        self.rule_finder = RuleBeamFinder() 
     852        self.rule_finder.ruleFilter = RuleBeamFilter_Width(width = beam_width) 
     853        self.rule_finder.evaluator = evaluator 
     854        self.rule_finder.validator = RuleValidator_LRS(alpha = alpha) 
     855        self.rule_finder.rule_stoppingValidator = RuleValidator_LRS(alpha = 1.0) 
     856        self.rule_stopping = RuleStopping_Apriori() 
     857        self.data_stopping = RuleDataStoppingCriteria_NoPositives() 
     858     
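The `rule_finder` configured above performs a beam search over conjunctions of conditions, keeping the `beam_width` best partial rules at each step. A toy sketch of that loop, assuming examples are plain dicts with a `"class"` key and conditions are (attribute, value) equality tests; none of these names are the Orange API:

```python
def beam_search_rule(examples, target, n_classes, beam_width=5):
    """Greedy beam search for one conjunctive rule predicting `target`,
    scored with the Laplace estimate. Illustrative sketch only."""
    def covers(conds, ex):
        return all(ex.get(a) == v for a, v in conds)

    def laplace(conds):
        cov = [ex for ex in examples if covers(conds, ex)]
        pos = sum(1 for ex in cov if ex["class"] == target)
        return (pos + 1.0) / (len(cov) + n_classes)

    # candidate conditions: every attribute/value pair seen in the data
    candidates = sorted({(a, ex[a]) for ex in examples
                         for a in ex if a != "class"})
    beam, best = [()], ()
    while beam:
        # refine every rule in the beam by one extra condition
        refinements = [conds + (c,) for conds in beam for c in candidates
                       if c not in conds]
        # keep only refinements that beat the best rule so far
        refinements = [r for r in refinements if laplace(r) > laplace(best)]
        if refinements:
            best = max(refinements, key=laplace)
        beam = sorted(refinements, key=laplace, reverse=True)[:beam_width]
    return best
```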
     859    @deprecated_keywords({"weight": "weight_id"}) 
     860    def __call__(self, instances, weight_id=0): 
    797861        supervisedClassCheck(instances) 
    798862         
    799863        rules = RuleList() 
    800         self.ruleStopping.apriori = Orange.core.Distribution(instances.\ 
    801             domain.classVar,instances) 
     864        self.rule_stopping.apriori = Orange.statistics.distribution.Distribution( 
     865            instances.domain.class_var,instances) 
    802866        progress=getattr(self,"progressCallback",None) 
    803867        if progress: 
    804868            progress.start = 0.0 
    805869            progress.end = 0.0 
    806             distrib = Orange.core.Distribution(instances.domain.classVar,\ 
    807                 instances, weight) 
     870            distrib = Orange.statistics.distribution.Distribution( 
     871                instances.domain.class_var, instances, weight_id) 
    808872            distrib.normalize() 
    809         for targetClass in instances.domain.classVar: 
     873        for target_class in instances.domain.class_var: 
    810874            if progress: 
    811875                progress.start = progress.end 
    812                 progress.end += distrib[targetClass] 
    813             self.targetClass = targetClass 
    814             cl = RuleLearner.__call__(self,instances,weight) 
     876                progress.end += distrib[target_class] 
     877            self.target_class = target_class 
     878            cl = RuleLearner.__call__(self,instances,weight_id) 
    815879            for r in cl.rules: 
    816880                rules.append(r) 
    817881        if progress: 
    818882            progress(1.0,None) 
    819         return CN2UnorderedClassifier(rules, instances, weight) 
     883        return CN2UnorderedClassifier(rules, instances, weight_id) 
    820884 
    821885 
     
    836900    :type instances: :class:`Orange.data.Table` 
    837901     
    838     :param weightID: ID of the weight meta-attribute. 
    839     :type weightID: int 
    840  
    841     """ 
    842     def __init__(self, rules = None, instances = None, weightID = 0, **argkw): 
     902    :param weight_id: ID of the weight meta-attribute. 
     903    :type weight_id: int 
     904 
     905    """ 
     906 
     907    @deprecated_keywords({"examples": "instances"}) 
     908    def __init__(self, rules = None, instances = None, weight_id = 0, **argkw): 
    843909        self.rules = rules 
    844910        self.examples = instances 
    845         self.weightID = weightID 
    846         self.classVar = instances.domain.classVar if instances is not None else None 
     911        self.weight_id = weight_id 
     912        self.class_var = instances.domain.class_var if instances is not None else None 
    847913        self.__dict__.update(argkw) 
    848914        if instances is not None: 
    849             self.prior = Orange.core.Distribution(instances.domain.classVar, instances) 
    850  
    851     def __call__(self, instance, result_type=Orange.core.GetValue, retRules = False): 
     915            self.prior = Orange.statistics.distribution.Distribution( 
     916                                instances.domain.class_var, instances) 
     917 
     918    def __call__(self, instance, result_type=Orange.classification.Classifier.GetValue, retRules = False): 
    852919        """ 
    853920        :param instance: instance to be classified. 
     
    862929        """ 
    863930        def add(disc1, disc2, sumd): 
    864             disc = Orange.core.DiscDistribution(disc1) 
     931            disc = Orange.statistics.distribution.Discrete(disc1) 
    865932            sumdisc = sumd 
    866933            for i,d in enumerate(disc): 
     
    870937 
    871938        # create empty distribution 
    872         retDist = Orange.core.DiscDistribution(self.examples.domain.classVar) 
     939        retDist = Orange.statistics.distribution.Discrete(self.examples.domain.class_var) 
    873940        covRules = RuleList() 
    874941        # iterate through instances - add distributions 
    875942        sumdisc = 0. 
    876943        for r in self.rules: 
    877             if r(instance) and r.classDistribution: 
    878                 retDist, sumdisc = add(retDist, r.classDistribution, sumdisc) 
     944            if r(instance) and r.class_distribution: 
     945                retDist, sumdisc = add(retDist, r.class_distribution, sumdisc) 
    879946                covRules.append(r) 
    880947        if not sumdisc: 
     
    883950             
    884951        if sumdisc > 0.0: 
    885             for c in self.examples.domain.classVar: 
     952            for c in self.examples.domain.class_var: 
    886953                retDist[c] /= sumdisc 
    887954        else: 
     
    903970        retStr = "" 
    904971        for r in self.rules: 
    905             retStr += ruleToString(r)+" "+str(r.classDistribution)+"\n" 
     972            retStr += rule_to_string(r)+" "+str(r.class_distribution)+"\n" 
    906973        return retStr 
    907974 
     
    927994        By default, weighted relative accuracy is used. 
    928995    :type evaluator: :class:`Orange.classification.rules.RuleEvaluator` 
    929     :param beamWidth: width of the search beam. 
    930     :type beamWidth: int 
     996    :param beam_width: width of the search beam. 
     997    :type beam_width: int 
    931998    :param alpha: significance level of the statistical test to determine 
    932999        whether a rule is good enough to be returned by the rule finder. Likelihood 
     
    9371004    :type mult: float 
    9381005    """ 
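The default evaluator here is weighted relative accuracy (WRAcc), which trades a rule's coverage against its accuracy gain over the prior class probability. A hedged sketch of the measure itself (not the Orange `WRACCEvaluator` implementation):

```python
def wracc(n_target_covered, n_covered, n_target, n_total):
    """WRAcc = p(covered) * (p(target | covered) - p(target))."""
    if n_covered == 0:
        return 0.0
    coverage = n_covered / float(n_total)
    precision = n_target_covered / float(n_covered)
    prior = n_target / float(n_total)
    return coverage * (precision - prior)

# A rule covering 10 of 40 examples, 8 of them in a target class that
# holds 20 of 40 examples overall: 0.25 * (0.8 - 0.5) = 0.075.
```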
    939     def __new__(cls, instances=None, weightID=0, **kwargs): 
     1006    def __new__(cls, instances=None, weight_id=0, **kwargs): 
    9401007        self = CN2UnorderedLearner.__new__(cls, **kwargs) 
    9411008        if instances is not None: 
    9421009            self.__init__(**kwargs) 
    943             return self.__call__(instances, weightID) 
     1010            return self.__call__(instances, weight_id) 
    9441011        else: 
    9451012            return self 
    9461013         
    947     def __init__(self, evaluator = WRACCEvaluator(), beamWidth = 5, 
     1014    def __init__(self, evaluator = WRACCEvaluator(), beam_width = 5, 
    9481015                alpha = 0.05, mult=0.7, **kwds): 
    9491016        CN2UnorderedLearner.__init__(self, evaluator = evaluator, 
    950                                           beamWidth = beamWidth, alpha = alpha, **kwds) 
    951         self.coverAndRemove = CovererAndRemover_MultWeights(mult=mult) 
     1017                                          beam_width = beam_width, alpha = alpha, **kwds) 
     1018        self.cover_and_remove = CovererAndRemover_MultWeights(mult=mult) 
    9521019 
    9531020    def __call__(self, instances, weight=0):         
     
    9571024        classifier = CN2UnorderedLearner.__call__(self,instances,weight) 
    9581025        for r in classifier.rules: 
    959             r.filterAndStore(oldInstances,weight,r.classifier.defaultVal) 
     1026            r.filterAndStore(oldInstances,weight,r.classifier.default_val) 
    9601027        return classifier 
    9611028 
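`CN2SDUnorderedLearner` replaces the classic remove-covered step with `CovererAndRemover_MultWeights`: covered instances stay in the data, but their weights are multiplied by `mult`, so later rules can still (weakly) use them. A toy version of that weight update (hypothetical names, not the Orange class):

```python
def mult_weight_cover(weights, covered_idx, mult=0.7):
    """Return a new weight list in which instances covered by the newly
    learned rule are down-weighted by `mult` rather than removed
    (CN2-SD style probabilistic covering)."""
    return [w * mult if i in covered_idx else w
            for i, w in enumerate(weights)]
```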
    9621029 
    963  
    964 # Main ABCN2 class 
    965 class ABCN2(Orange.core.RuleLearner): 
    966     """COPIED&PASTED FROM orngABCN2 -- REFACTOR AND DOCUMENT ASAP! 
    967     This is implementation of ABCN2 + EVC as evaluation + LRC classification. 
    968     """ 
    969      
    970     def __init__(self, argumentID=0, width=5, m=2, opt_reduction=2, nsampling=100, max_rule_complexity=5, 
     1030@deprecated_members({"beamWidth": "beam_width", 
     1031                     "ruleFinder": "rule_finder", 
     1032                     "ruleStopping": "rule_stopping", 
     1033                     "dataStopping": "data_stopping", 
     1034                     "coverAndRemove": "cover_and_remove", 
     1036                     "storeInstances": "store_instances", 
     1037                     "targetClass": "target_class", 
     1038                     "baseRules": "base_rules", 
     1039                     "weightID": "weight_id", 
     1040                     "argumentID": "argument_id"}) 
     1041class ABCN2(RuleLearner): 
     1042    """ 
     1043    This is an implementation of argument-based CN2 using EVC as evaluation 
     1044    and LRC classification. 
     1045     
      1046    Rule learning parameters that can be passed to the constructor: 
     1047     
     1048    :param width: beam width (default 5). 
     1049    :type width: int 
     1050    :param learn_for_class: class for which to learn; None (default) if all 
     1051       classes are to be learnt. 
      1052    :param learn_one_rule: decides whether to learn a single rule only 
      1053       (default False). 
     1054    :type learn_one_rule: boolean 
      1055    :param analyse_argument: index of the argument to analyse; -1 (default) 
      1056       to learn normally. 
     1057    :type analyse_argument: int 
     1058     
      1059    The following evaluator-related arguments are supported: 
     1060     
     1061    :param m: m for m-estimate to be corrected with EVC (default 2). 
     1062    :type m: int 
     1063    :param opt_reduction: type of EVC correction: 0=no correction, 
     1064       1=pessimistic, 2=normal (default 2). 
     1065    :type opt_reduction: int 
     1066    :param nsampling: number of samples in estimating extreme value 
     1067       distribution for EVC (default 100). 
     1068    :type nsampling: int 
     1069    :param evd: pre-given extreme value distributions. 
     1070    :param evd_arguments: pre-given extreme value distributions for arguments. 
     1071     
      1072    These parameters control rule validation: 
     1073     
     1074    :param rule_sig: minimal rule significance (default 1.0). 
     1075    :type rule_sig: float 
     1076    :param att_sig: minimal attribute significance in rule (default 1.0). 
     1077    :type att_sig: float 
     1078    :param max_rule_complexity: maximum number of conditions in rule (default 5). 
     1079    :type max_rule_complexity: int 
      1080    :param min_coverage: minimal number of covered instances (default 1). 
     1081    :type min_coverage: int 
     1082     
     1083    Probabilistic covering can be controlled using: 
     1084     
     1085    :param min_improved: minimal number of instances improved in probabilistic covering (default 1). 
     1086    :type min_improved: int 
     1087    :param min_improved_perc: minimal percentage of covered instances that need to be improved (default 0.0). 
     1088    :type min_improved_perc: float 
     1089     
     1090    Finally, LRC (classifier) related parameters are: 
     1091     
      1092    :param add_sub_rules: decides whether to add sub-rules (default False). 
     1093    :type add_sub_rules: boolean 
     1094    :param min_cl_sig: minimal significance of beta in classifier (default 0.5). 
     1095    :type min_cl_sig: float 
     1096    :param min_beta: minimal beta value (default 0.0). 
     1097    :type min_beta: float 
     1098    :param set_prefix_rules: decides whether ordered prefix rules should be 
     1099       added (default False). 
     1100    :type set_prefix_rules: boolean 
     1101    :param alternative_learner: use rule-learner as a correction method for 
     1102       other machine learning methods (default None). 
     1103    """ 
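The `m` parameter above feeds an m-estimate of rule accuracy, which the EVC machinery then corrects for the optimism of searching many candidate rules. The uncorrected m-estimate itself is simple; a sketch under that assumption (the EVC correction in `RuleEvaluator_mEVC` is deliberately not reproduced here):

```python
def m_estimate(n_target_covered, n_covered, prior_target, m=2.0):
    """m-estimate of P(target | rule): (s + m * p0) / (n + m), where
    s/n are target/total covered counts and p0 the prior target probability."""
    return (n_target_covered + m * prior_target) / (n_covered + m)

# With m = 2 and a 0.5 prior this coincides with the Laplace estimate
# for two classes: (8 + 2 * 0.5) / (10 + 2) = 0.75.
```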
     1104     
     1105    def __init__(self, argument_id=0, width=5, m=2, opt_reduction=2, nsampling=100, max_rule_complexity=5, 
    9711106                 rule_sig=1.0, att_sig=1.0, postpruning=None, min_quality=0., min_coverage=1, min_improved=1, min_improved_perc=0.0, 
    9721107                 learn_for_class = None, learn_one_rule = False, evd=None, evd_arguments=None, prune_arguments=False, analyse_argument=-1, 
    9731108                 alternative_learner = None, min_cl_sig = 0.5, min_beta = 0.0, set_prefix_rules = False, add_sub_rules = False, 
    9741109                 **kwds): 
    975         """ 
    976         Parameters: 
    977             General rule learning: 
    978                 width               ... beam width (default 5) 
    979                 learn_for_class     ... learner rules for one class? otherwise None 
    980                 learn_one_rule      ... learn one rule only ? 
    981                 analyse_argument    ... learner only analyses argument with this index; if set to -1, then it learns normally 
    982                  
    983             Evaluator related: 
    984                 m                   ... m-estimate to be corrected with EVC (default 2) 
    985                 opt_reduction       ... types of EVC correction; 0=no correction, 1=pessimistic, 2=normal (default 2) 
    986                 nsampling           ... number of samples in estimating extreme value distribution (for EVC) (default 100) 
    987                 evd                 ... pre given extreme value distributions 
    988                 evd_arguments       ... pre given extreme value distributions for arguments 
    989  
    990             Rule Validation: 
    991                 rule_sig            ... minimal rule significance (default 1.0) 
    992                 att_sig             ... minimal attribute significance in rule (default 1.0) 
    993                 max_rule_complexity ... maximum number of conditions in rule (default 5) 
    994                 min_coverage        ... minimal number of covered examples (default 5) 
    995  
    996             Probabilistic covering: 
    997                 min_improved        ... minimal number of examples improved in probabilistic covering (default 1) 
    998                 min_improved_perc   ... minimal percentage of covered examples that need to be improved (default 0.0) 
    999  
    1000             Classifier (LCR) related: 
    1001                 add_sub_rules       ... add sub rules ? (default False) 
    1002                 min_cl_sig          ... minimal significance of beta in classifier (default 0.5) 
    1003                 min_beta            ... minimal beta value (default 0.0) 
    1004                 set_prefix_rules    ... should ordered prefix rules be added? (default False) 
    1005                 alternative_learner ... use rule-learner as a correction method for other machine learning methods. (default None) 
    1006  
    1007         """ 
    1008  
    10091110         
    10101111        # argument ID which is passed to abcn2 learner 
    1011         self.argumentID = argumentID 
     1112        self.argument_id = argument_id 
    10121113        # learn for specific class only?         
    10131114        self.learn_for_class = learn_for_class 
     
    10181119        self.postpruning = postpruning 
    10191120        # rule finder 
    1020         self.ruleFinder = Orange.core.RuleBeamFinder() 
    1021         self.ruleFilter = Orange.core.RuleBeamFilter_Width(width=width) 
     1121        self.rule_finder = RuleBeamFinder() 
     1122        self.ruleFilter = RuleBeamFilter_Width(width=width) 
    10221123        self.ruleFilter_arguments = ABBeamFilter(width=width) 
    10231124        if max_rule_complexity - 1 < 0: 
    10241125            max_rule_complexity = 10 
    1025         self.ruleFinder.ruleStoppingValidator = Orange.core.RuleValidator_LRS(alpha = 1.0, min_quality = 0., max_rule_complexity = max_rule_complexity - 1, min_coverage=min_coverage) 
    1026         self.refiner = Orange.core.RuleBeamRefiner_Selector() 
    1027         self.refiner_arguments = SelectorAdder(discretizer = Orange.core.EntropyDiscretization(forceAttribute = 1, 
     1126        self.rule_finder.rule_stoppingValidator = RuleValidator_LRS(alpha = 1.0, min_quality = 0., max_rule_complexity = max_rule_complexity - 1, min_coverage=min_coverage) 
     1127        self.refiner = RuleBeamRefiner_Selector() 
     1128        self.refiner_arguments = SelectorAdder(discretizer = Orange.feature.discretization.EntropyDiscretization(forceAttribute = 1, 
    10281129                                                                                           maxNumberOfIntervals = 2)) 
    10291130        self.prune_arguments = prune_arguments 
    10301131        # evc evaluator 
    10311132        evdGet = Orange.core.EVDistGetter_Standard() 
    1032         self.ruleFinder.evaluator = Orange.core.RuleEvaluator_mEVC(m=m, evDistGetter = evdGet, min_improved = min_improved, min_improved_perc = min_improved_perc) 
    1033         self.ruleFinder.evaluator.returnExpectedProb = True 
    1034         self.ruleFinder.evaluator.optimismReduction = opt_reduction 
    1035         self.ruleFinder.evaluator.ruleAlpha = rule_sig 
    1036         self.ruleFinder.evaluator.attributeAlpha = att_sig 
    1037         self.ruleFinder.evaluator.validator = Orange.core.RuleValidator_LRS(alpha = 1.0, min_quality = min_quality, min_coverage=min_coverage, max_rule_complexity = max_rule_complexity - 1) 
     1133        self.rule_finder.evaluator = RuleEvaluator_mEVC(m=m, evDistGetter = evdGet, min_improved = min_improved, min_improved_perc = min_improved_perc) 
     1134        self.rule_finder.evaluator.returnExpectedProb = True 
     1135        self.rule_finder.evaluator.optimismReduction = opt_reduction 
     1136        self.rule_finder.evaluator.ruleAlpha = rule_sig 
     1137        self.rule_finder.evaluator.attributeAlpha = att_sig 
     1138        self.rule_finder.evaluator.validator = RuleValidator_LRS(alpha = 1.0, min_quality = min_quality, min_coverage=min_coverage, max_rule_complexity = max_rule_complexity - 1) 
    10381139 
    10391140        # learn stopping criteria 
    1040         self.ruleStopping = None 
    1041         self.dataStopping = Orange.core.RuleDataStoppingCriteria_NoPositives() 
     1141        self.rule_stopping = None 
     1142        self.data_stopping = RuleDataStoppingCriteria_NoPositives() 
    10421143        # evd fitting 
    10431144        self.evd_creator = EVDFitter(self,n=nsampling) 
     
    10511152 
    10521153 
    1053     def __call__(self, examples, weightID=0): 
     1154    def __call__(self, examples, weight_id=0): 
    10541155        # initialize progress bar 
    10551156        progress=getattr(self,"progressCallback",None) 
     
    10571158            progress.start = 0.0 
    10581159            progress.end = 0.0 
    1059             distrib = Orange.core.Distribution(examples.domain.classVar, examples, weightID) 
     1160            distrib = Orange.statistics.distribution.Distribution( 
     1161                             examples.domain.class_var, examples, weight_id) 
    10601162            distrib.normalize() 
    10611163         
    10621164        # we begin with an empty set of rules 
    1063         all_rules = Orange.core.RuleList() 
     1165        all_rules = RuleList() 
    10641166 
    10651167        # then, iterate through all classes and learn rules for each class separately 
    1066         for cl_i,cl in enumerate(examples.domain.classVar): 
     1168        for cl_i,cl in enumerate(examples.domain.class_var): 
    10671169            if progress: 
    10681170                step = distrib[cl] / 2. 
     
    10741176 
    10751177            # rules for this class only 
    1076             rules, arg_rules = Orange.core.RuleList(), Orange.core.RuleList() 
     1178            rules, arg_rules = RuleList(), RuleList() 
    10771179 
    10781180            # create dichotomous class 
     
    10801182 
    10811183            # preparation of the learner (covering, evd, etc.) 
    1082             self.prepare_settings(dich_data, weightID, cl_i, progress) 
     1184            self.prepare_settings(dich_data, weight_id, cl_i, progress) 
    10831185 
    10841186            # learn argumented rules first ... 
    1085             self.turn_ABML_mode(dich_data, weightID, cl_i) 
     1187            self.turn_ABML_mode(dich_data, weight_id, cl_i) 
    10861188            # first specialize all unspecialized arguments 
    1087             # dich_data = self.specialise_arguments(dich_data, weightID) 
     1189            # dich_data = self.specialise_arguments(dich_data, weight_id) 
    10881190            # comment: specialisation of arguments is within learning of an argumented rule; 
    10891191            #          this is now different from the published algorithm 
     
    10991201                    continue 
    11001202                ae = aes[0] 
    1101                 rule = self.learn_argumented_rule(ae, dich_data, weightID) # target class is always first class (0) 
     1203                rule = self.learn_argumented_rule(ae, dich_data, weight_id) # target class is always first class (0) 
    11021204                if not progress: 
    1103                     print "learned rule", Orange.classification.rules.ruleToString(rule) 
     1205                    print "learned rule", Orange.classification.rules.rule_to_string(rule) 
    11041206                if rule: 
    11051207                    arg_rules.append(rule) 
     
    11121214            # remove all examples covered by rules 
    11131215##            for rule in rules: 
    1114 ##                dich_data = self.remove_covered_examples(rule, dich_data, weightID) 
     1216##                dich_data = self.remove_covered_examples(rule, dich_data, weight_id) 
    11151217##            if progress: 
    11161218##                progress(self.remaining_probability(dich_data),None) 
     
    11181220            # learn normal rules on remaining examples 
    11191221            if self.analyse_argument == -1: 
    1120                 self.turn_normal_mode(dich_data, weightID, cl_i) 
     1222                self.turn_normal_mode(dich_data, weight_id, cl_i) 
    11211223                while dich_data: 
    11221224                    # learn a rule 
    1123                     rule = self.learn_normal_rule(dich_data, weightID, self.apriori) 
     1225                    rule = self.learn_normal_rule(dich_data, weight_id, self.apriori) 
    11241226                    if not rule: 
    11251227                        break 
    11261228                    if not progress: 
    1127                         print "rule learned: ", Orange.classification.rules.ruleToString(rule), rule.quality 
    1128                     dich_data = self.remove_covered_examples(rule, dich_data, weightID) 
     1229                        print "rule learned: ", Orange.classification.rules.rule_to_string(rule), rule.quality 
     1230                    dich_data = self.remove_covered_examples(rule, dich_data, weight_id) 
    11291231                    if progress: 
    11301232                        progress(self.remaining_probability(dich_data),None) 
     
    11341236 
    11351237            for r in arg_rules: 
    1136                 dich_data = self.remove_covered_examples(r, dich_data, weightID) 
     1238                dich_data = self.remove_covered_examples(r, dich_data, weight_id) 
    11371239                rules.append(r) 
    11381240 
    11391241            # prune unnecessary rules 
    1140             rules = self.prune_unnecessary_rules(rules, dich_data, weightID) 
     1242            rules = self.prune_unnecessary_rules(rules, dich_data, weight_id) 
    11411243 
    11421244            if self.add_sub_rules: 
    1143                 rules = self.add_sub_rules_call(rules, dich_data, weightID) 
     1245                rules = self.add_sub_rules_call(rules, dich_data, weight_id) 
    11441246 
    11451247            # restore domain and class in rules, add them to all_rules 
    11461248            for r in rules: 
    1147                 all_rules.append(self.change_domain(r, cl, examples, weightID)) 
     1249                all_rules.append(self.change_domain(r, cl, examples, weight_id)) 
    11481250 
    11491251            if progress: 
    11501252                progress(1.0,None) 
    11511253        # create a classifier from all rules         
    1152         return self.create_classifier(all_rules, examples, weightID) 
    1153  
    1154     def learn_argumented_rule(self, ae, examples, weightID): 
     1254        return self.create_classifier(all_rules, examples, weight_id) 
     1255 
     1256    def learn_argumented_rule(self, ae, examples, weight_id): 
    11551257        # prepare roots of rules from arguments 
    1156         positive_args = self.init_pos_args(ae, examples, weightID) 
     1258        positive_args = self.init_pos_args(ae, examples, weight_id) 
    11571259        if not positive_args: # something wrong 
    11581260            raise Exception("There is a problem with argumented example %s" % str(ae)) 
    1160         negative_args = self.init_neg_args(ae, examples, weightID) 
     1262        negative_args = self.init_neg_args(ae, examples, weight_id) 
    11611263 
    11621264        # set negative arguments in refiner 
    1163         self.ruleFinder.refiner.notAllowedSelectors = negative_args 
    1164         self.ruleFinder.refiner.example = ae 
     1265        self.rule_finder.refiner.notAllowedSelectors = negative_args 
     1266        self.rule_finder.refiner.example = ae 
    11651267        # set arguments to filter 
    1166         self.ruleFinder.ruleFilter.setArguments(examples.domain,positive_args) 
     1268        self.rule_finder.ruleFilter.setArguments(examples.domain,positive_args) 
    11671269 
    11681270        # learn a rule 
    1169         self.ruleFinder.evaluator.bestRule = None 
    1170         self.ruleFinder.evaluator.returnBestFuture = True 
    1171         self.ruleFinder(examples,weightID,0,positive_args) 
    1172 ##        self.ruleFinder.evaluator.bestRule.quality = 0.8 
     1271        self.rule_finder.evaluator.bestRule = None 
     1272        self.rule_finder.evaluator.returnBestFuture = True 
     1273        self.rule_finder(examples,weight_id,0,positive_args) 
     1274##        self.rule_finder.evaluator.bestRule.quality = 0.8 
    11731275         
    11741276        # return best rule 
    1175         return self.ruleFinder.evaluator.bestRule 
     1277        return self.rule_finder.evaluator.bestRule 
    11761278         
    1177     def prepare_settings(self, examples, weightID, cl_i, progress): 
     1279    def prepare_settings(self, examples, weight_id, cl_i, progress): 
    11781280        # apriori distribution 
    1179         self.apriori = Orange.core.Distribution(examples.domain.classVar,examples,weightID) 
     1281        self.apriori = Orange.statistics.distribution.Distribution( 
     1282                                examples.domain.class_var,examples,weight_id) 
    11801283         
    11811284        # prepare covering mechanism 
    1182         self.coverAndRemove = CovererAndRemover_Prob(examples, weightID, 0, self.apriori) 
    1183         self.ruleFinder.evaluator.probVar = examples.domain.getmeta(self.coverAndRemove.probAttribute) 
     1285        self.cover_and_remove = CovererAndRemover_Prob(examples, weight_id, 0, self.apriori) 
     1286        self.rule_finder.evaluator.probVar = examples.domain.getmeta(self.cover_and_remove.probAttribute) 
    11841287 
    11851288        # compute extreme distributions 
    11861289        # TODO: why evd and evd_this???? 
    1187         if self.ruleFinder.evaluator.optimismReduction > 0 and not self.evd: 
    1188             self.evd_this = self.evd_creator.computeEVD(examples, weightID, target_class=0, progress = progress) 
     1290        if self.rule_finder.evaluator.optimismReduction > 0 and not self.evd: 
     1291            self.evd_this = self.evd_creator.computeEVD(examples, weight_id, target_class=0, progress = progress) 
    11891292        if self.evd: 
    11901293            self.evd_this = self.evd[cl_i] 
    11911294 
    1192     def turn_ABML_mode(self, examples, weightID, cl_i): 
     1295    def turn_ABML_mode(self, examples, weight_id, cl_i): 
    11931296        # evaluator 
    1194         if self.ruleFinder.evaluator.optimismReduction > 0 and self.argumentID: 
     1297        if self.rule_finder.evaluator.optimismReduction > 0 and self.argument_id: 
    11951298            if self.evd_arguments: 
    1196                 self.ruleFinder.evaluator.evDistGetter.dists = self.evd_arguments[cl_i] 
     1299                self.rule_finder.evaluator.evDistGetter.dists = self.evd_arguments[cl_i] 
    11971300            else: 
    1198                 self.ruleFinder.evaluator.evDistGetter.dists = self.evd_this # self.evd_creator.computeEVD_example(examples, weightID, target_class=0) 
     1301                self.rule_finder.evaluator.evDistGetter.dists = self.evd_this # self.evd_creator.computeEVD_example(examples, weight_id, target_class=0) 
    11991302        # rule refiner 
    1200         self.ruleFinder.refiner = self.refiner_arguments 
    1201         self.ruleFinder.refiner.argumentID = self.argumentID 
    1202         self.ruleFinder.ruleFilter = self.ruleFilter_arguments 
     1303        self.rule_finder.refiner = self.refiner_arguments 
     1304        self.rule_finder.refiner.argument_id = self.argument_id 
     1305        self.rule_finder.ruleFilter = self.ruleFilter_arguments 
    12031306 
    12041307    def create_dich_class(self, examples, cl): 
    1205         """ create dichotomous class. """ 
    1206         (newDomain, targetVal) = createDichotomousClass(examples.domain, examples.domain.classVar, str(cl), negate=0) 
     1308        """ 
     1309        Create dichotomous class. 
     1310        """ 
     1311        (newDomain, targetVal) = createDichotomousClass(examples.domain, examples.domain.class_var, str(cl), negate=0) 
    12071312        newDomainmetas = newDomain.getmetas() 
    1208         newDomain.addmeta(Orange.core.newmetaid(), examples.domain.classVar) # old class as meta 
     1313        newDomain.addmeta(Orange.core.newmetaid(), examples.domain.class_var) # old class as meta 
    12091314        dichData = examples.select(newDomain) 
    1210         if self.argumentID: 
     1315        if self.argument_id: 
    12111316            for d in dichData: # remove arguments given to other classes 
    12121317                if not d.getclass() == targetVal: 
    1213                     d[self.argumentID] = "?" 
     1318                    d[self.argument_id] = "?" 
    12141319        return dichData 
    12151320 
    12161321    def get_argumented_examples(self, examples): 
    1217         if not self.argumentID: 
     1322        if not self.argument_id: 
    12181323            return None 
    12191324         
    12201325        # get argumented examples 
    1221         return ArgumentFilter_hasSpecial()(examples, self.argumentID, targetClass = 0) 
     1326        return ArgumentFilter_hasSpecial()(examples, self.argument_id, target_class = 0) 
    12221327 
    12231328    def sort_arguments(self, arg_examples, examples): 
    1224         if not self.argumentID: 
     1329        if not self.argument_id: 
    12251330            return None 
    1226         evaluateAndSortArguments(examples, self.argumentID) 
     1331        evaluateAndSortArguments(examples, self.argument_id) 
    12271332        if len(arg_examples)>0: 
    12281333            # sort examples by their arguments quality (using first argument as it has already been sorted) 
    12291334            sorted = arg_examples.native() 
    1230             sorted.sort(lambda x,y: -cmp(x[self.argumentID].value.positiveArguments[0].quality, 
    1231                                          y[self.argumentID].value.positiveArguments[0].quality)) 
    1232             return Orange.core.ExampleTable(examples.domain, sorted) 
     1335            sorted.sort(lambda x,y: -cmp(x[self.argument_id].value.positiveArguments[0].quality, 
     1336                                         y[self.argument_id].value.positiveArguments[0].quality)) 
     1337            return Orange.data.Table(examples.domain, sorted) 
    12331338        else: 
    12341339            return None 
    12351340 
    1236     def turn_normal_mode(self, examples, weightID, cl_i): 
     1341    def turn_normal_mode(self, examples, weight_id, cl_i): 
    12371342        # evaluator 
    1238         if self.ruleFinder.evaluator.optimismReduction > 0: 
     1343        if self.rule_finder.evaluator.optimismReduction > 0: 
    12391344            if self.evd: 
    1240                 self.ruleFinder.evaluator.evDistGetter.dists = self.evd[cl_i] 
     1345                self.rule_finder.evaluator.evDistGetter.dists = self.evd[cl_i] 
    12411346            else: 
    1242                 self.ruleFinder.evaluator.evDistGetter.dists = self.evd_this # self.evd_creator.computeEVD(examples, weightID, target_class=0) 
     1347                self.rule_finder.evaluator.evDistGetter.dists = self.evd_this # self.evd_creator.computeEVD(examples, weight_id, target_class=0) 
    12431348        # rule refiner 
    1244         self.ruleFinder.refiner = self.refiner 
    1245         self.ruleFinder.ruleFilter = self.ruleFilter 
     1349        self.rule_finder.refiner = self.refiner 
     1350        self.rule_finder.ruleFilter = self.ruleFilter 
    12461351         
    1247     def learn_normal_rule(self, examples, weightID, apriori): 
    1248         if hasattr(self.ruleFinder.evaluator, "bestRule"): 
    1249             self.ruleFinder.evaluator.bestRule = None 
    1250         rule = self.ruleFinder(examples,weightID,0,Orange.core.RuleList()) 
    1251         if hasattr(self.ruleFinder.evaluator, "bestRule") and self.ruleFinder.evaluator.returnExpectedProb: 
    1252             rule = self.ruleFinder.evaluator.bestRule 
    1253             self.ruleFinder.evaluator.bestRule = None 
     1352    def learn_normal_rule(self, examples, weight_id, apriori): 
     1353        if hasattr(self.rule_finder.evaluator, "bestRule"): 
     1354            self.rule_finder.evaluator.bestRule = None 
     1355        rule = self.rule_finder(examples,weight_id,0,RuleList()) 
     1356        if hasattr(self.rule_finder.evaluator, "bestRule") and self.rule_finder.evaluator.returnExpectedProb: 
     1357            rule = self.rule_finder.evaluator.bestRule 
     1358            self.rule_finder.evaluator.bestRule = None 
    12541359        if self.postpruning: 
    1255             rule = self.postpruning(rule,examples,weightID,0, aprior) 
      1360            rule = self.postpruning(rule,examples,weight_id,0, apriori) 
    12561361        return rule 
    12571362 
    1258     def remove_covered_examples(self, rule, examples, weightID): 
    1259         nexamples, nweight = self.coverAndRemove(rule,examples,weightID,0) 
     1363    def remove_covered_examples(self, rule, examples, weight_id): 
     1364        nexamples, nweight = self.cover_and_remove(rule,examples,weight_id,0) 
    12601365        return nexamples 
    12611366 
    12621367 
    1263     def prune_unnecessary_rules(self, rules, examples, weightID): 
    1264         return self.coverAndRemove.getBestRules(rules,examples,weightID) 
    1265  
    1266     def change_domain(self, rule, cl, examples, weightID): 
     1368    def prune_unnecessary_rules(self, rules, examples, weight_id): 
     1369        return self.cover_and_remove.getBestRules(rules,examples,weight_id) 
     1370 
     1371    def change_domain(self, rule, cl, examples, weight_id): 
    12671372        rule.examples = rule.examples.select(examples.domain) 
    1268         rule.classDistribution = Orange.core.Distribution(rule.examples.domain.classVar,rule.examples,weightID) # adapt distribution 
    1269         rule.classifier = Orange.core.DefaultClassifier(cl) # adapt classifier 
     1373        rule.class_distribution = Orange.statistics.distribution.Distribution( 
     1374                     rule.examples.domain.class_var,rule.examples,weight_id) # adapt distribution 
     1375        rule.classifier = Orange.classification.ConstantClassifier(cl) # adapt classifier 
    12701376        rule.filter = Orange.core.Filter_values(domain = examples.domain, 
    12711377                                        conditions = rule.filter.conditions) 
    12721378        if hasattr(rule, "learner") and hasattr(rule.learner, "arg_example"): 
    1273             rule.learner.arg_example = Orange.core.Example(examples.domain, rule.learner.arg_example) 
     1379            rule.learner.arg_example = Orange.data.Instance( 
     1380                          examples.domain, rule.learner.arg_example) 
    12741381        return rule 
    12751382 
    1276     def create_classifier(self, rules, examples, weightID): 
    1277         return self.classifier(rules, examples, weightID) 
    1278  
    1279     def add_sub_rules_call(self, rules, examples, weightID): 
    1280         apriori = Orange.core.Distribution(examples.domain.classVar,examples,weightID) 
    1281         newRules = Orange.core.RuleList() 
     1383    def create_classifier(self, rules, examples, weight_id): 
     1384        return self.classifier(rules, examples, weight_id) 
     1385 
     1386    def add_sub_rules_call(self, rules, examples, weight_id): 
     1387        apriori = Orange.statistics.distribution.Distribution( 
     1388                            examples.domain.class_var,examples,weight_id) 
     1389        new_rules = RuleList() 
    12821390        for r in rules: 
    1283             newRules.append(r) 
     1391            new_rules.append(r) 
    12841392 
    12851393        # loop through rules 
    12861394        for r in rules: 
    1287             tmpList = Orange.core.RuleList() 
     1395            tmpList = RuleList() 
    12881396            tmpRle = r.clone() 
    12891397            tmpRle.filter.conditions = r.filter.conditions[:r.requiredConditions] # do not split argument 
    12901398            tmpRle.parentRule = None 
    1291             tmpRle.filterAndStore(examples,weightID,r.classifier.defaultVal) 
     1399            tmpRle.filterAndStore(examples,weight_id,r.classifier.default_val) 
    12921400            tmpRle.complexity = 0 
    12931401            tmpList.append(tmpRle) 
    12941402            while tmpList and len(tmpList[0].filter.conditions) <= len(r.filter.conditions): 
    1295                 tmpList2 = Orange.core.RuleList() 
     1403                tmpList2 = RuleList() 
    12961404                for tmpRule in tmpList: 
    12971405                    # evaluate tmpRule 
    1298                     oldREP = self.ruleFinder.evaluator.returnExpectedProb 
    1299                     self.ruleFinder.evaluator.returnExpectedProb = False 
    1300                     tmpRule.quality = self.ruleFinder.evaluator(tmpRule,examples,weightID,r.classifier.defaultVal,apriori) 
    1301                     self.ruleFinder.evaluator.returnExpectedProb = oldREP 
     1406                    oldREP = self.rule_finder.evaluator.returnExpectedProb 
     1407                    self.rule_finder.evaluator.returnExpectedProb = False 
     1408                    tmpRule.quality = self.rule_finder.evaluator(tmpRule,examples,weight_id,r.classifier.default_val,apriori) 
     1409                    self.rule_finder.evaluator.returnExpectedProb = oldREP 
    13021410                    # if rule not in rules already, add it to the list 
    1303                     if not True in [Orange.classification.rules.rules_equal(ri,tmpRule) for ri in newRules] and len(tmpRule.filter.conditions)>0 and tmpRule.quality > apriori[r.classifier.defaultVal]/apriori.abs: 
    1304                         newRules.append(tmpRule) 
     1411                    if not True in [Orange.classification.rules.rules_equal(ri,tmpRule) for ri in new_rules] and len(tmpRule.filter.conditions)>0 and tmpRule.quality > apriori[r.classifier.default_val]/apriori.abs: 
     1412                        new_rules.append(tmpRule) 
    13051413                    # create new tmpRules, set parent Rule, append them to tmpList2 
    1306                     if not True in [Orange.classification.rules.rules_equal(ri,tmpRule) for ri in newRules]: 
     1414                    if not True in [Orange.classification.rules.rules_equal(ri,tmpRule) for ri in new_rules]: 
    13071415                        for c in r.filter.conditions: 
    13081416                            tmpRule2 = tmpRule.clone() 
    13091417                            tmpRule2.parentRule = tmpRule 
    13101418                            tmpRule2.filter.conditions.append(c) 
    1311                             tmpRule2.filterAndStore(examples,weightID,r.classifier.defaultVal) 
     1419                            tmpRule2.filterAndStore(examples,weight_id,r.classifier.default_val) 
    13121420                            tmpRule2.complexity += 1 
    1313                             if tmpRule2.classDistribution.abs < tmpRule.classDistribution.abs: 
      1421                            if tmpRule2.class_distribution.abs < tmpRule.class_distribution.abs: 
    13141422                                tmpList2.append(tmpRule2) 
    13151423                tmpList = tmpList2 
    1316         return newRules 
    1317  
    1318  
    1319     def init_pos_args(self, ae, examples, weightID): 
    1320         pos_args = Orange.core.RuleList() 
     1424        return new_rules 
     1425 
     1426 
     1427    def init_pos_args(self, ae, examples, weight_id): 
     1428        pos_args = RuleList() 
    13211429        # prepare arguments 
    1322         for p in ae[self.argumentID].value.positiveArguments: 
    1323             new_arg = Orange.core.Rule(filter=ArgFilter(argumentID = self.argumentID, 
     1430        for p in ae[self.argument_id].value.positiveArguments: 
     1431            new_arg = Rule(filter=ArgFilter(argument_id = self.argument_id, 
    13241432                                                   filter = self.newFilter_values(p.filter)), 
    13251433                                                   complexity = 0) 
     
    13281436 
    13291437 
    1330         if hasattr(self.ruleFinder.evaluator, "returnExpectedProb"): 
    1331             old_exp = self.ruleFinder.evaluator.returnExpectedProb 
    1332             self.ruleFinder.evaluator.returnExpectedProb = False 
     1438        if hasattr(self.rule_finder.evaluator, "returnExpectedProb"): 
     1439            old_exp = self.rule_finder.evaluator.returnExpectedProb 
     1440            self.rule_finder.evaluator.returnExpectedProb = False 
    13331441             
    13341442        # argument pruning (all or just unfinished arguments) 
    13351443        # if pruning is chosen, then prune arguments if possible 
    13361444        for p in pos_args: 
    1337             p.filterAndStore(examples, weightID, 0) 
     1445            p.filterAndStore(examples, weight_id, 0) 
    13381446            # pruning on: we check on all conditions and take only best 
    13391447            if self.prune_arguments: 
    13401448                allowed_conditions = [c for c in p.filter.conditions] 
    1341                 pruned_conditions = self.prune_arg_conditions(ae, allowed_conditions, examples, weightID) 
     1449                pruned_conditions = self.prune_arg_conditions(ae, allowed_conditions, examples, weight_id) 
    13421450                p.filter.conditions = pruned_conditions 
    13431451            else: # prune only unspecified conditions 
     
    13461454                # let rule cover now all examples filtered by specified conditions 
    13471455                p.filter.conditions = spec_conditions 
    1348                 p.filterAndStore(examples, weightID, 0) 
    1349                 pruned_conditions = self.prune_arg_conditions(ae, unspec_conditions, p.examples, p.weightID) 
     1456                p.filterAndStore(examples, weight_id, 0) 
     1457                pruned_conditions = self.prune_arg_conditions(ae, unspec_conditions, p.examples, p.weight_id) 
    13501458                p.filter.conditions.extend(pruned_conditions) 
    13511459                p.filter.filter.conditions.extend(pruned_conditions) 
     
    13621470        # set parameters to arguments 
    13631471        for p_i,p in enumerate(pos_args): 
    1364             p.filterAndStore(examples,weightID,0) 
     1472            p.filterAndStore(examples,weight_id,0) 
    13651473            p.filter.domain = examples.domain 
    13661474            if not p.learner: 
    1367                 p.learner = DefaultLearner(defaultValue=ae.getclass()) 
    1368             p.classifier = p.learner(p.examples, p.weightID) 
    1369             p.baseDist = p.classDistribution 
     1475                p.learner = DefaultLearner(default_value=ae.getclass()) 
     1476            p.classifier = p.learner(p.examples, p.weight_id) 
     1477            p.baseDist = p.class_distribution 
    13701478            p.requiredConditions = len(p.filter.conditions) 
    13711479            p.learner.setattr("arg_length", len(p.filter.conditions)) 
     
    13731481            p.complexity = len(p.filter.conditions) 
    13741482             
    1375         if hasattr(self.ruleFinder.evaluator, "returnExpectedProb"): 
    1376             self.ruleFinder.evaluator.returnExpectedProb = old_exp 
     1483        if hasattr(self.rule_finder.evaluator, "returnExpectedProb"): 
     1484            self.rule_finder.evaluator.returnExpectedProb = old_exp 
    13771485 
    13781486        return pos_args 
     
    13861494        return newFilter 
    13871495 
    1388     def init_neg_args(self, ae, examples, weightID): 
    1389         return ae[self.argumentID].value.negativeArguments 
     1496    def init_neg_args(self, ae, examples, weight_id): 
     1497        return ae[self.argument_id].value.negativeArguments 
    13901498 
    13911499    def remaining_probability(self, examples): 
    1392         return self.coverAndRemove.covered_percentage(examples) 
    1393  
    1394     def prune_arg_conditions(self, crit_example, allowed_conditions, examples, weightID): 
     1500        return self.cover_and_remove.covered_percentage(examples) 
     1501 
     1502    def prune_arg_conditions(self, crit_example, allowed_conditions, examples, weight_id): 
    13951503        if not allowed_conditions: 
    13961504            return [] 
    13971505        cn2_learner = Orange.classification.rules.CN2UnorderedLearner() 
    1398         cn2_learner.ruleFinder = Orange.core.RuleBeamFinder() 
    1399         cn2_learner.ruleFinder.refiner = SelectorArgConditions(crit_example, allowed_conditions) 
    1400         cn2_learner.ruleFinder.evaluator = Orange.classification.rules.MEstimate(self.ruleFinder.evaluator.m) 
    1401         rule = cn2_learner.ruleFinder(examples,weightID,0,Orange.core.RuleList()) 
     1506        cn2_learner.rule_finder = RuleBeamFinder() 
     1507        cn2_learner.rule_finder.refiner = SelectorArgConditions(crit_example, allowed_conditions) 
     1508        cn2_learner.rule_finder.evaluator = Orange.classification.rules.MEstimate(self.rule_finder.evaluator.m) 
     1509        rule = cn2_learner.rule_finder(examples,weight_id,0,RuleList()) 
    14021510        return rule.filter.conditions 
    14031511 
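The `prune_arg_conditions` method above plugs an `MEstimate` evaluator into the beam finder. For reference, the m-estimate smooths a rule's observed accuracy towards the a priori probability of the target class. A minimal, self-contained sketch (the function name and the default `m` are illustrative assumptions, not Orange's API):

```python
def m_estimate(s, n, p_apriori, m=2.0):
    """m-estimate of rule accuracy: the rule covers n examples, s of which
    belong to the target class; the estimate is pulled towards the a priori
    probability p_apriori, with m controlling the strength of the prior."""
    return (s + m * p_apriori) / (n + m)
```

With `m=2.0`, a rule covering 10 examples, 8 of them positive, under a 0.5 prior is scored `(8 + 1) / 12 = 0.75` rather than the raw `0.8`; rules with little coverage are penalised most.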
     
    14181526        By default, weighted relative accuracy is used. 
    14191527    :type evaluator: :class:`Orange.classification.rules.RuleEvaluator` 
    1420     :param beamWidth: width of the search beam. 
    1421     :type beamWidth: int 
     1528    :param beam_width: width of the search beam. 
     1529    :type beam_width: int 
    14221530    :param alpha: significance level of the statistical test to determine 
    14231531        whether rule is good enough to be returned by rulefinder. Likelihood 
     
    14421550        if not self.apriori: 
    14431551            return False 
    1444         if not type(rule.classifier) == Orange.core.DefaultClassifier: 
     1552        if not type(rule.classifier) == Orange.classification.ConstantClassifier: 
    14451553            return False 
    1446         ruleAcc = rule.classDistribution[rule.classifier.defaultVal]/rule.classDistribution.abs 
    1447         aprioriAcc = self.apriori[rule.classifier.defaultVal]/self.apriori.abs 
     1554        ruleAcc = rule.class_distribution[rule.classifier.default_val]/rule.class_distribution.abs 
     1555        aprioriAcc = self.apriori[rule.classifier.default_val]/self.apriori.abs 
    14481556        if ruleAcc>aprioriAcc: 
    14491557            return False 
     
    14531561class RuleStopping_SetRules(RuleStoppingCriteria): 
    14541562    def __init__(self,validator): 
    1455         self.ruleStopping = RuleStoppingCriteria_NegativeDistribution() 
     1563        self.rule_stopping = RuleStoppingCriteria_NegativeDistribution() 
    14561564        self.validator = validator 
    14571565 
    14581566    def __call__(self,rules,rule,instances,data):         
    1459         ru_st = self.ruleStopping(rules,rule,instances,data) 
     1567        ru_st = self.rule_stopping(rules,rule,instances,data) 
    14601568        if not ru_st: 
    14611569            self.validator.rules.append(rule) 
     
    14681576        self.length = length 
    14691577         
    1470     def __call__(self, rule, data, weightID, targetClass, apriori): 
     1578    def __call__(self, rule, data, weight_id, target_class, apriori): 
    14711579        if self.length >= 0: 
    14721580            return len(rule.filter.conditions) <= self.length 
     
    14801588            min_coverage=min_coverage,max_rule_length=max_rule_length) 
    14811589         
    1482     def __call__(self, rule, data, weightID, targetClass, apriori): 
     1590    def __call__(self, rule, data, weight_id, target_class, apriori): 
    14831591        if rule_in_set(rule,self.rules): 
    14841592            return False 
    1485         return bool(self.validator(rule,data,weightID,targetClass,apriori)) 
     1593        return bool(self.validator(rule,data,weight_id,target_class,apriori)) 
    14861594                 
    14871595 
    14881596 
    14891597class RuleClassifier_BestRule(RuleClassifier): 
    1490     def __init__(self, rules, instances, weightID = 0, **argkw): 
     1598    def __init__(self, rules, instances, weight_id = 0, **argkw): 
    14911599        self.rules = rules 
    14921600        self.examples = instances 
    1493         self.classVar = instances.domain.classVar 
     1601        self.class_var = instances.domain.class_var 
    14941602        self.__dict__.update(argkw) 
    1495         self.prior = Orange.core.Distribution(instances.domain.classVar, instances) 
     1603        self.prior = Orange.statistics.distribution.Distribution( 
     1604                    instances.domain.class_var, instances) 
    14961605 
    14971606    def __call__(self, instance, result_type=Orange.classification.Classifier.GetValue): 
    1498         retDist = Orange.core.Distribution(instance.domain.classVar) 
     1607        retDist = Orange.statistics.distribution.Distribution(instance.domain.class_var) 
    14991608        bestRule = None 
    15001609        for r in self.rules: 
    15011610            if r(instance) and (not bestRule or r.quality>bestRule.quality): 
    1502                 for v_i,v in enumerate(instance.domain.classVar): 
    1503                     retDist[v_i] = r.classDistribution[v_i] 
     1611                for v_i,v in enumerate(instance.domain.class_var): 
     1612                    retDist[v_i] = r.class_distribution[v_i] 
    15041613                bestRule = r 
    15051614        if not bestRule: 
     
    15091618        sumdist = sum(retDist) 
    15101619        if sumdist > 0.0: 
    1511             for c in self.examples.domain.classVar: 
     1620            for c in self.examples.domain.class_var: 
     15121621                retDist[c] /= sumdist 
    15131622        else: 
     
    15231632        retStr = "" 
    15241633        for r in self.rules: 
    1525             retStr += ruleToString(r)+" "+str(r.classDistribution)+"\n" 
     1634            retStr += rule_to_string(r)+" "+str(r.class_distribution)+"\n" 
    15261635        return retStr     
    15271636 
     
    15371646    def __init__(self, mult = 0.7): 
    15381647        self.mult = mult 
    1539     def __call__(self, rule, instances, weights, targetClass): 
     1648    def __call__(self, rule, instances, weights, target_class): 
    15401649        if not weights: 
    15411650            weights = Orange.core.newmetaid() 
     
    15621671    """ 
    15631672     
    1564     def __call__(self, rule, instances, weights, targetClass): 
     1673    def __call__(self, rule, instances, weights, target_class): 
    15651674        if not weights: 
    15661675            weights = Orange.core.newmetaid() 
     
    15981707        self.bestRule = [] 
    15991708 
    1600     def initialize(self, instances, weightID, targetClass, apriori): 
     1709    def initialize(self, instances, weight_id, target_class, apriori): 
    16011710        self.bestRule = [None]*len(instances) 
    16021711        self.probAttribute = Orange.core.newmetaid() 
     
    16051714            Orange.data.variable.Continuous("Probs")) 
    16061715        for instance in instances: 
    1607 ##            if targetClass<0 or (instance.getclass() == targetClass): 
    1608             instance[self.probAttribute] = apriori[targetClass]/apriori.abs 
     1716##            if target_class<0 or (instance.getclass() == target_class): 
     1717            instance[self.probAttribute] = apriori[target_class]/apriori.abs 
    16091718        return instances 
    16101719 
    1611     def getBestRules(self, currentRules, instances, weightID): 
    1612         bestRules = RuleList() 
     1720    def getBestRules(self, currentRules, instances, weight_id): 
     1721        best_rules = RuleList() 
    16131722        for r in currentRules: 
    1614             if hasattr(r.learner, "argumentRule") and not orngCN2.rule_in_set(r,bestRules): 
    1615                 bestRules.append(r) 
      1723            if hasattr(r.learner, "argumentRule") and not rule_in_set(r,best_rules): 
     1724                best_rules.append(r) 
    16161725        for r_i,r in enumerate(self.bestRule): 
    1617             if r and not rule_in_set(r,bestRules) and instances[r_i].\ 
    1618                 getclass()==r.classifier.defaultValue: 
    1619                 bestRules.append(r) 
    1620         return bestRules 
    1621  
    1622     def remainingInstancesP(self, instances, targetClass): 
     1726            if r and not rule_in_set(r,best_rules) and instances[r_i].\ 
     1727                getclass()==r.classifier.default_value: 
     1728                best_rules.append(r) 
     1729        return best_rules 
     1730 
     1731    def remainingInstancesP(self, instances, target_class): 
    16231732        pSum, pAll = 0.0, 0.0 
    16241733        for ex in instances: 
    1625             if ex.getclass() == targetClass: 
     1734            if ex.getclass() == target_class: 
    16261735                pSum += ex[self.probAttribute] 
    16271736                pAll += 1.0 
    16281737        return pSum/pAll 
    16291738 
    1630     def __call__(self, rule, instances, weights, targetClass): 
    1631         if targetClass<0: 
     1739    def __call__(self, rule, instances, weights, target_class): 
     1740        if target_class<0: 
    16321741            for instance_i, instance in enumerate(instances): 
    16331742                if rule(instance) and rule.quality>instance[self.probAttribute]-0.01: 
     
    16351744                    self.bestRule[instance_i]=rule 
    16361745        else: 
    1637             for instance_i, instance in enumerate(instances): #rule.classifier.defaultVal == instance.getclass() and 
     1746            for instance_i, instance in enumerate(instances): #rule.classifier.default_val == instance.getclass() and 
    16381747                if rule(instance) and rule.quality>instance[self.probAttribute]: 
    16391748                    instance[self.probAttribute] = rule.quality+0.001 
    16401749                    self.bestRule[instance_i]=rule 
    1641 ##                if rule.classifier.defaultVal == instance.getclass(): 
     1750##                if rule.classifier.default_val == instance.getclass(): 
    16421751##                    print instance[self.probAttribute] 
    16431752        # compute factor 
     
    16451754 
    16461755 
    1647 def ruleToString(rule, showDistribution = True): 
     1756@deprecated_keywords({"showDistribution": "show_distribution"}) 
     1757def rule_to_string(rule, show_distribution = True): 
    16481758    """ 
     16491759    Write a string presentation of a rule in human-readable format. 
     
    16521762    :type rule: :class:`Orange.classification.rules.Rule` 
    16531763     
    1654     :param showDistribution: determines whether presentation should also 
     1764    :param show_distribution: determines whether presentation should also 
    16551765        contain the distribution of covered instances 
    1656     :type showDistribution: bool 
     1766    :type show_distribution: bool 
    16571767     
    16581768    """ 
     
    16851795        elif type(c) == Orange.core.ValueFilter_continuous: 
    16861796            ret += domain[c.position].name + selectSign(c.oper) + str(c.ref) 
    1687     if rule.classifier and type(rule.classifier) == Orange.core.DefaultClassifier\ 
    1688             and rule.classifier.defaultVal: 
    1689         ret = ret + " THEN "+domain.classVar.name+"="+\ 
    1690         str(rule.classifier.defaultValue) 
    1691         if showDistribution: 
    1692             ret += str(rule.classDistribution) 
    1693     elif rule.classifier and type(rule.classifier) == Orange.core.DefaultClassifier\ 
    1694             and type(domain.classVar) == Orange.core.EnumVariable: 
    1695         ret = ret + " THEN "+domain.classVar.name+"="+\ 
    1696         str(rule.classDistribution.modus()) 
    1697         if showDistribution: 
    1698             ret += str(rule.classDistribution) 
     1797    if rule.classifier and type(rule.classifier) == Orange.classification.ConstantClassifier\ 
     1798            and rule.classifier.default_val: 
     1799        ret = ret + " THEN "+domain.class_var.name+"="+\ 
     1800        str(rule.classifier.default_value) 
     1801        if show_distribution: 
     1802            ret += str(rule.class_distribution) 
     1803    elif rule.classifier and type(rule.classifier) == Orange.classification.ConstantClassifier\ 
     1804            and type(domain.class_var) == Orange.core.EnumVariable: 
     1805        ret = ret + " THEN "+domain.class_var.name+"="+\ 
     1806        str(rule.class_distribution.modus()) 
     1807        if show_distribution: 
     1808            ret += str(rule.class_distribution) 
    16991809    return ret         
    17001810 
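The renamed `rule_to_string` keeps backward compatibility through the `deprecated_keywords` decorator applied above. A self-contained sketch of how such a decorator can work (the wrapper body and the stand-in `rule_to_string` below are illustrative assumptions, not Orange's actual implementation):

```python
import functools
import warnings

def deprecated_keywords(mapping):
    """Decorator factory: rewrite deprecated (camelCase) keyword argument
    names to their new snake_case equivalents, emitting a DeprecationWarning
    for each renamed keyword before calling the wrapped function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for old, new in mapping.items():
                if old in kwargs:
                    warnings.warn("%s is deprecated; use %s instead" % (old, new),
                                  DeprecationWarning, stacklevel=2)
                    kwargs[new] = kwargs.pop(old)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated_keywords({"showDistribution": "show_distribution"})
def rule_to_string(rule, show_distribution=True):
    # stand-in body for illustration only
    return "IF ... THEN ... (dist=%s)" % show_distribution
```

Callers written against the old API (`rule_to_string(r, showDistribution=False)`) keep working, while new code uses the snake_case name directly.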
    17011811def supervisedClassCheck(instances): 
    1702     if not instances.domain.classVar: 
     1812    if not instances.domain.class_var: 
    17031813        raise Exception("Class variable is required!") 
    1704     if instances.domain.classVar.varType == Orange.core.VarTypes.Continuous: 
     1814    if instances.domain.class_var.varType == Orange.core.VarTypes.Continuous: 
    17051815        raise Exception("CN2 requires a discrete class!") 
    17061816     
     
    17671877    cl_num = newData.toNumeric("C") 
    17681878    random.shuffle(cl_num[0][:,0]) 
    1769     clData = Orange.data.Table(Orange.data.Domain([newData.domain.classVar]),cl_num[0]) 
     1879    clData = Orange.data.Table(Orange.data.Domain([newData.domain.class_var]),cl_num[0]) 
    17701880    for d_i,d in enumerate(newData): 
    1771         d[newData.domain.classVar] = clData[d_i][newData.domain.classVar] 
     1881        d[newData.domain.class_var] = clData[d_i][newData.domain.class_var] 
    17721882    return newData 
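The `createRandomDataSet` helper above shuffles the class column so that rules learned on the result estimate the chance (null) distribution of rule quality. A minimal, library-free sketch of the same idea (`shuffle_class_column` is a hypothetical name, not part of Orange):

```python
import random

def shuffle_class_column(rows, class_index, seed=42):
    """Return a copy of the data with the class column randomly permuted.

    Permuting the class labels destroys any attribute-class correlation,
    so rule qualities measured on the shuffled data estimate the null
    (chance) distribution used for significance testing.
    """
    rng = random.Random(seed)
    labels = [row[class_index] for row in rows]
    rng.shuffle(labels)
    # Rebuild each row with its (possibly new) shuffled label.
    return [row[:class_index] + [label] + row[class_index + 1:]
            for row, label in zip(rows, labels)]
```

The attribute columns stay fixed; only the class assignment is randomized.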
    17731883 
     
    17851895    return mi, beta, percs 
    17861896 
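`compParameters` above returns location (`mi`), scale (`beta`) and percentiles of an extreme value distribution fitted to sampled rule-quality maxima. As an illustration only, a method-of-moments Gumbel (type-I extreme value) fit looks like this (`gumbel_fit` is a hypothetical name; the estimator actually used in the source may differ):

```python
import math

def gumbel_fit(maxima):
    """Method-of-moments fit of a Gumbel distribution to a sample of maxima.

    Uses the standard Gumbel moment identities:
        Var  = pi^2 * beta^2 / 6
        Mean = mi + gamma * beta   (gamma = Euler-Mascheroni constant)
    """
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)  # sample variance
    beta = math.sqrt(6.0 * var) / math.pi                 # scale
    mi = mean - 0.57721566490153286 * beta                # location
    return mi, beta
```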
    1787 def computeDists(data, weight=0, targetClass=0, N=100, learner=None): 
     1897def computeDists(data, weight=0, target_class=0, N=100, learner=None): 
    17881898    """ Compute distributions of likelihood ratio statistics of extreme (best) rules.""" 
    17891899    if not learner: 
     
    17931903    ## Learner preparation ## 
    17941904    ######################### 
    1795     oldStopper = learner.ruleFinder.ruleStoppingValidator 
    1796     evaluator = learner.ruleFinder.evaluator 
    1797     learner.ruleFinder.evaluator = RuleEvaluator_LRS() 
    1798     learner.ruleFinder.evaluator.storeRules = True 
    1799     learner.ruleFinder.ruleStoppingValidator = RuleValidator_LRS(alpha=1.0) 
    1800     learner.ruleFinder.ruleStoppingValidator.max_rule_complexity = 0   
     1905    oldStopper = learner.rule_finder.rule_stoppingValidator 
     1906    evaluator = learner.rule_finder.evaluator 
     1907    learner.rule_finder.evaluator = RuleEvaluator_LRS() 
     1908    learner.rule_finder.evaluator.storeRules = True 
     1909    learner.rule_finder.rule_stoppingValidator = RuleValidator_LRS(alpha=1.0) 
     1910    learner.rule_finder.rule_stoppingValidator.max_rule_complexity = 0   
    18011911 
    18021912    # loop through N (sampling repetitions) 
     
    18051915        # create data set (remove and randomize) 
    18061916        tempData = createRandomDataSet(data) 
    1807         learner.ruleFinder.evaluator.rules = RuleList() 
     1917        learner.rule_finder.evaluator.rules = RuleList() 
    18081918        # Next, learn a rule 
    1809         bestRule = learner.ruleFinder(tempData,weight,targetClass,RuleList()) 
     1919        bestRule = learner.rule_finder(tempData,weight,target_class,RuleList()) 
    18101920        maxVals.append(bestRule.quality) 
    1811     extremeDists=[compParameters(maxVals,1.0,1.0)] 
     1921    extreme_dists=[compParameters(maxVals,1.0,1.0)] 
    18121922 
    18131923    ##################### 
    18141924    ## Restore learner ## 
    18151925    ##################### 
    1816     learner.ruleFinder.evaluator = evaluator 
    1817     learner.ruleFinder.ruleStoppingValidator = oldStopper 
    1818     return extremeDists 
     1926    learner.rule_finder.evaluator = evaluator 
     1927    learner.rule_finder.rule_stoppingValidator = oldStopper 
     1928    return extreme_dists 
    18191929 
    18201930def createEVDistList(evdList): 
     
    18251935 
    18261936def add_sub_rules(rules, instances, weight, learner, dists): 
    1827     apriori = Orange.core.Distribution(instances.domain.classVar,instances,weight) 
    1828     newRules = RuleList() 
     1937    apriori = Orange.core.Distribution(instances.domain.class_var,instances,weight) 
     1938    new_rules = RuleList() 
    18291939    for r in rules: 
    1830         newRules.append(r) 
     1940        new_rules.append(r) 
    18311941 
    18321942    # loop through rules 
     
    18361946        tmpRle.filter.conditions = [] 
    18371947        tmpRle.parentRule = None 
    1838         tmpRle.filterAndStore(instances,weight,r.classifier.defaultVal) 
     1948        tmpRle.filterAndStore(instances,weight,r.classifier.default_val) 
    18391949        tmpList.append(tmpRle) 
    18401950        while tmpList and len(tmpList[0].filter.conditions) <= len(r.filter.conditions): 
     
    18421952            for tmpRule in tmpList: 
    18431953                # evaluate tmpRule 
    1844                 oldREP = learner.ruleFinder.evaluator.returnExpectedProb 
    1845                 learner.ruleFinder.evaluator.returnExpectedProb = False 
    1846                 learner.ruleFinder.evaluator.evDistGetter.dists = createEVDistList(\ 
    1847                         dists[int(r.classifier.defaultVal)]) 
    1848                 tmpRule.quality = learner.ruleFinder.evaluator(tmpRule, 
    1849                         instances,weight,r.classifier.defaultVal,apriori) 
    1850                 learner.ruleFinder.evaluator.returnExpectedProb = oldREP 
     1954                oldREP = learner.rule_finder.evaluator.returnExpectedProb 
     1955                learner.rule_finder.evaluator.returnExpectedProb = False 
     1956                learner.rule_finder.evaluator.evDistGetter.dists = createEVDistList(\ 
     1957                        dists[int(r.classifier.default_val)]) 
     1958                tmpRule.quality = learner.rule_finder.evaluator(tmpRule, 
     1959                        instances,weight,r.classifier.default_val,apriori) 
     1960                learner.rule_finder.evaluator.returnExpectedProb = oldREP 
    18511961                # if rule not in rules already, add it to the list 
    1852                 if not True in [rules_equal(ri,tmpRule) for ri in newRules] and\ 
     1962                if not True in [rules_equal(ri,tmpRule) for ri in new_rules] and\ 
    18531963                        len(tmpRule.filter.conditions)>0 and tmpRule.quality >\ 
    1854                             apriori[r.classifier.defaultVal]/apriori.abs: 
    1855                     newRules.append(tmpRule) 
     1964                            apriori[r.classifier.default_val]/apriori.abs: 
     1965                    new_rules.append(tmpRule) 
    18561966                # create new tmpRules, set parent Rule, append them to tmpList2 
    1857                 if not True in [rules_equal(ri,tmpRule) for ri in newRules]: 
     1967                if not True in [rules_equal(ri,tmpRule) for ri in new_rules]: 
    18581968                    for c in r.filter.conditions: 
    18591969                        tmpRule2 = tmpRule.clone() 
    18601970                        tmpRule2.parentRule = tmpRule 
    18611971                        tmpRule2.filter.conditions.append(c) 
    1862                         tmpRule2.filterAndStore(instances,weight,r.classifier.defaultVal) 
    1863                         if tmpRule2.classDistribution.abs < tmpRule.classDistribution.abs: 
     1972                        tmpRule2.filterAndStore(instances,weight,r.classifier.default_val) 
     1973                        if tmpRule2.class_distribution.abs < tmpRule.class_distribution.abs: 
    18641974                            tmpList2.append(tmpRule2) 
    18651975            tmpList = tmpList2 
    1866     for cl in instances.domain.classVar: 
     1976    for cl in instances.domain.class_var: 
    18671977        tmpRle = Rule() 
    18681978        tmpRle.filter = Orange.core.Filter_values(domain = instances.domain) 
    18691979        tmpRle.parentRule = None 
    18701980        tmpRle.filterAndStore(instances,weight,int(cl)) 
    1871         tmpRle.quality = tmpRle.classDistribution[int(cl)]/tmpRle.classDistribution.abs 
    1872         newRules.append(tmpRle) 
    1873     return newRules 
    1874  
    1875  
    1876  
    1877  
    1878  
    1879  
    1880  
    1881  
    1882  
    1883 ################################################################################ 
    1884 ################################################################################ 
    1885 ##  This has been copyed&pasted from orngABCN2.py and not yet appropriately   ## 
    1886 ##  refactored and documented.                                                ## 
    1887 ################################################################################ 
    1888 ################################################################################ 
    1889  
    1890  
    1891 """ This module implements argument based rule learning. 
    1892 The main learner class is ABCN2. The first few classes are some variants of ABCN2 with reasonable settings.  """ 
    1893  
    1894  
    1895 import operator 
    1896 import random 
    1897 import numpy 
    1898 import math 
    1899  
    1900 from orngABML import * 
    1901  
    1902 # Default learner - returns     # 
    1903 # default classifier with pre-  # 
    1904 # defined output  class         # 
     1981        tmpRle.quality = tmpRle.class_distribution[int(cl)]/tmpRle.class_distribution.abs 
     1982        new_rules.append(tmpRle) 
     1983    return new_rules 
     1984 
     1985 
    19051986class DefaultLearner(Orange.core.Learner): 
    1906     def __init__(self,defaultValue = None): 
    1907         self.defaultValue = defaultValue 
    1908     def __call__(self,examples,weightID=0): 
    1909         return Orange.core.DefaultClassifier(self.defaultValue,defaultDistribution = Orange.core.Distribution(examples.domain.classVar,examples,weightID)) 
     1987    """ 
     1988    Default learner - returns a default classifier with a predefined output class. 
     1989    """ 
     1990    def __init__(self,default_value = None): 
     1991        self.default_value = default_value 
     1992    def __call__(self,examples,weight_id=0): 
     1993        return Orange.classification.majority.ConstantClassifier(self.default_value,defaultDistribution = Orange.core.Distribution(examples.domain.class_var,examples,weight_id)) 
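`DefaultLearner` above returns a constant classifier with a predefined output class. A standalone sketch of the same behaviour on plain Python label lists (`MajorityLearner` is a hypothetical stand-in, not the Orange API):

```python
from collections import Counter

class MajorityLearner:
    """Hypothetical stand-in for DefaultLearner: produce a classifier
    that always predicts one fixed class value."""

    def __init__(self, default_value=None):
        self.default_value = default_value

    def __call__(self, labels):
        # With no preset default, fall back to the majority class.
        value = self.default_value
        if value is None:
            value = Counter(labels).most_common(1)[0][0]
        # A "constant classifier": it ignores its input entirely.
        return lambda instance: value
```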
    19101994 
    19111995class ABCN2Ordered(ABCN2): 
    1912     """ Rules learned by ABCN2 are ordered and used as a decision list. """ 
    1913     def __init__(self, argumentID=0, **kwds): 
    1914         ABCN2.__init__(self, argumentID=argumentID, **kwds) 
     1996    """ 
     1997    Rules learned by ABCN2 are ordered and used as a decision list. 
     1998    """ 
     1999    def __init__(self, argument_id=0, **kwds): 
     2000        ABCN2.__init__(self, argument_id=argument_id, **kwds) 
    19152001        self.classifier.set_prefix_rules = True 
    19162002        self.classifier.optimize_betas = False 
    19172003 
    19182004class ABCN2M(ABCN2): 
    1919     """ Argument based rule learning with m-estimate as evaluation function. """ 
    1920     def __init__(self, argumentID=0, **kwds): 
    1921         ABCN2.__init__(self, argumentID=argumentID, **kwds) 
     2005    """ 
     2006    Argument based rule learning with m-estimate as evaluation function. 
     2007    """ 
     2008    def __init__(self, argument_id=0, **kwds): 
     2009        ABCN2.__init__(self, argument_id=argument_id, **kwds) 
    19222010        self.opt_reduction = 0 
    19232011     
    19242012 
    1925 # *********************** # 
    1926 # Argument based covering # 
    1927 # *********************** # 
    1928  
    19292013class ABBeamFilter(Orange.core.RuleBeamFilter): 
    1930     """ ABBeamFilter: Filters beam; 
     2014    """ 
     2015    ABBeamFilter: filters the beam: 
    19312016        - leaves first N rules (by quality) 
    1932         - leaves first N rules that have only of arguments in condition part  
     2017        - leaves first N rules whose condition part consists only of arguments 
    19332018    """ 
    19342019    def __init__(self,width=5): 
     
    19362021        self.pArgs=None 
    19372022 
    1938     def __call__(self,rulesStar,examples,weightID): 
     2023    def __call__(self,rulesStar,examples,weight_id): 
    19392024        newStar=Orange.core.RuleList() 
    19402025        rulesStar.sort(lambda x,y: -cmp(x.quality,y.quality)) 
     
    19682053 
    19692054class ruleCoversArguments: 
    1970     """ Class determines if rule covers one out of a set of arguments. """ 
     2055    """ 
     2056    Determines whether a rule covers one out of a set of arguments. 
     2057    """ 
    19712058    def __init__(self, arguments): 
    19722059        self.arguments = arguments 
     
    20282115                        at,type=r_i,3 
    20292116        return at,type 
    2030     oneSelectorToCover = staticmethod(oneSelectorToCover)                  
    2031  
     2117    oneSelectorToCover = staticmethod(oneSelectorToCover) 
     2118 
     2119 
     2120@deprecated_members({"notAllowedSelectors": "not_allowed_selectors", 
     2121                     "argumentID": "argument_id"}) 
    20322122class SelectorAdder(Orange.core.RuleBeamRefiner): 
    2033     """ Selector adder, this function is a refiner function: 
    2034        - refined rules are not consistent with any of negative arguments. """ 
    2035     def __init__(self, example=None, notAllowedSelectors=[], argumentID = None, 
     2123    """ 
     2124    Selector adder, a refiner function: 
     2125       - refined rules are not consistent with any of the negative arguments. 
     2126    """ 
     2127    def __init__(self, example=None, not_allowed_selectors=[], argument_id = None, 
    20362128                 discretizer = Orange.core.EntropyDiscretization(forceAttribute=True)): 
    20372129        # required values - needed values of attributes 
    20382130        self.example = example 
    2039         self.argumentID = argumentID 
    2040         self.notAllowedSelectors = notAllowedSelectors 
     2131        self.argument_id = argument_id 
     2132        self.not_allowed_selectors = not_allowed_selectors 
    20412133        self.discretizer = discretizer 
    20422134         
    2043     def __call__(self, oldRule, data, weightID, targetClass=-1): 
    2044         inNotAllowedSelectors = ruleCoversArguments(self.notAllowedSelectors) 
    2045         newRules = Orange.core.RuleList() 
     2135    def __call__(self, oldRule, data, weight_id, target_class=-1): 
     2136        inNotAllowedSelectors = ruleCoversArguments(self.not_allowed_selectors) 
     2137        new_rules = Orange.core.RuleList() 
    20462138 
    20472139        # get positive indices (selectors already in the rule) 
     
    20522144 
    20532145        # get negative indices (selectors that should not be in the rule) 
    2054         negativeIndices = [0]*len(data.domain.attributes) 
    2055         for nA in self.notAllowedSelectors: 
     2146        negative_indices = [0]*len(data.domain.attributes) 
     2147        for nA in self.not_allowed_selectors: 
    20562148            #print indices, nA.filter.indices 
    20572149            at_i,type_na = ruleCoversArguments.oneSelectorToCover(indices, nA.filter.indices) 
    20582150            if at_i>-1: 
    2059                 negativeIndices[at_i] = operator.or_(negativeIndices[at_i],type_na) 
     2151                negative_indices[at_i] = operator.or_(negative_indices[at_i],type_na) 
    20602152 
    20612153        #iterate through indices = attributes  
     
    20652157            if ind == 1:  
    20662158                continue 
    2067             if data.domain[i].varType == Orange.core.VarTypes.Discrete and not negativeIndices[i]==1: # DISCRETE attribute 
     2159            if data.domain[i].varType == Orange.core.VarTypes.Discrete and not negative_indices[i]==1: # DISCRETE attribute 
    20682160                if self.example: 
    20692161                    values = [self.example[i]] 
     
    20772169                    tempRule.complexity += 1 
    20782170                    tempRule.filter.indices[i] = 1 # 1 stands for discrete attribute (see ruleCoversArguments.conditionIndex) 
    2079                     tempRule.filterAndStore(oldRule.examples, oldRule.weightID, targetClass) 
     2171                    tempRule.filterAndStore(oldRule.examples, oldRule.weight_id, target_class) 
    20802172                    if len(tempRule.examples)<len(oldRule.examples): 
    2081                         newRules.append(tempRule) 
    2082             elif data.domain[i].varType == Orange.core.VarTypes.Continuous and not negativeIndices[i]==7: # CONTINUOUS attribute 
     2173                        new_rules.append(tempRule) 
     2174            elif data.domain[i].varType == Orange.core.VarTypes.Continuous and not negative_indices[i]==7: # CONTINUOUS attribute 
    20832175                try: 
    20842176                    at = data.domain[i] 
     
    20902182                    for p in at_d.getValueFrom.transformer.points: 
    20912183                        #LESS 
    2092                         if not negativeIndices[i]==3: 
    2093                             tempRule = self.getTempRule(oldRule,i,Orange.core.ValueFilter_continuous.LessEqual,p,targetClass,3) 
     2184                        if not negative_indices[i]==3: 
     2185                            tempRule = self.getTempRule(oldRule,i,Orange.core.ValueFilter_continuous.LessEqual,p,target_class,3) 
    20942186                            if len(tempRule.examples)<len(oldRule.examples) and self.example[i]<=p:# and not inNotAllowedSelectors(tempRule): 
    2095                                 newRules.append(tempRule) 
     2187                                new_rules.append(tempRule) 
    20962188                        #GREATER 
    2097                         if not negativeIndices[i]==5: 
    2098                             tempRule = self.getTempRule(oldRule,i,Orange.core.ValueFilter_continuous.Greater,p,targetClass,5) 
     2189                        if not negative_indices[i]==5: 
     2190                            tempRule = self.getTempRule(oldRule,i,Orange.core.ValueFilter_continuous.Greater,p,target_class,5) 
    20992191                            if len(tempRule.examples)<len(oldRule.examples) and self.example[i]>p:# and not inNotAllowedSelectors(tempRule): 
    2100                                 newRules.append(tempRule) 
    2101         for r in newRules: 
     2192                                new_rules.append(tempRule) 
     2193        for r in new_rules: 
    21022194            r.parentRule = oldRule 
    21032195            r.valuesFilter = r.filter.filter 
    2104         return newRules 
    2105  
    2106     def getTempRule(self,oldRule,pos,oper,ref,targetClass,atIndex): 
     2196        return new_rules 
     2197 
     2198    def getTempRule(self,oldRule,pos,oper,ref,target_class,atIndex): 
    21072199        tempRule = oldRule.clone() 
    21082200 
     
    21132205        tempRule.complexity += 1 
    21142206        tempRule.filter.indices[pos] = operator.or_(tempRule.filter.indices[pos],atIndex) # from ruleCoversArguments.conditionIndex 
    2115         tempRule.filterAndStore(oldRule.examples,tempRule.weightID,targetClass) 
     2207        tempRule.filterAndStore(oldRule.examples,tempRule.weight_id,target_class) 
    21162208        return tempRule 
    21172209 
    2118     def setCondition(self, oldRule, targetClass, ci, condition): 
     2210    def setCondition(self, oldRule, target_class, ci, condition): 
    21192211        tempRule = oldRule.clone() 
    21202212        tempRule.filter.conditions[ci] = condition 
    21212213        tempRule.filter.conditions[ci].setattr("specialized",1) 
    2122         tempRule.filterAndStore(oldRule.examples,oldRule.weightID,targetClass) 
     2214        tempRule.filterAndStore(oldRule.examples,oldRule.weight_id,target_class) 
    21232215        return tempRule 
    21242216 
     
    21262218# This filter is the ugliest code ever! The problem is with Orange: I had trouble inheriting deepCopy. 
    21272219# I should take another look at it. 
     2220@deprecated_members({"argumentID": "argument_id"}) 
    21282221class ArgFilter(Orange.core.Filter): 
    2129     """ This class implements AB-covering principle. """ 
    2130     def __init__(self, argumentID=None, filter = Orange.core.Filter_values()): 
     2222    """ 
     2223    This class implements AB-covering principle. 
     2224    """ 
     2225    def __init__(self, argument_id=None, filter = Orange.core.Filter_values()): 
    21312226        self.filter = filter 
    21322227        self.indices = getattr(filter,"indices",[]) 
    21332228        if not self.indices and len(filter.conditions)>0: 
    21342229            self.indices = ruleCoversArguments.filterIndices(filter) 
    2135         self.argumentID = argumentID 
     2230        self.argument_id = argument_id 
    21362231        self.debug = 0 
    21372232        self.domain = self.filter.domain 
     
    21492244        if self.filter(example): 
    21502245            try: 
    2151                 if example[self.argumentID].value and len(example[self.argumentID].value.positiveArguments)>0: # example has positive arguments 
     2246                if example[self.argument_id].value and len(example[self.argument_id].value.positiveArguments)>0: # example has positive arguments 
    21522247                    # conditions should cover at least one of the positive arguments 
    21532248                    oneArgCovered = False 
    2154                     for pA in example[self.argumentID].value.positiveArguments: 
     2249                    for pA in example[self.argument_id].value.positiveArguments: 
    21552250                        argCovered = [self.condIn(c) for c in pA.filter.conditions] 
    21562251                        oneArgCovered = oneArgCovered or len(argCovered) == sum(argCovered) #argCovered 
     
    21592254                    if not oneArgCovered: 
    21602255                        return False 
    2161                 if example[self.argumentID].value and len(example[self.argumentID].value.negativeArguments)>0: # example has negative arguments 
     2256                if example[self.argument_id].value and len(example[self.argument_id].value.negativeArguments)>0: # example has negative arguments 
    21622257                    # condition should not cover neither of negative arguments 
    2163                     for pN in example[self.argumentID].value.negativeArguments: 
     2258                    for pN in example[self.argument_id].value.negativeArguments: 
    21642259                        argCovered = [self.condIn(c) for c in pN.filter.conditions] 
    21652260                        if len(argCovered)==sum(argCovered): 
     
    21762271 
    21772272    def deepCopy(self): 
    2178         newFilter = ArgFilter(argumentID=self.argumentID) 
     2273        newFilter = ArgFilter(argument_id=self.argument_id) 
    21792274        newFilter.filter = Orange.core.Filter_values() #self.filter.deepCopy() 
    21802275        newFilter.filter.conditions = self.filter.conditions[:] 
     
    21912286 
    21922287class SelectorArgConditions(Orange.core.RuleBeamRefiner): 
    2193     """ Selector adder, this function is a refiner function: 
    2194        - refined rules are not consistent with any of negative arguments. """ 
     2288    """ 
     2289    Selector adder, a refiner function: 
     2290      - refined rules are not consistent with any of the negative arguments. 
     2291    """ 
    21952292    def __init__(self, example, allowed_selectors): 
    21962293        # required values - needed values of attributes 
     
    21982295        self.allowed_selectors = allowed_selectors 
    21992296 
    2200     def __call__(self, oldRule, data, weightID, targetClass=-1): 
     2297    def __call__(self, oldRule, data, weight_id, target_class=-1): 
    22012298        if len(oldRule.filter.conditions) >= len(self.allowed_selectors): 
    22022299            return Orange.core.RuleList() 
    2203         newRules = Orange.core.RuleList() 
     2300        new_rules = Orange.core.RuleList() 
    22042301        for c in self.allowed_selectors: 
    22052302            # normal condition 
     
    22072304                tempRule = oldRule.clone() 
    22082305                tempRule.filter.conditions.append(c) 
    2209                 tempRule.filterAndStore(oldRule.examples, oldRule.weightID, targetClass) 
     2306                tempRule.filterAndStore(oldRule.examples, oldRule.weight_id, target_class) 
    22102307                if len(tempRule.examples)<len(oldRule.examples): 
    2211                     newRules.append(tempRule) 
     2308                    new_rules.append(tempRule) 
    22122309            # unspecified condition 
    22132310            else: 
     
    22262323                                                                                    acceptSpecial=0)) 
    22272324                    if tempRule(self.example): 
    2228                         tempRule.filterAndStore(oldRule.examples, oldRule.weightID, targetClass) 
     2325                        tempRule.filterAndStore(oldRule.examples, oldRule.weight_id, target_class) 
    22292326                        if len(tempRule.examples)<len(oldRule.examples): 
    2230                             newRules.append(tempRule) 
     2327                            new_rules.append(tempRule) 
    22312328##        print " NEW RULES " 
    2232 ##        for r in newRules: 
    2233 ##            print Orange.classification.rules.ruleToString(r) 
    2234         for r in newRules: 
     2329##        for r in new_rules: 
     2330##            print Orange.classification.rules.rule_to_string(r) 
     2331        for r in new_rules: 
    22352332            r.parentRule = oldRule 
    2236 ##            print Orange.classification.rules.ruleToString(r) 
    2237         return newRules 
    2238  
    2239  
    2240 # ********************** # 
    2241 # Probabilistic covering # 
    2242 # ********************** # 
     2333##            print Orange.classification.rules.rule_to_string(r) 
     2334        return new_rules 
     2335 
    22432336 
    22442337class CovererAndRemover_Prob(Orange.core.RuleCovererAndRemover): 
    2245     """ This class impements probabilistic covering. """ 
    2246  
    2247     def __init__(self, examples, weightID, targetClass, apriori): 
     2338    """ 
     2339    This class implements probabilistic covering. 
     2340    """ 
     2341    def __init__(self, examples, weight_id, target_class, apriori): 
    22482342        self.bestRule = [None]*len(examples) 
    22492343        self.probAttribute = Orange.core.newmetaid() 
    2250         self.aprioriProb = apriori[targetClass]/apriori.abs 
    2251         examples.addMetaAttribute(self.probAttribute, self.aprioriProb) 
     2344        self.apriori_prob = apriori[target_class]/apriori.abs 
     2345        examples.addMetaAttribute(self.probAttribute, self.apriori_prob) 
    22522346        examples.domain.addmeta(self.probAttribute, Orange.core.FloatVariable("Probs")) 
    22532347 
    2254     def getBestRules(self, currentRules, examples, weightID): 
    2255         bestRules = Orange.core.RuleList() 
     2348    def getBestRules(self, currentRules, examples, weight_id): 
     2349        best_rules = Orange.core.RuleList() 
    22562350##        for r in currentRules: 
    2257 ##            if hasattr(r.learner, "argumentRule") and not Orange.classification.rules.rule_in_set(r,bestRules): 
    2258 ##                bestRules.append(r) 
     2351##            if hasattr(r.learner, "argumentRule") and not Orange.classification.rules.rule_in_set(r,best_rules): 
     2352##                best_rules.append(r) 
    22592353        for r_i,r in enumerate(self.bestRule): 
    2260             if r and not Orange.classification.rules.rule_in_set(r,bestRules) and int(examples[r_i].getclass())==int(r.classifier.defaultValue): 
    2261                 bestRules.append(r) 
    2262         return bestRules 
    2263  
    2264     def __call__(self, rule, examples, weights, targetClass): 
     2354            if r and not Orange.classification.rules.rule_in_set(r,best_rules) and int(examples[r_i].getclass())==int(r.classifier.default_value): 
     2355                best_rules.append(r) 
     2356        return best_rules 
     2357 
     2358    def __call__(self, rule, examples, weights, target_class): 
    22652359        if hasattr(rule, "learner") and hasattr(rule.learner, "arg_example"): 
    22662360            example = rule.learner.arg_example 
     
    22862380        p = 0.0 
    22872381        for ei, e in enumerate(examples): 
    2288             p += (e[self.probAttribute] - self.aprioriProb)/(1.0-self.aprioriProb) 
     2382            p += (e[self.probAttribute] - self.apriori_prob)/(1.0-self.apriori_prob) 
    22892383        return p/len(examples) 
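The quality returned above averages, over the covered examples, the gain of each example's current probability over the a priori probability, rescaled by `1 - apriori`. A simplified, standalone sketch of that computation (`covering_quality` is a hypothetical helper name):

```python
def covering_quality(covered_probs, apriori_prob):
    """Average gain of per-example class probabilities over the a priori
    probability, rescaled so that a probability of 1.0 scores 1.0 and the
    a priori probability scores 0.0."""
    gains = [(p - apriori_prob) / (1.0 - apriori_prob)
             for p in covered_probs]
    return sum(gains) / len(gains)
```

Examples already predicted at the a priori level contribute nothing, so the measure rewards rules that raise probabilities above chance.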
    22902384 
    2291  
    2292 # **************************************** # 
    2293 # Estimation of extreme value distribution # 
    2294 # **************************************** # 
    2295  
    2296 # Miscellaneous - utility functions 
    2297 def avg(l): 
    2298     return sum(l)/len(l) if l else 0. 
    2299  
    2300 def var(l): 
    2301     if len(l)<2: 
    2302         return 0. 
    2303     av = avg(l) 
    2304     return sum([math.pow(li-av,2) for li in l])/(len(l)-1) 
    2305  
    2306 def perc(l,p): 
    2307     l.sort() 
    2308     return l[int(math.floor(p*len(l)))] 
    2309  
    23102385class EVDFitter: 
    2311     """ Randomizes a dataset and fits an extreme value distribution onto it. """ 
    2312  
     2386    """ 
     2387    Randomizes a dataset and fits an extreme value distribution to it. 
     2388    """ 
    23132389    def __init__(self, learner, n=200, randomseed=100): 
    23142390        self.learner = learner 
     
    23212397        cl_num = newData.toNumpy("C") 
    23222398        random.shuffle(cl_num[0][:,0]) 
    2323         clData = Orange.core.ExampleTable(Orange.core.Domain([newData.domain.classVar]),cl_num[0]) 
     2399        clData = Orange.core.ExampleTable(Orange.core.Domain([newData.domain.class_var]),cl_num[0]) 
    23242400        for d_i,d in enumerate(newData): 
    2325             d[newData.domain.classVar] = clData[d_i][newData.domain.classVar] 
     2401            d[newData.domain.class_var] = clData[d_i][newData.domain.class_var] 
    23262402        return newData 
    23272403 
     
    23452421 
    23462422    def prepare_learner(self): 
    2347         self.oldStopper = self.learner.ruleFinder.ruleStoppingValidator 
    2348         self.evaluator = self.learner.ruleFinder.evaluator 
    2349         self.refiner = self.learner.ruleFinder.refiner 
    2350         self.validator = self.learner.ruleFinder.validator 
    2351         self.ruleFilter = self.learner.ruleFinder.ruleFilter 
    2352         self.learner.ruleFinder.validator = None 
    2353         self.learner.ruleFinder.evaluator = Orange.core.RuleEvaluator_LRS() 
    2354         self.learner.ruleFinder.evaluator.storeRules = True 
    2355         self.learner.ruleFinder.ruleStoppingValidator = Orange.core.RuleValidator_LRS(alpha=1.0) 
    2356         self.learner.ruleFinder.ruleStoppingValidator.max_rule_complexity = 0 
    2357         self.learner.ruleFinder.refiner = Orange.core.RuleBeamRefiner_Selector() 
    2358         self.learner.ruleFinder.ruleFilter = Orange.core.RuleBeamFilter_Width(width = 1) 
     2423        self.oldStopper = self.learner.rule_finder.rule_stoppingValidator 
     2424        self.evaluator = self.learner.rule_finder.evaluator 
     2425        self.refiner = self.learner.rule_finder.refiner 
     2426        self.validator = self.learner.rule_finder.validator 
     2427        self.ruleFilter = self.learner.rule_finder.ruleFilter 
     2428        self.learner.rule_finder.validator = None 
     2429        self.learner.rule_finder.evaluator = Orange.core.RuleEvaluator_LRS() 
     2430        self.learner.rule_finder.evaluator.storeRules = True 
     2431        self.learner.rule_finder.rule_stoppingValidator = Orange.core.RuleValidator_LRS(alpha=1.0) 
     2432        self.learner.rule_finder.rule_stoppingValidator.max_rule_complexity = 0 
     2433        self.learner.rule_finder.refiner = Orange.core.RuleBeamRefiner_Selector() 
     2434        self.learner.rule_finder.ruleFilter = Orange.core.RuleBeamFilter_Width(width = 1) 
    23592435 
    23602436 
    23612437    def restore_learner(self): 
    2362         self.learner.ruleFinder.evaluator = self.evaluator 
    2363         self.learner.ruleFinder.ruleStoppingValidator = self.oldStopper 
    2364         self.learner.ruleFinder.refiner = self.refiner 
    2365         self.learner.ruleFinder.validator = self.validator 
    2366         self.learner.ruleFinder.ruleFilter = self.ruleFilter 
    2367  
    2368     def computeEVD(self, data, weightID=0, target_class=0, progress=None): 
     2438        self.learner.rule_finder.evaluator = self.evaluator 
     2439        self.learner.rule_finder.rule_stoppingValidator = self.oldStopper 
     2440        self.learner.rule_finder.refiner = self.refiner 
     2441        self.learner.rule_finder.validator = self.validator 
     2442        self.learner.rule_finder.ruleFilter = self.ruleFilter 
     2443 
     2444    def computeEVD(self, data, weight_id=0, target_class=0, progress=None): 
    23692445        # initialize random seed to make experiments repeatable 
    23702446        random.seed(self.randomseed) 
     
    23742450 
    23752451        # loop through N (sampling repetitions) 
    2376         extremeDists=[(0, 1, [])] 
    2377         self.learner.ruleFinder.ruleStoppingValidator.max_rule_complexity = self.oldStopper.max_rule_complexity 
     2452        extreme_dists=[(0, 1, [])] 
     2453        self.learner.rule_finder.rule_stoppingValidator.max_rule_complexity = self.oldStopper.max_rule_complexity 
    23782454        maxVals = [[] for l in range(self.oldStopper.max_rule_complexity)] 
    23792455        for d_i in range(self.n): 
     
    23842460            # create data set (remove and randomize) 
    23852461            tempData = self.createRandomDataSet(data) 
    2386             self.learner.ruleFinder.evaluator.rules = Orange.core.RuleList() 
     2462            self.learner.rule_finder.evaluator.rules = Orange.core.RuleList() 
    23872463            # Next, learn a rule 
    2388             self.learner.ruleFinder(tempData,weightID,target_class, Orange.core.RuleList()) 
     2464            self.learner.rule_finder(tempData,weight_id,target_class, Orange.core.RuleList()) 
    23892465            for l in range(self.oldStopper.max_rule_complexity): 
    2390                 qs = [r.quality for r in self.learner.ruleFinder.evaluator.rules if r.complexity == l+1] 
     2466                qs = [r.quality for r in self.learner.rule_finder.evaluator.rules if r.complexity == l+1] 
    23912467                if qs: 
    23922468                    maxVals[l].append(max(qs)) 
     
    23972473        for mi,m in enumerate(maxVals): 
    23982474            mu, beta, perc = self.compParameters(m,mu,beta) 
    2399             extremeDists.append((mu, beta, perc)) 
    2400             extremeDists.extend([(0,1,[])]*(mi)) 
     2475            extreme_dists.append((mu, beta, perc)) 
     2476            extreme_dists.extend([(0,1,[])]*(mi)) 
    24012477 
    24022478        self.restore_learner() 
    2403         return self.createEVDistList(extremeDists) 
    2404  
    2405 # ************************* # 
    2406 # Rule based classification # 
    2407 # ************************* # 
     2479        return self.createEVDistList(extreme_dists) 
    24082480 
    24092481class CrossValidation: 
    2410     def __init__(self, folds=5, randomGenerator = 150): 
     2482    def __init__(self, folds=5, random_generator = 150): 
    24112483        self.folds = folds 
    2412         self.randomGenerator = randomGenerator 
     2484        self.random_generator = random_generator 
    24132485 
    24142486    def __call__(self, learner, examples, weight): 
    2415         res = orngTest.crossValidation([learner], (examples, weight), folds = self.folds, randomGenerator = self.randomGenerator) 
     2487        res = orngTest.crossValidation([learner], (examples, weight), folds = self.folds, random_generator = self.random_generator) 
    24162488        return self.get_prob_from_res(res, examples) 
    24172489 
    24182490    def get_prob_from_res(self, res, examples): 
    2419         probDist = Orange.core.DistributionList() 
     2491        prob_dist = Orange.core.DistributionList() 
    24202492        for tex in res.results: 
    2421             d = Orange.core.Distribution(examples.domain.classVar) 
     2493            d = Orange.core.Distribution(examples.domain.class_var) 
    24222494            for di in range(len(d)): 
    24232495                d[di] = tex.probabilities[0][di] 
    2424             probDist.append(d) 
    2425         return probDist 
    2426  
     2496            prob_dist.append(d) 
     2497        return prob_dist 
     2498 
     2499@deprecated_members({"sortRules": "sort_rules"}) 
    24272500class PILAR: 
    2428     """ PILAR (Probabilistic improvement of learning algorithms with rules) """ 
     2501    """ 
     2502    PILAR (Probabilistic improvement of learning algorithms with rules). 
     2503    """ 
    24292504    def __init__(self, alternative_learner = None, min_cl_sig = 0.5, min_beta = 0.0, set_prefix_rules = False, optimize_betas = True): 
    24302505        self.alternative_learner = alternative_learner 
     
    24382513        rules = self.add_null_rule(rules, examples, weight) 
    24392514        if self.alternative_learner: 
    2440             probDist = self.selected_evaluation(self.alternative_learner, examples, weight) 
     2515            prob_dist = self.selected_evaluation(self.alternative_learner, examples, weight) 
    24412516            classifier = self.alternative_learner(examples,weight) 
    2442 ##            probDist = Orange.core.DistributionList() 
     2517##            prob_dist = Orange.core.DistributionList() 
    24432518##            for e in examples: 
    2444 ##                probDist.append(classifier(e,Orange.core.GetProbabilities)) 
    2445             cl = Orange.core.RuleClassifier_logit(rules, self.min_cl_sig, self.min_beta, examples, weight, self.set_prefix_rules, self.optimize_betas, classifier, probDist) 
     2519##                prob_dist.append(classifier(e,Orange.core.GetProbabilities)) 
     2520            cl = Orange.core.RuleClassifier_logit(rules, self.min_cl_sig, self.min_beta, examples, weight, self.set_prefix_rules, self.optimize_betas, classifier, prob_dist) 
    24462521        else: 
    24472522            cl = Orange.core.RuleClassifier_logit(rules, self.min_cl_sig, self.min_beta, examples, weight, self.set_prefix_rules, self.optimize_betas) 
     
    24512526            cl.rules[ri].setattr("beta",cl.ruleBetas[ri]) 
    24522527##            if cl.ruleBetas[ri] > 0: 
    2453 ##                print Orange.classification.rules.ruleToString(r), r.quality, cl.ruleBetas[ri] 
     2528##                print Orange.classification.rules.rule_to_string(r), r.quality, cl.ruleBetas[ri] 
    24542529        cl.all_rules = cl.rules 
    2455         cl.rules = self.sortRules(cl.rules) 
     2530        cl.rules = self.sort_rules(cl.rules) 
    24562531        cl.ruleBetas = [r.beta for r in cl.rules] 
    24572532        cl.setattr("data", examples) 
     
    24592534 
    24602535    def add_null_rule(self, rules, examples, weight): 
    2461         for cl in examples.domain.classVar: 
     2536        for cl in examples.domain.class_var: 
    24622537            tmpRle = Orange.core.Rule() 
    24632538            tmpRle.filter = Orange.core.Filter_values(domain = examples.domain) 
    24642539            tmpRle.parentRule = None 
    24652540            tmpRle.filterAndStore(examples,weight,int(cl)) 
    2466             tmpRle.quality = tmpRle.classDistribution[int(cl)]/tmpRle.classDistribution.abs 
     2541            tmpRle.quality = tmpRle.class_distribution[int(cl)]/tmpRle.class_distribution.abs 
    24672542            rules.append(tmpRle) 
    24682543        return rules 
    24692544         
    2470     def sortRules(self, rules): 
    2471         newRules = Orange.core.RuleList() 
     2545    def sort_rules(self, rules): 
     2546        new_rules = Orange.core.RuleList() 
    24722547        foundRule = True 
    24732548        while foundRule: 
     
    24752550            bestRule = None 
    24762551            for r in rules: 
    2477                 if r in newRules: 
     2552                if r in new_rules: 
    24782553                    continue 
    24792554                if r.beta < 0.01 and r.beta > -0.01: 
     
    24922567                    continue 
    24932568            if bestRule: 
    2494                 newRules.append(bestRule) 
    2495         return newRules      
    2496  
    2497  
    2498 class CN2UnorderedClassifier(Orange.core.RuleClassifier): 
    2499     """ Classification from rules as in CN2. """ 
    2500     def __init__(self, rules, examples, weightID = 0, **argkw): 
     2569                new_rules.append(bestRule) 
     2570        return new_rules 
     2571 
     2572 
     2573@deprecated_members({"defaultClassIndex": "default_class_index"}) 
     2574class RuleClassifier_bestRule(Orange.core.RuleClassifier): 
     2575    """ 
     2576    A very simple classifier: it takes the best rule of each class and 
     2577    normalizes probabilities. 
     2578    """ 
     2579    def __init__(self, rules, examples, weight_id = 0, **argkw): 
    25012580        self.rules = rules 
    25022581        self.examples = examples 
    2503         self.weightID = weightID 
    2504         self.prior = Orange.core.Distribution(examples.domain.classVar, examples, weightID) 
     2582        self.apriori = Orange.core.Distribution(examples.domain.class_var,examples,weight_id) 
     2583        self.apriori_prob = [a/self.apriori.abs for a in self.apriori] 
     2584        self.weight_id = weight_id 
    25052585        self.__dict__.update(argkw) 
    2506  
    2507     def __call__(self, example, result_type=Orange.core.GetValue, retRules = False): 
    2508         # iterate through the set of induced rules: self.rules and sum their distributions  
    2509         ret_dist = self.sum_distributions([r for r in self.rules if r(example)]) 
    2510         # normalize 
    2511         a = sum(ret_dist) 
    2512         for ri, r in enumerate(ret_dist): 
    2513             ret_dist[ri] = ret_dist[ri]/a 
    2514 ##        ret_dist.normalize() 
    2515         # return value 
    2516         if result_type == Orange.core.GetValue: 
    2517           return ret_dist.modus() 
    2518         if result_type == Orange.core.GetProbabilities: 
    2519           return ret_dist 
    2520         return (ret_dist.modus(),ret_dist) 
    2521  
    2522     def sum_distributions(self, rules): 
    2523         if not rules: 
    2524             return self.prior 
    2525         empty_disc = Orange.core.Distribution(rules[0].examples.domain.classVar) 
    2526         for r in rules: 
    2527             for i,d in enumerate(r.classDistribution): 
    2528                 empty_disc[i] = empty_disc[i] + d 
    2529         return empty_disc 
    2530  
    2531     def __str__(self): 
    2532         retStr = "" 
     2586        self.default_class_index = -1 
     2587 
     2588    def __call__(self, example, result_type=Orange.classification.Classifier.GetValue, retRules = False): 
     2589        example = Orange.core.Example(self.examples.domain,example) 
     2590        tempDist = Orange.core.Distribution(example.domain.class_var) 
     2591        best_rules = [None]*len(example.domain.class_var.values) 
     2592 
    25332593        for r in self.rules: 
    2534             retStr += Orange.classification.rules.ruleToString(r)+" "+str(r.classDistribution)+"\n" 
    2535         return retStr 
    2536  
    2537  
    2538 class RuleClassifier_bestRule(Orange.core.RuleClassifier): 
    2539     """ A very simple classifier, it takes the best rule of each class and normalizes probabilities. """ 
    2540     def __init__(self, rules, examples, weightID = 0, **argkw): 
    2541         self.rules = rules 
    2542         self.examples = examples 
    2543         self.apriori = Orange.core.Distribution(examples.domain.classVar,examples,weightID) 
    2544         self.aprioriProb = [a/self.apriori.abs for a in self.apriori] 
    2545         self.weightID = weightID 
    2546         self.__dict__.update(argkw) 
    2547         self.defaultClassIndex = -1 
    2548  
    2549     def __call__(self, example, result_type=Orange.core.GetValue, retRules = False): 
    2550         example = Orange.core.Example(self.examples.domain,example) 
    2551         tempDist = Orange.core.Distribution(example.domain.classVar) 
    2552         bestRules = [None]*len(example.domain.classVar.values) 
    2553  
    2554         for r in self.rules: 
    2555             if r(example) and not self.defaultClassIndex == int(r.classifier.defaultVal) and \ 
    2556                (not bestRules[int(r.classifier.defaultVal)] or r.quality>tempDist[r.classifier.defaultVal]): 
    2557                 tempDist[r.classifier.defaultVal] = r.quality 
    2558                 bestRules[int(r.classifier.defaultVal)] = r 
    2559         for b in bestRules: 
     2594            if r(example) and not self.default_class_index == int(r.classifier.default_val) and \ 
     2595               (not best_rules[int(r.classifier.default_val)] or r.quality>tempDist[r.classifier.default_val]): 
     2596                tempDist[r.classifier.default_val] = r.quality 
     2597                best_rules[int(r.classifier.default_val)] = r 
     2598        for b in best_rules: 
    25602599            if b: 
    25612600                used = getattr(b,"used",0.0) 
    25622601                b.setattr("used",used+1) 
    2563         nonCovPriorSum = sum([tempDist[i] == 0. and self.aprioriProb[i] or 0. for i in range(len(self.aprioriProb))]) 
     2602        nonCovPriorSum = sum([tempDist[i] == 0. and self.apriori_prob[i] or 0. for i in range(len(self.apriori_prob))]) 
    25642603        if tempDist.abs < 1.: 
    25652604            residue = 1. - tempDist.abs 
    2566             for a_i,a in enumerate(self.aprioriProb): 
     2605            for a_i,a in enumerate(self.apriori_prob): 
    25672606                if tempDist[a_i] == 0.: 
    2568                     tempDist[a_i]=self.aprioriProb[a_i]*residue/nonCovPriorSum 
    2569             finalDist = tempDist #Orange.core.Distribution(example.domain.classVar) 
     2607                    tempDist[a_i]=self.apriori_prob[a_i]*residue/nonCovPriorSum 
     2608            final_dist = tempDist #Orange.core.Distribution(example.domain.class_var) 
    25702609        else: 
    25712610            tempDist.normalize() # prior probability 
    2572             tmpExamples = Orange.core.ExampleTable(self.examples) 
    2573             for r in bestRules: 
     2611            tmp_examples = Orange.core.ExampleTable(self.examples) 
     2612            for r in best_rules: 
    25742613                if r: 
    2575                     tmpExamples = r.filter(tmpExamples) 
    2576             tmpDist = Orange.core.Distribution(tmpExamples.domain.classVar,tmpExamples,self.weightID) 
     2614                    tmp_examples = r.filter(tmp_examples) 
     2615            tmpDist = Orange.core.Distribution(tmp_examples.domain.class_var,tmp_examples,self.weight_id) 
    25772616            tmpDist.normalize() 
    2578             probs = [0.]*len(self.examples.domain.classVar.values) 
    2579             for i in range(len(self.examples.domain.classVar.values)): 
     2617            probs = [0.]*len(self.examples.domain.class_var.values) 
     2618            for i in range(len(self.examples.domain.class_var.values)): 
    25802619                probs[i] = tmpDist[i]+tempDist[i]*2 
    2581             finalDist = Orange.core.Distribution(self.examples.domain.classVar) 
    2582             for cl_i,cl in enumerate(self.examples.domain.classVar): 
    2583                 finalDist[cl] = probs[cl_i] 
    2584             finalDist.normalize() 
     2620            final_dist = Orange.core.Distribution(self.examples.domain.class_var) 
     2621            for cl_i,cl in enumerate(self.examples.domain.class_var): 
     2622                final_dist[cl] = probs[cl_i] 
     2623            final_dist.normalize() 
    25852624                 
    25862625        if retRules: # Do you want to return rules with classification? 
    2587             if result_type == Orange.core.GetValue: 
    2588               return (finalDist.modus(),bestRules) 
     2626            if result_type == Orange.classification.Classifier.GetValue: 
     2627              return (final_dist.modus(),best_rules) 
    25892628            if result_type == Orange.core.GetProbabilities: 
    2590               return (finalDist, bestRules) 
    2591             return (finalDist.modus(),finalDist, bestRules) 
    2592         if result_type == Orange.core.GetValue: 
    2593           return finalDist.modus() 
     2629              return (final_dist, best_rules) 
     2630            return (final_dist.modus(),final_dist, best_rules) 
     2631        if result_type == Orange.classification.Classifier.GetValue: 
     2632          return final_dist.modus() 
    25942633        if result_type == Orange.core.GetProbabilities: 
    2595           return finalDist 
    2596         return (finalDist.modus(),finalDist) 
     2634          return final_dist 
     2635        return (final_dist.modus(),final_dist) 
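The hunk above renames camelCase attributes (`ruleFinder`, `ruleStoppingValidator`, `classVar`, …) to snake_case and wraps the affected classes with `@deprecated_members({...})` so the old names keep working. A minimal sketch of how such an aliasing decorator can be built with properties follows; this is a generic illustration of the pattern, not Orange's actual `deprecated_members` implementation:

```python
import warnings

def deprecated_members(aliases):
    # Class decorator: for each {old_name: new_name} pair, install a
    # property under the old name that forwards reads and writes to the
    # new attribute, emitting a DeprecationWarning on reads.
    def decorate(cls):
        for old, new in aliases.items():
            def make_property(old_name, new_name):
                def getter(self):
                    warnings.warn("'%s' is deprecated; use '%s'" % (old_name, new_name),
                                  DeprecationWarning, stacklevel=2)
                    return getattr(self, new_name)
                def setter(self, value):
                    setattr(self, new_name, value)
                return property(getter, setter)
            setattr(cls, old, make_property(old, new))
        return cls
    return decorate

# Hypothetical example class, mirroring the {"sortRules": "sort_rules"}
# mapping used on PILAR above.
@deprecated_members({"sortRules": "sort_rules"})
class Sorter(object):
    def sort_rules(self, rules):
        return sorted(rules)
```

With this in place, `Sorter().sortRules([3, 1, 2])` reaches `sort_rules` through the deprecated alias and returns `[1, 2, 3]`, which is the behavior the changeset relies on for backward compatibility.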
  • orange/doc/Orange/rst/code/rules-cn2.py

    r7366 r7802  
    1818# All rule-based classifiers can have their rules printed out like this: 
    1919for r in cn2_classifier.rules: 
    20     print Orange.classification.rules.ruleToString(r) 
     20    print Orange.classification.rules.rule_to_string(r) 
  • orange/doc/Orange/rst/code/rules-customized.py

    r7366 r7802  
    88 
    99learner = Orange.classification.rules.RuleLearner() 
    10 learner.ruleFinder = Orange.classification.rules.RuleBeamFinder() 
    11 learner.ruleFinder.evaluator = Orange.classification.rules.MEstimateEvaluator(m=50) 
     10learner.rule_finder = Orange.classification.rules.RuleBeamFinder() 
     11learner.rule_finder.evaluator = Orange.classification.rules.MEstimateEvaluator(m=50) 
    1212 
    1313table =  Orange.data.Table("titanic") 
     
    1515 
    1616for r in classifier.rules: 
    17     print Orange.classification.rules.ruleToString(r) 
     17    print Orange.classification.rules.rule_to_string(r) 
    1818 
    19 learner.ruleFinder.ruleStoppingValidator = \ 
     19learner.rule_finder.rule_stopping_validator = \ 
    2020    Orange.classification.rules.RuleValidator_LRS(alpha=0.01, 
    2121                             min_coverage=10, max_rule_complexity = 2) 
    22 learner.ruleFinder.ruleFilter = \ 
     22learner.rule_finder.rule_filter = \ 
    2323    Orange.classification.rules.RuleBeamFilter_Width(width = 50) 
    2424 
     
    2626 
    2727for r in classifier.rules: 
    28     print Orange.classification.rules.ruleToString(r) 
     28    print Orange.classification.rules.rule_to_string(r) 
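The customized learner above installs `RuleBeamFilter_Width(width = 50)`, a beam filter that keeps only the best candidate rules between refinement steps. A pure-Python sketch of that idea, with a hypothetical `Rule` stand-in rather than Orange's rule objects:

```python
def beam_filter_width(rules, width):
    # Keep only the `width` highest-quality rules, as a fixed-width
    # beam filter would between refinement steps.
    return sorted(rules, key=lambda r: r.quality, reverse=True)[:width]

class Rule(object):
    # Minimal stand-in for a rule carrying a quality estimate.
    def __init__(self, name, quality):
        self.name, self.quality = name, quality

# With width=2, only the two best candidates survive into the next step.
beam = beam_filter_width([Rule("a", 0.4), Rule("b", 0.9), Rule("c", 0.7)], width=2)
```

Here `[r.name for r in beam]` is `["b", "c"]`; a larger `width` (50 in the example script) trades induction time for a more exhaustive search.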
  • orange/orngCN2.py

    r7367 r7802  
    1 from Orange.classification.rules import ruleToString 
     1from Orange.classification.rules import rule_to_string as ruleToString 
    22from Orange.classification.rules import LaplaceEvaluator 
    33from Orange.classification.rules import WRACCEvaluator 
     
    2323from Orange.classification.rules import perc 
    2424from Orange.classification.rules import createRandomDataSet 
    25 from Orange.classification.rules import compParameters                     
     25from Orange.classification.rules import compParameters 
    2626from Orange.classification.rules import computeDists 
    2727from Orange.classification.rules import createEVDistList 
Note: See TracChangeset for help on using the changeset viewer.