Group: nz.ac.waikato.cms.weka

probabilisticSignificanceAE

nz.ac.waikato.cms.weka » probabilisticSignificanceAE

Evaluates the worth of an attribute by computing the Probabilistic Significance as a two-way function (attribute-classes and classes-attribute association). For more information see: Amir Ahmad, Lipika Dey (2004). A feature selection technique for classificatory analysis.
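
All of the artifacts in this group are official Weka packages, so rather than pulling them in as Maven dependencies they are normally installed through Weka's built-in package manager. A minimal sketch, assuming the repository package name matches the artifact id and using the weka.core.WekaPackageManager API of Weka 3.7+ (the "Latest" version string and the progress-stream arguments are assumptions to verify against your Weka version):

    import weka.core.WekaPackageManager;

    public class InstallPackageSketch {
      public static void main(String[] args) throws Exception {
        // Refresh the local package metadata cache from the central Weka repository.
        WekaPackageManager.refreshCache(System.out);
        // Install the latest published version of the package by name.
        WekaPackageManager.installPackageFromRepository(
            "probabilisticSignificanceAE", "Latest", System.out);
        // Load all installed packages so their classes become visible to this JVM.
        WekaPackageManager.loadPackages(false);
      }
    }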

racedIncrementalLogitBoost

nz.ac.waikato.cms.weka » racedIncrementalLogitBoost

Classifier for incremental learning of large datasets by way of racing logit-boosted committees. For more information see: Eibe Frank, Geoffrey Holmes, Richard Kirkby, Mark Hall: Racing committees for large datasets. In: Proceedings of the 5th International Conference on Discovery Science, 153-164, 2002.
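
Because the committees are updated instance by instance, this classifier can be trained in a streaming fashion. A rough sketch of Weka's usual incremental-learning loop, assuming the package provides weka.classifiers.meta.RacedIncrementalLogitBoost (class name and file path are illustrative):

    import java.io.File;
    import weka.classifiers.meta.RacedIncrementalLogitBoost;
    import weka.core.Instance;
    import weka.core.Instances;
    import weka.core.converters.ArffLoader;

    public class RacedBoostStreamSketch {
      public static void main(String[] args) throws Exception {
        // Stream the training data instead of loading it all into memory.
        ArffLoader loader = new ArffLoader();
        loader.setFile(new File("train.arff"));
        Instances header = loader.getStructure();
        header.setClassIndex(header.numAttributes() - 1);

        RacedIncrementalLogitBoost cls = new RacedIncrementalLogitBoost();
        cls.buildClassifier(header);            // initialise from the header only

        Instance inst;
        while ((inst = loader.getNextInstance(header)) != null) {
          cls.updateClassifier(inst);           // one incremental update per instance
        }
        System.out.println(cls);
      }
    }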

raceSearch

nz.ac.waikato.cms.weka » raceSearch

Races the cross-validation error of competing attribute subsets. Use in conjunction with a ClassifierSubsetEval. RaceSearch has four modes: forward selection races all single attribute additions to a base set (initially no attributes), selects the winner to become the new base set and then iterates until there is no improvement over the base set. Backward elimination is similar but the initial base set has all attributes included and races all single attribute deletions. Schemata search is a bit different. In each iteration a series of races is run in parallel. Each race in a set determines whether a particular attribute should be included or not, i.e. the race is between the attribute being "in" or "out". The other attributes for this race are included or excluded randomly at each point in the race.
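
A sketch of how the search is typically wired together, assuming the evaluator and search classes are weka.attributeSelection.ClassifierSubsetEval and weka.attributeSelection.RaceSearch (ClassifierSubsetEval ships in a separate package in recent Weka releases; the file name is illustrative):

    import weka.attributeSelection.AttributeSelection;
    import weka.attributeSelection.ClassifierSubsetEval;
    import weka.attributeSelection.RaceSearch;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class RaceSearchSketch {
      public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("train.arff");
        data.setClassIndex(data.numAttributes() - 1);

        ClassifierSubsetEval eval = new ClassifierSubsetEval(); // races the CV error of a base classifier
        RaceSearch search = new RaceSearch();  // the four modes described above are selectable via options

        AttributeSelection sel = new AttributeSelection();
        sel.setEvaluator(eval);
        sel.setSearch(search);
        sel.SelectAttributes(data);
        System.out.println(sel.toResultsString());
      }
    }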

realAdaBoost

nz.ac.waikato.cms.weka » realAdaBoost

Class for boosting a 2-class classifier using the Real AdaBoost method. For more information, see J. Friedman, T. Hastie, R. Tibshirani (2000). Additive Logistic Regression: a Statistical View of Boosting. Annals of Statistics. 28(2):337-407.
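
A minimal sketch of boosting a weak learner with this package, assuming the class is weka.classifiers.meta.RealAdaBoost (the dataset name and parameter values are illustrative; the data must have exactly two classes):

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.meta.RealAdaBoost;
    import weka.classifiers.trees.DecisionStump;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class RealAdaBoostSketch {
      public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("binary.arff");   // 2-class dataset
        data.setClassIndex(data.numAttributes() - 1);

        RealAdaBoost booster = new RealAdaBoost();
        booster.setClassifier(new DecisionStump());         // weak base learner
        booster.setNumIterations(50);                       // number of boosting rounds

        Evaluation ev = new Evaluation(data);
        ev.crossValidateModel(booster, data, 10, new Random(1));
        System.out.println(ev.toSummaryString());
      }
    }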

ridor

nz.ac.waikato.cms.weka » ridor

An implementation of a RIpple-DOwn Rule learner. It generates a default rule first and then the exceptions for the default rule with the least (weighted) error rate. Then it generates the "best" exceptions for each exception and iterates until pure. Thus it performs a tree-like expansion of exceptions. The exceptions are a set of rules that predict classes other than the default. IREP is used to generate the exceptions. For more information about Ripple-Down Rules, see: Brian R. Gaines, Paul Compton (1995). Induction of Ripple-Down Rules Applied to Modeling Large Databases. J. Intell. Inf. Syst. 5(3):211-228.

rotationForest

nz.ac.waikato.cms.weka » rotationForest

An ensemble learning method inspired by bagging and random sub-spaces. Trains an ensemble of decision trees on random subspaces of the data, where each subspace has been transformed using principal components analysis.
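
A short sketch of training the ensemble and evaluating it on a held-out test set, assuming the class is weka.classifiers.meta.RotationForest (file names are illustrative):

    import weka.classifiers.Evaluation;
    import weka.classifiers.meta.RotationForest;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class RotationForestSketch {
      public static void main(String[] args) throws Exception {
        Instances train = DataSource.read("train.arff");
        Instances test = DataSource.read("test.arff");
        train.setClassIndex(train.numAttributes() - 1);
        test.setClassIndex(test.numAttributes() - 1);

        RotationForest forest = new RotationForest();  // base learner and ensemble size are configurable
        forest.buildClassifier(train);

        Evaluation ev = new Evaluation(train);
        ev.evaluateModel(forest, test);
        System.out.println(ev.toSummaryString());
      }
    }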

scriptingClassifiers

nz.ac.waikato.cms.weka » scriptingClassifiers

Wrapper classifiers for Jython and Groovy code. Even though the classifier is serializable, the trained classifier cannot be stored persistently. That is, one cannot save a model to a file and re-load it later to make predictions.

sequentialInformationalBottleneckClusterer

nz.ac.waikato.cms.weka » sequentialInformationalBottleneckClusterer

Clusters data using the sequential information bottleneck (sIB) algorithm. Note: only the hard clustering scheme is supported; sIB assigns each instance to the cluster with the minimum cost/distance to that instance. The trade-off parameter beta is set to infinity, so 1/beta is zero. For more information, see: Noam Slonim, Nir Friedman, Naftali Tishby: Unsupervised document classification using sequential information maximization. In: Proceedings of the 25th International ACM SIGIR Conference on Research and Development in Information Retrieval, 129-136, 2002.
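
A sketch of running the clusterer, assuming the implementing class is weka.clusterers.sIB and that it exposes a setNumClusters option (both are assumptions; the file name is illustrative):

    import weka.clusterers.ClusterEvaluation;
    import weka.clusterers.sIB;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class SIBSketch {
      public static void main(String[] args) throws Exception {
        // Load the data without setting a class index; sIB does hard clustering only.
        Instances data = DataSource.read("documents.arff");

        sIB clusterer = new sIB();
        clusterer.setNumClusters(5);      // number of hard clusters (assumed setter name)
        clusterer.buildClusterer(data);

        ClusterEvaluation ev = new ClusterEvaluation();
        ev.setClusterer(clusterer);
        ev.evaluateClusterer(data);
        System.out.println(ev.clusterResultsToString());
      }
    }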

simpleCART

nz.ac.waikato.cms.weka » simpleCART

Class implementing minimal cost-complexity pruning. Note: when dealing with missing values, the "fractional instances" method is used instead of the surrogate split method. For more information, see: Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone (1984). Classification and Regression Trees. Wadsworth International Group, Belmont, California.

simpleEducationalLearningSchemes

nz.ac.waikato.cms.weka » simpleEducationalLearningSchemes

Simple learning schemes for educational purposes (Prism, Id3, IB1 and NaiveBayesSimple).

SPegasos

nz.ac.waikato.cms.weka » SPegasos

Implements the stochastic variant of the Pegasos (Primal Estimated sub-GrAdient SOlver for SVM) method of Shalev-Shwartz et al. (2007). This implementation globally replaces all missing values and transforms nominal attributes into binary ones. It also normalizes all attributes, so the coefficients in the output are based on the normalized data. It can minimize either the hinge loss (SVM) or the log loss (logistic regression). For more information, see S. Shalev-Shwartz, Y. Singer, N. Srebro: Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In: 24th International Conference on Machine Learning, 807-814, 2007.
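
A sketch of switching from the default hinge loss to log loss, assuming the class is weka.classifiers.functions.SPegasos and that the -F (loss function) and -L (lambda) option flags match the package's command-line documentation (both flag names are assumptions to verify):

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.functions.SPegasos;
    import weka.core.Instances;
    import weka.core.Utils;
    import weka.core.converters.ConverterUtils.DataSource;

    public class SPegasosSketch {
      public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("binary.arff");
        data.setClassIndex(data.numAttributes() - 1);

        SPegasos svm = new SPegasos();
        // -F 1: log loss (logistic regression) instead of the default hinge loss;
        // -L:   regularisation constant lambda. Flag names assumed from the package docs.
        svm.setOptions(Utils.splitOptions("-F 1 -L 1.0E-4"));

        Evaluation ev = new Evaluation(data);
        ev.crossValidateModel(svm, data, 10, new Random(1));
        System.out.println(ev.toClassDetailsString());
      }
    }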

SVMAttributeEval

nz.ac.waikato.cms.weka » SVMAttributeEval

Evaluates the worth of an attribute by using an SVM classifier. Attributes are ranked by the square of the weight assigned by the SVM. Attribute selection for multiclass problems is handled by ranking attributes for each class separately using a one-vs-all method and then "dealing" from the top of each pile to give a final ranking. For more information see: I. Guyon, J. Weston, S. Barnhill, V. Vapnik (2002). Gene selection for cancer classification using support vector machines. Machine Learning. 46:389-422.
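
Because this evaluator produces a ranking rather than a subset, it is combined with the Ranker search. A sketch, assuming the class is weka.attributeSelection.SVMAttributeEval (the file name and the number of attributes to keep are illustrative):

    import weka.attributeSelection.AttributeSelection;
    import weka.attributeSelection.Ranker;
    import weka.attributeSelection.SVMAttributeEval;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class SVMRankingSketch {
      public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("genes.arff");
        data.setClassIndex(data.numAttributes() - 1);

        SVMAttributeEval eval = new SVMAttributeEval();   // squared SVM weight per attribute
        Ranker ranker = new Ranker();
        ranker.setNumToSelect(20);                        // keep the 20 top-ranked attributes

        AttributeSelection sel = new AttributeSelection();
        sel.setEvaluator(eval);
        sel.setSearch(ranker);
        sel.SelectAttributes(data);
        System.out.println(sel.toResultsString());
      }
    }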

tabuAndScatterSearch

nz.ac.waikato.cms.weka » tabuAndScatterSearch

Search methods contributed by Adrian Pino (ScatterSearchV1, TabuSearch). ScatterSearch: performs a Scatter Search through the space of attribute subsets. It starts with a population of many significant and diverse subsets and stops when the result is higher than a given threshold or there is no further improvement. For more information see: Felix Garcia Lopez (2004). Solving feature subset selection problem by a Parallel Scatter Search. Elsevier. TabuSearch: Abdel-Rahman Hedar, Jue Wang, Masao Fukushima (2006). Tabu Search for Attribute Reduction in Rough Set Theory.

tertius

nz.ac.waikato.cms.weka » tertius

Finds rules according to a confirmation measure (Tertius-type algorithm). For more information see: P. A. Flach, N. Lachiche (1999). Confirmation-Guided Discovery of first-order rules with Tertius. Machine Learning. 42:61-95.
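
A minimal sketch of running the associator, assuming the implementing class is weka.associations.Tertius and that the input data is nominal (the file name is illustrative):

    import weka.associations.Tertius;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class TertiusSketch {
      public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("nominal.arff");   // Tertius works on nominal attributes

        Tertius tertius = new Tertius();
        tertius.buildAssociations(data);   // searches for rules ordered by the confirmation measure
        System.out.println(tertius);       // prints the discovered rules
      }
    }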

timeSeriesFilters

nz.ac.waikato.cms.weka » timeSeriesFilters

Provides a set of filters for time series. Currently contains PAA and SAX transformation filters and a filter that converts symbolic time series to string attribute values. The time series need to be given as values of a relation-valued attribute in the ARFF file. For example data in ARFF format, check the data directory of this package.
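
The filters follow Weka's standard batch-filtering pattern. A sketch, assuming the PAA filter is exposed as weka.filters.unsupervised.attribute.PAA (the class name and package path are assumptions; check the package documentation and its data directory for the expected relation-valued ARFF layout):

    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.PAA;   // class name assumed, see package docs

    public class TimeSeriesFilterSketch {
      public static void main(String[] args) throws Exception {
        // Each series is stored as a relation-valued attribute in the ARFF file.
        Instances data = DataSource.read("timeseries.arff");

        PAA paa = new PAA();                  // piecewise aggregate approximation
        paa.setInputFormat(data);
        Instances reduced = Filter.useFilter(data, paa);
        System.out.println(reduced.toSummaryString());
      }
    }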
