The forest is maintained incrementally as samples are added or removed, rather than being rebuilt from scratch each time, to save effort.
It is not a streaming implementation: all samples are stored and are re-seen when invalidated subtrees need to be recursively rebuilt. The effort to update any individual tree can vary substantially, but the overall effort to update the forest is averaged across the trees, so it tends to vary much less.
IRF is licensed under the MIT license.
Install the node.js bindings via npm:

```
npm install irf
```
Usage from node.js (the calls below are a sketch: method names and example feature values are reconstructed from the surrounding comments, so consult the package source for the exact API):

```js
var irf = require('irf');

var f = new irf.IRF(99); // create forest of 99 trees

f.add('1', {'a': 1, 'b': 2.5}, 0); // add a sample identified as '1' with the given feature values, classified as 0
f.add('2', {'b': 1.5, 'c': -3}, 1); // features are stored sparsely, when a value is not given it will be taken as 0
f.add('3', {'a': 0, 'c': 2}, 1); // but 0s can also be given explicitly
// ...

var y = f.classify({'a': 1, 'b': 2.5}); // classify feature vector
// the forest will be lazily updated before classification
f.commit(); // but you can force an update at any time

// you get a probability estimate from 0 to 1 for belonging to class 1
var c = Math.round(y); // round to nearest to get class (0 or 1)

f.remove('1'); // remove a sample
f.add('1', {'a': 2, 'b': 0.5}, 1); // and add it again with new values

console.log(JSON.stringify(f.toJSON())); // serialize to json (for classification, not suitable for incremental update)
f.commit();
var b = f.toBuffer(); // serialize (complete) to buffer
var f2 = new irf.IRF(b); // construct from buffer contents
```
Install the python bindings from a checkout of the source:

```
cd irf
python setup.py install
```
Usage from python mirrors the node.js API (again, method names and example values are reconstructed, so consult the package source for the exact API):

```python
import irf

f = irf.IRF(99) # create forest of 99 trees

f.add('1', {'a': 1, 'b': 2.5}, 0) # add a sample identified as '1' with the given feature values, classified as 0
f.add('2', {'b': 1.5, 'c': -3}, 1) # features are stored sparsely, when a value is not given it will be taken as 0
f.add('3', {'a': 0, 'c': 2}, 1) # but 0s can also be given explicitly
# ...

y = f.classify({'a': 1, 'b': 2.5}); print y, round(y) # classify feature vector, round to nearest to get class

f.save('forest.bin') # save forest to file
f = irf.IRF('forest.bin') # load forest from file

f.remove('1') # remove a sample
f.add('1', {'a': 2, 'b': 0.5}, 1) # and add it again with new values

y = f.classify({'a': 2, 'b': 0.5}); print y # the forest will be lazily updated before classification
# f.commit() # but you can force it

for (sId, x, y) in f.samples(): # iterate through samples in the forest, in lexicographic ID order
    print sId, x, y # and print them
```
to be written