remove explicit AnomalyLikelihood
but Predictor learn() crashes in Release, yet not in Debug (-> hard to investigate)
breznak
left a comment
Please review and advise on the issues:
- SDRClassifier, Predictor seem broken?
- anomaly bindings broken for the new setAnomalyMode, but that is not crucial.
- improved anomaly tests
| )"); | ||
|
|
||
| py::enum_<TemporalMemory::ANMode>(m, "ANMode") | ||
| py::enum_<TemporalMemory::ANMode>(m, "ANMode") //TODO currently htm.bindings.algorithms.ANMode, make ANMode part of algorithms.TemporalMemory |
nit: how could the enum ANMode be exposed as htm.bindings.algorithms.TemporalMemory.ANMode, rather than the current htm.bindings.algorithms.ANMode?
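One possible answer (a hedged sketch, not verified against this codebase): pybind11 nests an enum under a class when the `py::class_` object, rather than the module, is passed as the enum's scope. The name `py_HTM` follows the naming used elsewhere in this PR; the exact value list is an assumption based on the modes mentioned in the tests below.

```cpp
// Sketch: binding the enum with the class object as its scope should expose it
// as htm.bindings.algorithms.TemporalMemory.ANMode instead of a module-level name.
// (Assumes py_HTM is the py::class_ wrapper for TemporalMemory, as in this PR.)
py::class_<TemporalMemory> py_HTM(m, "TemporalMemory");
py::enum_<TemporalMemory::ANMode>(py_HTM, "ANMode")   // scope = class, not module `m`
    .value("RAW",           TemporalMemory::ANMode::RAW)
    .value("LIKELIHOOD",    TemporalMemory::ANMode::LIKELIHOOD)
    .value("LOGLIKELIHOOD", TemporalMemory::ANMode::LOGLIKELIHOOD);
```

This is the same pattern pybind11 uses for any class-internal type: the first argument of `py::enum_` is the scope the Python name is attached to.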
```cpp
    "Anomaly score updated with each TM::compute() call. "
);

py_HTM.def("setAnomalyMode", &HTM_t::setAnomalyMode);
```
It is defined and compiles, but it is not found in the py test?!
```python
modes = [ANMode.RAW, ANMode.LIKELIHOOD, ANMode.LOGLIKELIHOOD]
for mod in modes: # this block tests convergence of TM and anomaly score for the selected mode
    #FIXME why not visible from bindings? tm.setAnomalyMode(mod)
```
setAnomalyMode is not found in the bindings (see its definition above).
```python
for _ in range(200):
    inp.addNoise(0.02) # change 2% of bits -> 98% overlap => anomaly should ideally be 2%
    tm.compute(inp, learn=True)
self.assertLess(tm.anomaly, 0.08)
```
New test that the TM actually learns and the anomaly score converges.
```cpp
void reset() {
  anomaly_ = 0.5f;
  mode_ = ANMode::RAW;
  //TODO provide anomalyLikelihood_.reset();
}
```
When calling setAnomalyMode during a running sequence, the likelihood score would be broken and should be reset. But reset is not currently available for Likelihood (and I don't intend to implement it in this PR, so it stays broken).
The crash was a bad_alloc (memory allocation failed). Our encoding to labels (realToCategory_()) had an underflow which resulted in a huge UInt -> it tried to allocate an extremely large vector -> failed.
breznak
left a comment
Figured out where the problem was!
src/examples/hotgym/HelloSPTP.cpp
Outdated
```cpp
 * helper to transform (Real) data to categories (UInt) for Classifier/Predictor
 **/
UInt realToCategory_(const Real r) {
  return static_cast<UInt>((r+1.0f /*map sin(x):[-1,1] to [0,2]*/)*1000); //precision on 3 decimal places
}
```
The std::bad_alloc bug (only in Release) was caused here: the mapping to labels used to underflow (UInt -1), resulting in a huge label, which led to huge PDF vectors. (Not in Debug, because that runs only ~2 steps.)
src/examples/hotgym/HelloSPTP.cpp
Outdated
```cpp
//Classifier, Predictor
tCls.start();
pred.learn(e, outTM, { realToCategory_(data) }); //FIXME fails with bad_alloc if label is too large! PDF should use a map instead of a vector
```
FIXME: this should be resolved in the Predictor. A large label should not crash the program (the crash comes from the huge PDF allocated when the label is large). See if the PDF can use a map.
The 4 different step-ahead predictions took a lot of time.
Add an assert to check whether we learn something.
```cpp
cout << "Cls[0]=   " << categoryToReal_(argmax(pred.infer(outTM)[0]))   << endl;
cout << "Cls[100]= " << categoryToReal_(argmax(pred.infer(outTM)[100])) << endl;

NTA_CHECK( categoryToReal_(argmax(pred.infer(outTM)[0])) != -1) << "Classifier did not learn"; //FIXME Predictor is not learning, this should be ~ sin(49.99)
```
FIXME: this is always 0; the predictor/classifier is not learning! Although the unit tests are passing.
@Thanh-Binh PR #667 should have fixed the symptoms you've described, but my Predictor is still failing here. Is this the same issue you described? Does it replicate your (failing) experiments?
@Thanh-Binh ping, can you please have a look at this PR and the test here? I'm getting 0s from the Classifier inference and I'm not sure why.
This crashes the Predictor. Likely a stack overflow.
limit to unsigned short (from UInt) to avoid overflow for large labels
I'm considering the following modifications to simplify the Predictor and fix the error with labels:
needed to specify the argument for the signature to be correct
This PR provides tests for TM.anomaly and its convergence.