Conversation
Does not require a parameter.
For global inhibition it's always the constant max of the dimensions.
This results in 20% faster code, but worse results: 93.26% on MNIST (from 95%).
into this method. Also remove updateDutyCycles_() and decouple it into updateDutyCyclesOverlaps_() and the active duty cycle update (which is a one-liner). Suggestion: disable bumpUpWeakColumns, as it brings about 0.1% better SP results (on MNIST) but costs about 18% of the runtime!
Does not require a parameter.
Compared the old/tested arrays, not the new values.
Global inhibition should return numDesired active columns, but it returns far fewer once sub-threshold columns are removed.
This reverts commit 1be7dcd.
As inhibition always runs first, this can be computed there.
This reverts commit 0e6a63f.
I updated your branch with the latest master.

By the way, we only need to do this PyPi publish on one CI, not all of them.
```cpp
//inhibition
//update inhibition radius if it's time, only changes in local inh
auto& activeVector = active.getSparse();
if(!globalInhibition_ and isUpdateRound_() and learn) {
```
This PR breaks some tests (results). I think it is because of this change, which is now correct: update only if local inhibition & isUpdateRound_() & learning. Compare with the former code: https://github.com/htm-community/htm.core/pull/544/files#diff-75db947f594e8a477099a6c8bb86daa0L475
dkeeney
left a comment
There are quite a few changes and I am not surprised that the test results are slightly different.
But if the results are close enough to correct, I think the change is worth it for that much of a speed increase.
As you know, I don't follow the algorithms, so I cannot confirm whether you are implementing the algorithm correctly, but I have no complaints about any of the code.
Clean up boosting in SP.
Move it into more compact methods; we now have only these:
These methods are now more stand-alone.
Maybe remove boosting entirely (replaced by the new method Connections::synapseCompetition, #466?).