BoltzmannBrain opened this issue 8 years ago
@BoltzmannBrain
If I run:
python hello_classification_model.py -c data/network_configs/sensor_knn.json
I get 90%. Do you not get the same?
I'm running simple_labels.py now (going slowly because I need to rebuild my cache).
No, I get 70%. I ran it again without using my cache and got the same results.
I notice there's a difference between hello_classification.py and the simple_labels.py implementation: doc 2 uses "lettuce" in the former but "kale" in the latter. I think it should be "kale" in both because they both use "kale" in doc 6.
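If it helps to check, here is a minimal sketch for diffing the two hard-coded document sets. The list names and the example (text, labels) tuples below are hypothetical placeholders for illustration, not the actual variables or documents in either script:

```python
# Sketch only: assumes each script's documents can be collected into a list of
# (docText, labels) tuples. Replace these placeholders with the real lists from
# hello_classification.py and simple_labels.py.
helloDocs = [
    ("kale is a superfood", ["food"]),        # hypothetical doc
    ("lettuce goes in salads", ["food"]),     # hypothetical doc 2
]
simpleDocs = [
    ("kale is a superfood", ["food"]),        # hypothetical doc
    ("kale goes in salads", ["food"]),        # hypothetical doc 2
]

# Print any documents that differ between the two scripts.
for i, (helloDoc, simpleDoc) in enumerate(zip(helloDocs, simpleDocs)):
    if helloDoc != simpleDoc:
        print("doc %d differs:" % i)
        print("  hello_classification: %r" % (helloDoc,))
        print("  simple_labels:        %r" % (simpleDoc,))
```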
I just ran:
python simple_labels.py --dataPath ~/nta/grok-projects/nlp_experiments/yale/buckets/yale_data_buckets_ints.csv -m htm --numLabels 10
and got 66.2%
With regard to kale vs. lettuce, I don't think it matters too much. Results should be the same either way.
The tests with TM (sensor_tm_simple_tp.json config) fail when saving the model in executeModelLifecycle() of htmresearch/support/nlp_model_test_helpers.py.
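For what it's worth, one quick way to see the underlying exception is to isolate the save step. This is only a sketch; the model.save(savePath) call is an assumption about what the helper does internally, not taken from nlp_model_test_helpers.py itself:

```python
# Sketch only: isolate the save step that fails inside executeModelLifecycle().
# The save(savePath) signature is an assumption about the model API here.
import traceback

def trySave(model, savePath):
    """Attempt to save the model and print the full traceback on failure."""
    try:
        model.save(savePath)
        print("Model saved to %s" % savePath)
    except Exception:
        print("Saving the model failed:")
        traceback.print_exc()
```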
@BoltzmannBrain What is the exact command you used?
This config file is something I've used locally and isn't included in the repo / API tests (yet). It's very similar to imbu_sensor_tm_simple_tp.json, and the error isn't reproduced with that config. I've removed that part of this issue. Sorry for the confusion.
Okay, it looks like something in my local setup is to blame. Just to be sure, which nupic and nupic.core versions are you using? I'm on nupic 0.5.3dev0 and nupic.core 0.4.3dev0.
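In case it's useful, here's a quick way to print the installed versions via pkg_resources; the assumption that nupic.core shows up as the nupic.bindings distribution is mine, not from this thread:

```python
# Sketch: print installed versions using pkg_resources.
# Assumes nupic.core is installed as the "nupic.bindings" distribution.
import pkg_resources

for dist in ("nupic", "nupic.bindings"):
    try:
        print("%s %s" % (dist, pkg_resources.get_distribution(dist).version))
    except pkg_resources.DistributionNotFound:
        print("%s not installed" % dist)
```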
The simple labels and hello classification tests for sensor_knn.json config fail -- 52.98% vs 66.2%, and 60% vs 80%, respectively.
@subutai would you please confirm you get the same?