voldedore closed this issue 6 years ago
Hello @voldedore
First of all, I'm pretty sure you have a problem somewhere. Reindexing the Magento 1.14 EE sample data takes between 4 and 5 seconds, and we have a website running in production where indexing 1,500,000 products takes 1h30.
Also, it does not seem to index anything, since there are no documents in your index. Could you check the Elasticsearch log files? They should be somewhere near /var/log/elasticsearch.
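For reference, `curl -s 'http://localhost:9200/_cat/indices?v'` prints a docs.count column, and after a successful reindex it should be non-zero. A minimal sketch of checking it against a captured listing (host, store size, and the captured figures here are hypothetical; the index name follows the pattern in your log):

```shell
# Stand-in for the output of: curl -s 'http://localhost:9200/_cat/indices?v'
cat > /tmp/indices.txt <<'EOF'
health status index                       pri rep docs.count docs.deleted store.size pri.store.size
yellow open   magento_dev-20171010-130718   5   2          0            0       575b           575b
EOF

# docs.count is the 6th column; 0 means nothing was actually indexed
awk '/magento_dev/ {print $6}' /tmp/indices.txt
```

If docs.count stays at 0 even though the Magento indexer reports success, the bulk requests are being rejected or timing out on the Elasticsearch side.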
Regards
Thanks for your quick reply.
This is what shows up in logs/elasticsearch.log. The action.bulk entries must be the data being pushed, I guess.
[2017-10-10 13:06:50,518][INFO ][node ] [Onyxx] version[1.5.0], pid[244], build[5448160/2015-03-23T14:30:58Z]
[2017-10-10 13:06:50,518][INFO ][node ] [Onyxx] initializing ...
[2017-10-10 13:06:50,559][INFO ][plugins ] [Onyxx] loaded [analysis-phonetic, analysis-icu], sites [kopf, head]
[2017-10-10 13:06:52,828][INFO ][node ] [Onyxx] initialized
[2017-10-10 13:06:52,829][INFO ][node ] [Onyxx] starting ...
[2017-10-10 13:06:52,909][INFO ][transport ] [Onyxx] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/172.17.0.4:9300]}
[2017-10-10 13:06:52,929][INFO ][discovery ] [Onyxx] elasticsearch/VHtNRReDTo61b-eVCf0Qfw
[2017-10-10 13:06:56,708][INFO ][cluster.service ] [Onyxx] new_master [Onyxx][VHtNRReDTo61b-eVCf0Qfw][30393020d425][inet[/172.17.0.4:9300]], reason: zen-disco-join (elected_as_master)
[2017-10-10 13:06:56,760][INFO ][gateway ] [Onyxx] recovered [0] indices into cluster_state
[2017-10-10 13:06:56,761][INFO ][http ] [Onyxx] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/172.17.0.4:9200]}
[2017-10-10 13:06:56,762][INFO ][node ] [Onyxx] started
[2017-10-10 13:07:19,078][INFO ][cluster.metadata ] [Onyxx] [magento_dev-20171010-130718] creating index, cause [api], templates [], shards [5]/[2], mappings [product, category]
[2017-10-10 13:08:19,908][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:08:19,909][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:08:19,909][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:08:19,914][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:08:19,914][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:09:20,292][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:09:20,299][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:09:20,300][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:09:20,300][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:09:20,300][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:10:20,500][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
There is more, but it is mostly repetitions of the same line: action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
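As far as I can tell, those DEBUG lines mean each bulk request sat waiting on the cluster service and gave up after the default 1m timeout, so the documents never made it into the index. A small sketch for filtering that noise out of the log so any real errors stand out (the log path and the sample lines below are stand-ins; point the greps at your actual log file):

```shell
# Stand-in for /var/log/elasticsearch/elasticsearch.log
cat > /tmp/es.log <<'EOF'
[2017-10-10 13:07:19,078][INFO ][cluster.metadata ] [Onyxx] [magento_dev-20171010-130718] creating index, cause [api], templates [], shards [5]/[2], mappings [product, category]
[2017-10-10 13:08:19,908][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-10 13:08:19,909][DEBUG][action.bulk ] [Onyxx] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
EOF

# Count the noisy bulk-timeout lines...
grep -c 'observer: timeout notification' /tmp/es.log

# ...and show everything else, which is where a real error would appear
grep -v 'observer: timeout notification' /tmp/es.log
```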
And this is from /var/log/es-queries.log, though I don't think it will be of much interest.
2017-10-10T13:47:18+00:00 DEBUG (7): {"index":"magento_dev","type":"product","body":{"query":{"filtered":{"query":{"bool":{"must":[{"bool":{"must":[{"bool":{"should":[{"multi_match":{"query":"blazer","type":"best_fields","minimum_should_match":"100%","fields":["search_en.whitespace","category_name_en.whitespace^1","search_en.shingle"],"fuzziness":"0.75","prefix_length":"1","max_expansions":"10","cutoff_frequency":0.15}},{"multi_match":{"query":"blazer","type":"best_fields","minimum_should_match":"100%","analyzer":"phonetic_en","fields":["search_en.phonetic","category_name_en.phonetic^1"],"fuzziness":"0.9","prefix_length":"1","max_expansions":"2","cutoff_frequency":0.15}}]}}]}}]}},"filter":{"bool":{"must":[{"terms":{"visibility":[3,4]}},{"terms":{"status":[1]}},{"terms":{"in_stock":[1]}},{"fquery":{"query":{"query_string":{"query":"(categories:2) OR (show_in_categories:2)"}},"_cache":true}},{"terms":{"store_id":[1]}}],"_cache":true}}}},"facets":{"categories_4":{"query":{"query_string":{"query":"((categories:4) OR (show_in_categories:4))"}}},"categories_5":{"query":{"query_string":{"query":"((categories:5) OR (show_in_categories:5))"}}},"categories_6":{"query":{"query_string":{"query":"((categories:6) OR (show_in_categories:6))"}}},"categories_7":{"query":{"query_string":{"query":"((categories:7) OR (show_in_categories:7))"}}},"categories_8":{"query":{"query_string":{"query":"((categories:8) OR (show_in_categories:8))"}}},"categories_9":{"query":{"query_string":{"query":"((categories:9) OR (show_in_categories:9))"}}}},"from":0,"size":0}}
Edit:
This ES 1.5 instance is installed from a Docker container, but in general it runs exactly what we have in install-es.sh, except for the conf-templates.
I will rebuild my container to read the conf-templates as settings, and re-test whether reindexing still has problems.
Edit2:
After rebuilding my Docker container to read the conf-templates as settings, I still experience the same issue. The log found in /var/log/elasticsearch/<cluster_name>.log
contains a bunch of
[2017-10-11 02:42:54,453][DEBUG][action.bulk ] [Pixie] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-11 02:42:54,455][DEBUG][action.bulk ] [Pixie] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-10-11 02:42:54,459][DEBUG][action.bulk ] [Pixie] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
...
Hi, after reinstalling a fresh Magento 1.9 together with this Magento module, everything seems to be working as expected.
I think this issue can now be closed.
Anyway, php indexer.php --reindexall
returns an error:
PHP Fatal error: Call to undefined method Mage_CatalogSearch_Model_Resource_Fulltext_Engine::getCurrentIndex() in /home/vagrant/Code/mage-es/app/code/community/Smile/ElasticSearch/Model/Indexer/Fulltext.php on line 181
but it's another issue.
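For anyone hitting the same fatal error: getCurrentIndex() is defined on this module's Elasticsearch engine, not on the stock Mage_CatalogSearch_Model_Resource_Fulltext_Engine named in the trace, so the error suggests Magento was still configured with the default MySQL fulltext engine when the indexer ran. A quick sanity check, sketched against a sample dump (the real command would query core_config_data in your Magento database; the value shown is the Magento 1 default as I understand it):

```shell
# Stand-in for the output of:
#   mysql <your_db> -e "SELECT path, value FROM core_config_data WHERE path = 'catalog/search/engine';"
cat > /tmp/engine.txt <<'EOF'
path                   value
catalog/search/engine  catalogsearch/fulltext_engine
EOF

# catalogsearch/fulltext_engine is the stock MySQL engine; the ES module
# ships its own engine model, which is what this config should point to.
if grep -q 'catalogsearch/fulltext_engine' /tmp/engine.txt; then
  echo "stock MySQL fulltext engine still configured"
fi
```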
Thanks.
Best regards,
Hi,
After following the installation instructions, I'm able to run this module on Magento 1.9 with ES 1.5. Reindexing runs fine (although it is a bit slow, I don't know why: about 10 minutes for Magento 1.9's sample data). I'm afraid things are going to be a disaster when I deploy on our 20,000-product store.
But it's weird that after reindexing, querying
http://192.168.1.30:9200/_cat/indices?v
shows no documents in the index. Searching for any keyword returns nothing but an empty result set.
I can see all the generated product attributes in http://192.168.1.30:9200/_plugin/head/, so the reindexing seems to do its job fine. Or do I have to wait for ES itself to run some reindexing during the night?
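One more thing I notice: the log shows the module creating a timestamped index (magento_dev-20171010-130718) while the search queries target magento_dev, so I assume it swaps an alias onto the new index once indexing finishes. If indexing aborts, the alias might never be updated, which would explain data being visible in the head plugin while searches return nothing. Checking where the alias points is a one-liner, sketched here against a captured listing (the capture itself is illustrative):

```shell
# Stand-in for the output of: curl -s 'http://192.168.1.30:9200/_cat/aliases?v'
cat > /tmp/aliases.txt <<'EOF'
alias       index                       filter routing.index routing.search
magento_dev magento_dev-20171010-130718 -      -             -
EOF

# Which physical index does the magento_dev alias point at?
awk '$1 == "magento_dev" {print $2}' /tmp/aliases.txt
```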