Open zorel opened 1 day ago
@zorel
How much memory do you have on your node (the one the curl is run from)? And how much heap memory did you configure for your NameNode?
Regards,
Enough on the node, but indeed the NN is configured with only 1GB (the default at installation time). Interestingly, though, the message comes from curl, not from the NN process.
/var/lib/ambari-agent/tmp/yarn-ats/1.2.4.0-77/hbase.tar.gz
is 1.1GB
It would be interesting to try again with more memory on the NN, like 2 or 4GB.
Regards
Indeed it works with higher heaps. 2GB works.
Hi @zorel,
Thank you ;) It may be linked to the JDK 8 version (compared to the old JDK 6/7 that were current when the service advisor was written). https://github.com/clemlabprojects/ambari/blob/c4f200b7994ff724e445cabaff5eaaa5367537b0/ambari-server/src/main/resources/stacks/ODP/1.0/services/HDFS/service_advisor.py#L199 is where the 1024 is set; we can update it to 2048.
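To illustrate the proposed change, here is a minimal sketch of raising the NameNode heap default. The function name and the memory-based sizing rule are hypothetical, for illustration only; the real logic lives in the linked service_advisor.py, where only the 1024 floor would change to 2048:

```python
# Hypothetical sketch, NOT the actual ODP service_advisor.py API.
# The point is the floor value: bumping it from 1024 MB to 2048 MB,
# since 1GB was observed to be too small in this issue.

NAMENODE_HEAP_FLOOR_MB = 2048  # was 1024 in the linked service_advisor.py

def recommend_namenode_heap_mb(host_memory_mb: int) -> int:
    """Return a NameNode heap size: a share of host memory, never below the floor."""
    candidate = host_memory_mb // 4  # illustrative sizing rule, not the advisor's real formula
    return max(candidate, NAMENODE_HEAP_FLOOR_MB)

print(recommend_namenode_heap_mb(4096))   # small host: the 2048 MB floor applies
print(recommend_namenode_heap_mb(16384))  # larger host: the memory-based value applies
```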
Regards
It does not work. I made the test on the ambari-server node, and instead of an error because hbase.tar.gz does not exist on this node, curl does not complain and uploads an empty file. Heap size does not solve the issue.
Environment: ODP 1.2.4.0 / Oracle Linux 9 / Python 3
Hit https://github.com/curl/curl/issues/1385:

curl: option --data-binary: out of memory

when trying to start the Yarn ResourceManager. Running the same upload manually with -T instead worked.
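The difference matters for a 1.1GB archive: curl's --data-binary reads the whole file into memory before sending it, while -T (--upload-file) streams it from disk. A sketch of the two invocations against WebHDFS; the host, port, and HDFS destination path are placeholders, not the exact command Ambari runs:

```shell
# Fails on low-memory hosts: --data-binary buffers the entire 1.1 GB archive in RAM
curl -L -X PUT \
  --data-binary @/var/lib/ambari-agent/tmp/yarn-ats/1.2.4.0-77/hbase.tar.gz \
  "http://<namenode-host>:50070/webhdfs/v1/<hdfs-dest-path>?op=CREATE&overwrite=true"

# Works: -T streams the file instead of buffering it (and implies PUT)
curl -L \
  -T /var/lib/ambari-agent/tmp/yarn-ats/1.2.4.0-77/hbase.tar.gz \
  "http://<namenode-host>:50070/webhdfs/v1/<hdfs-dest-path>?op=CREATE&overwrite=true"
```

This also explains why raising the NN heap only appeared to help: the out-of-memory error is raised by curl on the client side, before the NameNode is involved.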