Closed marklit closed 4 years ago
Hey @marklit, a couple of comments:
For the dsql issue, you probably have SQL disabled; try setting druid.sql.enable = true in your common runtime properties.
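That property lives in the common runtime properties file; a minimal sketch:

```properties
# common.runtime.properties
# Enable the SQL layer so dsql and the SQL HTTP endpoint work.
druid.sql.enable=true
```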
Regarding org.apache.hadoop.mapred.LocalJobRunner: it is not using YARN; it's running in-process. The in-process runner is pretty inefficient and can be really slow. You probably need mapred-site.xml copied over too. It has a mapreduce.framework.name parameter that controls whether the local or YARN runner gets used.

This issue has been marked as stale due to 280 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the dev@druid.apache.org list. Thank you for your contributions.
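For reference, the mapred-site.xml setting mentioned above looks like this (a minimal sketch; other values depend on your cluster):

```xml
<!-- mapred-site.xml: run MapReduce jobs on YARN
     instead of the in-process LocalJobRunner. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```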
This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time.
I'm running a single machine with 24 GB of RAM, 1 TB of disk, Ubuntu 16.04.2 LTS, Hadoop 2.8.3 with HDFS set up, ZooKeeper 3.8.4, Druid 0.13.0, and MySQL with MySQL Connector 5.1.28. I followed https://tech.marksblogg.com/hadoop-3-single-node-install-guide.html for my Hadoop installation, except that I used Hadoop 2.8.3 instead of 3.0.3.
Hadoop's name node runs on port 9000.
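That corresponds to a core-site.xml entry along these lines (a sketch; the hostname is assumed, since this is a single-node setup):

```xml
<!-- core-site.xml: name node address, assuming everything runs locally. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
```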
These are my config file changes from the stock config files distributed with Druid:
/opt/druid/conf/druid/middleManager/runtime.properties
/opt/druid/conf/druid/_common/common.runtime.properties
/opt/druid/conf/druid/broker/jvm.config
/opt/druid/conf/druid/historical/jvm.config
I've set up segment storage on HDFS.
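For anyone comparing setups, HDFS deep storage in common.runtime.properties typically looks like this (a sketch; the segment directory is an example, not my actual value):

```properties
# Load the HDFS deep-storage and MySQL metadata extensions.
druid.extensions.loadList=["druid-hdfs-storage", "mysql-metadata-storage"]

# Store segments in HDFS (example path -- adjust to taste).
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments
```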
I've copied over the config files from Hadoop to Druid:
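A sketch of that copy step, assuming Hadoop's config files live in /etc/hadoop/conf (adjust to your layout):

```shell
# Put the Hadoop client XMLs on Druid's classpath via the _common directory.
for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml; do
  cp /etc/hadoop/conf/$f /opt/druid/conf/druid/_common/
done
```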
I've used the following to launch each of the nodes:
This is the index file I've created:
This is the sample CSV file I created and uploaded to HDFS:
This is the contents of that file:
This is the command I used to upload the CSV to HDFS:
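The upload step is typically something like this (the file name and HDFS path here are illustrative, not my actual ones):

```shell
# Create a working directory in HDFS and copy the CSV up.
hdfs dfs -mkdir -p /tmp/druid
hdfs dfs -put example.csv /tmp/druid/
```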
I'm able to submit the job and see it running for some time in the Web UI / console.
After some time the job fails with the following:
This is what top reports after the job has been running for a minute:
I've also noticed the following returns
405 Method Not Allowed
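One common cause of a 405 from Druid is issuing a GET against the SQL endpoint, which only accepts POST. A hedged example, assuming the broker is on localhost:8082:

```shell
# A plain GET returns 405 Method Not Allowed; the endpoint requires POST.
curl http://localhost:8082/druid/v2/sql/

# A POST with a JSON body works once druid.sql.enable=true is set.
curl -X POST -H 'Content-Type: application/json' \
  http://localhost:8082/druid/v2/sql/ \
  -d '{"query": "SELECT 1"}'
```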
Any ideas what might be wrong?