Open badalb opened 8 years ago
Using insert as `INSERT INTO table VALUES ('')` works only with LOCAL. If you need a remote host to work, use INSERT_HADOOP. Alternatively you can use `INSERT INTO .. FILE`, but that file must exist on the remote host (if you have NFS, even better).
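For context on why the file must exist on the remote host: a Druid index task with a "local" firehose resolves `baseDir` on the machine running the task, not on the client that submitted it. A spec of roughly this shape (paths here are illustrative) ends up in the task:

```json
{
  "type": "local",
  "baseDir": "/data/events/",
  "filter": "abc.csv"
}
```

If `/data/events/` only exists on the client, the Druid node cannot find it.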
Thanks kalyan
On Fri, Jan 29, 2016 at 2:04 AM, Badal Baidya notifications@github.com wrote:
druidGParser.java line 1142 always returns 88 for a query like the one above, forcing the switch statement into case(1) and breaking the loop.
— Reply to this email directly or view it on GitHub https://github.com/srikalyc/Sql4D/issues/94#issuecomment-176676941.
Bye & regards C.Srikalyan
Hi, thanks a lot for your response. Do you have any sample code to demonstrate INSERT_HADOOP? Could you please help?
Regards, Badal
Badal, if you go to https://github.com/srikalyc/Sql4D/ and scroll down a bit, you will find a table with a bunch of links to blogs. Hope this helps.
Thanks kalyan
On Sun, Jan 31, 2016 at 10:36 PM, Badal Baidya notifications@github.com wrote:
Hi, thanks a lot for your response. Do you have any sample code to demonstrate INSERT_HADOOP? Could you please help?
Regards, Badal
— Reply to this email directly or view it on GitHub https://github.com/srikalyc/Sql4D/issues/94#issuecomment-177803705.
Bye & regards C.Srikalyan
Hi Kalyan,
Got it. Thanks a lot.
Regards, Badal
Seems inserting into a remote host is not working properly.

My client code: String host = "";
BasicInsertMeta.java
dataPath is always null, so the CSV file is created on the client's local system.
```java
public Map<String, Object> getFirehose() {
    Map<String, Object> finalMap = new LinkedHashMap<>();
    finalMap.put("type", "local");
    if (dataPath != null) {
        int folderEndIndex = dataPath.lastIndexOf("/");
        finalMap.put("baseDir", dataPath.substring(0, folderEndIndex + 1));
        finalMap.put("filter", (folderEndIndex == dataPath.length() - 1)
                ? "*" : dataPath.substring(folderEndIndex + 1));
        if (dataPath.endsWith("json")) {
            dataFormat = "json";
        } else if (dataPath.endsWith("csv")) {
            dataFormat = "csv";
        }
    } else {
        finalMap.put("baseDir", tmpFolder);
        String fileName = UUID.randomUUID().toString() + ".csv";
        finalMap.put("filter", fileName);
        dataFormat = "csv";
        if (values.isEmpty()) {
            throw new IllegalStateException("No values to insert !!");
        }
        try {
            File file = new File(tmpFolder + File.separator + fileName);
            FileUtils.write(file, Joiner.on(",").join(values));
            System.out.println("Written to " + file);
            Object timestamp = values.get(0);
            timestampFormat = TimeUtils.detectFormat(timestamp.toString());
        } catch (IOException ex) {
            Logger.getLogger(BasicInsertMeta.class.getName()).log(Level.SEVERE, null, ex);
        }
```
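To make the path handling easier to follow, here is a minimal, standalone sketch of the baseDir/filter extraction from the dataPath branch above. The class and method names are illustrative only, not part of Sql4D:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Standalone sketch of the baseDir/filter split in getFirehose().
// Class and method names are hypothetical, for illustration only.
public class FirehosePathSplit {
    static Map<String, Object> split(String dataPath) {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("type", "local");
        int folderEndIndex = dataPath.lastIndexOf("/");
        // Everything up to and including the last '/' becomes baseDir.
        m.put("baseDir", dataPath.substring(0, folderEndIndex + 1));
        // The remainder is the filter; a trailing '/' means "match everything".
        m.put("filter", (folderEndIndex == dataPath.length() - 1)
                ? "*" : dataPath.substring(folderEndIndex + 1));
        return m;
    }

    public static void main(String[] args) {
        System.out.println(split("/data/events/abc.csv"));
        // {type=local, baseDir=/data/events/, filter=abc.csv}
    }
}
```

Note that both baseDir and filter are client-side paths; nothing in this logic ships the file to the Druid node.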
Error log: the Coordinator console error log shows the task looking for the csv on the remote server, although it was written on the client.

```
2016-01-29T06:15:19,189 INFO [task-runner-0] io.druid.segment.realtime.firehose.LocalFirehoseFactory - Searching for all [6e06a2e0-ae0e-479e-ad38-60769f27802f.csv] in and beneath [/var/folders/_y/hxlk4pz16qj2lsp6gs_0yjxnbt3pf8/T]
2016-01-29T06:15:19,197 ERROR [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[IndexTask{id=index_abc_2016-01-29T06:15:11.269Z, type=index, dataSource=abc}]
java.lang.IllegalArgumentException: Parameter 'directory' is not a directory
	at org.apache.commons.io.FileUtils.listFiles(FileUtils.java:358) ~[commons-io-2.0.1.jar:2.0.1]
	at io.druid.segment.realtime.firehose.LocalFirehoseFactory.connect(LocalFirehoseFactory.java:87) ~[druid-server-0.8.2.jar:0.8.2]
	at io.druid.segment.realtime.firehose.LocalFirehoseFactory.connect(LocalFirehoseFactory.java:43) ~[druid-server-0.8.2.jar:0.8.2]
	at io.druid.indexing.common.task.IndexTask.getDataIntervals(IndexTask.java:234) ~[druid-indexing-service-0.8.2.jar:0.8.2]
	at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:192) ~[druid-indexing-service-0.8.2.jar:0.8.2]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:221) [druid-indexing-service-0.8.2.jar:0.8.2]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:200) [druid-indexing-service-0.8.2.jar:0.8.2]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_67]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_67]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_67]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_67]
2016-01-29T06:15:19,202 INFO [task-runner-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: { "id" : "index_abc_2016-01-29T06:15:11.269Z", "status" : "FAILED", "duration" : 65 }
```
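The root cause of the `IllegalArgumentException: Parameter 'directory' is not a directory` is that the client's temp folder does not exist on the Druid node, and commons-io's listing rejects a non-directory path. A trivial sketch (the path below is made up) of the check that fails:

```java
import java.io.File;

// Illustrates the failure mode: the baseDir from the client's machine
// is not a directory on the Druid node, so listing its files fails.
public class DirCheck {
    public static void main(String[] args) {
        File baseDir = new File("/var/folders/does-not-exist-on-this-host");
        // On the Druid node this prints false, and FileUtils.listFiles
        // would throw IllegalArgumentException for such a path.
        System.out.println(baseDir.isDirectory());
    }
}
```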
druidGParser.java line 1142 always returns 88 for a query like the one above, forcing the switch statement into case(1) and breaking the loop.