CODAIT / spark-netezza

Netezza Connector for Apache Spark
Apache License 2.0

Not Windows-compatible due to mkfifo call in NetezzaUtils #7

Open

abradbury opened 8 years ago

abradbury commented 8 years ago

Hello, I am using Spark 1.5.0 in Scala with spark-netezza 0.1.1 on a Windows 7 machine.
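
For reference, the read is configured roughly as in the README; a minimal sketch (the option values below are placeholders, not my actual connection details):

    val df = sqlContext.read
      .format("com.ibm.spark.netezza")
      .option("url", "jdbc:netezza://hostname:5480/database")
      .option("user", "username")
      .option("password", "password")
      .option("dbtable", "SOME_TABLE")
      .option("numPartitions", "4")
      .load()

    df.show() // fails when a task attempts to create the named pipe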

Running Spark with the spark-netezza connector configured as detailed in the README results in the following error and stack trace (the full stack trace is included at the end):

16/06/03 11:08:39 ERROR NetezzaUtils$: Unable to create named pipe:C:\Users\blah\AppData\Local\Temp\spark-netezza-51420f0c-512e-47fe-aa89-5497832d116f
java.io.IOException: Cannot run program "mkfifo": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
    at java.lang.Runtime.exec(Runtime.java:617)
    at java.lang.Runtime.exec(Runtime.java:450)
    at java.lang.Runtime.exec(Runtime.java:347)
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:51)
    at com.ibm.spark.netezza.NetezzaDataReader.startExternalTableDataUnload(NetezzaDataReader.scala:67)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1.<init>(NetezzaRDD.scala:73)
    at com.ibm.spark.netezza.NetezzaRDD.compute(NetezzaRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:385)
    at java.lang.ProcessImpl.start(ProcessImpl.java:136)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
    ... 21 more

This seems to be caused by line 51 of NetezzaUtils.scala, which calls the mkfifo command. mkfifo is a Unix-only utility with no Windows equivalent on the PATH, so the call fails on Windows. Perhaps this StackOverflow question and its answers would help with making this step of the connector platform-agnostic?
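
For illustration, here is a minimal sketch of a platform check that createPipe could perform, assuming the current behaviour of shelling out to mkfifo (this is not the connector's actual code, and a real Windows fix would presumably need a native named-pipe API such as CreateNamedPipe via JNA):

    import java.io.{File, IOException}

    object PipeUtils {
      private val isWindows =
        System.getProperty("os.name").toLowerCase.contains("windows")

      // Create a FIFO at dir/name, failing fast with an actionable message
      // on Windows instead of the opaque "CreateProcess error=2" above.
      def createPipe(dir: File, name: String): File = {
        val pipe = new File(dir, name)
        if (isWindows) {
          throw new IOException(
            s"Cannot create named pipe ${pipe.getAbsolutePath}: mkfifo is " +
              "unavailable on Windows; run on a Unix-like OS or under " +
              "Cygwin with mkfifo on the PATH.")
        }
        val exitCode =
          Runtime.getRuntime.exec(Array("mkfifo", pipe.getAbsolutePath)).waitFor()
        if (exitCode != 0) {
          throw new IOException(s"mkfifo failed for ${pipe.getAbsolutePath}")
        }
        pipe
      }
    }

Even just failing fast like this would give Windows users a clear message rather than the stack traces above.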

Thank you.

16/06/03 11:08:39 ERROR NetezzaUtils$: Unable to create named pipe:C:\Users\blah\AppData\Local\Temp\spark-netezza-51420f0c-512e-47fe-aa89-5497832d116f
java.io.IOException: Cannot run program "mkfifo": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
    at java.lang.Runtime.exec(Runtime.java:617)
    at java.lang.Runtime.exec(Runtime.java:450)
    at java.lang.Runtime.exec(Runtime.java:347)
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:51)
    at com.ibm.spark.netezza.NetezzaDataReader.startExternalTableDataUnload(NetezzaDataReader.scala:67)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1.<init>(NetezzaRDD.scala:73)
    at com.ibm.spark.netezza.NetezzaRDD.compute(NetezzaRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:385)
    at java.lang.ProcessImpl.start(ProcessImpl.java:136)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
    ... 21 more
16/06/03 11:08:39 WARN NetezzaRDD: Exception closing Netezza record reader
java.lang.NullPointerException
    at com.ibm.spark.netezza.NetezzaDataReader.close(NetezzaDataReader.scala:158)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1.com$ibm$spark$netezza$NetezzaRDD$$anon$$close(NetezzaRDD.scala:88)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1$$anonfun$1.apply(NetezzaRDD.scala:69)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1$$anonfun$1.apply(NetezzaRDD.scala:69)
    at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
    at org.apache.spark.scheduler.Task.run(Task.scala:90)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/06/03 11:08:39 INFO NetezzaRDD: closed connection
16/06/03 11:08:39 ERROR Executor: Exception in task 0.0 in stage 10.0 (TID 10)
java.io.IOException: Unable to create named pipe:C:\Users\blah\AppData\Local\Temp\spark-netezza-51420f0c-512e-47fe-aa89-5497832d116f
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:57)
    at com.ibm.spark.netezza.NetezzaDataReader.startExternalTableDataUnload(NetezzaDataReader.scala:67)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1.<init>(NetezzaRDD.scala:73)
    at com.ibm.spark.netezza.NetezzaRDD.compute(NetezzaRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot run program "mkfifo": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
    at java.lang.Runtime.exec(Runtime.java:617)
    at java.lang.Runtime.exec(Runtime.java:450)
    at java.lang.Runtime.exec(Runtime.java:347)
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:51)
    ... 17 more
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:385)
    at java.lang.ProcessImpl.start(ProcessImpl.java:136)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
    ... 21 more
16/06/03 11:08:39 WARN TaskSetManager: Lost task 0.0 in stage 10.0 (TID 10, localhost): java.io.IOException: Unable to create named pipe:C:\Users\blah\AppData\Local\Temp\spark-netezza-51420f0c-512e-47fe-aa89-5497832d116f
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:57)
    at com.ibm.spark.netezza.NetezzaDataReader.startExternalTableDataUnload(NetezzaDataReader.scala:67)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1.<init>(NetezzaRDD.scala:73)
    at com.ibm.spark.netezza.NetezzaRDD.compute(NetezzaRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot run program "mkfifo": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
    at java.lang.Runtime.exec(Runtime.java:617)
    at java.lang.Runtime.exec(Runtime.java:450)
    at java.lang.Runtime.exec(Runtime.java:347)
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:51)
    ... 17 more
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:385)
    at java.lang.ProcessImpl.start(ProcessImpl.java:136)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
    ... 21 more

16/06/03 11:08:39 ERROR TaskSetManager: Task 0 in stage 10.0 failed 1 times; aborting job
16/06/03 11:08:39 INFO TaskSchedulerImpl: Removed TaskSet 10.0, whose tasks have all completed, from pool 
16/06/03 11:08:39 INFO TaskSchedulerImpl: Cancelling stage 10
16/06/03 11:08:39 INFO DAGScheduler: ResultStage 10 (show at ProcessRodgerJsonLogs.scala:69) failed in 0.359 s
16/06/03 11:08:39 INFO DAGScheduler: Job 8 failed: show at ProcessRodgerJsonLogs.scala:69, took 0.375836 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task 0.0 in stage 10.0 (TID 10, localhost): java.io.IOException: Unable to create named pipe:C:\Users\blah\AppData\Local\Temp\spark-netezza-51420f0c-512e-47fe-aa89-5497832d116f
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:57)
    at com.ibm.spark.netezza.NetezzaDataReader.startExternalTableDataUnload(NetezzaDataReader.scala:67)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1.<init>(NetezzaRDD.scala:73)
    at com.ibm.spark.netezza.NetezzaRDD.compute(NetezzaRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot run program "mkfifo": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
    at java.lang.Runtime.exec(Runtime.java:617)
    at java.lang.Runtime.exec(Runtime.java:450)
    at java.lang.Runtime.exec(Runtime.java:347)
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:51)
    ... 17 more
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:385)
    at java.lang.ProcessImpl.start(ProcessImpl.java:136)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
    ... 21 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1294)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1282)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1281)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1281)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1507)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1469)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:215)
    at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207)
    at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
    at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904)
    at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385)
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1315)
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1378)
    at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:178)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:402)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:363)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:371)
    at com.some.company.name.ProcessRodgerJsonLogs.<init>(ProcessRodgerJsonLogs.scala:69)
    at com.some.company.name.RodgerPAMain$.com$sky$rodger$spark$pa$RodgerPAMain$$execute(RodgerPAMain.scala:43)
    at com.some.company.name.RodgerPAMain$delayedInit$body.apply(RodgerPAMain.scala:13)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at scala.App$$anonfun$main$1.apply(App.scala:71)
    at scala.App$$anonfun$main$1.apply(App.scala:71)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
    at scala.App$class.main(App.scala:71)
    at com.some.company.name.RodgerPAMain$.main(RodgerPAMain.scala:8)
    at com.some.company.name.RodgerPAMain.main(RodgerPAMain.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.io.IOException: Unable to create named pipe:C:\Users\blah\AppData\Local\Temp\spark-netezza-51420f0c-512e-47fe-aa89-5497832d116f
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:57)
    at com.ibm.spark.netezza.NetezzaDataReader.startExternalTableDataUnload(NetezzaDataReader.scala:67)
    at com.ibm.spark.netezza.NetezzaRDD$$anon$1.<init>(NetezzaRDD.scala:73)
    at com.ibm.spark.netezza.NetezzaRDD.compute(NetezzaRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot run program "mkfifo": CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
    at java.lang.Runtime.exec(Runtime.java:617)
    at java.lang.Runtime.exec(Runtime.java:450)
    at java.lang.Runtime.exec(Runtime.java:347)
    at com.ibm.spark.netezza.NetezzaUtils$.createPipe(NetezzaUtils.scala:51)
    ... 17 more
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
    at java.lang.ProcessImpl.create(Native Method)
    at java.lang.ProcessImpl.<init>(ProcessImpl.java:385)
    at java.lang.ProcessImpl.start(ProcessImpl.java:136)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
    ... 21 more
sureshthalamati commented 8 years ago

Thank you for reporting this issue. Did you try using Cygwin? I currently don't have a Windows setup to try the fix you suggested above.
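
If you do try Cygwin, a quick way to confirm that mkfifo is reachable from the JVM is something like this rough diagnostic sketch (not part of the connector):

    import scala.sys.process._
    import scala.util.Try

    // Returns true if a mkfifo executable is on the PATH, e.g. after adding
    // Cygwin's bin directory to the Windows PATH environment variable.
    val mkfifoAvailable = Try("mkfifo --version".!).map(_ == 0).getOrElse(false)
    println(s"mkfifo available on PATH: $mkfifoAvailable")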