h2oai / sparkling-water

Sparkling Water provides H2O functionality inside a Spark cluster
https://docs.h2o.ai/sparkling-water/3.3/latest-stable/doc/index.html
Apache License 2.0
965 stars 360 forks

Memory usage blows up for GLM with many variables. #484

Closed axiomoixa closed 6 years ago

axiomoixa commented 6 years ago

When I run GLM with 15k variables, H2O needs less than 100GB to finish the job. However, when I run GLM with 21k variables (20k of which are interactions), H2O cannot finish even with 640GB of memory. Cluster Status in Flow shows that the free memory for each executor drops to 2-3GB, and then the executors exit with codes 143 & 52.

Spark version 2.2.0, H2O version 3.16.0.2, Sparkling Water version 2.2.4, R 3.3.3, sparklyr_0.7.0-9105

Spark was initiated with the following config:

config <- spark_config()                   
config$sparklyr.cores.local <- 2
config$spark.executor.extraJavaOptions <- "-XX:-UseGCOverheadLimit -XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -XX:-ResizePLAB -XX:+ParallelRefProcEnabled -XX:+AlwaysPreTouch -XX:MaxGCPauseMillis=100 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=15 -XX:G1NewSizePercent=1 -XX:G1MaxNewSizePercent=5 -XX:G1MixedGCLiveThresholdPercent=85 -XX:G1HeapWastePercent=2 -XX:InitiatingHeapOccupancyPercent=35"                                                                             
config$`sparklyr.shell.driver-memory` <- "8g"
config$`sparklyr.shell.executor-memory` <- "8g"
config$spark.yarn.executor.memoryOverhead <- "2g"
config$spark.yarn.driver.memoryOverhead <- "2g"
config$spark.executor.instances <- 36
config$spark.executor.cores <- 4                                                                                  
config$spark.executor.memory <- "20g"

stdout log from executor

12-06 16:12:34.661 10.114.134.134:54321  16017  #e Thread WARN: Unblock allocations; cache below desired, but also OOM: GC CALLBACK, (K/V:105.2 MB + POJO:14.79 GB + FREE:2.89 GB == MEM_MAX:17.78 GB), desiredKV=2.22 GB OOM!
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill %p"
#   Executing /bin/sh -c "kill 16017"...
12-06 16:13:53.834 10.114.134.134:54321  16017  #39:54321 ERRR: java.lang.OutOfMemoryError: Java heap space

stderr log from executor

17/12/06 16:00:36 WARN retry.RetryInvocationHandler: A failover has occurred since the start of this method invocation attempt.
17/12/06 16:00:36 WARN retry.RetryInvocationHandler: A failover has occurred since the start of this method invocation attempt.
17/12/06 16:00:36 WARN retry.RetryInvocationHandler: A failover has occurred since the start of this method invocation attempt.
java.lang.OutOfMemoryError: Java heap space
17/12/06 16:13:53 ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
17/12/06 16:13:53 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[UDP-TCP-READ-rkalsdatanode032.kau.roche.com/10.114.134.138:54321,10,main]
java.lang.OutOfMemoryError: Java heap space
17/12/06 16:13:53 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Heartbeat,10,main]
java.lang.OutOfMemoryError: Java heap space
17/12/06 16:13:53 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[FailedNodeWatchdogThread,5,main]
java.lang.OutOfMemoryError: GC overhead limit exceeded
17/12/06 16:13:53 INFO storage.DiskBlockManager: Shutdown hook called
17/12/06 16:13:53 INFO util.ShutdownHookManager: Shutdown hook called
17/12/06 16:13:53 INFO util.ShutdownHookManager: Deleting directory /srv/hdfs/disk4/yarn/nm/usercache/laic16/appcache/application_1511791866449_5610/spark-32b887f4-b7ee-41df-96fc-635d9c3c1a2d
mmalohlava commented 6 years ago

@axiomoixa are the stdout/stderr from the driver or an executor? If from the driver, you can try increasing the driver memory size (our recommendation is to keep the cluster memory configuration homogeneous).

axiomoixa commented 6 years ago

The stdout/stderr are from an executor. The memory usage seems strange: for 15k variables, 100GB is sufficient; for 21k (or the newly tested 17.5k) variables, even 640GB is insufficient. My gut feeling is that there is a bug somewhere...

jakubhava commented 6 years ago

@axiomoixa can you please share the logs from the driver? They would give us more information, thanks!

Kuba

axiomoixa commented 6 years ago

@jakubhava Here is the complete log: file374223f3f86b_spark.log

Thanks for looking into it

jakubhava commented 6 years ago

Thanks @axiomoixa, the driver log looks fine; it seems the exception happened on an executor. Would you also be able to obtain the full executor logs (not just stdout/stderr)?

Could you also try increasing sparklyr.shell.driver-memory to 20GB and sparklyr.shell.executor-memory to 20GB? I don't know RSparkling and sparklyr perfectly, but this seems to be the way to configure memory for both the driver and the executors (and currently it is inconsistent with spark.executor.memory).
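For reference, a minimal sketch of what that could look like on the sparklyr side (values are illustrative; they assume the cluster can actually allocate ~20g containers):

# sketch only: keep the shell memory flags consistent with spark.executor.memory
config$`sparklyr.shell.driver-memory` <- "20g"
config$`sparklyr.shell.executor-memory` <- "20g"
config$spark.executor.memory <- "20g"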

BTW: are you running in YARN client or cluster mode, and could you share your YARN memory configuration?

Thanks, Kuba

axiomoixa commented 6 years ago

I run in YARN client mode; the driver is not in the cluster. Due to hardware limitations, I don't have 20GB to allocate to the driver, so it stays at 8GB. But I did raise sparklyr.shell.executor-memory to 20GB and still reproduced the same problem. The log file of the executor that failed first is attached: rkalsdatanode033.kau.roche.com_8041.txt

Our YARN memory config is the following:

yarn.app.mapreduce.am.resource.mb = 2560 MB
ApplicationMaster Java Maximum Heap Size = 1711 MB
mapreduce.map.memory.mb = 2560 MB
Map Task Maximum Heap Size = 1711 MB
Reduce Task Maximum Heap Size = 1711 MB
mapreduce.job.heap.memory-mb.ratio = 0.8

Best,

jakubhava commented 6 years ago

I would suggest reading this Stack Overflow question and its answers; it explains Spark & YARN memory configuration well: https://stackoverflow.com/questions/43214762/how-to-solve-yarn-container-sizing-issue-on-spark.

In this case, I don't see yarn.scheduler.maximum-allocation-mb, which specifies the maximum amount of memory per container. By default it is set to 8192 MB, which is not enough in this case.

Can you please try increasing this, and maybe tune the rest of the YARN properties as well?
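For example (illustrative values only; they must also fit the physical memory of the NodeManager hosts, and the container cap has to cover the executor heap plus the memory overhead):

yarn.scheduler.maximum-allocation-mb = 24576
yarn.nodemanager.resource.memory-mb = 24576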

axiomoixa commented 6 years ago

Hi Kuba,

thanks for the link to the read-up. However, how is that issue related to mine? As far as I can tell, the issue in that Stack Overflow question had exit code 1, which pointed directly to the YARN scheduler. My exit codes 52 & 143 are out-of-memory codes, meaning the executor ran out of the 20GB it was allocated. So each executor has been allocated 20GB without issue; the problem is rather that when the number of variables in h2o.glm goes from 15k to 17.5k, the memory usage increases many-fold, so that memory blows up.

I believe the issue lies more in the memory usage of the GLM algorithm than in resource allocation, as it feels strange to me that a job could finish with 100GB of memory before, but runs out of 640GB when the number of variables increases by less than 20%. Or have I misunderstood something?

Best regards,

jakubhava commented 6 years ago

Yup, this makes sense. I just wanted to be sure your memory on YARN is fine.

Can you please try reproducing the issue in raw H2O? You can see how to start H2O on Hadoop here: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/downloading.html?highlight=hadoop#install-on-hadoop. Then you can use either the R or Python client (or Flow, our user interface) to communicate with the cluster.
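For reference, a minimal sketch of attaching the R client to such a cluster (the IP/port are placeholders for whatever h2odriver prints when the cluster comes up):

# sketch only: connect the R client to an existing H2O-on-Hadoop cluster instead of starting a local one
library(h2o)
h2o.init(ip = "10.114.134.134", port = 54321, startH2O = FALSE)  # placeholder address
# then h2o.importFile() the training data and rerun the same h2o.glm() call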

If we can reproduce the issue on raw H2O without Spark, it makes it simpler to debug as Sparkling Water lives in a complex environment. Thanks!

axiomoixa commented 6 years ago

@jakubhava The computation still broke, apparently for the same reason. application_1511791866449_9382.zip

Thanks for looking into it

jakubhava commented 6 years ago

No worries.

I would like to let H2O folks know about this issue.

One more thing - could you please try the same on the latest H2O version 3.16.0.2 and share the logs again? I suspect it will appear there as well, but it is good to test on the latest.

Could you also share the command-line options you use to start H2O on Hadoop, the code (if it's confidential, please share as much as you can), and the shape of the data?

Thanks!

Kuba

axiomoixa commented 6 years ago

@jakubhava as you predicted, the result is the same. Attached is the log application_1511791866449_9410.zip

The launching code:

hadoop jar /opt/third-party-software/software/h2oai/h2o/h2odriver.jar -Dmapreduce.job.queuename=root.poc -nodes 12 -mapperXmx 20g -output /applications

The relevant R code:

h2o.glm(y = var.Y, weights_column = var.weight,
        training_frame = HF.train,
        validation_frame = HF.test,
        family = "gaussian",
        solver = "IRLSM",
        x = var.X,
        interactions = var.Interact,
        lambda_search = FALSE,
        nlambdas = 100L,
        alpha = 1, lambda = eval(Lambda.2))
length(var.Interact) 
[1] 201

length(var.X)
[1] 1376

dim(HF.train)
[1] 41463  1483

dim(HF.test)
[1] 6629 1483

Note that when var.Interact was only 174 long, it ran fine.

michalkurka commented 6 years ago

When I run GLM with 15k variables, H2O needed less than 100GB to finish the job, however when I run GLM with 21k variables (20k of which are interactions), H2O could not finish even with 640GB memory.

@axiomoixa I think adding more memory horizontally (more executor instances) won't help. You are adding more executor instances, but the amount of memory per executor seems low to me (20G). I think you should consider a smaller number of more powerful instances. I would try increasing the per-executor memory to 40G and go up from there.
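A sketch of what that might look like in the sparklyr config (numbers are illustrative and assume the YARN container limit is raised accordingly):

# sketch only: fewer, larger executors instead of many small ones
config$spark.executor.instances <- 12                  # was 36
config$spark.executor.cores <- 4
config$spark.executor.memory <- "40g"                  # was 20g
config$spark.yarn.executor.memoryOverhead <- "4g"      # scale overhead with the heap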

axiomoixa commented 6 years ago

@michalkurka Our sysadmin is on vacation already. I will make the suggestion of increasing the executor memory limit to 40G when he is back, and try again.

Before then, I wish you all a merry Christmas and a smooth slide into the new year.

jakubhava commented 6 years ago

@axiomoixa any updates on this, please, or is it still too soon to ask :) ?

axiomoixa commented 6 years ago

@jakubhava Our sysadmin is still away this week. I am hoping to run some tests next week on the new configuration. However, a positive result would mean that the IRLSM algorithm does not scale horizontally (across nodes) with the number of variables. So even if I ran the computation under Spark MLlib with IRLSM, I would hit the same limitation?

jakubhava commented 6 years ago

@axiomoixa thanks for the update; please let us know about the result. I have never used IRLSM in Spark MLlib, so it is hard to say how it behaves on big data at this stage.

axiomoixa commented 6 years ago

@michalkurka @jakubhava Here is an update with the new configuration. The executor memory has been raised from 20G to 38G. Two models, with 15,327 and 17,666 variables respectively, were built; the former is known to succeed with the original configuration, while the latter is known to fail. The result with the new configuration repeats that of the old: the computation finished fine with 15,327 variables but failed with 17,666. The same tests were performed 3 times to ensure reproducibility.

A cluster of 6 executors each with 38GB was formed. Everything else remained the same.

Attached is the log application_1516009537818_0494.zip

Here is the console output.

Train model.2 
  |=                                                                     |   2%Error in .h2o.doSafeREST(h2oRestApiVersion = h2oRestApiVersion, urlSuffix = urlSuffix,  : 
  Unexpected CURL error: Failed connect to 10.136.192.213:54321; Connection refused
Calls: h2o.glm ... tryCatchOne -> doTryCatch -> .h2o.doSafeGET -> .h2o.doSafeREST
Execution halted

Best regards,

axiomoixa commented 6 years ago

After setting config$spark.network.timeout <- 600, I get the following console output:

Train model.2 
  |==                                                                                     |   2%

java.lang.OutOfMemoryError: Java heap space

java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOfRange(Arrays.java:3736)
    at hex.gram.Gram.cholesky(Gram.java:504)
    at hex.glm.GLM$GramSolver.computeCholesky(GLM.java:1482)
    at hex.glm.GLM$GramSolver.<init>(GLM.java:1472)
    at hex.glm.GLM$GLMDriver.ADMM_solve(GLM.java:604)
    at hex.glm.GLM$GLMDriver.fitLSM(GLM.java:653)
    at hex.glm.GLM$GLMDriver.fitModel(GLM.java:943)
    at hex.glm.GLM$GLMDriver.computeSubmodel(GLM.java:1029)
    at hex.glm.GLM$GLMDriver.computeImpl(GLM.java:1098)
    at hex.ModelBuilder$Driver.compute2(ModelBuilder.java:206)
    at hex.glm.GLM$GLMDriver.compute2(GLM.java:543)
    at water.H2O$H2OCountedCompleter.compute(H2O.java:1263)
    at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
    at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)

Error: java.lang.OutOfMemoryError: Java heap space

For one reason or another, the computation cannot finish.

> sessionInfo()
R version 3.3.3 (2017-03-06)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Red Hat Enterprise Linux Server 7.2

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C               LC_TIME=en_US.UTF-8       
 [4] LC_COLLATE=en_US.UTF-8     LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                  LC_ADDRESS=C              
[10] LC_TELEPHONE=C             LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] h2o_3.16.0.4         data.table_1.10.4-3  sparklyr_0.6.4-9904  rddlist_0.1.0       
 [5] dplyr_0.7.4.9000     rsparkling_0.2.3     ACI3_0.0.0.9000      RevoUtilsMath_10.0.0
 [9] RevoUtils_10.0.3     RevoMods_11.0.0      MicrosoftML_1.3.0    RevoScaleR_9.1.0    
[13] lattice_0.20-34      rpart_4.1-10        

loaded via a namespace (and not attached):
 [1] tidyselect_0.2.3.9000  purrr_0.2.4.9000       reshape2_1.4.2         htmltools_0.3.6       
 [5] yaml_2.1.14            CompatibilityAPI_1.1.0 base64enc_0.1-3        rlang_0.1.6.9003      
 [9] foreign_0.8-67         glue_1.1.1             withr_1.0.2            DBI_0.6-1             
[13] dbplyr_1.1.0           bindrcpp_0.2           foreach_1.4.3          bindr_0.1             
[17] plyr_1.8.4             stringr_1.2.0          SparkDriver_0.1.0      devtools_1.12.0       
[21] codetools_0.2-15       psych_1.6.12           memoise_1.1.0          knitr_1.16            
[25] httpuv_1.3.3           mrupdate_1.0.1         curl_2.3               parallel_3.3.3        
[29] broom_0.4.2            Rcpp_0.12.15           readr_1.2.0            xtable_1.8-2          
[33] openssl_0.9.6          backports_1.0.5        jsonlite_1.5           config_0.2            
[37] mime_0.5               mnormt_1.5-5           hms_0.3                digest_0.6.12         
[41] stringi_1.1.5          shiny_1.0.2            grid_3.3.3             rprojroot_1.2         
[45] tools_3.3.3            bitops_1.0-6           magrittr_1.5           lazyeval_0.2.0        
[49] RCurl_1.95-4.8         tibble_1.3.3           tidyr_0.7.2            pkgconfig_2.0.1       
[53] assertthat_0.2.0       httr_1.3.1             rstudioapi_0.6         iterators_1.0.8       
[57] R6_2.2.0               nlme_3.1-131           git2r_0.18.0       

application_1516009537818_2775.zip

jakubhava commented 6 years ago

Still an out-of-memory issue. Could you, just to try it out, increase the memory limits to some really high values so that at least we can see it works?

btw: @michalkurka I'm not that familiar with the part of the code in the stack trace above - do you think it's possible we have some memory leaks or unnecessary use of memory in that code?

michalkurka commented 6 years ago

17k variables is not a small problem to solve; it is failing in a place where the Cholesky decomposition of a dense matrix is being computed. Right now I agree with @jakubhava - a good strategy is to keep increasing the memory of a single node until it stops failing; adding more nodes will not help.
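As a rough back-of-envelope (my assumption about where the memory goes, based on the hex.gram.Gram.cholesky frames in the stack trace above): the dense Gram matrix alone grows quadratically with the effective number of columns after categorical/interaction expansion, and it has to fit on a single node.

# rough sizing sketch in R: dense p x p Gram matrix of 8-byte doubles
p <- 17666                     # nominal column count; the expanded design can be much wider
p^2 * 8 / 1024^3               # ~2.3 GB for the Gram matrix alone; Cholesky and per-iteration copies add to this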

There might be a data science workaround to this problem as well. Are all the interactions needed? Can you eliminate some of them based on the first successful run and then run GLM again?
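A possible sketch of that workaround, assuming the smaller successful model is still available (h2o.coef() returns the named GLM coefficients; the threshold and the name matching below are heuristics, not an official recipe):

# sketch only: keep a column in the interaction set only if some coefficient mentioning it carried signal
coefs <- h2o.coef(model.15k)   # model.15k = hypothetical name of the successful 15,327-variable model
keep <- vapply(var.Interact,
               function(v) any(abs(coefs[grepl(v, names(coefs), fixed = TRUE)]) > 1e-8),
               logical(1))
var.Interact.reduced <- var.Interact[keep]
# rerun h2o.glm() with interactions = var.Interact.reduced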

We will not be able to look more into this issue until 3.18 is finalized.

michalkurka commented 6 years ago

@axiomoixa if this problem is pressing and you are a customer of H2O please log the issue on the support portal as well, thank you!

axiomoixa commented 6 years ago

@jakubhava @michalkurka the Spark environment is shared with others, so it is not that convenient to experiment with various settings. It was already a favour from the sysadmin that he raised the YARN scheduler memory limit from 24GB to 42GB, which is how I got to test a 38GB executor. The goal is in fact to solve a 47k-variable problem, so I am unfortunately not quite there yet. However, if the problem comes from the Cholesky decomposition, how come increasing the executor memory from 20GB to 38GB did not help at all? Is there another memory limit somewhere it is not supposed to be, such as the 64KB Java bytecode limit? If the problem were solely a matrix calculation, one should see the limit of solvability scale with the resources available, but that appeared not to be the case.

What can one expect from 3.18 in regards to solving large GLM with IRLSM?

Thanks,

michalkurka commented 6 years ago

Your logs indicate that the JVM ran out of memory. They do not indicate that you are hitting a Java limit.

We have tested H2O with 100k+ columns (with a majority of them being interactions - virtual interactions, not explicit interactions, to be precise).

There will be no major improvements to GLM in 3.18 compared to 3.16. One thing that might benefit you is support for specifying an explicit list of pairwise interactions.
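If it lands in the shape I would expect in the R client (an assumption on my part - the argument name and its form should be checked against the 3.18 documentation once released), usage could look roughly like:

# hypothetical sketch: request only an explicit list of pairwise interactions
h2o.glm(x = var.X, y = var.Y, training_frame = HF.train,
        family = "gaussian", solver = "IRLSM",
        interaction_pairs = list(c("age", "dose"), c("age", "region")))  # placeholder pairs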

Can you share anonymized data and the script that drives the model training so that we can reproduce?

axiomoixa commented 6 years ago

Attached is a reproduction of the same workflow with generated data (I cannot share the original data set). The problem is unfortunately not exactly reproducible due to different data complexity, and the parameters that separate successful runs from failing runs are also unclear. However, if there is a Spark configuration that allows this computation to finish with about the same number of variables, I would very much like to see it. reproduce.R.zip

I will in parallel test this computation on a large single machine, to rule out that the issue is coming from H2O itself. My gut feeling is that the computation is hitting some limit in sparkling-water.
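For the single-machine test, a minimal sketch of how I plan to start a standalone H2O with a big heap from R (the heap size is a placeholder for whatever the box actually has):

# sketch only: single-node H2O, no Spark involved
library(h2o)
h2o.init(nthreads = -1, max_mem_size = "500g")   # placeholder heap size
# then h2o.importFile() the generated data and rerun the same h2o.glm() call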

axiomoixa commented 6 years ago

When the computation with 50k+ variables is run on a single machine with large RAM, I get the following.

  |=                                                   |   2%
ERROR: Unexpected HTTP Status code: 500 Server Error (url = http://localhost:54321/3/Jobs/$03017f00000132d4ffffffff$_8b124584c7cde6dbb63fb629198045e3)

Fehler: lexical error: invalid char in json text.
                                       <html> <head> <meta http-equiv=
                     (right here) ------^

GLM with 27k variables on a single machine finished fine.

Tagar commented 6 years ago

We've run into similar OOM issues with GLM on a dataset with 6k columns. @axiomoixa how did you solve this? Thanks.

axiomoixa commented 6 years ago

@Tagar

We are still testing the limits with various settings. So far we have seen two different ways H2O can fail: either memory blows up or the connection simply drops. My colleagues have pointed out that increasing the shell memory limits helps in the latter scenario.

    config$`sparklyr.shell.driver-memory` <- "8g"
    config$`sparklyr.shell.executor-memory` <- "8g"

Let me know how it works out for you.

michalkurka commented 6 years ago

GLM with 27k variables on a single machine finished fine.

@axiomoixa Is that the original problem you wanted to solve?

H2O scales horizontally with the size (number of rows) of the dataset. Matrix operations (like the Cholesky decomposition) are not distributed. If you have a big problem in terms of the number of variables, you will need more memory per node.

As a general reminder for people who find this issue: please note that we recommend setting the driver memory to the same value as the executor memory. The reason is that the driver can actually do some of the non-distributed computation (like the matrix algebra). @axiomoixa I would recommend doing the same in your R script as well.

NkululekoThangelane commented 6 years ago

Hi, I get a similar Java heap space error when trying to score new data. The new data has fewer than 1,000 records, yet Sparkling Water keeps running into this GC memory error. I have increased executor memory to 60GB for 5 executors, but I did not think this model would need that much memory to score such a small data set.

conf.set('spark.driver.memory', '50G')
conf.set("spark.driver.cores", "10")
conf.set("spark.executor.instances", "5")
conf.set("spark.yarn.queue", QUEUE)
conf.set("spark.executor.cores", "10")
conf.set("spark.executor.memory", '60G')
conf.set("spark.dynamicAllocation.enabled", "false")
conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
conf.set("spark.scheduler.minRegisteredResourcesRatio", "1")
conf.set("spark.locality.wait", "0")
conf.set("spark.sql.crossJoin.enabled", "true")
conf.set("spark.yarn.executor.memoryOverhead", "60G")
conf.set("spark.driver.maxResultSize", "50G")

java.lang.OutOfMemoryError: Java heap space
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
    at java.lang.Class.getDeclaredMethod(Class.java:2128)
    at java.io.ObjectStreamClass.getInheritableMethod(ObjectStreamClass.java:1442)
    at java.io.ObjectStreamClass.access$2200(ObjectStreamClass.java:72)
    at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:508)
    at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:472)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:472)
    at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:369)
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:598)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1843)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
    at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1$$anonfun$apply$1.apply(NettyRpcEnv.scala:259)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:308)
    at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1.apply(NettyRpcEnv.scala:258)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:257)

michalkurka commented 6 years ago

@NkululekoThangelane please create a separate GitHub issue for this problem - it doesn't seem related to this one. This OOM happens when Spark moves data around, while H2O uses different channels; I would therefore assume it is a different issue and decouple these two problems. Once the new issue is created I will remove the comments to keep this issue organized. Thanks for understanding.

jakubhava commented 6 years ago

@axiomoixa I'm just checking - did you get a chance to try @michalkurka's proposal in the last relevant comment?

NkululekoThangelane commented 6 years ago

@michalkurka No problem, I was able to create a new issue for this.

jakubhava commented 6 years ago

Closing this issue as there is no further input; please feel free to re-open if needed.