[Open] dpdrmj opened this issue 1 year ago
Hey @dpdrmj 👋! Thank you so much for reporting the issue/feature request 🚨. Someone from the SynapseML Team will be looking to triage this issue soon. We appreciate your patience.
Hello everyone! Can someone please help here? Does anyone know what could've caused this?
val (trainingData, validationData) =
  if (get(validationIndicatorCol).isDefined && dataset.columns.contains(getValidationIndicatorCol))
    // Rows where the indicator column is false become the training split;
    // rows where it is true are preprocessed, collected to the driver,
    // and broadcast to every executor as the validation set.
    (df.filter(x => !x.getBoolean(x.fieldIndex(getValidationIndicatorCol))),
      Some(sc.broadcast(preprocessData(df.filter(x =>
        x.getBoolean(x.fieldIndex(getValidationIndicatorCol)))).collect())))
If the validationData is large, the collect() uses a lot of driver memory; you need to set driver.memory and executor.memory very high.
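For example, a minimal sketch of those settings (the sizes are illustrative assumptions, not recommendations, and spark.driver.maxResultSize is included because a large collect() typically trips its default limit first):

import org.apache.spark.SparkConf

// Illustrative sizes only. Note that spark.driver.memory takes effect only
// when set before the driver JVM starts (spark-submit flag or
// spark-defaults.conf), not from inside a running application.
val conf = new SparkConf()
  .set("spark.driver.memory", "200g")      // the collected validation rows land here
  .set("spark.executor.memory", "200g")    // each executor also caches the broadcast copy
  .set("spark.driver.maxResultSize", "0")  // lift the 1g default that a large collect() hits first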
Setting driver.memory and executor.memory very high can fix it, but it is slow and consumes a lot of resources. I hope the SynapseML team finds a new way to rewrite the code and replace the collect(); a possible user-side workaround is sketched below.
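A minimal workaround sketch, not the library's own fix: cap how many rows carry the validation flag before calling fit(), so the driver-side collect() shown above sees far fewer rows. The column names ("label", "features", "isVal") and the 0.05 sampling fraction are illustrative assumptions.

import com.microsoft.azure.synapse.ml.lightgbm.LightGBMClassifier
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

def fitWithCappedValidation(df: DataFrame) = {
  // Training rows: indicator column is false.
  val training = df.filter(!col("isVal"))
  // Validation rows: keep only a small sample, since everything flagged
  // true is collected to the driver and broadcast by the library.
  val validation = df.filter(col("isVal"))
    .sample(withReplacement = false, fraction = 0.05, seed = 42L)

  new LightGBMClassifier()
    .setLabelCol("label")
    .setFeaturesCol("features")
    .setValidationIndicatorCol("isVal")
    .fit(training.unionByName(validation))
}

This trades early-stopping fidelity (a smaller validation set) for driver stability; the fraction should be tuned to whatever fits comfortably in driver memory.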
SynapseML version
com.microsoft.azure:synapseml_2.12:0.11.3
System information
Describe the problem
While trying to use LightGBMClassifier, the program always crashes with a "connection refused" error. The same code works completely fine if the data size is smaller. My training data has ~500 million rows, and there is no way this is happening because of memory issues, as I'm using 7 executors, each with 256 GB of memory. I tried changing some params as well: I tried without executionMode='streaming', and I tried using useBarrierExecutionMode=True, but it doesn't work. It looks like this is a frequent issue that people face with large data sizes, and I have not found any solution to it. Does the following give any hint about the problem?
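For reference, a hedged Scala sketch of the configurations described above (the PySpark arguments map to the same params; "label" and "features" are illustrative column names, and the setters are assumed from the synapseml 0.11.x API):

import com.microsoft.azure.synapse.ml.lightgbm.LightGBMClassifier

val classifier = new LightGBMClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setExecutionMode("streaming")      // the crash reproduces with and without this
  .setUseBarrierExecutionMode(true)   // tried toggling barrier mode as well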
Code to reproduce issue
Other info / logs
What component(s) does this bug affect?
- area/cognitive: Cognitive project
- area/core: Core project
- area/deep-learning: DeepLearning project
- area/lightgbm: Lightgbm project
- area/opencv: Opencv project
- area/vw: VW project
- area/website: Website
- area/build: Project build system
- area/notebooks: Samples under notebooks folder
- area/docker: Docker usage
- area/models: models related issue

What language(s) does this bug affect?
- language/scala: Scala source code
- language/python: Pyspark APIs
- language/r: R APIs
- language/csharp: .NET APIs
- language/new: Proposals for new client languages

What integration(s) does this bug affect?
- integrations/synapse: Azure Synapse integrations
- integrations/azureml: Azure ML integrations
- integrations/databricks: Databricks integrations