Kuwala is the no-code data platform for BI analysts and engineers that enables you to build powerful analytics workflows. We set out to bring the state-of-the-art data engineering tools you love, such as Airbyte, dbt, and Great Expectations, together in one intuitive interface built with React Flow. In addition, we make it easy to integrate third-party data into data science models and products, with a focus on geospatial data. Currently, the following data connectors are available worldwide: a) high-resolution demographics data, b) Points of Interest from OpenStreetMap, c) Google Popular Times.
Pull request #111 failed because a segmentation fault always occurred.
The segfault is not a simple memory allocation issue; it comes from R's low-level system…
Hence I proposed a change of plan (see the sketch after this list):
1) Create a preloaded .r file with the Robyn syntax that we can edit via Python,
2) Ask the user for the hyperparameters via Python,
3) Since we have control over the preloaded .r file, we know exactly where the training results, graphs, etc. will be,
4) Run `os.system("Rscript preloaded_file.r")`.
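As a minimal sketch of that flow, assuming a hypothetical template file `preloaded_template.r` containing the Robyn syntax with `{{name}}`-style placeholders for the hyperparameters (the function name, placeholder format, and hyperparameter names below are illustrative, not the actual PR code):

```python
import os

def run_robyn(hyperparameters: dict, template_path: str = "preloaded_template.r"):
    # Read the preloaded .r file containing the Robyn syntax
    with open(template_path) as f:
        script = f.read()

    # Inject the user-provided hyperparameters into the R script;
    # explicit {{...}} markers avoid clashing with R's own braces
    for name, value in hyperparameters.items():
        script = script.replace("{{" + name + "}}", str(value))

    with open("preloaded_file.r", "w") as f:
        f.write(script)

    # Because we control the script, we know exactly where the
    # training results and graphs will be written afterwards
    os.system("Rscript preloaded_file.r")

# Example: hyperparameters collected from the user via Python
run_robyn({"alpha_low": 0.5, "alpha_high": 3.0})
```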
The similar PR #119 was still not tidy because some cleanup commits got merged into it, hence this cleaner PR.
In this PR, I added an automation script to load Robyn's demo data from Postgres, initiate the model fitting using the modified .r file as a proof of concept, and save the fitting results back to Postgres ^^.
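For illustration, a rough sketch of what such an automation script could look like, assuming pandas and SQLAlchemy; the connection string, table names (`robyn_demo_data`, `robyn_results`), and CSV hand-off files are placeholders rather than the actual code in this PR:

```python
import os
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost:5432/kuwala")

# 1) Load Robyn's demo data from Postgres and hand it to the R script as CSV
demo_data = pd.read_sql_table("robyn_demo_data", engine)
demo_data.to_csv("robyn_input.csv", index=False)

# 2) Kick off the model fitting through the preloaded .r file
os.system("Rscript preloaded_file.r")

# 3) Because the .r file is under our control, the result location is fixed;
#    read the fitting result back and persist it to Postgres
result = pd.read_csv("robyn_output.csv")
result.to_sql("robyn_results", engine, if_exists="replace", index=False)
```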