I'm trying to load data into Databricks from ELK (Elasticsearch) via its API using a Python script. Can you suggest the best way to read a very large dataset, around 200 million records, in a single run? My current method handles only about 5 million records, and even that takes roughly 40 minutes. The job runs on a single core instead of spreading across the cluster cores in Databricks. Please share code if you have any.
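A common way to get this kind of read to parallelize across Databricks cluster cores is the elasticsearch-hadoop (elasticsearch-spark) connector, which maps Elasticsearch index shards to Spark partitions instead of pulling everything through one scroll on the driver. A minimal sketch, assuming the connector JAR is attached to the cluster and using placeholder host, index, and output path:

```python
# Sketch: parallel read from Elasticsearch into a Spark DataFrame using the
# elasticsearch-spark connector. Each ES shard becomes a Spark partition,
# so the read is distributed across executor cores rather than one core.
# "my-es-host", "my-index", and the output path are placeholders.
df = (
    spark.read
    .format("org.elasticsearch.spark.sql")
    .option("es.nodes", "my-es-host")        # placeholder ES host
    .option("es.port", "9200")
    .option("es.nodes.wan.only", "true")     # typical when ES is not co-located
    .option("es.scroll.size", "10000")       # docs fetched per scroll page
    .load("my-index")                        # placeholder index name
)

# Persist once to Delta so downstream jobs don't re-scroll Elasticsearch.
df.write.format("delta").mode("overwrite").save("/mnt/raw/my_index")
```

The degree of parallelism is bounded by the number of shards in the index, so a 200-million-document index on a single shard will still read on one task; more shards (or, with a plain Python client, the sliced scroll API) is what spreads the work.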
I don't think I understand the problem you're running into. This sounds like a good topic for discussion on https://discuss.elastic.co/ rather than a confirmed bug, right?
#Databricks #Hadoop #ApacheSpark
Thanks, Sasidhar