Open · shesadri opened this issue 4 years ago
We are using Spark 2.3.0 with Hadoop 3 to fetch records from a Hive table. While using the Hive connector library, we hit an issue where only 1000 records are fetched, even though millions of records are eligible for the query we pass. Is there any way to override this limit so we can retrieve more records?

Reply: I saw this issue a while ago. `execute` appears to run results through the driver (and raising that limit can cause an OOM), so it should be used primarily for catalog operations. The solution was to use `executeQuery` instead.
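To illustrate the suggested fix, here is a minimal sketch assuming the Hortonworks Hive Warehouse Connector (HWC) API, which exposes both `execute` and `executeQuery` on a `HiveWarehouseSession`. The database and table names are hypothetical; the session setup follows HWC's documented builder pattern, but exact configuration depends on your cluster.

```scala
// Sketch assuming the Hortonworks Hive Warehouse Connector (HWC).
// `my_db.my_table` is a placeholder; `spark` is an existing SparkSession
// configured with the HWC jar and LLAP/HiveServer2 endpoints.
import com.hortonworks.hwc.HiveWarehouseSession

val hive = HiveWarehouseSession.session(spark).build()

// execute() pulls results through the driver and is capped
// (1000 rows by default) -- intended for catalog/DDL-style calls.
val capped = hive.execute("SELECT * FROM my_db.my_table")

// executeQuery() runs the query on the executors via LLAP and
// returns the full result set as a DataFrame, with no 1000-row cap.
val full = hive.executeQuery("SELECT * FROM my_db.my_table")
full.count()
```

Note this requires a running Hive/LLAP cluster and cannot be executed standalone; the key point is simply swapping `execute` for `executeQuery` for large result sets rather than raising the driver-side limit.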