-
For several values of `-fp`, checking this spec fails with `: Failed to recover the next state from its fingerprint.` (see below):
```tla
---- MODULE Github1045 ----
EXTENDS Naturals
CONSTANT …
```
lemmy updated 3 weeks ago
-
We use a cluster with 1 master and 3 workers. Each worker has 128 vcores and 512 GB DRAM. Vanilla Spark can successfully run TPC-DS F10TB, while Gazelle fails while running q14a,b. If we inc…
-
### Backend
VL (Velox)
### Bug description
When I use Velox as the backend execution engine to test TPC-H, the output result is unstable. Sometimes it pr…
-
### Backend
VL (Velox)
### Bug description
It seems that writing too many Hive partitions causes `Not enough spark off-heap execution memory`
### Spark version
None
### Spark configurations
_No…
-
**Describe the bug**
```
java.lang.RuntimeException: Error during calling Java code from native code: java.lang.UnsupportedOperationException: Not enough spark off-heap execution memory. Acquired: 8…
-
The atomic reference used to return the previous mapping by side effect from the functions used in EhcacheWithLoaderWriter methods is incorrectly memoized. This means the statistics reported for a c…
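The faulty-memoization pattern described here can be illustrated with a minimal, self-contained Java sketch. The names (`StaleMemoDemo`, `putReturningPrevious`) are hypothetical and do not reflect Ehcache's actual internals; the sketch only shows how an `AtomicReference` that is set once, rather than on every call, leaves later callers observing the first call's "previous value":

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class StaleMemoDemo {
    // Hypothetical backing store standing in for the cache's mappings.
    static final Map<String, String> store = new HashMap<>();

    // Buggy pattern: the AtomicReference meant to report the previous mapping
    // as a side effect is only ever set on the FIRST call (compareAndSet from
    // null), so it is effectively memoized and never updated again.
    static String putReturningPrevious(String key, String value,
                                       AtomicReference<String> prev) {
        prev.compareAndSet(null, store.get(key)); // wrong: should be prev.set(...)
        return store.put(key, value);
    }

    public static void main(String[] args) {
        AtomicReference<String> prev = new AtomicReference<>();
        store.put("k", "v1");
        putReturningPrevious("k", "v2", prev);
        System.out.println(prev.get()); // v1 (correct, by luck, on the first call)
        putReturningPrevious("k", "v3", prev);
        System.out.println(prev.get()); // still v1, but the previous value was v2
    }
}
```

Any statistics derived from such a stale "previous value" (e.g. distinguishing puts that replaced an entry from puts that created one) would be wrong from the second call onward, which matches the symptom described.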
-
### Backend
VL (Velox)
### Bug description
One common issue with Gluten is that the task is killed by YARN. Currently, some of Gluten's memory allocations (e.g. via std::vector) aren't tracked by Spark's memor…
-
We triggered a stability test for native-sql-engine and ran TPC-DS for 5 rounds. The cluster contains 3 workers, each with 512 GB DRAM. The configuration in spark-defaults.conf is shown below:
spa…
-
* Document the Realtime provisioning helper better.
* Include the arguments and assumptions (whether defaults or deductions from the table config) in the output of the tool. For example, if we assume that this…
-
I am trying to dump the data from a heap (in-memory) cache to a disk cache via an iterator; this happens every 30 minutes. While trying to do that, I get an Out of Memory error. Not sure if this is due to cache corrup…
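One way such a periodic dump can stay within a bounded heap footprint is to stream entries out one at a time rather than materializing them in a collection first. The sketch below assumes a generic `Iterator` over cache entries and a TSV output file; the class and method names are hypothetical and not Ehcache's API:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;
import java.util.Map;

public class CacheDumper {
    // Hypothetical sketch: write each entry as it is produced by the iterator,
    // so heap usage stays bounded regardless of the cache's size.
    static void dump(Iterator<Map.Entry<String, String>> it, Path out)
            throws IOException {
        try (BufferedWriter w = Files.newBufferedWriter(out)) {
            while (it.hasNext()) {
                Map.Entry<String, String> e = it.next(); // one entry at a time
                w.write(e.getKey() + "\t" + e.getValue());
                w.newLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> heapCache = Map.of("k1", "v1", "k2", "v2");
        Path out = Files.createTempFile("cache-dump", ".tsv");
        dump(heapCache.entrySet().iterator(), out);
        System.out.println(Files.readAllLines(out).size()); // 2
    }
}
```

If the OOM persists even with a streaming dump, the allocation pressure is likely coming from the cache or iterator implementation itself rather than from the dump loop, which would point back at the suspected corruption.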