-
# 🚀 Feature & Motivation
PyTorch/XLA recently launched PyTorch/XLA SPMD ([RFC](https://github.com/pytorch/xla/issues/3871), [blog](https://pytorch.org/blog/pytorch-xla-spmd/), [docs/spmd.md](https:…
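The excerpt is cut off above, but for context, the user-facing API from the linked blog and docs boils down to annotating tensors with a sharding over a logical device mesh. A minimal sketch follows; the mesh shape and partition spec are illustrative, and the exact module path has moved between releases (`torch_xla.experimental.xla_sharding` in early releases, `torch_xla.distributed.spmd` later), so treat this as a sketch rather than the canonical usage:
```python
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs  # earlier: torch_xla.experimental.xla_sharding

xr.use_spmd()  # opt this process into SPMD execution mode

# Lay out every addressable device in a 2D logical mesh.
num_devices = xr.global_runtime_device_count()
mesh = xs.Mesh(np.arange(num_devices), (num_devices, 1))

# Shard dim 0 of the tensor across mesh axis 0 and dim 1 across axis 1;
# the XLA compiler then partitions every computation that consumes it.
t = torch.randn(16, 128, device=xm.xla_device())
xs.mark_sharding(t, mesh, (0, 1))
```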
-
Make sure that you can run NONMEM correctly without PsN, i.e. try `C:\nm7C\ver7_20100223\gf\ref\run\nmfe7 run1.mod run1.lst`
_Originally posted by @rikardn in https://github.com/U…
-
### 🐛 Describe the bug
This code is extracted from a portion of [`torch._decomp.decompositions.native_batch_norm`](https://github.com/pytorch/pytorch/blob/b6d6a78c12e5869d0c738456e28155a3a2554ece/t…
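For readers unfamiliar with the decomposition machinery the excerpt refers to: decompositions are registered in a table keyed by ATen overloads, so the function in question can be looked up and invoked like the op itself. A rough sketch, with arbitrary shapes and hyperparameters chosen purely for illustration:
```python
import torch
from torch._decomp import decomposition_table

# Decompositions are registered per ATen overload; fetch the one backing
# native_batch_norm and call it directly.
decomp = decomposition_table[torch.ops.aten.native_batch_norm.default]

x = torch.randn(2, 3, 4, 4)
weight, bias = torch.ones(3), torch.zeros(3)
running_mean, running_var = torch.zeros(3), torch.ones(3)

# Argument order: (input, weight, bias, running_mean, running_var,
#                  training, momentum, eps)
out, save_mean, save_rstd = decomp(x, weight, bias,
                                   running_mean, running_var,
                                   True, 0.1, 1e-5)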
-
GCC allows specifying the number of threads to use for parallelized linking with LTO. Since GCC 10.x this can be set automatically with `-flto=auto`, which will use either GNU Make's job server or…
-
**Is your feature request related to a problem? Please describe.**
According to the current design, a test run becomes active (in progress) as soon as it is created.
This is not always the desired behaviour.
…
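One hypothetical way to express the requested behaviour is an explicit non-active initial state. All names below are illustrative, not taken from the project's codebase:
```python
from enum import Enum

class RunStatus(Enum):
    # Illustrative states only.
    DRAFT = "draft"            # created, but not yet started
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"

def create_test_run(start_immediately: bool = False) -> RunStatus:
    # Letting the caller opt out of auto-activation addresses the request:
    # a run can exist without immediately counting as "in progress".
    return RunStatus.IN_PROGRESS if start_immediately else RunStatus.DRAFT

print(create_test_run())                        # RunStatus.DRAFT
print(create_test_run(start_immediately=True))  # RunStatus.IN_PROGRESS
```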
-
I know that it's just the way things have progressed in the benchmarking game, but most of the "single-core" speed results use auto-parallelization and do not truly represent single-core performance. …
-
In the case of dataframe-based task code, it is sometimes possible to batch computation based on some grouping columns. This can be used to reduce the memory footprint (not the whole input and output data…
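The excerpt is truncated, but as a rough illustration of the idea: process one group at a time so that only a single group's intermediate computation is live at once (outputs could also be written out incrementally instead of concatenated). pandas and the `process_in_batches` helper are assumptions standing in for the actual task code:
```python
import pandas as pd

def process_in_batches(df: pd.DataFrame, group_cols, fn) -> pd.DataFrame:
    """Apply fn one group at a time, keeping only one group's
    intermediate state in memory at any moment."""
    parts = []
    for _, group in df.groupby(group_cols, sort=False):
        parts.append(fn(group))
    return pd.concat(parts, ignore_index=True)

# Example: per-customer aggregation batched by the grouping column.
df = pd.DataFrame({"customer": ["a", "a", "b"], "amount": [1.0, 2.0, 3.0]})
result = process_in_batches(df, "customer",
                            lambda g: g.assign(total=g["amount"].sum()))
```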
-
### Is your feature request related to a problem?
While developing #5847, I noticed that the pytests were taking quite a while to run on my machine (which is a pretty weak old desktop).
### Describe…
-
Right now parallel analyses do their own sort of pickling/unpickling. While this gives each analysis more control and possibly skips unneeded pickle/unpickle cruft, it has a number of drawbacks (see the sketch after this list):
- S…
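For contrast, a sketch of what centralized serialization could look like: a standard executor pickles arguments and results uniformly, so individual analyses carry no (un)pickling code of their own. The `analyze` function is a stand-in, not part of the actual codebase:
```python
from concurrent.futures import ProcessPoolExecutor

def analyze(item):
    # Stand-in for one of the parallel analyses; it receives and returns
    # plain objects, with no pickling logic of its own.
    return item * item

if __name__ == "__main__":
    # The executor serializes arguments and results itself, giving every
    # analysis the same, centrally maintained (un)pickling path.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze, range(8)))
    print(results)
```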
-
To reproduce it on `master` (2c51057013e102f7e5364a7242d66a65ecd524bc):
```sql
SET citus.shard_replication_factor TO 1;
SET log_min_messages TO DEBUG4;
SET client_min_messages TO DEBUG4;
SET c…