-
(python3-venv) aarch64_sh ~> cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 --model=dlrm_v2-99 --implementation=reference --framework=pytorch --category=datacenter…
-
Regarding https://github.com/openjournals/joss-reviews/issues/7018 "State of the field: Do the authors describe how this software compares to other commonly-used packages?"
The paper does not seem …
-
### ERR output:
```
2024-07-17 17:46:40,496 - pyscenic.cli.pyscenic - INFO - Loading expression matrix.
2024-07-17 17:48:21,231 - pyscenic.cli.pyscenic - INFO - Inferring regulatory networks.
20…
```
-
## Integrating DeepSpeed with PyTorch Lightning
Integrating DeepSpeed with PyTorch Lightning can significantly enhance training efficiency and scalability, especially for large models and distribut…
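As a minimal sketch of what such an integration typically configures, here is a DeepSpeed ZeRO stage-2 config fragment of the kind PyTorch Lightning's DeepSpeed strategy can consume; the specific values are illustrative assumptions, not taken from the original post:

```json
{
  "train_micro_batch_size_per_gpu": 8,
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "fp16": {
    "enabled": true
  }
}
```

Stage 2 shards optimizer states and gradients across ranks, which is usually the first knob to try when a large model no longer fits in per-GPU memory during distributed training.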
-
# Issue + Reproducers
So I have an I/O job that reads data onto the CPU and passes it to the GPU in a `map_blocks` call, and then uses CuPy downstream in a non-standard `map_blocks` call. Here is the…
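For readers without a GPU, the general shape of this pattern can be sketched CPU-only, with NumPy standing in for CuPy at the device-transfer step; the array contents and function names below are illustrative, not the original reproducer:

```python
import numpy as np
import dask.array as da

# Stand-in for the I/O step: a CPU-backed dask array.
x = da.ones((4, 4), chunks=(2, 2))

def to_device(block):
    # In the real job this would be cupy.asarray(block) to move
    # each chunk to the GPU; np.asarray keeps the sketch CPU-only.
    return np.asarray(block)

moved = x.map_blocks(to_device)

# Downstream per-chunk computation; in the real job this operates
# on CuPy arrays instead of NumPy arrays.
result = moved.map_blocks(lambda b: b * 2).compute()
```

With CuPy installed, swapping `np.asarray` for `cupy.asarray` (and optionally passing `meta=cupy.empty((0,))` to `map_blocks`) moves the chunks to the GPU while keeping the same call structure.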
-
todo
-
First issue - GPU Dockerfile hasn't been fixed since I brought it up in [!1373 ](https://github.com/mlcommons/inference/pull/1373). Had to replace it with this one I left in the comments: https://gith…
-
Currently all nodes are stored in an array of size maxNodeId+1. This wastes memory when the node ids are sparsely distributed. Make a sparse representation of the collection of nodes (e.g., us…
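One way the sparse representation could look is a hash map keyed by node id, so memory scales with the number of nodes actually present rather than with the largest id; the class and method names here are illustrative, not from the project:

```python
class SparseNodeStore:
    """Stores nodes in a dict keyed by node id, replacing a dense
    array of size maxNodeId + 1 whose empty slots waste memory when
    ids are sparsely distributed."""

    def __init__(self):
        self._nodes = {}

    def add(self, node_id, node):
        self._nodes[node_id] = node

    def get(self, node_id):
        # Returns None for ids with no node, mirroring an empty array slot.
        return self._nodes.get(node_id)

    def __len__(self):
        # Number of nodes present, independent of the largest id.
        return len(self._nodes)


# Ids 0 and 1_000_000 cost two dict entries, not a million-slot array.
store = SparseNodeStore()
store.add(0, "a")
store.add(1_000_000, "b")
```

The trade-off is O(1) average rather than O(1) worst-case lookup and some per-entry overhead, which is usually a clear win once the id space is much larger than the node count.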
-
### Issue Type
Bug
### Source
binary
### Secretflow Version
secretflow v1.8.0b0
### OS Platform and Distribution
CentOS 7
### Python version
3.10
### Bazel version
_No response_
### GCC/Com…
-
### The problem
Scenario: the Chapel program calls a function returning a type that's a dmapped domain.
In the generated code / at run time, that function returns a chpl___RuntimeTypeInfo struct…