-
A more diverse crawl policy.
-
While using Servo-J mode, we frequently encounter C24 speed-limit errors. We believe these occur when the TCP speed exceeds 1000 mm/s.
[This response](https://github.com/xArm-Developer/xArm-Python-SDK/issues…
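A minimal sketch of a client-side guard based on the report above: it computes the TCP speed implied by two consecutive Cartesian targets sent at a fixed control period and checks it against the 1000 mm/s figure. The helper names and the limit constant are assumptions for illustration; they are not part of the xArm SDK.

```python
import math

# Assumed limit from the report above: C24 reportedly triggers when the
# commanded TCP speed exceeds 1000 mm/s.
TCP_SPEED_LIMIT_MM_S = 1000.0

def commanded_tcp_speed(prev_xyz, next_xyz, period_s):
    """Speed (mm/s) implied by moving between two Cartesian targets
    (in mm) within one control period (in seconds)."""
    dist = math.dist(prev_xyz, next_xyz)  # straight-line distance in mm
    return dist / period_s

def within_limit(prev_xyz, next_xyz, period_s, limit=TCP_SPEED_LIMIT_MM_S):
    """True if the implied TCP speed stays at or below the limit."""
    return commanded_tcp_speed(prev_xyz, next_xyz, period_s) <= limit

# Example: a 5 mm step every 10 ms implies 500 mm/s, which is under the limit.
```

Checking each target before streaming it would let the client throttle or interpolate instead of tripping the controller's error.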
-
I noticed the # of downloads and # of likes in the model dropdown are hard-coded. I dug around in the Hugging Face Hub API and found that we can access this info via GET requests. Here's an example of doi…
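A sketch of what such a GET request could look like, assuming the Hub's public per-model JSON endpoint (`https://huggingface.co/api/models/<model_id>`), whose payload includes `downloads` and `likes` fields:

```python
import json
from urllib.request import urlopen

API_URL = "https://huggingface.co/api/models/{model_id}"

def extract_stats(info: dict) -> tuple:
    """Pull the download and like counts out of the API's JSON payload,
    defaulting to 0 when a field is missing."""
    return info.get("downloads", 0), info.get("likes", 0)

def fetch_stats(model_id: str) -> tuple:
    """Fetch live stats for one model from the Hub, e.g. fetch_stats("gpt2")."""
    with urlopen(API_URL.format(model_id=model_id), timeout=10) as resp:
        return extract_stats(json.load(resp))
```

Splitting the JSON parsing out of the network call keeps the extraction logic easy to test without hitting the Hub.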
-
I found that the stix package causes unexpected output from the `\with` command of the cmll package. What is happening?
The following code renders the term **AςB**, though I expected **A&B**.
```
\documentclass{article}
\use…
-
Currently, it is unclear how to handle a logarithmic (or other) scale. There is only one library on the market that provides such built-in support: https://github.com/nholthaus/units
mpusz, updated 3 months ago
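Part of why logarithmic scales need dedicated library support is that their arithmetic differs from linear quantities: adding log-scaled values multiplies the underlying ratios. A minimal sketch of that conversion, using a power scale in decibels as the concrete example; the class name and API are hypothetical and belong to neither library discussed above:

```python
import math

class LogQuantity:
    """A value on a base-10 logarithmic power scale (e.g. decibels)."""

    def __init__(self, db_value: float, reference: float = 1.0):
        self.db = db_value            # position on the log scale
        self.reference = reference    # linear reference value (e.g. 1 mW for dBm)

    def to_linear(self) -> float:
        # On a power scale, 10 dB corresponds to a 10x linear ratio.
        return self.reference * 10 ** (self.db / 10)

    @classmethod
    def from_linear(cls, value: float, reference: float = 1.0):
        return cls(10 * math.log10(value / reference), reference)
```

Note that summing two `db` values corresponds to multiplying the linear quantities, which is exactly the behavior a units library must model explicitly rather than reuse linear addition for.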
-
functorch looks super cool and helpful! I was just playing around with it this morning, trying to AOT-compile `nn.Linear`. Maybe I'm misunderstanding the API, but I'm getting a RuntimeError:
```
…
-
As suggested in issue #3, in order to proceed to porting the other subsystems, we chose to wrap the existing Combo reduct engine.
Given a program `P` to be reduced, first we want to convert `P` to a `co…
-
### System Info
I am using `bitsandbytes` quantization to load `mistral-7b` on an `NVIDIA T4` GPU. I loaded the model with the quantized configuration; however, I keep getting a runtime error related t…
-
If I shard a table into 32 shards on a two-node cluster, insert throughput drops to ~2k inserts/s, down from 18k inserts/s with a single shard and ~30k with two shards.
-
### 🐛 Describe the bug
```python
torch.distributed.init_process_group(
backend="xla",
init_method="xla://",
)
m = torch.nn.Linear(10, 10).to(xm.xla_device())
m = torch.nn.parallel.D…