-
From the tutorials and recipes it looks like you can only do dynamic Int8/Int4 quantization? Also, I cannot export the trained model to ONNX?
```python
import torch
from torchao.quantization.prototype.qat import I…
```
-
TODO list
----------- on 'refactor' branch && v0.10.0 --------------
- [x] Hide `type_map`
- [x] Hide toggle grad things
- [x] Refactor or rewrite from scratch: `scripts/processing_dataset.py` and `…
-
## 🚀 Feature
[Graph Matching Networks for Learning the Similarity of Graph Structured Objects](https://proceedings.mlr.press/v97/li19d/li19d.pdf)
There is a model available in DGL that can do simila…
-
This is a general ticket to keep track of the tasks necessary for the complete demo project goals. Subtickets can be created as necessary.
Demonstration goals/tasks with commentary:
- [x] #13
- […
-
Without adding noise, the reported results in the GraphBEV paper are mAP and NDS of 70.1 and 72.9, respectively. However, my results are 45 and 52. I use the config file of bevfusion_graph_deformable.…
fdy61 updated 2 weeks ago
-
Probably related to #45, but I'm not exactly sure what you have in mind there. A graph contraction algorithm should work like this:
```r
library (sf)
l1 % st_linestring ()
l2 % st_linestring ()
x % st…
```
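The R snippet above lost its assignment and pipe operators to extraction, so as a language-agnostic illustration of the contraction idea, here is a sketch in plain Python: every vertex with exactly two neighbours is merged away and its two incident edges are replaced by one direct edge (function name and edge representation are my own, not from the original issue):

```python
from collections import defaultdict


def contract_degree2(edges):
    """Contract chains: repeatedly remove any vertex with exactly two
    neighbours and connect those neighbours directly, until no such
    vertex remains. Edges are undirected (u, v) pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:      # already removed this pass
                continue
            nbrs = adj[v]
            if len(nbrs) == 2:
                a, b = nbrs
                # Drop v and wire its two neighbours together.
                adj[a].discard(v)
                adj[b].discard(v)
                adj[a].add(b)
                adj[b].add(a)
                del adj[v]
                changed = True

    # Emit each undirected edge once, in sorted order.
    out = set()
    for u, vs in adj.items():
        for v in vs:
            out.add(tuple(sorted((u, v))))
    return sorted(out)
```

For example, the path `1–2–3–4` contracts to the single edge `(1, 4)`, while a junction vertex (degree 3 or more) is kept as-is. A spatial version would additionally concatenate the underlying linestring geometries when merging edges.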
-
Objective: Integrate NASA APIs into the Galactic Mining Hub to provide real-time data on celestial bodies, enhancing mining predictions and user engagement.
Plan for integration:
Identify Relevant …
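As a starting point, one relevant public endpoint is NASA's Near Earth Object Web Service (NeoWs) on api.nasa.gov. A minimal sketch that only builds the request URL (no request is sent; the function name and parameter choices are my own, and `DEMO_KEY` is NASA's public rate-limited demo key):

```python
from urllib.parse import urlencode

NASA_API_BASE = "https://api.nasa.gov"


def neo_feed_url(start_date, end_date, api_key="DEMO_KEY"):
    """Build the URL for NASA's NeoWs asteroid feed endpoint.

    Dates are ISO strings (YYYY-MM-DD). Pair the returned URL with any
    HTTP client to fetch near-Earth-object data for the date range.
    """
    params = urlencode({
        "start_date": start_date,
        "end_date": end_date,
        "api_key": api_key,
    })
    return f"{NASA_API_BASE}/neo/rest/v1/feed?{params}"


url = neo_feed_url("2024-01-01", "2024-01-07")
```

Keeping URL construction separate from the HTTP call makes the integration easy to unit-test without hitting NASA's rate limits.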
-
## ❓ Questions and Help
Hi!
We are trying to train Gemma-2-9B on v4-64 and v5-128 Pods as mentioned in [this comment](https://github.com/pytorch/xla/issues/7987#issuecomment-2352326629). We use FS…
ayukh updated 56 minutes ago
-
Hello,
I am looking to log my model's weights and graph in wandb, not as an artifact but rather as the complete network, with the relevant weights updated as training progresses. I am unable to find any rel…