google/jetstream-pytorch
PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference
Apache License 2.0 · 21 stars · 12 forks
Fix gemma model, enable_weight_quantization is available through quant_config.
#98 · Closed · wang2yn84 closed this issue 1 month ago
lsy323 commented 1 month ago:
@wang2yn84 Thank you for fixing it!