-
So I have a GPTQ Llama model I downloaded (from TheBloke), and it's already 4-bit quantized, so I have to pass `False` for the `load_in_4bit` parameter of:
```
model, tokenizer = FastLlamaModel.from_pr…
```
-
- [ ] implement download functionality (the URL is at `track.media.files.url`; take the first element of the array)
- [ ] show popup when trying to download a message ([see figma](https://www.figma.com/file/vUnfD52FdhTmXq…
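The URL extraction above can be sketched as follows, assuming a hypothetical track payload in which `media` and `files` are both arrays (the exact nesting beyond what the task note states is an assumption):

```python
def get_download_url(track: dict):
    """Return the download URL from the first file of the track's first media entry.

    Assumes a payload shaped roughly like track.media[...].files[...].url;
    the precise nesting is an assumption based on the task note.
    """
    media = track.get("media") or []
    if not media:
        return None
    files = media[0].get("files") or []
    if not files:
        return None
    # Take the first element of the files array, per the task note.
    return files[0].get("url")


track = {
    "media": [
        {"files": [{"url": "https://example.org/a.mp3"},
                   {"url": "https://example.org/b.mp3"}]}
    ]
}
print(get_download_url(track))  # → https://example.org/a.mp3
```

Returning `None` for tracks without media keeps the caller free to disable the download button instead of raising.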
-
see https://bmm-web.brunstad.org/playlist/podcast/55 or https://bmm-web.brunstad.org/browse/events
When scrolling to the end of the list, the next items should be loaded.
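The load-more behaviour can be sketched as page-based fetching. Here `fetch_page` is a stand-in for the playlist/browse API call, and the page size and state shape are assumptions for illustration:

```python
def fetch_page(page: int, page_size: int = 3):
    """Stand-in for the playlist/browse API call; returns one page of items."""
    items = [f"track-{i}" for i in range(8)]  # pretend server-side data
    start = page * page_size
    return items[start:start + page_size]


def on_scrolled_to_end(state: dict) -> None:
    """Append the next page of items when the user reaches the list's end."""
    if state.get("exhausted"):
        return  # nothing more to load
    batch = fetch_page(state["page"])
    if not batch:
        state["exhausted"] = True
        return
    state["items"].extend(batch)
    state["page"] += 1


state = {"items": fetch_page(0), "page": 1, "exhausted": False}
on_scrolled_to_end(state)  # user hits the end of the list
print(len(state["items"]))  # → 6
```

Tracking an `exhausted` flag avoids firing a request on every scroll event once the server has no more items.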
-
### Describe the bug
Calling `set_attn_processor(attention)` with `AttnProcessor` or `AttnProcessor2_0`
on `SD3Transformer2DModel` executes without issues, but results in a runtime error during i…
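For context, the processor-swap pattern that `set_attn_processor` follows can be illustrated with a toy module. This is a simplified stand-in, not the diffusers implementation; it shows why a swap can succeed at set time yet still fail later at call time:

```python
class ToyAttention:
    """Toy attention block whose forward pass delegates to a pluggable processor."""

    def __init__(self):
        self.processor = lambda x: x  # default: identity

    def set_processor(self, processor):
        # Setting only stores the callable; nothing is validated here,
        # so incompatibilities surface later, during the forward pass.
        self.processor = processor

    def __call__(self, x):
        return self.processor(x)


def doubling_processor(x):
    # Stand-in for a concrete processor such as AttnProcessor/AttnProcessor2_0.
    return [v * 2 for v in x]


attn = ToyAttention()
attn.set_processor(doubling_processor)
print(attn([1, 2, 3]))  # → [2, 4, 6]
```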
-
1 warning generated.
[607/607] Linking CXX executable src/pnnx
FAILED: src/pnnx
: && /Library/Developer/CommandLineTools/usr/bin/c++ -g -arch arm64 -isysroot /Library/Developer/CommandLineTools/SD…
-
Proteus f7 BMM ECU for Mazda Miata NA6
When setting up the max knock retard table with an array of values ranging from 2 through 10, the variable m_maximumretard does not correlate properly. I assu…
rk-iv updated
4 months ago
-
### 🐛 Describe the bug
Python code being run under the hood:
```python
_TORCH_DEVICE = "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Using torch device: {_TORCH_DEVICE}")
…
```
-
### Issue summary
The gradients estimated by `GradientChecker` are incorrect. I implemented my own Caffe layer, and its gradients match [PyTorch](https://github.com/pytorch/pytorch) and …
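For reference, the kind of numerical gradient estimate a gradient checker computes can be sketched with central differences. This is a generic illustration, not Caffe's implementation:

```python
def numeric_gradient(f, x, eps=1e-6):
    """Estimate df/dx at x with a central difference, as gradient checkers do."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)


def f(x):
    return x ** 3  # analytic gradient: 3 * x**2


x = 2.0
estimated = numeric_gradient(f, x)
analytic = 3 * x ** 2
# The two should agree to within a small tolerance; a mismatch signals a bug
# either in the layer's backward pass or in the checker itself.
print(abs(estimated - analytic) < 1e-4)  # → True
```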
hzxie updated
5 years ago
-
Tasks for Arek (ordered by importance)
- [x] the ask-question text field should get immediate focus when the on-screen keyboard shows up (I think `.BecomeFirstResponder()` fixed it on iOS)
- [x] Android: green …
-
I have spent all day trying to upgrade CUDA to 11.2 and get it working with PyTorch. At this point I believe I have a fully working CUDA 11.2 install, yet I still get the following error whe…