-
```python
obj = SAM_handson(num_hidden_generator=200, num_hidden_discriminator=200, train_epochs=100, test_epochs=30, batchsize=10, dagloss=True, verbose=True, nruns=1)
output = obj.predict(data, graph=sk…
```
-
**Choose Topics for Presentation**
- [x] Q-learning
- [x] Deep Neural Network
- [x] Artificial General Intelligence
- [x] Artificial Quantum Intelligence
- [x] Cognitive Science
- [ ] Quantum Co…
-
1) Start with issues with transfer learning
2) Issues with the current bellwether
3) Motivation for splitting the data, from the "500 times faster" project
4) Discuss General (on process + product data)
5) …
-
This is the flash attention code that I have encapsulated:
```python
# flash-attention
import math
import torch
import torch.nn as nn
from torch.nn.init import (
    xavier_uniform_,
    constant_,
…
```
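The snippet above is cut off after its imports. For context, the computation that a flash-attention module wraps is ordinary scaled dot-product attention; the following is a minimal, hypothetical sketch of that baseline (not the poster's encapsulated code, and without flash attention's tiling/recomputation tricks):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (batch, heads, seq, seq)
    weights = torch.softmax(scores, dim=-1)                   # attention weights
    return weights @ v                                        # (batch, heads, seq, head_dim)

q = k = v = torch.randn(1, 2, 4, 8)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 2, 4, 8])
```

Flash attention produces the same result but computes it block-wise to avoid materializing the full `(seq, seq)` score matrix.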
-
### Python -VV
```shell
(codestral) ➜ dev python -VV
Python 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]
```
### Pip Freeze
```shell
(codestral) ➜ dev pip freeze
absl-py==2.1.0
addict==…
```
-
### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples`…
-
Hi, may I ask why the following error pops up when I run preprocess.py?
```
Traceback (most recent call last):
  File "preprocess.py", line 103, in <module>
    mean_pooled = torch.avg_pool1d(last_avg.transp…
```
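The traceback is truncated, but the failing call suggests mean pooling over the sequence dimension with `torch.avg_pool1d`. A hedged reconstruction of what such a step typically looks like (the shapes, variable names, and `kernel_size` here are assumptions for illustration, not taken from preprocess.py):

```python
import torch

# Assumed shape: (batch, seq_len, hidden). avg_pool1d pools over the LAST
# dimension, so the tensor must first be transposed to (batch, hidden, seq_len).
last_avg = torch.randn(2, 8, 16)

pooled = torch.avg_pool1d(last_avg.transpose(1, 2), kernel_size=8)  # (2, 16, 1)
mean_pooled = pooled.squeeze(-1)                                    # (2, 16)
print(mean_pooled.shape)  # torch.Size([2, 16])
```

If the error is a shape mismatch, a common cause is that `kernel_size` does not match the actual sequence length, or that the transpose was applied to the wrong pair of dimensions.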
-
# 🐛 Bug
JIT scripting xformers (running commit 357545ae13948659db07428553155e1802ee15af) breaks with the following error:
```bash
xformers/components/attention/attention_mask.py", line 128
…
```
-
### 🐛 Describe the bug
Here's my `TrainConfig`:
```python
default_config = TRLConfig(
    train=TrainConfig(
        seq_length=512,
        epochs=10000,
        total_steps=10000,
…
```
-
Great work so far. I'm trying to run vLLM on my 7900XTX cards and was wondering if there were any plans to support RDNA3?