(tokenflow) C:\tut\TokenFlow>python preprocess.py --data_path data/woman-running.mp4 --inversion_prompt "a silver sculpture of a woman running"
A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
File "C:\Users\nitin\miniconda3\envs\tokenflow\lib\site-packages\xformers\__init__.py", line 55, in _is_triton_available
from xformers.triton.softmax import softmax as triton_softmax # noqa
File "C:\Users\nitin\miniconda3\envs\tokenflow\lib\site-packages\xformers\triton\softmax.py", line 11, in <module>
import triton
ModuleNotFoundError: No module named 'triton'
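The `ModuleNotFoundError` above is expected on Windows: Triton does not ship official Windows wheels, so xformers probes for it, prints "A matching Triton is not available", and falls back to its non-Triton kernels. The message is informational and safe to ignore. A minimal check (the helper name is made up for illustration) to confirm whether Triton is actually importable in the environment:

```python
import importlib.util

def triton_available() -> bool:
    # xformers looks for the `triton` module at import time; when it is
    # missing it disables its Triton-backed optimizations and continues,
    # so preprocessing still runs (just without those kernels).
    return importlib.util.find_spec("triton") is not None

print(triton_available())
```

On a stock Windows conda environment this is typically False, which matches the warning in the log.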
C:\Users\nitin\miniconda3\envs\tokenflow\lib\site-packages\torchvision\io\video.py:161: UserWarning: The pts_unit 'pts' gives wrong results. Please use pts_unit 'sec'.
warnings.warn("The pts_unit 'pts' gives wrong results. Please use pts_unit 'sec'.")
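The `pts_unit` warning comes from torchvision reading the input video with the legacy default (`pts_unit='pts'`); passing `pts_unit='sec'` to `torchvision.io.read_video` silences it and gives correct timestamps. A sketch of the fix (the wrapper function is hypothetical; `read_video` and its `pts_unit` parameter are the real torchvision API the warning points at):

```python
def read_video_frames(path):
    """Read a video with pts_unit='sec' to avoid the torchvision warning.

    `path` is a video file such as data/woman-running.mp4. The import is
    done lazily so this sketch stays self-contained.
    """
    from torchvision.io import read_video
    # frames: (T, H, W, C) uint8 tensor; info: dict with e.g. 'video_fps'
    frames, _audio, info = read_video(path, pts_unit="sec")
    return frames, info
```

In TokenFlow's case the warning originates inside the library's own loading code, so unless you patch `preprocess.py` it is harmless noise rather than something you can fix from the command line.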
[INFO] loading stable diffusion...
C:\Users\nitin\miniconda3\envs\tokenflow\lib\site-packages\diffusers\models\attention_processor.py:1117: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
hidden_states = F.scaled_dot_product_attention(
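The "Torch was not compiled with flash attention" warning means this Windows PyTorch build lacks the flash backend, so `F.scaled_dot_product_attention` automatically falls back to the math / memory-efficient backends; nothing is broken, it is just slower. If you want to make that fallback explicit (and suppress the warning), the `torch.backends.cuda.sdp_kernel` context manager can restrict the backends. A sketch, assuming a PyTorch 2.x build where this context manager exists (the wrapper name is made up):

```python
def run_without_flash_attention(fn):
    """Run `fn` with the flash-attention SDPA backend disabled.

    This mirrors what PyTorch already does implicitly on builds without
    flash attention; making it explicit avoids the UserWarning.
    """
    import torch
    with torch.backends.cuda.sdp_kernel(
        enable_flash=False,          # the backend this build lacks
        enable_math=True,            # plain (reference) attention
        enable_mem_efficient=True,   # xformers-style memory-efficient path
    ):
        return fn()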
[INFO] loaded stable diffusion!
100%|████████████████████████████████████████████████████████████████████████████████| 500/500 [18:02<00:00, 2.16s/it]
100%|████████████████████████████████████████████████████████████████████████████████| 500/500 [18:03<00:00, 2.17s/it]
pip list
Output: https://github.com/omerbt/TokenFlow/assets/2102186/92f3cb7d-67f8-48a2-bccb-abe062384af8