XLabs-AI / deforum-x-flux

Deforum based on flux-dev by XLabs-AI
160 stars · 9 forks

Triton? #1

Open tayshie opened 3 weeks ago

tayshie commented 3 weeks ago

[screenshot attached] Any suggestions? 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (8 cores / 16 threads)

Intel(R) UHD Graphics (driver 30.0.101.2079), NVIDIA GeForce RTX 3050 Ti Laptop GPU (driver 32.0.15.6081)

stazizov commented 3 weeks ago

Sorry about this; we should have added to the instructions that it's better to create a virtual environment and install all the packages there.
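A minimal setup along those lines might look like the following (Linux/WSL syntax; the exact packages depend on this repo's requirements.txt):

```shell
# create and activate an isolated environment
python3 -m venv venv
source venv/bin/activate
# install the project's dependencies into it, not into the system Python
pip install -r requirements.txt
```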

melMass commented 3 weeks ago

Triton is notoriously hard to install (if even possible) on windows https://github.com/triton-inference-server/server/issues/4737

tonywhite11 commented 3 weeks ago

Open a Linux or WSL CLI and install the requirements there.

Bladed3d commented 3 weeks ago

pip install of the requirements results in a Triton error message. I can open a WSL window in Windows, but how do I install the requirements there? And why would this resolve the Triton error?

tonywhite11 commented 3 weeks ago

I got the same error message in an earlier project, and just like the previous commenter and ChatGPT said, it's virtually impossible to install Triton on Windows, but not on Linux, and WSL is Linux for Windows. I use VS Code: just open a new terminal, choose an Ubuntu WSL terminal, and follow the project instructions. It works. Correction: I got it installed, but ran out of memory before I could use it.

Bladed3d commented 3 weeks ago

> I got the same error message in an earlier project and just like the previous commenter and ChatGPT said, it’s virtually impossible to install triton on windows but not Linux and WSL is Linux for windows. I use VS Code. Just open a new terminal and the chose a Ubuntu wsl terminal and follow project instructions. It works. Correction, got it installed but ran out of memory before I could use it

The instructions seem to say that I just need to install requirements.txt from a WSL window. Is that right, and what exactly is the command to install requirements.txt?
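For reference, the usual sequence from an Ubuntu WSL terminal would be something like this (directory name assumed from the repo; on stock Ubuntu the interpreter is python3):

```shell
# from an Ubuntu WSL shell
git clone https://github.com/XLabs-AI/deforum-x-flux.git
cd deforum-x-flux
python3 -m venv venv && source venv/bin/activate
pip install -r requirements.txt
```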

Bladed3d commented 3 weeks ago

And what about this error: `ValueError: Non-consecutive added token '' found. Should have index 32100 but has index 32000 in saved vocabulary`?

tonywhite11 commented 3 weeks ago

No, I was responding to the Triton installation problem. The command is the same, except I have to use python3 instead of python. My suggestion is to use an LLM to help with your issues. I gave Copilot your error message, and this is what it responded:

It seems like you're encountering an error related to the tokenization process in a language model, possibly while using the Hugging Face's Transformers library.

The error message suggests that there's an issue with the indices of the added tokens. The token should have an index of 32100, but it has an index of 32000 in the saved vocabulary.

This could be due to a mismatch between the pre-trained model's tokenizer and the one you're using. If you've added new tokens to the tokenizer, you need to make sure that the model is aware of these new tokens.

Here's a general way to add new tokens:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('model_name')
model = AutoModel.from_pretrained('model_name')

# Add new tokens
new_tokens = ['', '', ...]  # Add your new tokens here
num_added_tokens = tokenizer.add_tokens(new_tokens)

# Resize the token embeddings of the model
model.resize_token_embeddings(len(tokenizer))
```
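The consistency check behind that error can be illustrated without transformers installed: when a tokenizer is loaded, each added token must occupy the index right after the base vocabulary, consecutively. A sketch of that check (the function name and the token name are hypothetical, not from the library):

```python
def check_added_tokens(base_vocab_size, added_tokens):
    """Mimic the check behind the error: added tokens must sit at
    base_vocab_size, base_vocab_size + 1, ... in order."""
    for offset, (token, index) in enumerate(added_tokens):
        expected = base_vocab_size + offset
        if index != expected:
            raise ValueError(
                f"Non-consecutive added token {token!r} found. "
                f"Should have index {expected} but has index {index} "
                "in saved vocabulary"
            )

# A base vocabulary of 32100 tokens whose added token was saved at
# index 32000 reproduces the mismatch from the traceback above:
try:
    check_added_tokens(32100, [("<added_token>", 32000)])
except ValueError as e:
    print(e)
```

In practice this usually means the saved tokenizer files don't match the checkpoint (e.g. a tokenizer from a different model revision), so re-downloading the model and tokenizer together is often the simplest fix.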

stazizov commented 3 weeks ago

Guys, have you tried just skipping the Triton installation?

stazizov commented 3 weeks ago

and skipping the installation of all the torch dependencies too

tonywhite11 commented 3 weeks ago

I haven't, but it's good to have WSL installed for other Linux projects, or for projects that don't have Windows compatibility yet, like ollama when it first came out. I also use Copilot or the internet for most of my installation issues.

stazizov commented 3 weeks ago

Please try to just ignore Triton and run the code again.
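One way to do that mechanically is to filter the offending lines out of the pin list before installing (the filtered filename is made up here; adjust the pattern to whatever requirements.txt actually contains, and note that `^torch` also matches packages like torchvision, which matches the earlier suggestion to skip the torch dependencies too):

```shell
# drop triton (and torch-prefixed packages) from the requirements
grep -v -E '^(triton|torch)' requirements.txt > requirements.filtered.txt
pip install -r requirements.filtered.txt
```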