oldgithubman opened 4 months ago
It's true that things in the FAQ are heavily geared toward Linux/x86 explanations. One would need to understand both and translate to Windows.
You can probably ask ChatGPT for help with that translation.
If you are using the one-click Windows installer as a base, that will also be behind the main branch. If you want to keep up with the bleeding edge in main, you need to use the manual installer. I only added TEI a week ago.
It's not hard; it's just a script you run. Once you do, that FAQ section will apply.
Since you ship a Windows release, you should warn users when things don't apply to Windows. Expecting users to ask ChatGPT to translate your documentation is not reasonable.

Why wouldn't I be using the main, official Windows installer on your front page? The main branch should be official; development branches are for development. As far as I know, that's the point. I don't want to keep up with the bleeding edge. I want something that works. TEI is in the main branch's FAQ. If it doesn't apply to your main release, the FAQ should say so. Again, I'm not interested in the bleeding edge. I can't count on devs to have working stable releases, let alone a working bleeding edge.

None of this is unique to your project. These criticisms apply to most of the AI projects I've evaluated, and I've evaluated probably a dozen in depth (as far as I can, anyway; most have bad documentation and a ridiculous number of bugs). This style of rapid, sloppy development (or in this case, documentation; some of this project's docs are actually very good, btw) is clearly the zeitgeist, and I suppose if it works for developers, great. Just know you're all probably driving away a lot of users.
My advice - take it or leave it (in addition to my advice about tracking from the other thread):
Again, those are my suggestions; take 'em or leave 'em. For now, I suppose I'll just give up trying to get TEI to work and delete this Docker container. This has been a huge waste of my time.
Cheers, A normal user
@ChathurindaRanasinghe is working on a stable release; it's been in the works, but our company is too small to do everything you are asking for.
Welp, just ran into another breaking bug. Searched around and found this: https://github.com/h2oai/h2ogpt/issues/1248#issue-2060401402 There's a clear pattern of bugs and excuses here. I'm moving on. Good luck out there
Good luck.
I'm trying to follow the directions in the FAQ for setting up TEI, and as far as I can tell they're full of errors, at least in my Windows environment. Considering there's no mention of Linux vs. Windows and you're obviously catering to Windows users as well, this is a frustrating problem (and far too common in this space). Regarding:
docker run -d --gpus '"device=0"' --shm-size 3g -v $HOME/.cache/huggingface/hub/:/data -p 5555:80 --pull always ghcr.io/huggingface/text-embeddings-inference:0.6 --model-id BAAI/bge-large-en-v1.5 --revision refs/pr/5 --hf-api-token=$HUGGING_FACE_HUB_TOKEN --max-client-batch-size=4096 --max-batch-tokens=2097152
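Jumping ahead: here's that same command with the Windows fixes I describe below folded in, as a single cmd.exe command. This is a sketch assembled from what worked for me piece by piece; I haven't re-run it verbatim in exactly this form.

```shell
:: Windows cmd.exe translation of the FAQ's docker run line.
:: "^" continues a line; %HUGGING_FACE_HUB_TOKEN% reads the environment variable.
:: The FAQ's '"device=0"' quoting breaks on Windows; plain device=0 avoids it.
docker run -d --gpus device=0 --shm-size 3g ^
  -v C:\Users\[user]\.cache\huggingface\hub\:/data ^
  -p 5555:80 --pull always ^
  ghcr.io/huggingface/text-embeddings-inference:0.6 ^
  --model-id BAAI/bge-large-en-v1.5 --revision refs/pr/5 ^
  --hf-api-token=%HUGGING_FACE_HUB_TOKEN% ^
  --max-client-batch-size=4096 --max-batch-tokens=2097152
```

Note the bind mount keeps the container's model cache on the Windows filesystem, which may be slow; see the performance caveat below.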
`'"device=0"'` throws an error. Needs to be `0`. `$HOME/.cache/huggingface/hub/:/data` throws an error. On Windows, you probably don't want to do this for performance reasons, but `C:\Users\[user]\.cache\huggingface\hub\:/data`
works. "Then for h2oGPT ensure pass:" makes no sense. What is this supposed to mean? I'm guessing you mean you should run something like: `C:\Users\[user]\AppData\Local\Programs\h2oGPT\Python\python.exe "C:\Users\[user]\AppData\Local\Programs\h2oGPT\h2oGPT.launch.pyw" --hf_embedding_model=tei:http://localhost:5555 --cut_distance=10000`

`--hf_embedding_model=tei:http://localhost:5555 --cut_distance=10000` throws an error: `ValueError: Path tei:http://localhost:5555 not found`

I haven't been able to fix that one because I don't even know where to start. "or whatever address is required." is not helpful.
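For anyone else stuck at this point, it helps to separate the two possible failures: the TEI container itself, and h2oGPT's handling of the `tei:` flag. The sketch below is my own code, not h2oGPT's; `split_tei_spec` is just my guess at what a `tei:`-aware build would do with the flag value (the error message suggests my installed build instead treats the whole string as a filesystem path). `tei_embed` hits the TEI server's real `/embed` endpoint directly.

```python
import json
import urllib.request

TEI_URL = "http://localhost:5555"  # the port mapped in the docker run command


def split_tei_spec(spec):
    """Hypothetical: how a tei:-aware h2oGPT build might split the
    --hf_embedding_model value into (scheme, server URL). A build without
    tei: support would fall through and treat the value as a local path,
    which matches the 'Path tei:... not found' error above."""
    prefix = "tei:"
    return spec[len(prefix):] if spec.startswith(prefix) else None


def tei_embed(base_url, texts):
    """POST to a running text-embeddings-inference server's /embed endpoint
    and return the embedding vectors (a list of lists of floats)."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/embed",
        data=json.dumps({"inputs": texts}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

If `tei_embed(TEI_URL, ["hello world"])` returns vectors but h2oGPT still throws the `ValueError`, the container is fine and the installed h2oGPT build simply doesn't recognize the `tei:` prefix yet.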