makeasnek closed this issue 1 year ago
Sep 15, 2023: Bounty is now live and open!
I encourage the Science Commons to add clear and specific hardware compatibility requirements, but I understand why they might not.
It would not be ideal if someone released "only" a CUDA Windows binary/exe and neglected AMD and Intel.
9000 GRC (approx 90 USD) has been added to the bounty today by some generous community contributors. Updated the badge
Heh. And Petals has been sitting there on lollms for weeks now: https://twitter.com/SpaceNerduino/status/1697033550413938694?t=Mg-a1zIKwFvUQWBNJBytFA&s=19
For those who don't know lollms: it's like oobabooga's text-generation webui, and it can be found here:
Ok folks, I'll take the time to make sure it can be installed on Windows from lollms. Does that count? Lollms can already be installed using a simple Windows installer, and it offers many, many things: more than 300 personalities, a playground tool with loads of presets, full control over the generation system, and access to other tools like Stable Diffusion, MusicGen, and so on.
Thank you for your interest @ParisNeo. If I understand correctly, you have another tool (lollms) which is sort of like a meta-installer for running many different language models? If so, if you add Petals to it so that you can create a Petals node (on Windows), then yes that would qualify for the bounty. The code for the installer must be open-source and otherwise meet all aspects of the bounty requirements.
OK then, I'll try to do it tomorrow evening.
Best regards. You can learn more about lollms in my YouTube videos:
It is a multi-binding UI for text generation that provides personalities to chat with, a vector database for working with documents, and a playground for experimenting with text generation tasks, along with multiple presets for many applications (coding, translation, documenting, writing, fixing emails, etc.). It also supports image, video, and music generation. All in one :)
Thank you very much. I actually only managed to make it run natively on Linux.
On Windows, there is a dependency that makes this very, very difficult: uvloop. This dependency explicitly refuses any attempt to install it on Windows. There is active work to make it Windows-friendly, but the pull requests are not yet accepted, and they don't seem to be fully working yet. So we may expect a Windows version in the upcoming months, but not sooner.
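For context, the usual way a cross-platform project deals with a Unix-only accelerator like uvloop is a guarded import that falls back to the stock asyncio loop; the problem here is that Petals depends on uvloop outright. This is only a minimal sketch of that general fallback pattern, not Petals code:

```python
import asyncio
import sys

def install_event_loop():
    """Prefer uvloop where it's available (Linux/macOS) and fall back
    to the default asyncio loop elsewhere, e.g. on Windows, where
    uvloop refuses to install. Returns the implementation chosen."""
    if sys.platform != "win32":
        try:
            import uvloop
            uvloop.install()  # make uvloop the default event loop policy
            return "uvloop"
        except ImportError:
            pass
    return "asyncio"

chosen = install_event_loop()
```

On Windows this silently selects the slower but portable default loop, which is exactly the behavior the upstream pull requests are trying to add.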
This means that my best shot at doing this is to use WSL.
It works like a charm with WSL, with CUDA and everything:
The node is visible on the https://health.petals.dev/ site, so everything is running fine.
To sum up, I've built a simple .bat file that installs an Ubuntu WSL system, installs Python and pip, then installs Petals and runs the server.
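The steps that the .bat file drives can be sketched from the Windows side roughly as follows. This is a dry-run sketch only: it assumes `wsl -e bash -lc` to run commands in the default Ubuntu distro, and the package list and example model name are my assumptions, not the exact script lollms ships (that lives in the lollms repository under scripts/wsl):

```python
import subprocess

# Commands executed inside the Ubuntu WSL distro (illustrative).
WSL_STEPS = [
    "sudo apt-get update",
    "sudo apt-get install -y python3 python3-pip",
    "pip3 install petals",
    # Start a Petals server node (the model name is just an example):
    "python3 -m petals.cli.run_server petals-team/StableBeluga2",
]

def run_in_wsl(steps, dry_run=True):
    """Run each step through wsl.exe in the default distro,
    or just print the commands when dry_run is True."""
    commands = [["wsl", "-e", "bash", "-lc", step] for step in steps]
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
    return commands

commands = run_in_wsl(WSL_STEPS)
```

The real installer also has to create the Ubuntu distro first (`wsl --install -d Ubuntu`), which needs a reboot on a fresh machine.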
But that won't be acceptable if I understand the rules of this challenge correctly, so I am integrating the installation directly into the lollms binding installation procedure. On Linux, installing the binding installs Petals and runs the node from Python with the right models; on Windows, I'll test for the platform and use WSL instead.
Now with this, when you run lollms it starts the node, but I still need to code a bridge so that it is usable for text generation. I may go with a client that uses socketio to communicate with lollms.
The other solution is to literally install lollms inside WSL, which would remove the need for any bridge. I think I'll go with that solution; it would save me some time.
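For the record, the socketio bridge from the first option could have looked roughly like this: a thin client that packages a prompt and emits it to lollms. The endpoint, event name, and payload schema are all assumptions for illustration, and it assumes the python-socketio package:

```python
def make_generation_request(prompt, n_predict=256):
    """Build the payload the bridge would send; the schema is illustrative."""
    return {"prompt": prompt, "n_predict": n_predict}

def send_to_lollms(payload, url="http://localhost:9600"):
    """Emit a generation request to lollms over socket.io.
    The 'generate_text' event name is a guess, not the real lollms API."""
    import socketio  # pip install "python-socketio[client]"
    sio = socketio.Client()
    sio.connect(url)
    try:
        sio.emit("generate_text", payload)
        sio.sleep(1)  # give the server a moment to process the event
    finally:
        sio.disconnect()

request = make_generation_request("Hello from the Petals node")
```

Installing lollms inside WSL sidesteps all of this, since the binding can then call Petals in-process.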
I'll make a version of lollms that runs on WSL and uses Petals by default.
DONE!
Now lollms can be installed with WSL support. It works! Next, install Petals:
It automatically installs CUDA and stuff:
Now it is using Petals:
To finish, I created an .exe installer using Inno Setup:
Once installed you will have three new icons:
OK, now I finished making the installer. I'll try to do a full reinstall and see if it works.
You can find all the scripts to build the installer in the lollms repository:
https://github.com/ParisNeo/lollms-webui/tree/main/scripts/wsl
The installer is built using the Inno Setup tool (free to download):
Steps:
1. Run lollms to install the Petals binding. When it loads, it opens a browser; if it doesn't, open one and navigate to localhost:9600.
2. Go to Settings -> Bindings zoo -> petals and press Install. You can monitor the install in the console output.
3. Once it's ready, open the Models zoo and select a model to use with Petals, then wait for it to load. If no model shows up, reload the localhost:9600 page, go back to Settings, and the Models zoo should now list models.
4. Run the Petals server by double-clicking the Petals server icon on the desktop. This will use your machine as part of the hive mind:
And finally, in the discussion view it works like a charm. We can see here that it is using bs_petals, which is the codename for the Petals binding (I can't use the same name as the module, to avoid import issues):
This is all in the lollms repository. You can find the code for the WSL install of everything here: https://github.com/ParisNeo/lollms-webui/tree/main/scripts/wsl
You can modify the code to adapt any aspect to your needs, then use Inno Setup to generate an installer, or even make an installer that is independent of lollms if you don't need it.
I also provide an executable installer on the lollms releases page; just select the Petals version: https://github.com/ParisNeo/lollms-webui/releases/tag/v6.5.0
The one with WSL and Petals support is lollms-with-petals.exe.
I will probably make a video explaining exactly how to install and use this tool.
I hope you like this. Tell me if you have questions or notice a bug or something.
Here is my free Discord server: https://discord.gg/vHRwSxb5
Best regards
This is looking great! Using WSL or Docker is fine; it's probably preferable to installing it natively on Windows. There are a few requirements which I think are not met yet, please let me know if I am mistaken:
If these requirements are satisfied, we will ask the Petals team to review your submission and if everything looks good we will pay out the bounty :).
Thank you very much.
I will address the issues this evening when I'm back home. It would be cool if we could talk, because for a long time I've had an idea, with a few other people, of using distributed computing for AI, and I think Petals offers an interesting platform for my neurovoyance initiative.
If you have time this evening, we can talk on Discord: https://discord.gg/vHRwSxb5 You can DM me.
The bounty is not what interests me most here; it's the potential of the tool. But I won't say no to $200 :) as it can help pay for the expenses of hosting lollms services.
I think I'm done. I have integrated it into lollms, and I have built a standalone application for the server with automatic installation on Windows:
You can find the executable here:
https://github.com/ParisNeo/petals_server_installer/releases/tag/v1.0
The full code is on the same repository:
https://github.com/ParisNeo/petals_server_installer
A video on how it is installed using lollms:
https://www.youtube.com/watch?v=XwjL8ZOa7ec&t=332s
So you can pick between the standalone version and the lollms-integrated version.
Have a nice weekend. Got to sleep; I haven't slept for some time :)
Excellent, send this to the Petals team for their review and if they sign off, we'll do a final code review and then release the bounty to you :)
Note for bounty hunters: We currently have a submission in for this bounty which is under review
Followed up with Petals today; awaiting response.
Bounty is approved, making payment today
Thanks
Bounty amount will increase at random times and amounts until it is claimed. Subscribe to this issue to receive notifications about increases
Context: Why this bounty exists

Petals is a bleeding-edge tool for running large AI models in a distributed fashion. Previously, AI researchers and those looking to use large language models would have to pay exorbitant costs to host a server farm to train and run models. With Petals, this is done in a decentralized way, which removes this barrier to research.
However, installing and hosting a Petals node still requires some technical expertise, including knowledge of command-line usage. This bounty will fund a point-and-click GUI installer, enabling more people to contribute to and benefit from the Petals network.
Requirements to claim bounty:
Contribute to this Bounty

You can contribute to SCI's bounty program by donating cash or crypto to SCI. You will get a nice tax deduction, and we will spend those donated funds on our bounty programs.
You can also donate to this bounty specifically by sending crypto to the following addresses. Did you know that crypto is one of the most effective ways to make donations (for US donors)? Cryptocurrency donations to 501(c)3 nonprofits are considered tax-deductible and do not trigger a taxable event, meaning you do not usually have to pay capital gains tax on them. We request that any individual donating over $500USD (or equivalent) provide their information along with their donation to ensure compliance with our AML and KYC policies. Any organization that wishes to make a donation to SCI is requested to reach out to us directly at contact{at}thesciencecommons.org. In the event that the awardee does not want the crypto or the bounty is closed without being paid out, it will be turned over to SCI's bounty fund to be spent on future bounties.
BTC (Bitcoin): bc1qrl5ksfgw2ue3fxf6avuyuw5z3rs32hdmw4t2k6
ETH (Ethereum) and DAI: 0x60982d4f98A3a9Cb957Fe66C15149A2d91311DD9
GRC (Gridcoin): S8VgmnQnVARejcPPcG4burFeoEVRS362fk (you can see the balance of this GRC address at http://gridcoinstats.eu/address/S8VgmnQnVARejcPPcG4burFeoEVRS362fk)
Bounty amount: $200 USD + Contents of above crypto addresses
Payment of USD portion will be made through PayPal or DAI (your choice) directly from the SCI upon completion of the work. You will also get the contents of the crypto addresses linked above (minus any tx fees) and the satisfaction of knowing you are helping a software and ecosystem which supports the progress of science.
Claiming bounty
About SCI

The SCI is a US 501(c)(3) non-profit organization dedicated to rebuilding the bridge of participation and trust between the public and the scientific process. We support tools and infrastructure that enable people to learn about and engage with science. Follow our work via our free newsletter on Substack.