yhyu13 opened 1 year ago
Great question. @achraf-mer has strengthened our wheel building and CI and will soon release to PyPI and Docker images for public consumption.
Let's keep it open for @achraf-mer
@achraf-mer It is not possible to have direct (URL-based) dependencies when publishing to PyPI.
We have to update these to install from PyPI instead: https://github.com/h2oai/h2ogpt/blob/15dda4a2f8f2631128e7f8edabd8cd894760bebd/requirements.txt#L22 https://github.com/h2oai/h2ogpt/blob/15dda4a2f8f2631128e7f8edabd8cd894760bebd/reqs_optional/requirements_optional_langchain.metrics.txt#L2 https://github.com/h2oai/h2ogpt/blob/15dda4a2f8f2631128e7f8edabd8cd894760bebd/reqs_optional/requirements_optional_langchain.metrics.txt#L8
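For illustration, this is the kind of change needed in each of those files (the URL below is a made-up stand-in, not the actual entry):

peft @ https://example-bucket.s3.amazonaws.com/peft-0.4.0.dev0-py3-none-any.whl   # direct URL dependency, rejected by PyPI
peft==0.4.0.dev0   # plain version specifier, accepted once the package itself is on PyPI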
@achraf-mer @ChathurindaRanasinghe, the peft wheel is now on S3:
I've changed the requirements.txt file. Please make a release of h2ogpt. Thanks!
FYI, the build process for peft is trivial:
git clone https://github.com/h2oai/peft.git
cd peft
python setup.py bdist_wheel
# output: dist/peft-0.4.0.dev0-py3-none-any.whl
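If useful, the resulting wheel can be smoke-tested locally before uploading anywhere:

pip install dist/peft-0.4.0.dev0-py3-none-any.whl
python -c "import peft"   # quick sanity check that the install works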
Stable release requires:
1) Only Linux x86_64 for now for any artifacts or releases. I'll still manage the Windows installer for now, and @Mathanraj-Sharma is making the macOS installer.
2) PyPI release with no S3, GitHub, HTTP, or other such direct-URL packages. Any packages mentioned in readme_linux.md that require HTTP etc. are not required. The same goes for any S3/GitHub/HTTP entries inside any requirements files; just drop those, it's fine. E.g. the migration chroma packages from our S3.
3) Also ignore extra stuff from readme_linux like nltk, playwright, etc. Not required. Similarly, the sudo steps are not required, at some cost to usability. If they want the full thing, they need to use Docker for x86_64, the one-click installer for Windows, or follow the full instructions in each readme.
4) All "optional" reqs should be installed, EXCEPT the GPL one, so that the PyPI release can still be Apache-2.0.
5) Still allow them to do pip install h2ogpt[GPU] so it adds the --extra-index-url https://download.pytorch.org/whl/cu117 (see the install sketch after this list).
6) For the PyPI [GPU] extra we will assume CUDA only.
7) For h2ogpt[CPU], llama_cpp_python should be plainly installed. For [GPU], see https://llama-cpp-python.readthedocs.io/en/latest/ -- prepend CMAKE_ARGS="-DLLAMA_CUBLAS=on" if possible (sketched after this list). If not possible, then it's OK for now.
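Re 5) and 6), a minimal sketch of the user-facing commands, assuming the package lands on PyPI under the name h2ogpt. Note that pip extras cannot inject an index URL on their own, so the user still passes --extra-index-url explicitly:

# CPU-only install from PyPI
pip install h2ogpt

# GPU (CUDA) install, pulling CUDA wheels from the PyTorch index
pip install "h2ogpt[GPU]" --extra-index-url https://download.pytorch.org/whl/cu117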
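Re 7), roughly how the llama_cpp_python install differs between [CPU] and [GPU], following the linked docs; treat this as a sketch, with the flags as documented at the time:

# CPU: plain install
pip install llama-cpp-python

# GPU: rebuild from source with cuBLAS enabled
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir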
Minimal stable release workflow:
1) Build a Docker image with the released version and store it (so not just nightly)
2) Build the PyPI release and push to PyPI
3) Iterate the release number
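A rough sketch of those three steps as commands; the image name, registry, and version below are placeholders, not our actual CI configuration:

# 1) build and store a versioned Docker image (not just nightly)
docker build -t h2ogpt:0.1.0 .
docker push registry.example.com/h2ogpt:0.1.0

# 2) build the PyPI artifacts and upload them
python -m build
twine upload dist/*

# 3) iterate the release number, e.g. by tagging
git tag v0.1.0 && git push origin v0.1.0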
Future additional components:
1) Full GPU asset, which is like Docker but comes as a pip-installable package
2) macOS one-click installer built in our Jenkins as nightly and release
3) Windows one-click installer built in our Jenkins as nightly and release
Hi, maybe a stupid question, but when will the first stable release be? Is there any schedule for that?