Misc fixes for the conda build scripts in PR #1462.
I was able to build the conda package outside of a container with:
CONDA_ARGS="--skip-existing" ./ci/conda/recipes/run_conda_build.sh morpheus
Created a new conda env with:
mamba create -n morpheus_consumer \
-c ~/work/conda/conda-bld/ \
-c nvidia/label/dev \
-c nvidia \
-c rapidsai \
-c pytorch \
-c conda-forge \
morpheus
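A quick sanity check along these lines (optional, and not part of the steps above) confirms the package resolves from the new env:
conda activate morpheus_consumer
# any import that resolves morpheus from the new env would do
python -c "import morpheus; print(morpheus.__file__)"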
I tested the examples which don't require any additional dependencies. I copied the examples and models directories into a new directory to ensure I wasn't accidentally picking up my in-place morpheus package, and set export MORPHEUS_ROOT=$(pwd) in the new directory.
columns_fil.txt, bert-base-cased-hash.txt
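Concretely, that setup amounts to something like the following; the destination path is illustrative, not the one actually used:
# any clean directory outside the repo works
mkdir ~/morpheus_consumer_test
cp -r examples models ~/morpheus_consumer_test/
cd ~/morpheus_consumer_test
export MORPHEUS_ROOT=$(pwd)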
Changes pushed to PR #1462.
I was able to build the two C++ examples against the conda package. From within my morpheus repo I exported the build conda yaml:
rapids-dependency-file-generator --output conda --file_key build --matrix "cuda=11.8;arch=$(arch);py=3.10" > /tmp/build.yaml
From within the morpheus_consumer env I installed the additional tools needed for the build:
mamba env update -n morpheus_consumer -f /tmp/build.yaml
I was then able to build and test the two examples without issue:
cd examples/developer_guide/3_simple_cpp_stage
./compile.sh
pip install ./
cd -
cd examples/developer_guide/4_rabbitmq_cpp_stage
./compile.sh
pip install ./
cd -
morpheus --log_level=DEBUG --plugin "simple_cpp_stage.pass_thru" run pipeline-other from-file --filename=examples/data/email.jsonlines pass-thru to-file --filename=/tmp/out.jsonlines --overwrite
# Start RabbitMQ in another terminal
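# e.g. via Docker (one common approach; the exact method isn't specified here)
docker run --rm -p 5672:5672 rabbitmq:3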
python examples/developer_guide/4_rabbitmq_cpp_stage/src/read_simple.py
# In another terminal I ran
python examples/developer_guide/4_rabbitmq_cpp_stage/src/write_simple.py
For the tests I generated the test conda yaml from my morpheus repo with:
rapids-dependency-file-generator --output conda --file_key test --matrix "cuda=11.8;arch=$(arch);py=3.10" > /tmp/test.yaml
I then created a new env with:
mamba create -n morpheus_tester \
-c ~/work/conda/conda-bld/ \
-c nvidia/label/dev \
-c nvidia \
-c rapidsai \
-c pytorch \
-c conda-forge \
morpheus
conda activate morpheus_tester
mamba env update -n morpheus_tester -f /tmp/test.yaml
npm install -g camouflage-server@0.15
I copied the tests, examples and models directories, along with the relevant pytest sections from pyproject.toml, into a new directory.
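Roughly, that copy looks like the following; the destination path is illustrative:
mkdir ~/morpheus_tester_dir
cp -r tests examples models ~/morpheus_tester_dir/
# copy pyproject.toml as well, then trim it down to just the pytest-related sections
cp pyproject.toml ~/morpheus_tester_dir/
cd ~/morpheus_tester_dir
export MORPHEUS_ROOT=$(pwd)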
All tests passed with pytest --run_slow (skipping the Kafka and Milvus tests, along with tests requiring additional packages).
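The invocation boils down to something like this, run from the copied directory:
conda activate morpheus_tester
pytest --run_slow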
I was able to verify that the gnn_fraud_detection_pipeline example and the example LLM pipelines worked; however, it took a bit of work to get a functioning conda env.
I added a new conda target to dependencies.yaml which just contains the example, runtime and cudatoolkit deps. I had to add morpheus to the generated yaml file in order to ensure I don't get an incompatible version of libtiff installed.
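For context, regenerating that env looks roughly like this; the file_key name and env name below are assumptions rather than values taken from the PR:
rapids-dependency-file-generator --output conda --file_key examples --matrix "cuda=11.8;arch=$(arch);py=3.10" > /tmp/examples.yaml
# manually add "morpheus" to the dependencies list in /tmp/examples.yaml so the solver
# resolves a libtiff compatible with the morpheus package
mamba env create -n morpheus_examples -f /tmp/examples.yaml
conda activate morpheus_examples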
Once I did this I was able to execute the following:
python examples/llm/main.py --log_level=debug completion pipeline
python examples/llm/main.py vdb_upload pipeline --stop_after=1024
python examples/llm/main.py --log_level=debug rag pipeline
python examples/llm/main.py --log_level=debug rag pipeline --llm_service openai --model_name=gpt-3.5-turbo
python examples/llm/main.py --log_level=debug agents simple
Is this a new feature, an improvement, or a change to existing functionality?
Change
How would you describe the priority of this feature request
Medium
Please provide a clear description of the problem this feature solves
Describe your ideal solution
n/a
Additional context
No response