I am interested in using Vearch for AI-native applications and would like to know if the team has considered releasing a GPU-enabled version of the Docker image. This would greatly enhance the performance of applications that require intensive vector computations.
Change the cron daemon startup command from /usr/sbin/crond to service cron start for better service management in the Ubuntu-based Docker image.
#!/usr/bin/env bash
cd /vearch/bin/
cur_dir=$(dirname $(readlink -f "$0"))
BasePath=$(
    cd $(dirname $0)
    pwd
)
cd $BasePath
# derive OMP_NUM_THREADS from the container's cgroup v1 CPU quota (100000us default period)
CPUS=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us 2>/dev/null) && [ -n "$CPUS" ] && [ "$CPUS" -gt 0 ] && CPUS=$(expr $CPUS / 100000) && echo $CPUS && export OMP_NUM_THREADS=$CPUS
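# NOTE: Ubuntu 22.04 defaults to cgroup v2, where the quota lives in /sys/fs/cgroup/cpu.max
# ("<quota> <period>", or "max <period>" when unlimited) instead of the v1 file above.
# Hedged fallback sketch for cgroup v2 hosts:
if [ -z "$OMP_NUM_THREADS" ] && [ -r /sys/fs/cgroup/cpu.max ]; then
    read -r quota period < /sys/fs/cgroup/cpu.max
    if [ "$quota" != "max" ] && [ "$quota" -gt 0 ]; then
        OMP_NUM_THREADS=$((quota / period))
        [ "$OMP_NUM_THREADS" -lt 1 ] && OMP_NUM_THREADS=1
        export OMP_NUM_THREADS
    fi
fi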
# print the ps line for the pid recorded in pid file $1, filtered by tag $2 (empty if not running)
function getServiceStatusInfo {
    pidFile=$1
    filterTag=$2
    if [ ! -f "${pidFile}" ]; then
        echo ""
    else
        ps -ef | grep "$(cat ${pidFile})" | grep -v grep | grep "${filterTag}"
    fi
}
# start one vearch role ($1 in ps/router/master/all) unless it is already running
function start {
    stype=$1
    info=$(getServiceStatusInfo "${stype}.pid" "${stype}")
    if [ -z "$info" ]; then
        export LD_LIBRARY_PATH=$cur_dir/lib/:$LD_LIBRARY_PATH
        nohup $BasePath/bin/vearch -conf $BasePath/config.toml ${stype} >$BasePath/vearch-startup-${stype}.log 2>&1 &
        pid=$!
        echo $pid >$BasePath/${stype}.pid
        echo "[INFO] ${stype} started... pid:${pid}"
    else
        echo "[ERROR] ${stype} is already running, current status:"
        echo "[INFO] status of ${stype} : ${info}"
    fi
    echo "--------------------------------------------------------------------------"
}
if [ -z "$1" ]; then
    echo "[ERROR] start type is empty, expected one of: ps, router, master, all"
    exit 1
fi
start "$1"
# /usr/sbin/crond is the CentOS way; on the Ubuntu-based image start the cron service instead
service cron start
# register a per-minute watchdog via crontab so Debian/Ubuntu cron picks it up
# (a hypothetical restart.sh sketch follows the script)
echo "*/1 * * * * cd /vearch && sh restart.sh $1 >> restart.log 2>&1" | crontab -
sleep 9999999d
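The restart.sh called by the cron entry is not included in this issue. Purely as an illustration, a minimal watchdog along these lines could reuse the pid-file convention from start.sh (the sketch below is hypothetical, not the actual script, and assumes everything lives under /vearch as in the image):

#!/usr/bin/env bash
# hypothetical restart.sh sketch: relaunch the given role when its recorded pid is gone
stype=$1
cd /vearch
if [ -f "${stype}.pid" ] && kill -0 "$(cat ${stype}.pid)" 2>/dev/null; then
    exit 0    # still running, nothing to do
fi
echo "[WARN] $(date) ${stype} is not running, restarting"
export LD_LIBRARY_PATH=/vearch/lib/:$LD_LIBRARY_PATH
nohup /vearch/bin/vearch -conf /vearch/config.toml ${stype} >/vearch/vearch-startup-${stype}.log 2>&1 &
echo $! >/vearch/${stype}.pid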
Create Env Image And Vearch Image
build the compile base environment image
go to the $vearch/cloud/env dir
run docker build -t vearch/vearch-dev-env-gpu:latest . and you will get an image named vearch/vearch-dev-env-gpu
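The actual vearch-dev-env-gpu Dockerfile attached to this issue is not reproduced here; purely as an illustration, a GPU build environment image typically starts from an NVIDIA CUDA devel base and adds the C++ and Go toolchains (the base tag, Go version, and package list below are assumptions, not the attached file):

# hypothetical sketch of a GPU dev-env image, not the Dockerfile attached to this issue
FROM nvidia/cuda:12.2.0-devel-ubuntu22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git wget ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# Go toolchain for the master/router/ps side of Vearch
RUN wget -q https://go.dev/dl/go1.21.0.linux-amd64.tar.gz && \
    tar -C /usr/local -xzf go1.21.0.linux-amd64.tar.gz && \
    rm go1.21.0.linux-amd64.tar.gz
ENV PATH=/usr/local/go/bin:$PATH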
compile vearch_gpu
go to the $vearch/cloud dir
add GPU index support: change BUILD_WITH_GPU from "off" to "on" in $vearch/internal/engine/CMakeLists.txt (a one-liner for this is sketched after this step)
run docker run --privileged -i -v $(dirname "$PWD"):/vearch vearch/vearch-dev-env-gpu:latest /vearch/cloud/compile/compile.sh and Vearch will be compiled into $vearch/build/bin and $vearch/build/lib
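If you would rather script the flag change than edit the file by hand, a hedged one-liner (assuming the value appears as the quoted string "off" on the same line as BUILD_WITH_GPU):

sed -i '/BUILD_WITH_GPU/s/"off"/"on"/' $vearch/internal/engine/CMakeLists.txt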
make the vearch_gpu image
go to the $vearch/cloud dir
run cp -r ../build/bin compile/; cp -r ../build/lib compile/; docker build -t vearch/vearch_gpu:latest . and you will get an image named vearch/vearch_gpu. Good luck.
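Putting the steps above together, the whole GPU build can be driven from one shell snippet ($vearch stands for your local checkout; the commands are exactly the ones documented above):

# assumes BUILD_WITH_GPU has already been switched to "on" as described in the compile step
cd $vearch/cloud/env
docker build -t vearch/vearch-dev-env-gpu:latest .
cd $vearch/cloud
docker run --privileged -i -v $(dirname "$PWD"):/vearch vearch/vearch-dev-env-gpu:latest /vearch/cloud/compile/compile.sh
cp -r ../build/bin compile/
cp -r ../build/lib compile/
docker build -t vearch/vearch_gpu:latest .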
how to use it
you can use docker run --gpus all -it -v $PWD/config.toml:/vearch/config.toml vearch/vearch_gpu:latest all to start Vearch in local mode; the last param has four options [ps, router, master, all], and all means all three roles start in one process
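The roles can also be started in separate containers (a sketch, assuming config.toml sits in the current directory and already points the roles at addresses they can reach; container networking is left out here):

# master first, then the partition server and router, each in its own container
docker run --gpus all -d --name vearch-master -v $PWD/config.toml:/vearch/config.toml vearch/vearch_gpu:latest master
docker run --gpus all -d --name vearch-ps -v $PWD/config.toml:/vearch/config.toml vearch/vearch_gpu:latest ps
docker run --gpus all -d --name vearch-router -v $PWD/config.toml:/vearch/config.toml vearch/vearch_gpu:latest router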
Platform
OS: Ubuntu 22.04
CUDA: 12.2
Vearch version: 3.5.2
Installed from: Source
Dockerfiles
vearch-dev-env-gpu
vearch_gpu
start.sh (modified)