matchyc / RoarGraph

Repository for the VLDB 2024 paper "RoarGraph: A Projected Bipartite Graph for Efficient Cross-Modal Approximate Nearest Neighbor Search".
MIT License

Can't download webvid-2.5M #1

Closed. 9p6p closed this issue 3 months ago.

9p6p commented 3 months ago

bash prepare_data.sh webvid-2.5M
dataset webvid
./data/clip-webvid-2.5M/base.2.5M.fbin: No such file or directory
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
Warning: Failed to create the file
Warning: data/clip-webvid-2.5M/query.train.2.5M.fbin: No such file or
Warning: directory
  0 4882M    0  4096    0     0   5165      0  11d 11h --:--:--  11d 11h  5158
curl: (23) Failed writing body (0 != 4096)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
Warning: Failed to create the file data/clip-webvid-2.5M/query.10k.fbin: No
Warning: such file or directory
  0 19.5M    0  4096    0     0   6159      0  0:55:25 --:--:--  0:55:25  6150
curl: (23) Failed writing body (0 != 4096)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
Warning: Failed to create the file data/clip-webvid-2.5M/gt.10k.ibin: No such
Warning: file or directory
  0 38.1M    0  4096    0     0   5843      0  1:54:05 --:--:--  1:54:05  5834
curl: (23) Failed writing body (0 != 4096)

9p6p commented 3 months ago

The prepare_data.sh branch for webvid-2.5M doesn't work as shipped: the dataset files on Zenodo live under the clip-webvid-2.5M folder, while the script creates and expects data/webvid-2.5M. To fix it, add the clip- prefix to every occurrence of "webvid-2.5M" (and pass clip-webvid-2.5M as the dataset name); the script then runs correctly.
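If you'd rather patch the shipped script in place than replace it wholesale, a sed one-liner along these lines should do the same thing; this is only a sketch (GNU sed syntax, and the first substitution just guards against doubling an existing prefix). You then invoke the script with the clip-webvid-2.5M name, as shown after the full listing below.

sed -i 's/clip-webvid-2.5M/webvid-2.5M/g; s/webvid-2.5M/clip-webvid-2.5M/g' prepare_data.sh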

Copy and paste the code below into prepare_data.sh:


#!/bin/bash
# get the "dataset" argument; each dataset has its own download branch below

# check if the dataset is provided
if [ -z "$1" ]; then
    echo "Please provide the dataset name with t2i-10M | laion-10M | clip-webvid-2.5M"
    exit 1
fi

# check if the dataset is valid
if [ "$1" != "t2i-10M" ] && [ "$1" != "laion-10M" ] && [ "$1" != "clip-webvid-2.5M" ]; then
    echo "Invalid dataset name in [t2i-10M, laion-10M, clip-webvid-2.5M]"
    exit 1
fi

dataset=$1

mkdir -p data
mkdir -p data/$1

if [ "$1" == "t2i-10M" ]; then
    echo "dataset t2i"
    need_size=$((200*4*10000000+8-1))
    query_10k_size=$((200*4*10000+8-1))
    # download the dataset
    if [ ! -e ./data/$1/gt.10k.ibin ]; then
        curl -r 0-${need_size} -o data/$1/base.10M.fbin https://storage.yandexcloud.net/yandex-research/ann-datasets/T2I/base.10M.fbin
        curl -r 0-${need_size} -o data/$1/query.train.10M.fbin https://storage.yandexcloud.net/yandex-research/ann-datasets/T2I/query.learn.50M.fbin
        curl -r 0-${query_10k_size} -o data/$1/query.10k.fbin https://storage.yandexcloud.net/yandex-research/ann-datasets/T2I/query.public.100K.fbin
        curl -o data/$1/gt.10k.ibin https://zenodo.org/records/11090378/files/t2i.gt.10k.ibin
    fi
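    # change_meta_data_in_file.py (from this repo) appears to rewrite the fbin header
    # so the stored point count matches the truncated prefixes downloaded above.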
    python3 change_meta_data_in_file.py ./data/t2i-10M/query.train.10M.fbin 10000000
    python3 change_meta_data_in_file.py ./data/t2i-10M/query.10k.fbin 10000
elif [ "$1" == "laion-10M" ]; then
    echo "dataset laion"
    # download the dataset
    for i in 0 1 2 3 4 5 6 7 9 10
    do
        if [ ! -e ./data/$1/img_emb_${i}.npy ]; then
            wget -t 0 https://the-eye.eu/public/AI/cah/laion400m-met-release/laion400m-embeddings/images/img_emb_${i}.npy -P data/$1
        fi
    done

    for i in 0 1 2 3 4 5 6 7 9 10
    do
        if [ ! -e ./data/$1/text_emb_${i}.npy ]; then
            wget -t 0 https://the-eye.eu/public/AI/cah/laion400m-met-release/laion400m-embeddings/texts/text_emb_${i}.npy -P data/$1
        fi
    done

    # export the text and image embeddings simultaneously; watch your DRAM usage.
    python3 export_fbin_from_npy.py
    if [ ! -e ./data/$1/gt.10k.ibin ]; then
        curl -o data/$1/query.10k.fbin https://zenodo.org/records/11090378/files/laion.query.10k.fbin
        curl -o data/$1/gt.10k.ibin https://zenodo.org/records/11090378/files/laion.gt.10k.ibin
    fi
elif [ "$1" == "clip-webvid-2.5M" ]; then
    echo "dataset clip-webvid"
    if [ ! -e ./data/clip-webvid-2.5M/base.2.5M.fbin ]; then
        wget -O ./data/clip-webvid-2.5M/base.2.5M.fbin https://zenodo.org/records/11090378/files/clip.webvid.base.2.5M.fbin
        # alternatively, you can run prepare_for_clip_webvid.py yourself to generate base.2.5M.fbin:
        # mkdir -p ./data/clip-webvid-2.5M/temp_tar_data/
        # python3 prepare_for_clip_webvid.py
    fi

    if [ ! -e ./data/clip-webvid-2.5M/query.train.2.5M.fbin ]; then
        curl -o data/clip-webvid-2.5M/query.train.2.5M.fbin https://zenodo.org/records/11090378/files/webvid.query.train.2.5M.fbin
    fi

    if [ ! -e ./data/clip-webvid-2.5M/gt.10k.ibin ]; then
        curl -o data/clip-webvid-2.5M/query.10k.fbin https://zenodo.org/records/11090378/files/webvid.query.10k.fbin
        curl -o data/clip-webvid-2.5M/gt.10k.ibin https://zenodo.org/records/11090378/files/webvid.gt.10k.ibin
    fi
fi
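With the patched script saved, the download that failed in the first comment should go through; note the clip- prefix in the dataset name. Per the script above, the files end up under data/clip-webvid-2.5M/ (base.2.5M.fbin, query.train.2.5M.fbin, query.10k.fbin, gt.10k.ibin).

bash prepare_data.sh clip-webvid-2.5M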