NVIDIA / Megatron-LM

Ongoing research training transformer models at scale
https://docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start

[BUG] Faiss RuntimeError #699

Open · zhentingqi opened this issue 7 months ago

zhentingqi commented 7 months ago

Describe the bug
I am running step 3, "Build index for similarity search", on a single 80GB A100 GPU. My DATA_BLEND is the first 10,000 scraped text items from OpenWebText, prepared following the documented preprocessing steps, and I only want to build an index over these 10,000 items. But when I run bash tools/retro/examples/preprocess_data.sh index-train, I hit the following error:

Traceback (most recent call last):
  File "tools/retro/main.py", line 224, in <module>
    train_index() # train only
  File "/n/home06/zhentingqi/LLM_safety/Megatron-LM-retro/./tools/retro/index/build.py", line 137, in train_index
    train_on_embeddings()
  File "/n/home06/zhentingqi/LLM_safety/Megatron-LM-retro/./tools/retro/index/build.py", line 112, in train_on_embeddings
    index.train()
  File "/n/home06/zhentingqi/LLM_safety/Megatron-LM-retro/./tools/retro/index/indexes/faiss_base.py", line 81, in train
    self._train()
  File "/n/home06/zhentingqi/LLM_safety/Megatron-LM-retro/./tools/retro/index/indexes/faiss_base.py", line 71, in _train
    index.train(inp)
  File "/n/home06/zhentingqi/.local/lib/python3.8/site-packages/faiss/__init__.py", line 280, in replacement_train
    self.train_c(n, swig_ptr(x))
  File "/n/home06/zhentingqi/.local/lib/python3.8/site-packages/faiss/swigfaiss.py", line 3605, in train
    return _swigfaiss.IndexPreTransform_train(self, n, x)
RuntimeError: Error in void faiss::Clustering::train_encoded(faiss::Clustering::idx_t, const uint8_t*, const faiss::Index*, faiss::Index&, const float*) at /project/faiss/faiss/Clustering.cpp:283: Error: 'nx >= k' failed: Number of training points (4850) should be at least as large as number of clusters (65536)

Why is the number of training points only 4850? And how can I lower the 65536 cluster count to match my number of training points? Thanks!
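
For context, the 65536 in the error is the IVF cluster count coming from the IVF65536_HNSW8 component of RETRO_INDEX_STR in the script below, and Faiss refuses to train an IVF index with fewer training vectors than clusters. A minimal sketch (not from the Megatron-LM code; it assumes faiss and numpy are installed and uses d=1024 to match the BERT embedding width) showing that a cluster count sized to the available training points trains cleanly:

import math

import faiss                       # assumes faiss-cpu or faiss-gpu is installed
import numpy as np

n_train = 4850                     # training points reported in the error
d = 1024                           # embedding width (matches --hidden-size 1024)

# Faiss requires n_train >= nlist; a common rule of thumb is roughly
# 4*sqrt(n_train) clusters, capped by what the data can support.
nlist = min(65536, n_train, max(1, int(4 * math.sqrt(n_train))))   # ~278 here

# Same factory layout as RETRO_INDEX_STR, but with a feasible cluster count.
index = faiss.index_factory(d, f"OPQ32_64,IVF{nlist}_HNSW8,PQ32")

xb = np.random.rand(n_train, d).astype("float32")   # stand-in for real embeddings
index.train(xb)                    # trains without the 'nx >= k' failure

In the script, the analogous change would presumably be lowering the IVF<nlist> value in RETRO_INDEX_STR (and possibly RETRO_INDEX_NTRAIN) to match the corpus size.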

Environment (please complete the following information):

Script: preprocess_data.sh:

#!/bin/bash

set -u

unset NCCL_DEBUG

######## Megatron, Retro dirs. ########

REPO_DIR="Megatron-LM"
RETRO_WORKDIR="big_data/openwebtext/retro_workdir_10000"

######## Task (e.g., db, index, query). ########

# This script takes a single argument, which specifies the retro task to be performed.
# The available tasks are: db-build, index-train, index-add, and query-pretraining-neighbors.

# RETRO_TASKS="db-build"                      # Build the retrieval database
# RETRO_TASKS="index-train"                   # Train the index
# RETRO_TASKS="index-add"                     # Add data to the index
# RETRO_TASKS="query-pretraining-neighbors"   # Perform query pretraining for neighbors

# You can also provide the task as a command-line argument when executing the script.
# Example: ./preprocess_data.sh index-add
RETRO_TASKS=$1

######## Data. ########

DATA_BLEND=" \
    1 big_data/openwebtext/scraped_10000/gpt2_text_document \
"

######## Index. ########

RETRO_INDEX_STR="OPQ32_64,IVF65536_HNSW8,PQ32"
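# ^ Faiss index factory string: OPQ32_64 = OPQ pre-transform to 64 dims (32 sub-blocks);
#   IVF65536_HNSW8 = inverted file with 65536 clusters behind an HNSW coarse quantizer;
#   PQ32 = 32-byte product quantizer. The 65536 cluster count is the 'k' in the
#   'nx >= k' failure above, so it cannot exceed the number of training embeddings.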
RETRO_INDEX_NTRAIN=1000000  #! qzt: adjust this if training samples are few
RETRO_INDEX_TRAIN_LOAD_FRACTION=0.97
RETRO_INDEX_ADD_LOAD_FRACTION=0.95

######## BERT. ########

BERT_CKPT="big_models/megatron-bert-345m-cased"
BERT_VOCAB="big_models/megatron-bert-345m-cased/bert-cased-vocab.txt"

######## GPT. ########

RETRO_GPT_SEED=1234
RETRO_GPT_SPLIT="98,2,0"
RETRO_GPT_DATA_PATH=${DATA_BLEND}
RETRO_GPT_DATALOADER_TYPE=single
RETRO_GPT_EVAL_INTERVAL=2000
RETRO_GPT_EVAL_ITERS=50
RETRO_GPT_TRAIN_SAMPLES=200000
RETRO_GPT_LR_DECAY_SAMPLES=175000
RETRO_GPT_LR_WARMUP_SAMPLES=10000
RETRO_GPT_SEQ_LENGTH=512
RETRO_GPT_GLOBAL_BATCH_SIZE=256
RETRO_GPT_CHUNK_LENGTH=64

GPT_VOCAB="big_models/megatron-gpt-345m/gpt2-vocab.json"
GPT_MERGE="big_models/megatron-gpt-345m/gpt2-merges.txt"

######## Query. ########
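# Query-stage knobs: NPROBE = number of IVF lists Faiss probes per query;
# EF_SEARCH = HNSW efSearch depth for the coarse quantizer;
# NUM_NEIGHBORS_QUERY / NUM_NEIGHBORS_SAVE = neighbors retrieved per chunk vs. saved to disk.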

RETRO_QUERY_NUM_NEIGHBORS_QUERY=200
RETRO_QUERY_NUM_NEIGHBORS_SAVE=20
RETRO_QUERY_EF_SEARCH=32
RETRO_QUERY_NPROBE=4096

######## Args. ########

ARGS=" \
    --distributed-timeout-minutes 600 \
    --tensor-model-parallel-size 1 \
    --pipeline-model-parallel-size 1 \
    --num-layers 24 \
    --hidden-size 1024 \
    --num-attention-heads 16 \
    --micro-batch-size 1 \
    --global-batch-size ${RETRO_GPT_GLOBAL_BATCH_SIZE} \
    --seq-length 512 \
    --max-position-embeddings 512 \
    --load ${BERT_CKPT} \
    --exit-on-missing-checkpoint \
    --no-load-optim \
    --no-load-rng \
    --data-path ${RETRO_GPT_DATA_PATH} \
    --tokenizer-type BertWordPieceLowerCase \
    --vocab-file ${BERT_VOCAB} \
    --split ${RETRO_GPT_SPLIT} \
    --distributed-backend nccl \
    --lr 0.0001 \
    --lr-decay-style linear \
    --min-lr 1.0e-5 \
    --train-samples ${RETRO_GPT_TRAIN_SAMPLES} \
    --lr-decay-samples ${RETRO_GPT_LR_DECAY_SAMPLES} \
    --lr-warmup-samples ${RETRO_GPT_LR_WARMUP_SAMPLES} \
    --weight-decay 1e-2 \
    --clip-grad 1.0 \
    --eval-interval ${RETRO_GPT_EVAL_INTERVAL} \
    --eval-iters ${RETRO_GPT_EVAL_ITERS} \
    --fp16 \
    --dataloader-type ${RETRO_GPT_DATALOADER_TYPE} \
    --no-data-sharding \
    --no-gradient-accumulation-fusion \
    --no-async-tensor-model-parallel-allreduce \
    --bert-embedder-type megatron \
    --output-bert-embeddings \
    \
    --retro-workdir ${RETRO_WORKDIR} \
    --retro-tasks ${RETRO_TASKS} \
    --retro-return-doc-ids \
    --retro-bert-vocab-file ${BERT_VOCAB} \
    --retro-bert-tokenizer-type BertWordPieceLowerCase \
    --retro-gpt-seed ${RETRO_GPT_SEED} \
    --retro-gpt-tokenizer-type GPT2BPETokenizer \
    --retro-gpt-vocab-file ${GPT_VOCAB} \
    --retro-gpt-merge-file ${GPT_MERGE} \
    --retro-gpt-seq-length ${RETRO_GPT_SEQ_LENGTH} \
    --retro-gpt-chunk-length ${RETRO_GPT_CHUNK_LENGTH} \
    --retro-gpt-global-batch-size ${RETRO_GPT_GLOBAL_BATCH_SIZE} \
    --retro-gpt-eval-interval ${RETRO_GPT_EVAL_INTERVAL} \
    --retro-gpt-eval-iters ${RETRO_GPT_EVAL_ITERS} \
    --retro-gpt-split ${RETRO_GPT_SPLIT} \
    --retro-gpt-data-path ${RETRO_GPT_DATA_PATH} \
    --retro-index-str ${RETRO_INDEX_STR} \
    --retro-index-ntrain ${RETRO_INDEX_NTRAIN} \
    --retro-index-train-load-fraction ${RETRO_INDEX_TRAIN_LOAD_FRACTION} \
    --retro-index-add-load-fraction ${RETRO_INDEX_ADD_LOAD_FRACTION} \
    --retro-index-no-delete-training-embeddings \
    --retro-index-no-delete-added-codes \
    --retro-query-num-neighbors-query ${RETRO_QUERY_NUM_NEIGHBORS_QUERY} \
    --retro-query-num-neighbors-save ${RETRO_QUERY_NUM_NEIGHBORS_SAVE} \
    --retro-query-ef-search ${RETRO_QUERY_EF_SEARCH} \
    --retro-query-nprobe ${RETRO_QUERY_NPROBE} \
"

######## Command. ########

NPROCS=1 #! Number of GPUs.
CMD="\
    cd ${REPO_DIR} && pwd && \
    python -m torch.distributed.run \
    --nproc_per_node ${NPROCS} \
    --nnodes 1 \
    --master_port 6000 \
    tools/retro/main.py ${ARGS} \
"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "CMD = '$CMD'."
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~"
eval $CMD
github-actions[bot] commented 5 months ago

Marking as stale. No activity in 60 days.