When running without a GPU, it keeps hanging at:
_torch_pytree._register_pytree_node(
inferencing embedding for corpus (number=15)--------------
inferencing embedding for queries (number=10)--------------
create index and search------------------
@karong398 , if you have many candidates, searching on the CPU takes a long time. You can use GPUs or reduce the size of the corpus to speed up the search.
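To see why corpus size dominates the search time, here is a minimal sketch of the "create index and search" step as an exact inner-product search with NumPy. This is an illustration only, not FlagEmbedding's actual code; the embedding dimension and the corpus/query counts (taken from the log above) are assumptions. The cost of this step grows with corpus size × query count, which is why shrinking the corpus (or the `--range_for_sampling` window) speeds things up.

```python
import numpy as np

# Toy stand-in for the "create index and search" step: exact inner-product
# search over corpus embeddings, similar to what a flat CPU index does.
# Sizes match the log (15 corpus docs, 10 queries); dim=768 is an assumption.
rng = np.random.default_rng(0)
dim = 768
corpus = rng.standard_normal((15, dim)).astype(np.float32)   # corpus embeddings
queries = rng.standard_normal((10, dim)).astype(np.float32)  # query embeddings

scores = queries @ corpus.T                 # (10, 15) similarity matrix
topk = np.argsort(-scores, axis=1)[:, :5]   # top-5 candidate ids per query

print(topk.shape)  # (10, 5)
```

With 15 candidates this finishes instantly; the hang reported above is expected only when the corpus is much larger, since the score matrix and sort scale linearly with its size.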
The command I ran:

python -m FlagEmbedding.baai_general_embedding.finetune.hn_mine \
    --model_name_or_path '/Volumes/移动硬盘/ptrain/output/encoder_model' \
    --input_file toy_finetune_data.jsonl \
    --output_file toy_finetune_data_minedHN.jsonl \
    --range_for_sampling 1-200 \
    --negative_number 15