Hi @leo23ui, you can use the argument --train-num-samples 1500000.
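(For context, and assuming the data pipeline here follows open_clip's webdataset handling: tar-based webdataset pipelines don't expose a dataset length, so the number of samples seen per epoch has to be stated explicitly with --train-num-samples; 1500000 corresponds to roughly a tenth of YFCC-15M.)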
Thanks very much, I successfully made the dataset, but found that an error occurred when reading the data:

Process Process-1:
Traceback (most recent call last):
  File "/home/gg/miniconda3/envs/a/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/gg/miniconda3/envs/a/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 262, in _worker_loop
    torch.manual_seed(seed)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
    return disable_fn(*args, **kwargs)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/site-packages/torch/random.py", line 56, in manual_seed
    torch.xpu.manual_seed_all(seed)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/site-packages/torch/xpu/random.py", line 114, in manual_seed_all
    _lazy_call(cb, seed_all=True)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/site-packages/torch/xpu/__init__.py", line 85, in _lazy_call
    _lazy_seed_tracker.queue_seed_all(callable, traceback.format_stack())
  File "/home/gg/miniconda3/envs/a/lib/python3.10/traceback.py", line 213, in format_stack
    return format_list(extract_stack(f, limit=limit))
  File "/home/gg/miniconda3/envs/a/lib/python3.10/traceback.py", line 227, in extract_stack
    stack = StackSummary.extract(walk_stack(f), limit=limit)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/traceback.py", line 383, in extract
    f.line
  File "/home/gg/miniconda3/envs/a/lib/python3.10/traceback.py", line 306, in line
    self._line = linecache.getline(self.filename, self.lineno)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/linecache.py", line 30, in getline
    lines = getlines(filename, module_globals)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/linecache.py", line 46, in getlines
    return updatecache(filename, module_globals)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/linecache.py", line 137, in updatecache
    lines = fp.readlines()
  File "/home/gg/miniconda3/envs/a/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 4173: invalid start byte

Error in sys.excepthook:
Traceback (most recent call last):
  File "/home/gg/miniconda3/envs/a/lib/python3.10/linecache.py", line 46, in getlines
    return updatecache(filename, module_globals)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/linecache.py", line 137, in updatecache
    lines = fp.readlines()
  File "/home/gg/miniconda3/envs/a/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 4173: invalid start byte

Original exception was:
Traceback (most recent call last):
  File "/home/gg/miniconda3/envs/a/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1243, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/multiprocessing/queues.py", line 113, in get
    if not self._poll(timeout):
  File "/home/gg/miniconda3/envs/a/lib/python3.10/multiprocessing/connection.py", line 262, in poll
    return self._poll(timeout)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/multiprocessing/connection.py", line 429, in _poll
    r = wait([self], timeout)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/multiprocessing/connection.py", line 936, in wait
    ready = selector.select(timeout)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/selectors.py", line 416, in select
    fd_event_list = self._selector.poll(timeout)
  File "/home/gg/miniconda3/envs/a/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 73, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3323391) exited unexpectedly with exit code 1. Details are lost due to multiprocessing. Rerunning with num_workers=0 may give better error trace.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/gg/gg/MQBench-main/test/model/TinyCLIP/src/training/main.py", line 565, in <module>
The txt file (a caption file from yfcc_15m) is detected as ASCII, and my server's locale is LANG=en_US.UTF-8. I searched and found that UTF-8 is a superset of ASCII, so why do I still get this error? Looking forward to your reply, thanks!
The following is the code I used to detect the encoding of 99998021_81f5616c9b.txt:
# -*- coding: utf-8 -*-
import chardet

with open('99998021_81f5616c9b.txt', 'rb') as f:
    result = chardet.detect(f.read())  # read the file contents for encoding detection
print(result['encoding'])  # print the detected encoding
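For what it's worth, here is a small standalone sketch (not from the repo) that reports every byte in that caption file which breaks UTF-8 decoding; if it prints nothing, the caption file itself decodes cleanly and the 0xb2 byte is probably coming from some other file:

# Minimal sketch: report offsets of bytes that are not valid UTF-8.
with open('99998021_81f5616c9b.txt', 'rb') as f:
    data = f.read()

pos = 0
while pos < len(data):
    try:
        data[pos:].decode('utf-8')
        break  # the remainder decodes cleanly
    except UnicodeDecodeError as e:
        bad = pos + e.start          # absolute offset of the offending byte
        print(f"undecodable byte 0x{data[bad]:02x} at offset {bad}")
        pos = bad + 1                # skip it and keep scanning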
The following is the script I used:
export NNODES=1
export GPUS_PER_NODE=1
export WANDB__SERVICE_WAIT=60

DISTRIBUTED_ARGS="--nproc_per_node $GPUS_PER_NODE --nnodes $NNODES"

torchrun $DISTRIBUTED_ARGS src/training/main.py \
    --save-frequency 1 \
    --report-to wandb \
    --train-data /home/gg/gg/MQBench-main/test/model/e/split_1/split_1tarpart/split_1tar \
    --dataset-type webdataset \
    --imagenet-val ./ImageNet \
    --warmup 2000 \
    --batch-size 512 \
    --epochs 25 \
    --workers 1 \
    --model TinyCLIP-ViT-39M-16-Text-19M \
    --name exp_name \
    --seed 0 \
    --local-loss \
    --grad-checkpointing \
    --output ./outputs/TinyCLIP-ViT-39M-16-Text-19M \
    --lr 0.0001 \
    --gather-with-grad \
    --pretrained-image-file ViT-B-16@openai \
    --pretrained-text-file ViT-B-16@openai \
    --distillation-teacher ViT-B-32@laion2b_e16 \
    --norm_gradient_clip 5 \
    --train-num-samples 100000 \
    --logit-scale 50
I haven't encountered this issue. You can check whether the webdataset can read the data.
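In case it helps, here is a minimal standalone check (not from this repo) that iterates one shard with the webdataset package directly, assuming each shard is a .tar containing <key>.jpg / <key>.txt pairs; the shard filename below is hypothetical, so substitute one of your own. If a caption is not valid UTF-8, this loop should fail on exactly that sample. Independently, the RuntimeError above suggests rerunning with num_workers=0, i.e. --workers 0 if that flag feeds the DataLoader's num_workers as in open_clip, which should surface the original worker-side error.

import webdataset as wds

# Hypothetical shard path: point this at one of your actual .tar files.
shard = "/home/gg/gg/MQBench-main/test/model/e/split_1/split_1tarpart/split_1tar/00000.tar"

dataset = wds.WebDataset(shard).decode("pil").to_tuple("jpg;png", "txt")
for i, (image, caption) in enumerate(dataset):
    print(i, image.size, caption[:80])  # image size and start of the caption
    if i >= 4:                          # only sanity-check the first few samples
        break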
Thanks for your reply!! I have already set --dataset-type webdataset in the script, so I'll go look for other causes. Thanks!!
Hello, how about distilling on a tenth of the yfcc15m dataset, which is about 1.5 million image-text pairs? The following linked work uses 800,000 image-text pairs. Hope to get your reply, thanks very much!! https://tech.pic-collage.com/distillation-of-clip-model-and-other-experiments-f8394b7321ce