Sense-X / UniFormer

[ICLR2022] official implementation of UniFormer
Apache License 2.0

Datasets K400 and SthV2 #62

Closed LEM0NTE closed 2 years ago

LEM0NTE commented 2 years ago

I have noticed that the format and the number of videos in the K400 and SthV2 datasets I can obtain are quite different from the datasets used in this project. Could you please provide the datasets you used, or describe the preprocessing steps from your experiments?

Andy1621 commented 2 years ago

You can simply modify the code for your version of the datasets. Don't hesitate to ask me if you meet any problems.

For SthV2, the video number should be the same. To extract the frames, you can find some help here.
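For example, a minimal frame-extraction sketch with ffmpeg (assuming the SthV2 videos are .webm files under videos/ and you want JPEG frames; the paths and quality setting are illustrative, not the exact settings used for this repo):

mkdir -p frames
for v in videos/*.webm; do
  name=$(basename "$v" .webm)
  mkdir -p "frames/$name"
  # decode every frame of the video into numbered JPEGs
  ffmpeg -i "$v" -q:v 2 "frames/$name/img_%05d.jpg"
done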

For Kinetics, I read the videos directly. Since there are many versions of Kinetics400, you can download the version from cvdfoundation. Alternatively, you can download my version of Kinetics400 from BaiduYun. Password: muq2. After downloading all the split parts, merge and extract them:

cat xa* >> train_k400.zip
unzip train_k400.zip
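
If the extraction fails, a quick way to check whether the reassembled archive is intact (a generic zip integrity test, not something specific to this dataset):

# test the archive without extracting it
unzip -t train_k400.zip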

Note that you should not make the dataset public, since we do not have the right to distribute it. The link will be disabled after 7 days. By the way, we do not put the dataset on Google Drive because we do not have enough space (it is ~150GB).

LEM0NTE commented 2 years ago

Hello, does this link only contain the training set? Where can I get the test and validation sets?

LEM0NTE commented 2 years ago

> You can simply modify the code for your version of the datasets. Don't hesitate to ask me if you meet any problems.
>
> For SthV2, the video number should be the same. To extract the frames, you can find some help here.
>
> For Kinetics, I read the videos directly. Since there are many versions of Kinetics400, you can download the version from cvdfoundation. Alternatively, you can download my version of Kinetics400 from BaiduYun. Password: muq2. After downloading all the split parts, merge and extract them:
>
> cat xa* >> train_k400.zip
> unzip train_k400.zip
>
> Note that you should not make the dataset public, since we do not have the right to distribute it. The link will be disabled after 7 days. By the way, we do not put the dataset on Google Drive because we do not have enough space (it is ~150GB).

Also, would it be convenient for you to share the folder structure of the dataset you used, e.g. like the example provided by mmcv? I am not sure how to correctly set the DATA.PATH_TO_DATA_DIR and DATA.PATH_PREFIX options in test.sh. Thank you very much for your help! (screenshot attached)
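
P.S. My current guess at the expected layout (this is only an assumption on my side, so it may well be wrong) is something like:

kinetics_400/
  train.csv        # each line: <video_name>.mp4,<label>, using the DATA.PATH_LABEL_SEPARATOR "," above
  val.csv
  test.csv
  video_320/
    <video_name>.mp4
    ...

i.e. DATA.PATH_TO_DATA_DIR would point at the directory holding the csv files, and DATA.PATH_PREFIX at the directory that is prepended to each video name listed in them.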

LEM0NTE commented 2 years ago

Let me restate my steps here in the hope of getting your help:

  1. First, I modified ./video_classification/exp/uniformer_s8*8_k400/test.sh to the following:

work_path=$(dirname $0)
PYTHONPATH=$PYTHONPATH:./slowfast \
python tools/run_net.py \
  --cfg $work_path/test.yaml \
  DATA.PATH_TO_DATA_DIR '/research/students/sunrx/myexp/UniFormer/data/Kinetics400/kinetics_400' \
  DATA.PATH_PREFIX '/research/students/sunrx/myexp/UniFormer/data/Kinetics400/kinetics_400/video_320/' \
  DATA.PATH_LABEL_SEPARATOR "," \
  TRAIN.EVAL_PERIOD 5 \
  TRAIN.CHECKPOINT_PERIOD 1 \
  TRAIN.BATCH_SIZE 128 \
  NUM_GPUS 2 \
  UNIFORMER.DROP_DEPTH_RATE 0.1 \
  SOLVER.MAX_EPOCH 100 \
  SOLVER.BASE_LR 4e-4 \
  SOLVER.WARMUP_EPOCHS 10.0 \
  DATA.TEST_CROP_SIZE 224 \
  TEST.NUM_ENSEMBLE_VIEWS 4 \
  TEST.NUM_SPATIAL_CROPS 1 \
  TRAIN.ENABLE False \
  TEST.CHECKPOINT_FILE_PATH '/research/students/sunrx/myexp/UniFormer/video_classification/checkpoints/uniformer_small_k400_8x8.pth' \
  RNG_SEED 6666 \
  OUTPUT_DIR '/research/students/sunrx/myexp/UniFormer/video_classification'

The checkpoint uniformer_small_k400_8x8.pth has also been downloaded and placed at the path specified in the script above.

  2. The K400 dataset was downloaded and extracted from the link you provided earlier. All the .mp4 files are under ./UniFormer/data/Kinetics400/kinetics_400/video_320/, and the ./UniFormer/data/Kinetics400/kinetics_400/ directory contains test.csv, train.csv, and val.csv.

  3. Then, from the ./UniFormer/video_classification directory, I ran bash ./exp/uniformer_s8x8_k400/test.sh, which printed the following messages. First the [INFO] output:

[07/04 13:58:11][INFO] uniformer.py: 288: Use checkpoint: False
[07/04 13:58:11][INFO] uniformer.py: 289: Checkpoint number: [0, 0, 0, 0]
[07/04 13:58:12][INFO] build.py: 45: load pretrained model

[07/04 13:58:15][INFO] misc.py: 184: Params: 21,400,400
[07/04 13:58:15][INFO] misc.py: 185: Mem: 0.1601576805114746 MB
[07/04 13:58:15][WARNING] jit_analysis.py: 499: Unsupported operator aten::add encountered 54 time(s)
[07/04 13:58:15][WARNING] jit_analysis.py: 499: Unsupported operator aten::gelu encountered 18 time(s)
[07/04 13:58:15][WARNING] jit_analysis.py: 499: Unsupported operator aten::div encountered 11 time(s)
[07/04 13:58:15][WARNING] jit_analysis.py: 499: Unsupported operator aten::mul encountered 11 time(s)
[07/04 13:58:15][WARNING] jit_analysis.py: 499: Unsupported operator aten::softmax encountered 11 time(s)
[07/04 13:58:15][WARNING] jit_analysis.py: 499: Unsupported operator aten::mean encountered 1 time(s)
[07/04 13:58:15][WARNING] jit_analysis.py: 511: The following submodules of the model were never called during the trace of the graph. They may be unused, or they were accessed by direct calls to .forward() or via other python methods. In the latter case they will have zeros for statistics, though their statistics will still contribute to their parent calling module. module.blocks1.1.drop_path, module.blocks1.2.drop_path, module.blocks2.0.drop_path, module.blocks2.1.drop_path, module.blocks2.2.drop_path, module.blocks2.3.drop_path, module.blocks3.0.drop_path, module.blocks3.1.drop_path, module.blocks3.2.drop_path, module.blocks3.3.drop_path, module.blocks3.4.drop_path, module.blocks3.5.drop_path, module.blocks3.6.drop_path, module.blocks3.7.drop_path, module.blocks4.0.drop_path, module.blocks4.1.drop_path, module.blocks4.2.drop_path
[07/04 13:58:15][INFO] misc.py: 186: Flops: 17.613435904 G
[07/04 13:58:16][WARNING] jit_analysis.py: 499: Unsupported operator aten::layer_norm encountered 26 time(s)
[07/04 13:58:16][WARNING] jit_analysis.py: 499: Unsupported operator aten::add encountered 54 time(s)
[07/04 13:58:16][WARNING] jit_analysis.py: 499: Unsupported operator aten::batch_norm encountered 15 time(s)
[07/04 13:58:16][WARNING] jit_analysis.py: 499: Unsupported operator aten::gelu encountered 18 time(s)
[07/04 13:58:16][WARNING] jit_analysis.py: 499: Unsupported operator aten::div encountered 11 time(s)
[07/04 13:58:16][WARNING] jit_analysis.py: 499: Unsupported operator aten::mul encountered 11 time(s)
[07/04 13:58:16][WARNING] jit_analysis.py: 499: Unsupported operator aten::softmax encountered 11 time(s)
[07/04 13:58:16][WARNING] jit_analysis.py: 499: Unsupported operator aten::mean encountered 1 time(s)
[07/04 13:58:16][WARNING] jit_analysis.py: 511: The following submodules of the model were never called during the trace of the graph. (same drop_path submodule list as above)
[07/04 13:58:16][INFO] misc.py: 191: Activations: 88.579856 M

  4. Finally, it raised this error: RuntimeError: Failed to fetch video after 10 retries. together with the following traceback:

Traceback (most recent call last):
  File "", line 1, in
  File "/home/ps/anaconda3/envs/uni/lib/python3.9/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/home/ps/anaconda3/envs/uni/lib/python3.9/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated

While scrolling back through the output, I also found messages like these:

/github/workspace/src/video/video_reader.cc:83: Failed to meta load video idx 5050 from /research/students/sunrx/myexp/UniFormer/data/Kinetics400/kinetics_400/video_320/hRna4_5yMYw.mp4; trial 7
ERROR opening: /research/students/sunrx/myexp/UniFormer/data/Kinetics400/kinetics_400/video_320/uSscrX2Moas.mp4, No such file or directory
Failed to meta load video idx 600 from /research/students/sunrx/myexp/UniFormer/data/Kinetics400/kinetics_400/video_320/-r8c7F4tOI8.mp4; trial 4
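
To see which entries of test.csv are actually missing on disk, a rough check like this should work (a sketch only; it assumes each csv line starts with the video file name relative to video_320/, separated from the label by a comma, and that it is run from the kinetics_400 directory):

# print every video listed in test.csv that does not exist under video_320/
cut -d ',' -f 1 test.csv | while read -r f; do
  [ -e "video_320/$f" ] || echo "missing: $f"
done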

I am not sure how to proceed from here. Could you give me some help? Thank you very much!

LEM0NTE commented 2 years ago

Update! I have successfully run the test code. Although I could not verify whether every weight file was loaded or whether the network was used correctly, the server printed the following messages: (screenshot attached) This matches the screenshot you posted in another issue, and I also obtained the correct accuracy results.

There are still a few questions, though. First, I checked the files listed in test.csv, and they are indeed all in the videos_320 folder. However, test.csv seems to contain 19787 videos, while the printed messages show the whole test finishing at cur_iter = 1237. Also, I am not sure what the fields in the screenshot mean, i.e. cur_iter, split, and time_diff.

Andy1621 commented 2 years ago

Thanks for your good try! cur_iter counts test iterations rather than videos: the total number of iterations is data_size / batch_size, i.e. ceil(19787 / 16) = 1237. split is the name of the dataset split. time_diff is the time taken per iteration.
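
A quick sanity check of the arithmetic (assuming, as above, an effective test batch size of 16):

# ceiling division: 19787 videos / 16 per batch -> 1237 iterations
echo $(( (19787 + 16 - 1) / 16 ))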