KeenNest opened this issue 4 months ago
Why am I getting this error while running evaluate.py? ..., [0.6902, 0.7098, 0.7176, ..., 0.2078, 0.2078, 0.2157], [0.6941, 0.7098, 0.7176, ..., 0.1804, 0.1451, 0.1608], [0.6784, 0.7294, 0.7412, ..., 0.1882, 0.1961, 0.1843]]]]]), 'key_frame_int': tensor([15])} ('office-video',) Illegal instruction (core dumped)
Hello, I have the same question. Can you please share the link to the dataset you downloaded?
@Justarrrrr I created my own dataset. If you're getting "Illegal instruction (core dumped)", try reducing the frame size to 160×120.
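If it helps, here is a minimal sketch of downscaling every frame in a folder to 160×120 with Pillow. The folder paths and the `.png` naming are assumptions for illustration, not the repo's own preprocessing code; adjust them to your dataset.

```python
from pathlib import Path
from PIL import Image

def resize_frames(src_dir, dst_dir, size=(160, 120)):
    """Downscale every PNG frame in src_dir and write it to dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(src_dir).glob("*.png")):
        Image.open(frame).resize(size, Image.BICUBIC).save(dst / frame.name)

# e.g. resize_frames("hr-set/office-video", "lr-set/office-video")
```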
Thanks
@Justarrrrr what kind of problem are you facing?
This model is already trained; you just have to download it from the given link and run:
python3 evaluate.py --lr_dir=lr-set --key_dir=key-set --target_dir=hr-set --output_dir=proj --model_dir=experiments/bix4_keyvsrc_attn --restore_file=pretrained --file_fmt="frame%d.jpg"
Hello, KeenNest! Thank you for the help you provided earlier, but now I have a new question. Can you help me with it? I would like to know why the 'evaluate' model takes three folders: 'key', 'target', and 'lr'. What is the purpose of these three inputs?
Basically, the evaluate arguments are the model's inputs: lr (low resolution) is the main input, e.g. frames captured directly from a live camera; key holds high-resolution key frames taken at intervals; and target is used to check how well the reconstructed frames match the ground truth.
I can understand those two inputs, but why should we provide the hr_set? In the end, the reconstructed frames come out at the same resolution as the hr_set.
Hi @Justarrrrr, basically we need the hr_set to check the model's performance. Are you able to produce output from it?
Yeah, I can produce the output. But as I understand it, we only need the low-resolution images and some key frames; the hr_set is just needed to compute metrics like the loss. Is that right?
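Right, a ground-truth set is only needed to score the reconstruction. As a rough illustration of such a metric (this is not the repo's own evaluation code), PSNR between a reconstructed frame and its ground-truth frame can be computed like this:

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two uint8 images (higher is better)."""
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```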
Yes. But I have a few doubts about producing the output files that I'd like to share. Also, what are the specs of the system you're using?
What doubts do you have? I use the Vid4 dataset: --lr_dir=./Vid4/BDx4 --key_dir=./Vid4/GT --target_dir=./Vid4/GT --model_dir=experiments/bix4_keyvsrc_attn --restore_file=pretrained --file_fmt=%08d.png --output_dir=./output. Place the Vid4 dataset in the project folder, and in the end the output can be found in the generated 'output' folder.
I am running python3 evaluate.py --lr_dir=/home/ashish/proj/dataset/lr-set/ --key_dir=/home/ashish/proj/dataset/key-set/ --target_dir=/home/ashish/proj/dataset/hr-set/ --model_dir=experiments/bix4_keyvsrc_attn/ --restore_file=pretrained --file_fmt="frame%d.png", and my process gets killed after some time. I am using a Jetson Nano with 4 GB of RAM.
What's the traceback?
ashish@ashish-desktop:~/proj/NeuriCam$ python3 evaluate.py --lr_dir=/home/ashish/proj/dataset/lr-set/ --key_dir=/home/ashish/proj/dataset/key-set/ --target_dir=/home/ashish/proj/dataset/hr-set/ --model_dir=experiments/bix4_keyvsrc_attn/ --restore_file=pretrained --file_fmt="frame%d.png" --output=./output
/usr/local/lib/python3.6/dist-packages/mmcv/__init__.py:21: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
'On January 1, 2023, MMCV will release v2.0.0, in which it will remove '
Creating the dataset...
What dataset did you use?
I created my own dataset.
May I ask what kind of preprocessing you applied to your dataset? I tried running the model on the standard Vid4 dataset with some modifications, but encountered issues.
First I reduced the size to 160×120, then divided the dataset into three parts: lr-set, hr-set, and key-set. Is there something else I have to do?
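The split described above could be sketched roughly like this: hr-set keeps the original frames, key-set keeps every Nth original frame, and lr-set holds downscaled copies. The key interval of 15, the folder names, and the lr resolution are assumptions for illustration (the log earlier shows 'key_frame_int': tensor([15]), but check the repo's config for the real values).

```python
import shutil
from pathlib import Path
from PIL import Image

def split_dataset(frames_dir, out_dir, lr_size=(160, 120), key_interval=15):
    """Split one folder of PNG frames into hr-set / key-set / lr-set."""
    out = Path(out_dir)
    for sub in ("hr-set", "key-set", "lr-set"):
        (out / sub).mkdir(parents=True, exist_ok=True)
    for i, frame in enumerate(sorted(Path(frames_dir).glob("*.png"))):
        shutil.copy(frame, out / "hr-set" / frame.name)   # full-res ground truth
        if i % key_interval == 0:                         # sparse high-res keys
            shutil.copy(frame, out / "key-set" / frame.name)
        Image.open(frame).resize(lr_size, Image.BICUBIC).save(
            out / "lr-set" / frame.name)                  # low-res input stream
```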
Can you send me your dataset so I can have a try?
https://drive.google.com/drive/folders/1jhmUm9rL8zfY-JJ6Zq3GvagDTzLDsM6-?usp=sharing
What are your machine's specifications?
That's what I wanted to ask you, haha. I use remote machines with a 2080 Ti and a 4090, but right now I don't have an idle GPU.
I am using a Jetson Nano:
GPU: 128-core NVIDIA Maxwell™ architecture, max frequency 921 MHz
CPU: quad-core ARM® Cortex®-A57 MPCore processor, max frequency 1.43 GHz
Memory: 4 GB 64-bit LPDDR4, 25.6 GB/s
First, you need to create a new folder, e.g. named "work", under lr_set and the other folders. Second, this dataset cannot be processed successfully: I tried modifying the Vid4 dataset to 160×120, but encountered the same error.
But I already created the 'live' folder under lr-set. And what resolution do you suggest I use?
Hi @Justarrrrr, any idea why it's not running?
Sorry for the late reply, I haven't found a solution either. I wanted to ask if you have the REDS4 dataset?
No, I don't have it.
Hi @Justarrrrr, can you send me the link to the dataset you used to test this repo, and also the command you're using?
Sure, have you successfully run the model?
Not yet. I downsampled that dataset, but now I'm getting this error: RuntimeError: [enforce fail at CPUAllocator.cpp:68] DefaultCPUAllocator: can't allocate memory: you tried to allocate 5245378560 bytes. Error code 12 (Cannot allocate memory)
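For what it's worth, that single failed allocation is already larger than the Jetson Nano's 4 GB of RAM, which would explain the failure regardless of other settings. A quick back-of-the-envelope check (the helper below is just an illustration, not part of the repo):

```python
# Estimate the memory a float32 tensor would need before trying to allocate it.
def tensor_bytes(shape, dtype_size=4):  # 4 bytes per float32 element
    n = 1
    for dim in shape:
        n *= dim
    return n * dtype_size

failed = 5245378560                 # bytes, from the error message
print(f"{failed / 2**30:.2f} GiB")  # about 4.89 GiB, more than 4 GB of RAM
```

Reducing the input resolution or the number of frames processed at once shrinks this allocation multiplicatively, which is likely why the 160×120 suggestion earlier in the thread helps.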
Hi @Justarrrrr, can you send me the list of library versions you installed to run this repo?
To run it, use this command: python evaluate.py --lr_dir= --key_dir= --target_dir= --model_dir=experiments/bix4_keyvsrc_attn --restore_file=pretrained --file_fmt=<file format, e.g. "%08d.png">
I need these paths, right? The paths of the LR, key, and ground-truth sets. Can you please share the link to the dataset so I can download it?