rajayarli opened 1 year ago
It seems we have a similar problem. If you look at `dataset/taskonomy_constants.py`, the code contains:

```python
BUILDINGS_TRAIN = ['allensville', 'beechwood', 'benevolence', 'coffeen', 'cosmos',
                   'forkland', 'hanson', 'hiteman', 'klickitat', 'lakeville',
                   'leonardo', 'lindenwood', 'marstons', 'merom', 'mifflinburg',
                   'newfields', 'onaga', 'pinesdale', 'pomaria', 'ranchester',
                   'shelbyville', 'stockman', 'tolstoy', 'wainscott', 'woodbine']
```

However, the omnidownloader may not download 'woodbine' (please follow the link https://github.com/GitGyun/visual_token_matching/issues/9#issue-1798181629).
So, I think downloading the scene manually would solve the problem.
Can you please provide a link to download it? Or, if I want to ignore that scene, which files do I need to modify?
Please follow the instructions at the link below: https://github.com/GitGyun/visual_token_matching/issues/9#issuecomment-1631739498
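If you prefer to ignore the missing scene rather than download it, one untested workaround is to edit `dataset/taskonomy_constants.py` so that `BUILDINGS_TRAIN` only keeps scenes that actually exist on disk. This is a sketch, not the repo's code: the `downloaded_buildings` helper and the data-root path are my own names, and the assumption is that each scene is a directory named after the building under your Taskonomy data root.

```python
import os

BUILDINGS_TRAIN = ['allensville', 'beechwood', 'benevolence', 'coffeen', 'cosmos',
                   'forkland', 'hanson', 'hiteman', 'klickitat', 'lakeville',
                   'leonardo', 'lindenwood', 'marstons', 'merom', 'mifflinburg',
                   'newfields', 'onaga', 'pinesdale', 'pomaria', 'ranchester',
                   'shelbyville', 'stockman', 'tolstoy', 'wainscott', 'woodbine']

def downloaded_buildings(buildings, data_root):
    """Keep only buildings whose scene directory exists under data_root.

    Hypothetical helper: assumes one directory per building, e.g.
    <data_root>/woodbine. Adjust the path layout to your download.
    """
    return [b for b in buildings if os.path.isdir(os.path.join(data_root, b))]
```

You could then assign `BUILDINGS_TRAIN = downloaded_buildings(BUILDINGS_TRAIN, data_root)` at import time, so a scene the downloader skipped (like 'woodbine') is silently dropped instead of crashing the dataset constructor.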
```
E:\708\visual_token_matching>python main.py --stage 0 --task_fold 4
C:\Users\rajar\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\amp\autocast_mode.py:204: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
Load ckpt from model/pretrained_checkpoints\beit_base_patch16_224_pt22k.pth
Load state_dict by model_key = model
Expand the shared relative position embedding to each transformer block.
Traceback (most recent call last):
  File "E:\708\visual_token_matching\main.py", line 75, in <module>
    run(config)
    ^^^^^^^^^^^
  File "E:\708\visual_token_matching\main.py", line 25, in run
    model, ckpt_path = load_model(config, verbose=IS_RANK_ZERO)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\708\visual_token_matching\train\train_utils.py", line 228, in load_model
    model = LightningTrainWrapper(config, verbose=verbose)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\708\visual_token_matching\train\trainer.py", line 37, in __init__
    self.support_data = self.load_support_data()
                        ^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\708\visual_token_matching\train\trainer.py", line 52, in load_support_data
    support_data = generate_support_data(self.config, data_path=data_path, verbose=self.verbose)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\708\visual_token_matching\dataset\dataloader_factory.py", line 218, in generate_support_data
    dset = TaskonomySegmentationDataset(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\708\visual_token_matching\dataset\taskonomy.py", line 550, in __init__
    self.class_idxs = [class_idx for class_idx in class_idxs if self.img_paths[class_idx].split('_')[0] in buildings]
                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\708\visual_token_matching\dataset\taskonomy.py", line 550, in <listcomp>
    self.class_idxs = [class_idx for class_idx in class_idxs if self.img_paths[class_idx].split('_')[0] in buildings]
```
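The traceback dies inside the comprehension at `taskonomy.py` line 550, presumably because `self.img_paths` has no entry for images from the scene that was never downloaded. A defensive sketch of that comprehension (not the repo's actual code) would skip such indices instead of raising; it assumes `img_paths` maps a class index to a path whose first `'_'`-separated token is the building name, as the failing line suggests. The name `filter_class_idxs` is hypothetical.

```python
def filter_class_idxs(class_idxs, img_paths, buildings):
    """Keep only class indices that have an image path and whose
    building (first '_'-separated token of the path) was downloaded.
    Indices with no path entry are dropped instead of raising."""
    buildings = set(buildings)
    return [idx for idx in class_idxs
            if idx in img_paths and img_paths[idx].split('_')[0] in buildings]
```

With a guard like this, a missing scene such as 'woodbine' would simply contribute no class indices rather than crashing `TaskonomySegmentationDataset.__init__`.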