Open yihang-xdu opened 10 months ago
Hi,
Try this command:
python train_attribute_features.py --epoch=5000 --args.path2instruction_bert_features=./dataset/instruction_bert_features.json
Dear Authors,
Thanks for your reply! I don't think the problem is a failure to read the JSON file. When I read the file "instruction_val_check.json", I get 241 dictionaries, and just before taking the intersection of the three key sets there are still 239. But the intersection itself is empty. I don't know whether that count is right; can you help me check it? Here are some details from just before the intersection of the three:
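The empty intersection can be checked directly. A minimal diagnostic sketch, assuming the file paths and dict roles mentioned in this thread (the helper `key_overlap` is hypothetical, not part of the repository):

```python
def key_overlap(named_dicts):
    """Print each dict's key count and return the keys common to all of them."""
    common = None
    for name, d in named_dicts.items():
        keys = set(d.keys())
        print(f"{name}: {len(keys)} keys")
        # Intersect the key sets one dict at a time.
        common = keys if common is None else common & keys
    print(f"common to all: {len(common or set())} keys")
    return common or set()

# Usage against the actual dataset files (paths assumed from this thread):
# import json
# named = {name: json.load(open(path)) for name, path in {
#     "instruction": "./dataset/instruction_val_check.json",
#     "instruction_features": "./dataset/instruction_bert_features.json",
#     "LGO_features": "./dataset/LGO_feature.json",
# }.items()}
# key_overlap(named)  # 0 common keys means the files' key formats disagree
```

If this reports 0 common keys, printing a few sample keys from each file usually reveals the mismatch (e.g. differently formatted identifiers, or a stale file).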
Hi,
I am sorry, I uploaded the wrong JSON files. I have just re-uploaded the related JSON files, including instruction_bert_features.json and LGO_feature.json. Please download them again.
The code works now! Thanks a lot!
Dear Authors,
Thanks for your great work! I get an error when I run 'python train_attribute_features.py --epoch=5000' to train the attribute module. May I ask whether you have ever encountered this problem? It still occurs when I run in debug mode. Searching online suggests it may be due to insufficient hardware, but your code states that one GPU is enough, and there are 8 GPUs available on my server.
Can you give me some advice? Thank you again!
Hi,
I did not encounter this problem at any point during training.
You can directly use my trained attribute model from the Materials Download
(https://github.com/whcpumpkin/Demand-driven-navigation?tab=readme-ov-file#materials-download-under-updating)
If it's a matter of GPU count, I'd suggest running the code with
CUDA_VISIBLE_DEVICES=0 python train_attribute_features.py --epoch=5000
Hello, LGO_feature.json on Google Drive still has the problem below:

$ python train_attribute_features.py --epoch=5000
Traceback (most recent call last):
  File "/home/datadisk3/lss/demandnav/train_attribute_features.py", line 201, in <module>
    main()
  File "/home/datadisk3/lss/demandnav/train_attribute_features.py", line 195, in main
    global_step, avg_loss, min_val_epoch, min_val_loss = train(args, train_dataset, val_dataset, model)
  File "/home/datadisk3/lss/demandnav/train_attribute_features.py", line 97, in train
    val_loss = eval(args, model, val_dataloader)
  File "/home/datadisk3/lss/demandnav/train_attribute_features.py", line 129, in eval
    return global_loss/global_step
ZeroDivisionError: float division by zero

Also, the OneDrive link no longer works. Is it because LGO_feature.json on Google Drive has not been updated to the correct file yet? Could you upload the updated LGO_feature.json?
Looking forward to your reply!
Hi,
I am sorry that the OneDrive link was down. The data on Google Drive was not as new as the data on OneDrive.
I have just updated LGO_feature.json on Google Drive.
If you have any other problems, feel free to ask me.
Thanks so much! I used the LGO_feature.json from Google Drive and changed args.path2instruction_bert_features to ./dataset/instruction_bert_features.json, which let me run successfully.
Dear Authors,
Thanks for your great work! I get an error when I run 'python train_attribute_features.py --epoch=5000' to train the attribute module:
Traceback (most recent call last):
  File "/data1/zyh/Demand-driven-navigation-main/train_attribute_features.py", line 201, in <module>
    main()
  File "/data1/zyh/Demand-driven-navigation-main/train_attribute_features.py", line 195, in main
    global_step, avg_loss, min_val_epoch, min_val_loss = train(args, train_dataset, val_dataset, model)
  File "/data1/zyh/Demand-driven-navigation-main/train_attribute_features.py", line 97, in train
    val_loss = eval(args, model, val_dataloader)
  File "/data1/zyh/Demand-driven-navigation-main/train_attribute_features.py", line 129, in eval
    return global_loss/global_step
ZeroDivisionError: float division by zero
When I debug it, I find that 'val_dataset = instruction_LGO_dataset(args, "val")' produces an empty self.instruction. The reason is this line: 'self.instruction = set(self.instruction.keys()).intersection(set(self.instruction_features.keys())).intersection(set(self.LGO_features.keys()))', and the intersection of the three key sets is empty.
Can you give me some advice? Thank you again!
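The failure mode in the traceback is mechanical: with an empty key intersection the validation dataloader yields no batches, global_step stays 0, and global_loss/global_step divides by zero. A minimal sketch of a guard, assuming the accumulation pattern implied by the traceback (safe_average_loss is a hypothetical stand-in, not the repository's actual eval()):

```python
def safe_average_loss(batch_losses):
    """Average per-batch losses, failing loudly on an empty loader
    instead of raising a bare ZeroDivisionError.

    batch_losses stands in for the losses the eval loop accumulates into
    global_loss while counting batches in global_step.
    """
    global_loss, global_step = 0.0, 0
    for loss in batch_losses:
        global_loss += loss
        global_step += 1
    if global_step == 0:
        # Point at the likely root cause rather than crashing on the division.
        raise RuntimeError(
            "validation dataloader is empty; check that the three JSON "
            "files' key sets actually intersect before training"
        )
    return global_loss / global_step
```

A check like this turns the cryptic division error into a message naming the real problem (stale or mismatched dataset files), which is what the thread eventually diagnosed by hand.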