Thanks for the awesome research and results.
I got the following traceback while the training results were being printed. Could you please help me fix it?
[INFO|trainer.py:1344] 2022-11-13 12:06:39,632 >> Saving model checkpoint to result/my-unsup-simcse-bert-base-uncased
[INFO|configuration_utils.py:300] 2022-11-13 12:06:39,633 >> Configuration saved in result/my-unsup-simcse-bert-base-uncased/config.json
[INFO|modeling_utils.py:817] 2022-11-13 12:06:40,981 >> Model weights saved in result/my-unsup-simcse-bert-base-uncased/pytorch_model.bin
11/13/2022 12:06:41 - INFO - __main__ - Train results
11/13/2022 12:06:41 - INFO - __main__ - epoch = 1.0
11/13/2022 12:06:41 - INFO - __main__ - train_runtime = 15286.503
11/13/2022 12:06:41 - INFO - __main__ - train_samples_per_second = 1.022
11/13/2022 12:06:41 - INFO - __main__ - Evaluate
11/13/2022 12:07:37 - INFO - root - Generating sentence embeddings
11/13/2022 12:08:06 - INFO - root - Generated sentence embeddings
11/13/2022 12:08:06 - INFO - root - Training pytorch-MLP-nhid0-rmsprop-bs128 with (inner) 5-fold cross-validation
Traceback (most recent call last):
File "/home/nizam/SimCSE/train.py", line 585, in
main()
File "/home/nizam/SimCSE/train.py", line 567, in main
results = trainer.evaluate(eval_senteval_transfer=True)
File "/home/nizam/SimCSE/simcse/trainers.py", line 129, in evaluate
results = se.eval(tasks)
File "/home/nizam/SimCSE/./SentEval/senteval/engine.py", line 59, in eval
self.results = {x: self.eval(x) for x in name}
File "/home/nizam/SimCSE/./SentEval/senteval/engine.py", line 59, in
self.results = {x: self.eval(x) for x in name}
File "/home/nizam/SimCSE/./SentEval/senteval/engine.py", line 127, in eval
self.results = self.evaluation.run(self.params, self.batcher)
File "/home/nizam/SimCSE/./SentEval/senteval/binary.py", line 57, in run
devacc, testacc = clf.run()
File "/home/nizam/SimCSE/./SentEval/senteval/tools/validation.py", line 78, in run
clf = MLP(self.classifier_config, inputdim=self.featdim,
File "/home/nizam/SimCSE/./SentEval/senteval/tools/classifier.py", line 200, in init
optim_fn, optim_params = utils.get_optimizer(self.optim)
File "/home/nizam/SimCSE/./SentEval/senteval/utils.py", line 89, in get_optimizer
expected_args = inspect.getargspec(optim_fn.__init__)[0]
File "/home/nizam/anaconda3/envs/simcse-new/lib/python3.9/inspect.py", line 1122, in getargspec
raise ValueError("Function has keyword-only parameters or annotations"
ValueError: Function has keyword-only parameters or annotations, use inspect.signature() API which can support them
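From the last frame, the error seems to come from SentEval/senteval/utils.py calling inspect.getargspec, which raises this ValueError on Python 3.9 when the optimizer's __init__ has keyword-only parameters or annotations (as recent PyTorch optimizers do). Below is a minimal sketch of the kind of change the error message seems to point at; it is only my guess at a workaround and uses torch.optim.RMSprop as a stand-in for whatever optimizer SentEval resolves from the "rmsprop" config string:

```python
# Hedged sketch: reproduce the failing call and two possible replacements,
# outside SentEval. Assumes torch is installed.
import inspect
import torch

optim_fn = torch.optim.RMSprop

# Roughly what SentEval/senteval/utils.py line 89 does and what raises the
# ValueError in my environment (Python 3.9, recent PyTorch):
#   expected_args = inspect.getargspec(optim_fn.__init__)[0]

# Possible replacement 1: getfullargspec supports keyword-only parameters and
# annotations; index [0] is the same "args" list the old call returned.
expected_args = inspect.getfullargspec(optim_fn.__init__)[0]
print(expected_args)  # e.g. ['self', 'params', 'lr', 'alpha', ...]

# Possible replacement 2: the inspect.signature() API the error suggests.
expected_args = [name for name in inspect.signature(optim_fn.__init__).parameters]
print(expected_args)
```

If patching SentEval this way is the right direction, swapping getargspec for getfullargspec at that one line looks like the smaller change, since the rest of get_optimizer only uses the argument-name list; but I would appreciate confirmation that this is the intended fix.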