Closed: TakanoHori closed this issue 1 year ago.
@TakanoHori Did you get the error on `import datasets`? And are you training on your local machine or on Colab?
Hi, I just checked the versions. I used transformers==4.4.2 and datasets==1.4.1 when I was conducting the experiments. Please try these versions to see if it works.
I am now getting these errors:
comet_ml is installed but COMET_API_KEY is not set.
run_emotion.py: error: the following arguments are required: --model_name_or_path, --output_dir
An exception has occurred, use %tb to see the full traceback.
@Coding511 Sorry, I tried running the code but didn't see this error. It might come from something that uses the Comet ML logger. Can you try signing up and setting up an API key at https://www.comet.com/site/ ?
@TideDancer I have signed up on that site, but I don't know how to set up the API key. Could you please tell me how you are training the model from scratch?
@Coding511
Sorry, I didn't encounter this issue and I didn't use CometML myself. Alternatively, just delete any part of the code or dependency packages that involves the CometML logger.
Please follow the preparation_data instructions and use "bash run.sh" to train the model from scratch. Just so you know, training for more than 100 epochs takes days to finish. If you want a larger batch size, which could speed up training, feel free to modify the hyperparameters in run.sh and test.
Thanks.
I am trying to execute the model in my local IDE (Spyder). I think the problem is `--model_name_or_path=facebook/$MODEL` in the run.sh file, line 27. What is that? Did you save the model in some facebook folder? Also, I don't know how to execute bash files in Spyder. :(
@Coding511 The `--model_name_or_path=facebook/$MODEL` flag will directly download the pretrained model from https://huggingface.co/models (a model hub hosted by Hugging Face) and save it to a cache directory, if there is no previous checkpoint saved. The default setting uses facebook/wav2vec2-base.
I am not familiar with Spyder or other IDEs, so I cannot provide instructions on how to set up Spyder to run bash commands. Simply running the code in a terminal may be easier.
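For reference, here is a minimal Python sketch (using the standard transformers API, not the repo's exact code) of what that flag does under the hood: from_pretrained pulls facebook/wav2vec2-base from the Hugging Face hub on first use and caches it locally.

```python
from transformers import Wav2Vec2Model

# First use downloads the checkpoint from https://huggingface.co/facebook/wav2vec2-base
# and caches it locally (typically under ~/.cache/huggingface); later runs reuse the cache.
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
```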
@TideDancer What exactly did you do to the `emotion` and `text` columns to combine the labels here: https://github.com/TideDancer/interspeech21_emotion/blob/main/run_emotion.py#L283 ? I am struggling to understand that aspect of the code, and I would love it if you could explain what is happening there.
I'm coming back to the original error posted in this issue, "TypeError: 'tuple' object does not support item assignment"; I'm running into the same thing. The comment says "labels are list of tensor, not tensor, special handle here", but they are actually a tuple of tensors.
Hello @owos, sorry for the late reply. In the collator, I just put the label as the last element of feature['labels'], as shown here: https://github.com/TideDancer/interspeech21_emotion/blob/c36dcc0d2bd9a22602c081a5ab064ab5e9d4f019/run_emotion.py#L286
Hope this answers your question.
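To make the idea concrete, here is a hedged sketch (not the repo's actual code): the per-utterance emotion label is appended as one extra element after the CTC/text label ids, and split off again later. Function and variable names below are illustrative assumptions.

```python
import torch

def pack_labels(text_label_ids, emotion_label):
    # text_label_ids: 1-D tensor of tokenized transcription ids (CTC target)
    # emotion_label:  integer class id for the utterance-level emotion
    # The emotion label is simply appended as the last element of 'labels'.
    return torch.cat([text_label_ids, torch.tensor([emotion_label])])

def unpack_labels(labels):
    # Inverse step inside the model/loss: the last element is the emotion class,
    # everything before it is the CTC target sequence.
    return labels[:-1], labels[-1].long()

# Example
text_ids = torch.tensor([5, 12, 7, 7, 3])
labels = pack_labels(text_ids, emotion_label=2)
ctc_target, emotion = unpack_labels(labels)
```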
@padmalcom Sorry for the late reply. Did you try transformers==4.4.2 and datasets==1.4.1?
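For anyone hitting the same TypeError, the mechanics are plain Python tuple immutability: if the collated labels arrive as a tuple of tensors, in-place item assignment fails, while a list works. A minimal illustration with hypothetical variable names (not taken from run_emotion.py):

```python
import torch

labels = (torch.tensor([1, 2]), torch.tensor([3, 4, 5]))  # tuple of tensors

try:
    labels[0] = torch.tensor([9, 9])  # raises: 'tuple' object does not support item assignment
except TypeError as e:
    print(e)

labels = list(labels)                 # converting to a list makes item assignment legal
labels[0] = torch.tensor([9, 9])
```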
Yes, I later figured this out. But then I added an extra head and tried to slice it with [0, 1, 2], and I got a list index out of range error. Do you know why that could be?
Yes, thank you. I later figured that out. The current issue is that I added an extra head and tried to slice it with [2], but I got a list index out of range error. Do you know why that could be?
@owos I am not sure. It should work if you just add another field (a number) at the end of the label. Which line gives you this error?
That exact line. Also, the CLS head uses a pooling layer, but the paper didn't say why, or what other impact using a pooling layer has. Could you explain that, please?
@owos The pooling layer is used to reduce the time dimension to one. Otherwise the output has variable length and is therefore not compatible with the other classification task (I treat it as a per-utterance task instead of a per-token one).
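As a rough illustration of that design choice (a sketch under my own assumptions, not the repo's actual head), mean pooling collapses the variable-length time axis of the encoder hidden states into one vector per utterance, which a linear classifier can then consume:

```python
import torch
import torch.nn as nn

class UtteranceEmotionHead(nn.Module):
    """Hypothetical per-utterance classification head: pool over time, then classify."""
    def __init__(self, hidden_size=768, num_emotions=4):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_emotions)

    def forward(self, hidden_states):
        # hidden_states: (batch, time, hidden), where 'time' varies between utterances
        pooled = hidden_states.mean(dim=1)   # (batch, hidden): time dimension reduced to one
        return self.classifier(pooled)       # (batch, num_emotions)

# Example: a batch of 2 utterances, 50 frames each, 768-dim features
logits = UtteranceEmotionHead()(torch.randn(2, 50, 768))
```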
Hi Fenfin,
I don't think the text is preprocessed, apart from being capitalized. Did you figure out why the error happens?
Best,

fenfin wrote on Friday, March 3, 2023 at 20:37:
Excuse me, is the text in the csv file processed? I read the transcription script directly from IEMOCAP and got the following error: list index out of range
Thank you, this problem has been solved!
https://github.com/TideDancer/interspeech21_emotion/blob/6f5851604d5d2367016a020a11949e53f14e0129/run_emotion.py#L560
While running sh run.sh, a TypeError occurred at the line above. In my environment, transformers==4.20.1 is installed with Python 3.8.6. Could you tell me which versions of transformers/Python you had when you ran the experiments?