Closed Brecony76 closed 6 months ago
Hey @Brecony76. I am not able to replicate this error. I just tried it and I get the following scores:

```
Prediction([('scores', [0.8417137265205383, 0.7745385766029358]), ('system_score', 0.8081261515617371)])
```
Hi @Brecony76, I'm observing the same issue.

The behavior is particularly odd because sometimes it does return a score, with no change in code or data. I'm not sure how to reproduce either the 0.0 scores or the proper ones; sometimes it just works, sometimes it doesn't. I will retest tomorrow to see if I can make any sense of it. For now I've completed my task of evaluating some translations with COMET (thanks to the devs and researchers for making this so intuitive!)
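Until the root cause is found, one crude way to cope with the intermittent zeros is to validate the scores and retry. This is only a sketch: `predict_fn` stands in for `model.predict`, and the stub below merely simulates the flaky behaviour reported in this thread.

```python
# Hypothetical workaround sketch: retry the prediction call when every
# returned score is exactly 0.0 (the zeros appear intermittently, per
# the reports in this thread). All names here are illustrative.

def predict_with_retry(predict_fn, data, max_retries=3):
    """Call predict_fn; retry if every returned score is exactly 0.0."""
    output = None
    for _ in range(max_retries):
        output = predict_fn(data)
        if any(score != 0.0 for score in output["scores"]):
            return output
    # Give up and return the last (all-zero) output so the caller can inspect it.
    return output

# Stub that returns zeros once, then real-looking scores, to show the retry:
calls = {"n": 0}
def fake_predict(data):
    calls["n"] += 1
    if calls["n"] == 1:
        return {"scores": [0.0, 0.0], "system_score": 0.0}
    return {"scores": [0.84, 0.77], "system_score": 0.81}

result = predict_with_retry(fake_predict, data=[])
```

With a real COMET model you would pass a wrapper around `model.predict` as `predict_fn`; retrying does not fix the underlying bug, it only papers over it.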
I can confirm that this issue exists on Windows. It might be related to this CUDA warning:

```
[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
```
But I am not sure, and I do not have time to dig into this deeper. It is a shame, though, as this makes COMET unfortunately unreliable on Windows.
I've done some digging but haven't found a solution, although I have pinpointed the place in the PL Trainer where things go wrong: the model weights are zeroed out, but I don't know why.
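For anyone who wants to verify this symptom on their end, here is a minimal sketch of such a zero-weight check. Plain Python lists stand in for parameter tensors; with a real model you would iterate over `model.parameters()` and test each tensor instead.

```python
# Illustrative sketch: detect whether all "weights" are exactly zero.
# Lists of floats stand in for a model's parameter tensors.

def weights_all_zero(parameters):
    """Return True iff every value in every parameter is exactly 0.0."""
    return all(all(value == 0.0 for value in param) for param in parameters)

healthy = [[0.12, -0.53], [0.07]]   # a normally initialized model
zeroed = [[0.0, 0.0], [0.0]]        # the broken state observed in the Trainer
```

Running this check right before and right after the call that produces the 0.0 scores should show where the weights get wiped.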
To give this higher priority, feel free to comment on the issue I raised over at PyTorch Lightning to indicate that you are also experiencing this problem: https://github.com/Lightning-AI/pytorch-lightning/issues/19537
I left a reply in https://github.com/Lightning-AI/pytorch-lightning/issues/19537#issuecomment-1974787881 with a suggestion. I hope it provides some useful insights.
What is your question?
I keep getting scores of 0 no matter what input I give it.
Code
```python
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "10 到 15 分钟可以送到吗",
        "mt": "Can I receive my food in 10 to 15 minutes?",
        "ref": "Can it be delivered between 10 to 15 minutes?"
    },
    {
        "src": "Pode ser entregue dentro de 10 a 15 minutos?",
        "mt": "Can you send it for 10 to 15 minutes?",
        "ref": "Can it be delivered between 10 to 15 minutes?"
    }
]

if __name__ == '__main__':
    model_output = model.predict(data, batch_size=8, gpus=1)
    print(model_output)
    print(model_output["scores"])        # sentence-level scores
    print(model_output["system_score"])  # system-level score
```
Output:

```
Prediction([('scores', [0.0, 0.0]), ('system_score', 0.0)])
[0.0, 0.0]
0.0
```
What's your environment?