CQCL / lambeq

A high-level Python library for Quantum Natural Language Processing
https://cqcl.github.io/lambeq-docs
Apache License 2.0

accuracy performance #4

Closed nlpirate closed 2 years ago

nlpirate commented 2 years ago

I'm trying to reproduce the example described in section 4 of the documentation, the classifier based on Lorenz et al., 2021, using their resources for training and testing.

I'm using pytket instead of IBMQ, but the final accuracy is consistently much lower than the one reported in the documentation (around 0.5 instead of the expected 0.9).

How is this behavior possible if the settings and resources are the same?

dimkart commented 2 years ago

Hi, the example in section 4 of the documentation is "classical"; it doesn't use any quantum backend. Are you trying to convert it to quantum and getting low accuracy? Note that in docs/examples there are two notebooks that run "quantum" experiments on the same dataset, one of which uses pytket. Can you run them successfully? Also, what exactly do you mean when you say you are using the training and testing resources from the paper? (The code in the paper's repo doesn't use lambeq.)
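
For reference, the quantum notebooks in docs/examples follow roughly the shape below. This is a minimal sketch, not the notebooks' exact code: it assumes the noiseless Aer simulator via pytket-qiskit, the toy sentences and labels are placeholders for the meaning-classification dataset, and the optimiser hyperparameters are illustrative.

```python
import numpy as np
from lambeq import (AtomicType, BobcatParser, Dataset, IQPAnsatz,
                    QuantumTrainer, SPSAOptimizer, TketModel, remove_cups)
from pytket.extensions.qiskit import AerBackend

# Toy data standing in for the meaning-classification dataset.
train_sentences = ['skillful programmer creates software',
                   'chef prepares tasty meal']
train_labels = [[0, 1], [1, 0]]  # one-hot class labels

# Sentences -> string diagrams -> parameterised quantum circuits
parser = BobcatParser()
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1},
                   n_layers=1, n_single_qubit_params=3)
train_circuits = [ansatz(remove_cups(parser.sentence2diagram(s)))
                  for s in train_sentences]

# Shot-based simulation through pytket (noiseless Aer simulator).
backend = AerBackend()
backend_config = {'backend': backend,
                  'compilation': backend.default_compilation_pass(2),
                  'shots': 8192}
model = TketModel.from_diagrams(train_circuits, backend_config=backend_config)

def loss(y_hat, y):   # cross-entropy over the predicted class distribution
    return -np.sum(y * np.log(y_hat)) / len(y)

def acc(y_hat, y):    # fraction of correctly predicted labels
    return np.sum(np.round(y_hat) == y) / (len(y) * 2)

trainer = QuantumTrainer(model,
                         loss_function=loss,
                         epochs=100,
                         optimizer=SPSAOptimizer,
                         optim_hyperparams={'a': 0.05, 'c': 0.06, 'A': 1.0},
                         evaluate_functions={'acc': acc},
                         evaluate_on_train=True,
                         seed=0)
trainer.fit(Dataset(train_circuits, train_labels))
```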

dimkart commented 2 years ago

@nlpirate I'm going to close this due to inactivity. Feel free to open a new issue if you are still interested in the question.