Closed Stephenito closed 2 years ago
Hi, it seems as if your model output shape doesn't match the shape of the labels. How do you generate your diagrams? Also, the loss function from the tutorial notebooks is designed for 2-d outputs. If your model yields a scalar value, you need to modify it.
Hi, the labels are 2-D arrays, as in the documentation's example. I tried changing the labels to a 1-D array (by changing the read_data function), but then the program behaves strangely: it gets stuck after the first epoch, with a loss of 0 on both the training and validation datasets. To rule out mistakes in my own code, I also tried your full-code trainer_quantum example, and got the same behaviour. The data is in this format:

1 woman teaches simple categories
1 woman describes simple maths

I think it got parsed correctly, as the dataset arrays match your runs.
I will try to look at it in the next few days. Thanks for your help!
@Stephenito Hi -- as @Thommy257 said, the problem is that while your labels are 2-D (as you confirm), the output of the model is 1-D (a scalar). After getting the model's output, you need to convert it to 2-D before passing it to the loss function. Hope this helps.
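A minimal sketch of that conversion, assuming the model returns one probability per sentence (the array names here are illustrative, not lambeq API): stack each scalar p with its complement 1 - p, so every row matches a 2-D one-hot label.

```python
import numpy as np

# Hypothetical 1-D model outputs: one probability per sentence.
outputs = np.array([0.2, 0.9, 0.6])

# Stack [p, 1 - p] column-wise so each row lines up
# with a 2-D one-hot label such as [1, 0] or [0, 1].
outputs_2d = np.stack([outputs, 1 - outputs], axis=1)
```

After this, `outputs_2d` has shape `(n_sentences, 2)` and can be fed to a loss written for 2-D labels.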
Hi, instead of changing the model I made the labels 1-D. As I said before, the error is gone, but the training behaves strangely. Each epoch takes about 40 seconds, and the output looks like this:
Epoch 1: train/loss: 0.0000 valid/loss: 0.0000 train/acc: 0.2458 valid/acc: 0.3000
Epoch 2: train/loss: 0.0000 valid/loss: 0.0000 train/acc: 0.2458 valid/acc: 0.3000
Training completed!
I tried with the following samples:
Thanks again!
Have you also adjusted your loss function? Or does it still assume your labels are 2-D?
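For reference, a loss adjusted for 1-D labels might look like the binary cross-entropy sketch below. This is an illustration, not the tutorial's exact function; it assumes the model outputs are probabilities in (0, 1) and the labels are 0/1 scalars.

```python
import numpy as np

def bce_loss(y_hat, y):
    # y_hat and y are 1-D arrays of the same length.
    # Clip predictions away from 0 and 1 to avoid log(0).
    y_hat = np.clip(y_hat, 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
```

A constant zero loss across epochs, as in the output above, usually means the loss is being computed on mismatched shapes (NumPy broadcasting can silently produce a degenerate result) rather than the model actually being perfect.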
Yes, I adjusted it for scalar values, but the behaviour is the same. I tried modifying the ansatz to make the output 2-D (so it works as in the beginning, with 2-D labels), and now it gives normal values, even though I haven't really understood what an ansatz is or how to design one. One last question: why is it so slow? What should I modify to make it faster?
Hi, I am getting the same error.
This error occurs when one or more diagrams have 2 output wires. One way to resolve it is to check all the diagrams manually and see which sentences have 2 output wires instead of a single s wire. In my experience it is usually sentences that start with a verb, such as: "Do not come here", "Learn how to drive", "Kill the traitors", "Love your neighbors", etc. If you have too many instances to check, just make sure they all start with a noun ("I", "you", "he", "she", "they", "man", "woman", "it", "person", names, etc.).
@ACE07-Sev your issue is different: it arises from Bobcat correctly parsing imperative sentences to the pregroup type n.r @ s. For example:
Tell me what you think
───────────── ── ─────────── ─── ─────────
n.r·s·n.l·n.l n n·n.l.l·s.l n n.r·s·n.l
│ │ │ ╰───╯ │ │ │ ╰────╯ │ │
│ │ ╰───────────╯ │ ╰────────────╯ │
│ │ ╰────────────────────╯
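The reliable check is to parse each sentence and inspect the diagram's output type (in lambeq, a diagram's codomain should be a single s for a declarative sentence). As a crude pre-filter that needs no parser, you can flag sentences that start with a bare verb; the word list below is a made-up example, not an exhaustive or official one.

```python
# Purely lexical heuristic: flag likely-imperative sentences.
# The starter-verb list is an assumption for illustration only;
# the real check is the parsed diagram's output type.
IMPERATIVE_STARTERS = {"do", "tell", "learn", "kill", "love", "go", "come"}

def looks_imperative(sentence: str) -> bool:
    first = sentence.split()[0].lower().strip(".,!?")
    return first in IMPERATIVE_STARTERS

sentences = ["Tell me what you think", "I like this"]
flagged = [s for s in sentences if looks_imperative(s)]
```

Sentences caught by the filter can then be rephrased to start with a noun, or parsed individually to confirm their pregroup type.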
@Stephenito since the original issue has been resolved, I will close the issue.
The `TketModel` is typically used with IBM's Aer simulator, which is much slower than the `NumpyModel`.
If you have problems with performance, please open a new issue.
Hi, I am trying to run the quantum trainer algorithm. When running the following line:
trainer.fit(train_dataset, val_dataset, evaluation_step=1, logging_step=100)
I get the following error:
I have just fixed the .py file in the lib following #12. The algorithm raised an error even before that; I can't recall exactly, but I don't think it was the same error.
What can I do to solve this? Thank you for your time.