lpawela opened 1 year ago
After merging master into lp/experiments, I get the following error:
```
11:25:50 |base|lpawela@strange PyQBench ±|lp/ibmq-experiments|→ qbench disc-fourier resolve
cd -- qbench disc-fourier resolve
bash: cd: too many arguments
```
After installing in a clean conda environment, the problem from comment https://github.com/iitis/PyQBench/issues/39#issuecomment-1277308342 has vanished. I still get the original error.
Try with my branch `pl/fix-resolve-error`.
Now it looks like an infinite download loop:

```
01:12:16 |pyqbench|lpawela@strange PyQBench ±|lp/ibmq-experiments ✗|→ qbench disc-fourier resolve ibmq-experiments/test_batch_washington.yml ibmq-experiments/results/test_batch_washington.yml
INFO | 2022-10-13 13:12:23,746 | qbench | Enabling account and creating backend
INFO | 2022-10-13 13:12:29,369 | qbench | Reading jobs ids from the input file
INFO | 2022-10-13 13:12:29,370 | qbench | Fetching total of 22 jobs
WARNING | 2022-10-13 13:12:33,156 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:34,804 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:36,624 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:38,597 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:40,212 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:41,450 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:43,126 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:44,812 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:46,658 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
WARNING | 2022-10-13 13:12:48,304 | qbench | IBMQJobFailureError for job 63347e3bfcf0f35ca5eae594
```
This continues indefinitely with the same job id.
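One way to break the loop (a minimal sketch, not PyQBench's actual code): fetch each job once and record a failure instead of retrying it forever. `JobFailureError` and `fetch_job` below are hypothetical stand-ins for qiskit's `IBMQJobFailureError` and the real backend retrieval call.

```python
class JobFailureError(Exception):
    """Stand-in for qiskit's IBMQJobFailureError."""


def fetch_job(job_id):
    # Placeholder for real backend retrieval: pretend one specific
    # job id always fails, as in the log above.
    if job_id == "63347e3bfcf0f35ca5eae594":
        raise JobFailureError(job_id)
    return {"job_id": job_id, "counts": {"00": 512, "11": 512}}


def resolve_jobs(job_ids):
    """Fetch every job once; collect failures instead of retrying."""
    results, failed = [], []
    for job_id in job_ids:
        try:
            results.append(fetch_job(job_id))
        except JobFailureError:
            failed.append(job_id)  # record and move on; no endless retry
    return results, failed
```

With this shape, the failed id shows up once in `failed` and the remaining 21 jobs would still resolve.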
When some of the jobs in an experiment have failed, the entire `resolve` fails. Here is an example. Can we skip the failed jobs and only resolve the successful ones? We can have "gaps" in the data, that is fine. We should just have a way to mark the parts that have failed.
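The "marked gaps" could look something like this (a hypothetical sketch: the entry shapes and the `status` field are assumptions, not PyQBench's current output format). Each job keeps its id; failed ones carry a status flag instead of measurement data, so downstream analysis can see exactly which parts are missing.

```python
def resolve_with_gaps(job_results):
    """Build output entries from a mapping job_id -> counts dict,
    where a value of None means the job failed on the backend."""
    entries = []
    for job_id, counts in job_results.items():
        if counts is None:
            # Gap: keep the id and mark it, but store no data.
            entries.append({"job_id": job_id, "status": "failed"})
        else:
            entries.append(
                {"job_id": job_id, "status": "ok", "counts": counts}
            )
    return entries
```

Such a list could then be dumped to the results YAML as-is, with failed entries easy to filter on `status`.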