101AlexMartin opened this issue 10 months ago
Support for Windows was dropped after 0.3. I think the TF dependency of TFQ 0.3 was something like TF 2.1.0, so that mismatch is probably the source of the error. Also see: https://github.com/tensorflow/quantum/issues/798
Thanks for the answer @lockwo. I migrated to Linux (CentOS Linux release 7.9.2009) to try to solve the issue, but I still get an error with _tfq_simulate_ops:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "QML/venv/lib/python3.7/site-packages/tensorflow_quantum/__init__.py", line 18, in <module>
from tensorflow_quantum.core import (append_circuit, get_expectation_op,
File "QML/venv/lib/python3.7/site-packages/tensorflow_quantum/core/__init__.py", line 17, in <module>
from tensorflow_quantum.core.ops import (get_expectation_op,
File "QML/venv/lib/python3.7/site-packages/tensorflow_quantum/core/ops/__init__.py", line 18, in <module>
from tensorflow_quantum.core.ops.circuit_execution_ops import (
File "QML/venv/lib/python3.7/site-packages/tensorflow_quantum/core/ops/circuit_execution_ops.py", line 20, in <module>
from tensorflow_quantum.core.ops import (cirq_ops, tfq_simulate_ops,
File "QML/venv/lib/python3.7/site-packages/tensorflow_quantum/core/ops/tfq_simulate_ops.py", line 19, in <module>
SIM_OP_MODULE = load_module("_tfq_simulate_ops.so")
File "QML/venv/lib/python3.7/site-packages/tensorflow_quantum/core/ops/load_module.py", line 46, in load_module
return load_library.load_op_library(path)
File "QML/venv/lib/python3.7/site-packages/tensorflow/python/framework/load_library.py", line 54, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: QML/venv/lib/python3.7/site-packages/tensorflow_quantum/core/ops/_tfq_simulate_ops.so: undefined symbol: _ZNK10tensorflow8OpKernel11TraceStringERKNS_15OpKernelContextEb
The installed packages and versions in Python 3.7.7 are:
absl-py==2.1.0
astunparse==1.6.3
cachetools==4.2.4
certifi==2023.11.17
charset-normalizer==3.3.2
cirq-core==0.13.1
cirq-google==0.13.1
cycler==0.11.0
duet==0.2.8
flatbuffers==23.5.26
fonttools==4.38.0
gast==0.4.0
google-api-core==1.21.0
google-auth==1.18.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
googleapis-common-protos==1.52.0
grpcio==1.60.0
h5py==3.8.0
idna==3.6
importlib-metadata==6.7.0
keras==2.11.0
kiwisolver==1.4.5
libclang==16.0.6
Markdown==3.4.4
MarkupSafe==2.1.3
matplotlib==3.5.3
mpmath==1.3.0
networkx==2.6.3
numpy==1.21.6
oauthlib==3.2.2
opt-einsum==3.3.0
packaging==23.2
pandas==1.3.5
Pillow==9.5.0
protobuf==3.17.3
pyasn1==0.5.1
pyasn1-modules==0.3.0
pyparsing==3.1.1
python-dateutil==2.8.2
pytz==2023.3.post1
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
scipy==1.7.3
six==1.16.0
sortedcontainers==2.4.0
sympy==1.8
tensorboard==2.11.2
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.11.0
tensorflow-estimator==2.11.0
tensorflow-io-gcs-filesystem==0.34.0
tensorflow-quantum==0.7.2
termcolor==2.3.0
tqdm==4.66.1
typing_extensions==4.7.1
urllib3==1.26.6
Werkzeug==2.2.3
wrapt==1.16.0
zipp==3.15.0
Yes, this is a common issue which almost always results from version mismatches. See: https://github.com/tensorflow/quantum/issues/800, https://github.com/tensorflow/quantum/issues/779, https://github.com/tensorflow/quantum/issues/798, https://github.com/tensorflow/quantum/issues/771, https://github.com/tensorflow/quantum/issues/757, https://github.com/tensorflow/quantum/issues/768, https://github.com/tensorflow/quantum/issues/714
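For example, a quick way to sanity-check the pairing before importing TFQ is something like the sketch below. The compatibility map is an assumption based on this thread (TFQ 0.7.2 built against TF 2.7.x, TFQ 0.3.x against TF 2.1.x), not an official table; check the release notes of your exact TFQ version.

```python
# Hypothetical helper: flag a TF/TFQ pairing that is likely to produce the
# "undefined symbol" error. The version map below is an assumption, not an
# official compatibility table.
import pkg_resources

ASSUMED_COMPATIBLE_TF = {
    "0.7.2": "2.7",  # assumption: TFQ 0.7.2 was built against TF 2.7.x
    "0.3.1": "2.1",  # assumption: TFQ 0.3.x was built against TF 2.1.x
}

tf_version = pkg_resources.get_distribution("tensorflow").version
tfq_version = pkg_resources.get_distribution("tensorflow-quantum").version
expected = ASSUMED_COMPATIBLE_TF.get(tfq_version)

if expected is None:
    print(f"No entry for tensorflow-quantum {tfq_version}; check its release notes.")
elif not tf_version.startswith(expected):
    print(f"Likely mismatch: tensorflow {tf_version} with tensorflow-quantum "
          f"{tfq_version} (expected TF {expected}.x).")
else:
    print("Versions look consistent with the assumed map.")
```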
I would then suggest not specifying in the installation guide (https://www.tensorflow.org/quantum/install) that TF 2.11 should be installed if it ends up not being compatible with the current TFQ version; it is misleading.
If this repo were not abandoned, I'm sure that would be taken into consideration.
Why is it abandoned? I think it would be good to clarify the version compatibility, or at least to list one working version for Windows and another for Linux. If an external user finds that, after following the installation steps, the package does not work and throws a weird error, they will tend to look for another library that does the same thing, and the project loses market share. I can make a PR to clarify the installation procedure a bit, if you think it'd be worth it.
Not sure why it is abandoned, you would have to ask Google. You can make a PR, but it won't get merged, because there are no active maintainers. You can see there is a PR very similar to what you want that's been open for months: https://github.com/tensorflow/quantum/pull/803
That's a pity then.
Changing topics: do you know how to integrate a PQC into a Sequential block of Dense layers? I'm not an expert on quantum computing, so I'm not sure if this even makes sense, but I've seen a paper doing this (with a different library, though). I created some dummy code (inspired by the "Hello, many worlds" tutorial) to fit the line y = x, but I get an error at the interface between the Dense and PQC layers. Any idea how to implement this?
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
import matplotlib.pyplot as plt

# Invented dataset
x = np.array(range(100))
y = x

# The classical neural network layers.
controller = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='elu'),
    tf.keras.layers.Dense(1)
])

commands_input = tf.keras.Input(shape=(1,),
                                dtype=tf.dtypes.float32,
                                name='commands_input')
dense_2 = controller(commands_input)

# Parameters to be trained in the quantum part of the MLP.
quantum_params = sympy.symbols('theta_1 theta_2 theta_3')

# Create the parameterized circuit.
qubit = cirq.GridQubit(0, 0)
model_circuit = cirq.Circuit(
    cirq.rz(quantum_params[0])(qubit),
    cirq.ry(quantum_params[1])(qubit),
    cirq.rx(quantum_params[2])(qubit))

# TFQ layer for classically controlled circuits.
expectation_layer = tfq.layers.PQC(model_circuit,
                                   # Observe Z
                                   operators=cirq.Z(qubit))
expectation = expectation_layer(tf.strings.as_string(dense_2))

# The full Keras model is built from our layers.
model = tf.keras.Model(inputs=commands_input,
                       outputs=expectation)
model.summary()

optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)
loss = tf.keras.losses.MeanSquaredError()
model.compile(optimizer=optimizer, loss=loss)
history = model.fit(x=x,
                    y=y,
                    epochs=30)
Sure, it's possible, like Fig. 12 of https://arxiv.org/pdf/2003.02989.pdf. Feeding the outputs of a QVC into a NN is straightforward (just pass the layer outputs into the Sequential model). Passing classical outputs to a quantum circuit requires a little work but is doable, e.g. https://www.tensorflow.org/quantum/tutorials/hello_many_worlds#2_hybrid_quantum-classical_optimization.
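For the quantum-upstream-of-classical direction, a minimal sketch could look like the following (the circuit, symbols, and the empty input circuits are placeholder choices for illustration, not the only way to do it):

```python
import cirq
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.symbols('theta_1 theta_2')
circuit = cirq.Circuit(cirq.rx(theta[0])(qubit), cirq.rz(theta[1])(qubit))

# PQC outputs expectation values, which are ordinary float tensors,
# so they can be fed straight into Dense layers.
circuit_input = tf.keras.Input(shape=(), dtype=tf.string)  # quantum data in
quantum_out = tfq.layers.PQC(circuit, cirq.Z(qubit))(circuit_input)
hidden = tf.keras.layers.Dense(10, activation='elu')(quantum_out)
output = tf.keras.layers.Dense(1)(hidden)
model = tf.keras.Model(inputs=circuit_input, outputs=output)

# Dummy "quantum data": empty circuits, i.e. the |0> state for every sample.
x_circuits = tfq.convert_to_tensor([cirq.Circuit() for _ in range(100)])
y = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
model.compile(optimizer='adam', loss='mse')
model.fit(x_circuits, y, epochs=5)
```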
Thanks for the answer.
I assume the PQC layer cannot have a classical layer upstream, then. I was trying to connect a 3-neuron dense layer (dense_2) to a PQC by artificially embedding these floats into a rank-0 tf.string, as follows:
expectation_layer = tfq.layers.PQC(model_circuit,
                                   # Observe Z
                                   operators=cirq.Z(qubit))
dense_2_string = tf.strings.as_string(dense_2)
dense_2_cat_str = tf.strings.reduce_join(dense_2_string, separator="")
expectation = expectation_layer(dense_2_cat_str)
Unfortunately it loses the batch dimension and thus does not work (i.e. dense_2_cat_str ends up being a tf.string of shape () instead of (None,)). Any idea if this strategy could work somehow?
In Figure 12 of the paper you sent, I see quantum layers upstream of classical layers, but I'm not sure how the interface is achieved. I guess it's probably through the ControlledPQC layer, to which you can feed classical numbers as the parameters of the quantum circuit, as in the example you provided. What I don't fully understand is how you would code Figure 13, since there you are only interested in the upstream classical layer parameters, but the ControlledPQC still needs some quantum data as input, and here we are not interested in simulating any noise as in the "Hello, many worlds" example. Would it be correct to just change the variable noisy_preparation from:
noisy_preparation = cirq.Circuit(
    cirq.rx(random_rotations[0])(qubit),
    cirq.ry(random_rotations[1])(qubit),
    cirq.rz(random_rotations[2])(qubit)
)
to:
noisy_preparation = cirq.Circuit()
Would this create a dummy input to the layer and thus be equivalent to a DNN-QNN block like the one in Figure 13?
Finally, I'd also like to ask whether the TFQ library has any function to transform classical data into quantum data. This could also be a workaround for stacking DNN-QNN layers, allowing the use of the PQC layer and thus simplifying the code.
Yes, if you want to use DNNs upstream of a PQC, you would probably want to use ControlledPQC or a custom layer (my go-to approach; it gives you a lot of flexibility, see: https://github.com/lockwo/quantum_computation/blob/master/TFQ/data_reupload/reup.py). ControlledPQC will always have a quantum data input, because that is simply the starting state of the quantum circuit (which must exist). You could always just initialize it to the |0>^N state, which I often do; simply creating an empty cirq.Circuit, as you have done, is sufficient for those purposes most of the time. There isn't any default set of functions for converting data, but doing encodings is very possible (see: https://github.com/lockwo/quantum_computation/blob/master/TFQ/vqc/boston_housing.py). In general, QML on classical data is probably much less potent/capable of giving meaningful speedups/results than QML on quantum data, which is probably why there isn't a big emphasis on tooling for it.
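To make the ControlledPQC route concrete for your y = x toy problem, a hedged sketch following the hello_many_worlds pattern could look like this (the Dense(3) controller, the empty |0> input circuits, and the rescaled targets are my choices, not requirements):

```python
import cirq
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
params = sympy.symbols('theta_1 theta_2 theta_3')
model_circuit = cirq.Circuit(
    cirq.rz(params[0])(qubit),
    cirq.ry(params[1])(qubit),
    cirq.rx(params[2])(qubit))

# Classical controller: its 3 outputs become the 3 circuit parameters.
controller = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='elu'),
    tf.keras.layers.Dense(3)
])

circuits_input = tf.keras.Input(shape=(), dtype=tf.string, name='circuits_input')
commands_input = tf.keras.Input(shape=(1,), dtype=tf.float32, name='commands_input')
controller_output = controller(commands_input)

# ControlledPQC is called on [input circuits, circuit parameters].
expectation = tfq.layers.ControlledPQC(
    model_circuit, operators=cirq.Z(qubit))([circuits_input, controller_output])

model = tf.keras.Model(inputs=[circuits_input, commands_input], outputs=expectation)
model.compile(optimizer=tf.keras.optimizers.Adam(0.05),
              loss=tf.keras.losses.MeanSquaredError())

# The single-qubit <Z> output lives in [-1, 1], so rescale the toy targets.
x = np.linspace(-1.0, 1.0, 100).reshape(-1, 1)
y = x
# Empty circuits = start from |0> for every sample, as discussed above.
input_circuits = tfq.convert_to_tensor([cirq.Circuit()] * len(x))
model.fit([input_circuits, x], y, epochs=30)
```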
Thank you so much for the answer.

Why do you prefer the custom ReUpload layer over the ControlledPQC? I saw the original paper, and they stress the use of inputs from classical networks and not so much the parameters theta and w (which could be similarly achieved with an upstream Dense layer, if I'm not mistaken).

I see. It's a pity Google has abandoned it. It might be because it has grown to its full potential for now, or maybe because, on classical data, QML does not outperform classical algorithms. What was your role in the development of TFQ?

The deterministic nature of QML still does not convince me: if I understand correctly, a qubit is in a superposition of states, so each time you sample it you might get a different result. Also, I've seen readouts like

readout_operators = [cirq.Z(qs[0]), cirq.X(qs[0])]

Would this make sense? Doesn't it go against Heisenberg's principle? Again, thank you very much for your help; it's highly appreciated in such a complex field.
Most people wouldn't consider QML deterministic (although there are some philosophical interpretations of quantum mechanics that might).

Yes, a qubit is a superposition of states, and yes, you will draw samples from that distribution, so each time might be different.

You can measure those in cirq, yes. That doesn't violate anything, because in hardware those would be different executions: we would run the circuit and measure Z, then wipe it, run it again, and measure Y. Of course, there is no need to do that in software.
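In TFQ you can pass both operators to a single layer and get one expectation value per operator back; a small sketch (the one-qubit circuit is just illustrative):

```python
import cirq
import sympy
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
model_circuit = cirq.Circuit(cirq.ry(theta)(qubit))

# Two readout operators -> output shape (batch, 2): column 0 is <Z>, column 1 is <X>.
# In simulation both come from the same state vector; on hardware they would be
# estimated in separate executions.
expectation_layer = tfq.layers.PQC(model_circuit, [cirq.Z(qubit), cirq.X(qubit)])

circuits = tfq.convert_to_tensor([cirq.Circuit()])
print(expectation_layer(circuits))  # tensor of shape (1, 2)
```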
If the output of the circuit can be different for the same input, how does the concept of QML make sense? For training, I assume you run the circuit several times for each sample and epoch, and then backpropagate by comparing histograms. However, for testing I don't see how you can circumvent this issue. Should the circuit be run several times for each sample of the test set, and those samples then averaged to check the accuracy? (I've sometimes seen a repetitions argument, which probably has to do with this; since I don't always see it, I assumed it was not needed.)
Yes, in order to get an expectation value of any operator, you want to sample the circuit many times (10 to 10,000) on real hardware. This is not unique to QML, though; all of quantum computation requires this.
I see. How do you usually handle this issue when using TFQ? I see that for the PQC layer, for instance, repetitions is set to None by default. Does it mean that the circuit is only evaluated once?
No, None means that it is evaluated analytically, i.e. given the state vector or density matrix we just compute the values exactly. One shot would be repetitions=1.
But if they are computed analytically, doesn't it lose the probabilistic nature of the measurement?
I mean, yes and no. Given a probability density, instead of sampling from it to compute an expected value of some operator, you just compute it analytically. It's still probabilistic; it just loses the shot noise. If we could do this on real quantum devices, we would. The shot noise of measurement gives us no value (almost always) and hinders the accuracy of what we want to compute unless we do enough repetitions.
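Concretely, in TFQ that difference is controlled by the repetitions argument of the layer; a small sketch (the circuit is illustrative):

```python
import cirq
import sympy
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
model_circuit = cirq.Circuit(cirq.ry(theta)(qubit))

# repetitions=None: analytic expectation, computed exactly from the state vector.
analytic_layer = tfq.layers.PQC(model_circuit, cirq.Z(qubit))

# repetitions=1000: expectation estimated from 1000 samples, so it carries shot noise.
sampled_layer = tfq.layers.PQC(model_circuit, cirq.Z(qubit), repetitions=1000)

circuits = tfq.convert_to_tensor([cirq.Circuit()])
print(analytic_layer(circuits))  # exact <Z>
print(sampled_layer(circuits))   # noisy estimate of <Z>; varies between calls
```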
Hi! Sorry to pick up this thread again, but I have a question regarding your last answer and you might be able to solve it! :) If at the output of a measurement I have a density matrix, what would happen if I have set the circuit to measure with just 1 shot? Would I randomly get one point within the density matrix? Or is there a method that inherently pushes that single shot towards the mean of the matrix? A related doubt: what is the difference between a PQC circuit with 1000 repetitions and a NoisyPQC with 1000 repetitions? Is the second also taking into account hardware noise such as gate imperfections, depolarization error, and so on? If so, is there a way to make this hardware noise bigger or smaller to see the impact it has on the predictions?
If at the output of a measurement I have a density matrix, what would it happen if I have set the circuit to just measure with 1 shot?
The output of a "shot" is just a bit string in a given basis from your circuit. If you tried to approximate a density matrix from a single shot, you would just get a very bad approximation of the density matrix (you need exponentially many shots to estimate the density matrix, see https://en.wikipedia.org/wiki/Quantum_tomography).
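For intuition, this is what a single shot looks like in plain cirq (a toy example, not taken from the thread):

```python
import cirq

q = cirq.GridQubit(0, 0)
# |+> state: <Z> = 0, but any individual shot is just a 0 or a 1.
circuit = cirq.Circuit(cirq.H(q), cirq.measure(q, key='m'))

one_shot = cirq.Simulator().run(circuit, repetitions=1)
print(one_shot.measurements['m'])       # e.g. [[0]] or [[1]]: a single bit string

many_shots = cirq.Simulator().run(circuit, repetitions=1000)
print(many_shots.measurements['m'].mean())  # ~0.5: statistics emerge only over many shots
```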
PQC circuit with 1000 repetitions and a NoisyPQC with 1000 repetitions?
One is a circuit executed without noise 1000 times and the other is a circuit executed with noise 1000 times.
Is the second also taking into account hardware noise such as gate imperfection, depolarization error and so on? If so, is there a way to make this hardware noise bigger and smaller to see the impact is has on the predictions?
It only simulates the noise you program it to simulate, see https://www.tensorflow.org/quantum/tutorials/noise. So if you increase or decrease the noise, then you can see the effects. And you can add any hardware noise you want to simulate.
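A rough sketch of dialing that noise up and down (the depolarizing channel and the probability p are just illustrative choices; NoisyPQC simulates whatever channels you put in the circuit):

```python
import cirq
import sympy
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
clean_circuit = cirq.Circuit(cirq.ry(theta)(qubit))

def make_noisy_layer(p):
    # Interleave a depolarizing channel after every moment; larger p = stronger noise.
    noisy_circuit = clean_circuit.with_noise(cirq.depolarize(p))
    return tfq.layers.NoisyPQC(noisy_circuit,
                               cirq.Z(qubit),
                               repetitions=1000,
                               sample_based=True)

circuits = tfq.convert_to_tensor([cirq.Circuit()])
for p in (0.0, 0.05, 0.2):
    print(p, make_noisy_layer(p)(circuits).numpy())
```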
After installing (on a Windows machine) tensorflow and tensorflow-quantum as explained in the repository, I don't manage to import tensorflow_quantum, since I get the _tfq_simulate_ops.so error (tensorflow_quantum\core\ops\_tfq_simulate_ops.so not found). I installed version 2.11 of tensorflow and 0.3.1 of tensorflow-quantum (I see the current version of the repository is already 0.7, so I'm also not sure why I get 0.3 when running pip install). Should I downgrade tensorflow? These are the installed packages and versions of the virtual environment I'm using to import tensorflow-quantum:
absl-py==2.1.0
astunparse==1.6.3
cachetools==5.3.2
certifi==2023.11.17
charset-normalizer==3.3.2
cirq==0.8.0
cycler==0.11.0
dill==0.3.7
flatbuffers==23.5.26
fonttools==4.38.0
freezegun==0.3.15
gast==0.4.0
google-api-core==1.34.0
google-auth==2.26.2
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
googleapis-common-protos==1.62.0
grpcio==1.60.0
grpcio-status==1.60.0
h5py==3.8.0
idna==3.6
importlib-metadata==6.7.0
keras==2.11.0
kiwisolver==1.4.5
libclang==16.0.6
Markdown==3.4.4
MarkupSafe==2.1.3
matplotlib==3.5.3
mpmath==1.3.0
multiprocess==0.70.15
networkx==2.6.3
numpy==1.21.6
oauthlib==3.2.2
opt-einsum==3.3.0
packaging==23.2
pandas==1.3.5
pathos==0.2.5
Pillow==9.5.0
pox==0.3.3
ppft==1.7.6.7
protobuf==3.19.6
pyasn1==0.5.1
pyasn1-modules==0.3.0
pyparsing==3.1.1
python-dateutil==2.8.2
pytz==2023.3.post1
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
scipy==1.7.3
six==1.16.0
sortedcontainers==2.4.0
sympy==1.4
tensorboard==2.11.2
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.11.0
tensorflow-estimator==2.11.0
tensorflow-intel==2.11.0
tensorflow-io-gcs-filesystem==0.31.0
tensorflow-quantum==0.3.1
termcolor==2.3.0
typing-extensions==4.7.1
urllib3==2.0.7
Werkzeug==2.2.3
wrapt==1.16.0
zipp==3.15.0