SouPhyzics opened this issue 7 months ago
Hey there :) Thanks for bringing this up!
That is indeed very much possible; it also seems that there are no proper unit tests implemented for this currently.
Is this a feature that you need or would like to see implemented? If so, I can fix it for you :)
Thinking about this a bit: Can you briefly describe the use-case you have for this and why the dependency is necessary?
There are a few choices in where to implement this precisely and it would be helpful to learn a bit about how this is intended to be used.
@SouPhyzics any input on this? Is this a feature you would like to see implemented? If yes, can you describe your use-case a little bit, so I can understand at which point of the code this should be done.
I would be especially curious to hear your input on why this should be done on each generation of the optimization. Is the use case that you have multiple parameters (x, y, z), where z = F(y) depends on y, and you optimize over only (x, y) and then want to update z for the set of (x, y) values that comes out of every generation?
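To make sure we mean the same thing, here is a minimal sketch in plain Python (not YOTSE's API; the names `F` and `expand_generation` are made up for illustration) of the pattern I have in mind:

```python
def F(y: float) -> float:
    """Illustrative dependency: z is half of y."""
    return 0.5 * y

def expand_generation(points: list[tuple[float, float]]) -> list[tuple[float, float, float]]:
    """Attach the dependent parameter z = F(y) to each optimized (x, y) point.

    This would be re-run for every generation, so z always reflects the
    current y values the optimizer produced.
    """
    return [(x, y, F(y)) for x, y in points]

generation = [(1.0, 2.0), (3.0, 4.0)]
print(expand_generation(generation))  # [(1.0, 2.0, 1.0), (3.0, 4.0, 2.0)]
```

So the optimizer only ever sees (x, y), and z is recomputed after each generation.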
Hi @dmark04 , apologies for the late response, I was away at a conference followed by a leave.
Thanks for checking this out. Yes, the dependency is something I want to implement in my optimization.
I am using YOTSE to optimize different parameters for quantum teleportation over a repeater chain, where the cutoff_time is a tunable parameter (it does not contribute to the hardware cost) but has a significant effect on the fidelity and rate of teleportation over the repeater chain. I want the cutoff_time to be dependent on the coherence time, such that cutoff_time / coherence_time lies in [0.01, 1]. For an updated point in a new generation, both the coherence time and the efficiency of elementary-link entanglement generation are updated, so I also want this dependency for the cutoff time to be updated every generation. This is exactly what you described in your example with (x, y, z).
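To illustrate, here is a minimal sketch in plain Python (not YOTSE code; the function name is mine) of how I imagine cutoff_time being derived from coherence_time:

```python
def cutoff_from_coherence(coherence_time: float, ratio: float) -> float:
    """Derive cutoff_time from coherence_time for a ratio in [0.01, 1].

    Whenever a generation updates coherence_time, this would be called
    again so the constraint keeps holding for the new value.
    """
    if not 0.01 <= ratio <= 1.0:
        raise ValueError("ratio must lie in [0.01, 1]")
    return ratio * coherence_time

print(cutoff_from_coherence(coherence_time=10e-3, ratio=0.5))  # 0.005
```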
Looking forward to your response. Cheers, Soubhadra
Hello @dmark04 , is there any update regarding this? Thanks.
Hey @SouPhyzics , this time I was on leave :)
I think the use-case is now clear to me! I will try and implement this as soon as possible.
Hi @dmark04, that sounds good. Do you have a timeline in mind for when it will be done? Also, I am having trouble setting the hyperparameters, i.e. I am unable to come up with a clever estimate or guess for them. Could you or someone else provide any suggestions regarding this? Thanks.
Hey, no timeline yet, but you can check out the progress in the pull request that is linked in this issue.
Sorry it is taking a bit longer, but the blueprint setup and workflow are so convoluted that I am having trouble making sense of what exactly you want to do.
Maybe you can help me out here:
In your example, are you setting `cutoff_time` to a single constant value? Or are you expecting to also pass a range of `cutoff_time`s?
Could you maybe write a minimal example of the input you would ideally like to give and the result this should have over a few steps?
I always get a bit confused because in the example I currently have in the code, `cutoff_time` is still given as a range of values, and that just doesn't make any sense to me. To me that means it is indeed part of the optimization and should be passed as information to the optimization algorithm.

Edit: writing out a specific datapoint would maybe already be helpful. For example, do you want to start with something like
datapoint1 = (carbon_t2_value1, electron_t2_value1, cutoff_time_constant_value/carbon_t2_value1)
datapoint2 = (carbon_t2_value1, electron_t2_value2, cutoff_time_constant_value/carbon_t2_value1)
datapoint3 = (carbon_t2_value2, electron_t2_value1, cutoff_time_constant_value/carbon_t2_value2)
datapoint4 = (carbon_t2_value2, electron_t2_value2, cutoff_time_constant_value/carbon_t2_value2)
where then carbon_t2.datapoints = [carbon_t2_value1, carbon_t2_value2]
and cutoff_time.datapoints = [cutoff_time_constant_value/carbon_t2_value1, cutoff_time_constant_value/carbon_t2_value2]?
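In plain Python (just an illustration with made-up values, not YOTSE code), I would imagine the grid above being built like this:

```python
from itertools import product

# Illustrative values, not from the actual blueprint setup.
cutoff_constant = 1.0
carbon_t2_values = [2.0, 4.0]
electron_t2_values = [1.0, 3.0]

# cutoff_time is derived from carbon_t2 via a fixed constant,
# not optimized as an independent parameter.
datapoints = [
    (c, e, cutoff_constant / c)
    for c, e in product(carbon_t2_values, electron_t2_values)
]

cutoff_time_datapoints = [cutoff_constant / c for c in carbon_t2_values]
print(datapoints)
print(cutoff_time_datapoints)
```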
And in the next generation the optimization would then produce two new values for carbon_t2, and therefore two new datapoints for cutoff_time (which are again calculated using the same constant? or a new constant?). If you like, we can also have a short Zoom meeting to discuss and clear up any confusion; that should speed up development by quite a bit.
Hi there @dmark04, I think it's best if we have a zoom call to clarify this. Does sometime on Friday (26 Apr) or next week (29 Apr - 3 May) work for you?
I am using the blueprint_example template, and there is the following chunk of code:

    Parameter(
        name="cutoff_time",
        param_range=[0.01, 1.0],
        constraints={"low": 0.01, "high": 1.0},
        number_points=2,
        distribution="uniform",
        param_type="continuous",
        depends_on={"name": "coherence_time", "function": linear_dep},
    )
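For reference, here is a minimal sketch of what I assume the dependency callable passed via `depends_on` looks like (the exact signature YOTSE expects is my assumption, going by the `linear_dep` name in the example):

```python
def linear_dep(x: float, y: float) -> float:
    """Hypothetical linear dependency: scale the dependent parameter's
    value x by the value y of the parameter it depends on."""
    return x * y
```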
There is a todo: "test if this dependency is also used in each generation".
It seems that the dependency is indeed not applied in each generation: the parameter value for cutoff_time goes way beyond the limit set by the dependency.
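A minimal sketch (not YOTSE internals; names are illustrative) of the kind of check that exposes this:

```python
def ratio_in_bounds(cutoff_time: float, coherence_time: float,
                    low: float = 0.01, high: float = 1.0) -> bool:
    """Return True iff cutoff_time / coherence_time lies in [low, high].

    If the dependency were re-applied every generation, this should hold
    for every point the optimizer produces.
    """
    return low <= cutoff_time / coherence_time <= high

print(ratio_in_bounds(0.5, 1.0))  # True: within the allowed ratio
print(ratio_in_bounds(5.0, 1.0))  # False: dependency was not re-applied
```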