QTM3x / Quantum-Internet

In this repository, we will collaborate on building the quantum internet and developing applications for it.

Find entanglement swapping protocols #19

Open BassemSafieldeen opened 4 years ago

BassemSafieldeen commented 4 years ago

Entanglement swapping is an important step in creating long-range entanglement. Finding the best protocol for a given hardware model is non-trivial.

In common/entanglement swapping protocols I have added a simple hardware model, and I have tested a naive entanglement swapping protocol. It would be nice if someone could investigate better protocols.

Possible approach: First try to play with the implemented protocol. Then try implementing a smarter protocol or finding one in the literature.

saum-g commented 4 years ago

Hi,

I noticed that after the swap() operation takes place, the timer for the link to expire is restarted. So, would doing the swap "on the go" be useful? For example, after reaching q1--q2--q3 q4, we swap at q2 without waiting to connect q3 and q4, and achieve the state

 /----------\
q1    q2    q3--q4

in the very next step?
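If it helps to make the idea concrete, here is a rough sketch of swapping "on the go" (all names are hypothetical, not from the repo): an intermediate node swaps as soon as both of its own links exist, instead of waiting for the whole chain to be ready.

```python
# Hypothetical sketch (names not from the repo) of swapping "on the go":
# an intermediate node swaps as soon as both of its links exist, instead
# of waiting for the whole chain to be ready.

def eager_swap_chain(links_ready, swap):
    """links_ready[i] is True when the link between node i and node i+1
    exists; swap(i) performs entanglement swapping at node i.
    Returns the nodes swapped, in order."""
    swapped = []
    for i in range(1, len(links_ready)):
        # Node i only needs its own two links, not the rest of the chain.
        if links_ready[i - 1] and links_ready[i]:
            swap(i)
            swapped.append(i)
    return swapped

# q1--q2--q3 exist but q3--q4 does not yet: q2 (node 1 here) can already swap.
done = []
eager_swap_chain([True, True, False], done.append)
print(done)
```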

BassemSafieldeen commented 4 years ago

That sounds like it makes sense. Please write the protocol and create a pull request so that others can play with it and get ideas :)

adityakrgupta25 commented 4 years ago

I was wondering: can we retry link creation if the link creation between two nodes fails on the first attempt during the chain creation step? Moreover, let's say chain creation fails due to a few failed link creations between two particular nodes; should we continue with the transmission? Rather, at this point, I suppose we can classically communicate that link creation between the two nodes is failing and instruct the sender not to continue with the communication (the benefit being that the sender won't have to waste resources regenerating the qbit after a failed transmission).

BassemSafieldeen commented 4 years ago

You are right, in the protocol in the notebook it is pointless to continue making links to the right when one of the links to the left failed to be created. It is a very naive protocol. Like you said, a better protocol might continue creating links to the right and simultaneously try to recreate the failed links to the left.

adityakrgupta25 commented 4 years ago

Hi, I was curious what the values for the average link_creation_time and average link_lifetime would look like with real-world hardware. If we can have those, we can play around with them in our implemented protocols.

BassemSafieldeen commented 4 years ago

I don't know experimental values off the top of my head. But I remember Filip's thesis had some; page 138 has some numbers on it. Feel free to have a look and share what you find with us, maybe by adding it as a section in the entanglement swapping notebooks.

The plan is to make our simulation models increasingly realistic as we build up the repository, so if you can summarize the experimental values in the thesis I linked to above you'd be helping out future us.

BassemSafieldeen commented 4 years ago

Status update for this issue:

adityakrgupta25 commented 4 years ago

Hi, I will be going through the above-linked document in a while. Sorry for the delay, I have been a bit preoccupied :( . This sounds like a good metric to compare protocols. I also had a few other ideas in mind: another metric could be the amount of time the protocol requires in cases of successful transmission. A more nuanced analysis could look at how many hops a protocol manages before further transmission of the information fails.

adityakrgupta25 commented 4 years ago

Also, we can explore how our protocols behave on complex topologies. Off the top of my head, it would be interesting to explore them just because we can extend our existing protocols and re-route the qbit. But we need an implementation for it first. @saum-g, would you be interested in taking up a simple implementation for complex topologies? It would be super helpful in getting things started. We can start with something as simple as hot-potato routing. There's an open issue for it.

saum-g commented 4 years ago

Hi @BassemSafieldeen, @adityakrgupta25, I think going for complex topologies will be an interesting step, but I think we should reach a good standard with the linear topology first. What is your opinion on this?

Regarding entanglement swapping, I could not find literature on good protocols, but I think we can take inspiration from classical ones. I am thinking of using something like the windowing used by TCP to ensure that once points A and B share an entangled pair, they keep sharing a pair until the connection is established. So what I propose is that there always remain w parallel connections between the routers A and B. These are created with some gap in time, so that when A senses the first connection is about to expire, it starts establishing a new parallel connection. I think w=2 suffices for our case, but it might increase for complex topologies, or when a quantum internet is finally established.
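A rough sketch of this windowing idea (all names and numbers below are illustrative assumptions, not repo code): keep w pairs alive between two neighbours, and start a replacement early enough that it is ready before the oldest pair decoheres.

```python
# Rough sketch of the TCP-style windowing idea (names and numbers are
# assumptions): keep W pairs alive between neighbours A and B, starting a
# replacement early enough that it is ready before the oldest pair expires.

LINK_LIFETIME = 5.0   # assumed pair decoherence time
CREATION_TIME = 1.0   # assumed time to create a fresh pair
W = 2                 # window size

def refresh_window(pair_ages):
    """Given the ages of the currently shared pairs, drop expired ones and
    return the window with any replacement pairs started (age 0)."""
    alive = [a for a in pair_ages if a < LINK_LIFETIME]
    need = W - len(alive)
    # Start a replacement early if a survivor is within CREATION_TIME of expiry.
    if alive and max(alive) >= LINK_LIFETIME - CREATION_TIME:
        need = max(need, 1)
    return alive + [0.0] * max(need, 0)

print(refresh_window([4.5, 2.0]))  # oldest near expiry: overlap a new pair
print(refresh_window([5.2, 3.0]))  # oldest expired: window topped back up
```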

I believe there are classical approaches for comparing classical protocols. I think we can try to take inspiration from them?

adityakrgupta25 commented 4 years ago

Hi, this sounds nice to me. I realize that this is trying to improve the reliability of our network. You should check the transport layer protocols specific to the quantum network defined in the Transport Layer folder of the repo; they might help you come up with more interesting ideas.

Also, to provide more reliable communication with primitive hardware, with only a single pair allocated per link, I suppose we can combine protocol 1.1 and protocol 2. A caveat I can foresee with 1.1 is that it can only succeed if the total transmission time from transmitter to receiver, which would be link_creation_time + (entanglement_swap_time * number_of_hops), is less than link_lifetime. If we have way too many repeaters (i.e., a huge number of swaps), the protocol will fail. To solve this we can chunk out link creation and combine it with protocol 2.

Let's say we have 100 repeaters between A and B, and protocol 1.1 can only last until 10 entanglement swaps between the repeaters (assumption: we know this by calculating it beforehand based on the worst-case scenario). In such cases, we can link up the first 10 repeaters at once. Once we have the information qbit at the 10th repeater, we inform repeaters 11-20 to establish their links, and so on.
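A back-of-the-envelope sketch of this chunking idea (all numbers and names below are illustrative assumptions, not repo defaults): each chunk must satisfy the timing bound above, so a long chain is split into chunks and brought up one after another.

```python
# Sketch of the chunking idea. Protocol 1.1 only succeeds while
#     link_creation_time + swap_time * hops < link_lifetime,
# so a long chain can be split into chunks that each satisfy the bound.
# All numbers and names below are assumptions, not repo defaults.

LINK_CREATION_TIME = 1.0
SWAP_TIME = 0.5
LINK_LIFETIME = 5.0

def max_hops():
    """Largest number of swaps one chunk can absorb before links expire."""
    return int((LINK_LIFETIME - LINK_CREATION_TIME) / SWAP_TIME)

def chunks(n_repeaters):
    """Split n_repeaters into consecutive chunks small enough for
    protocol 1.1; the chunks are then brought up one after another."""
    size = max_hops()
    return [(i, min(i + size, n_repeaters))
            for i in range(0, n_repeaters, size)]

print(max_hops())        # 8 with the assumed numbers above
print(chunks(100)[:2])   # first two chunks of a 100-repeater chain
```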

saum-g commented 4 years ago

Hi, I have a doubt regarding the network stack we are using. Entanglement swapping is part of the network layer (establishing entanglement between far-away nodes), while the link layer is tasked with ensuring connections between nearby nodes.

I suggest we set the link creation probability to 1, or remove it entirely, since it is the link layer's job to ensure nearby nodes are entangled whenever we want them to be. What are your opinions?

BassemSafieldeen commented 4 years ago

@saum-g

Hi @BassemSafieldeen, @adityakrgupta25, I think going for complex topologies will be an interesting step, but I think we should reach a good standard with the linear topology first. What is your opinion on this?

Regarding entanglement swapping, I could not find literature on good protocols, but I think we can take inspiration from classical ones. I am thinking of using something like the windowing used by TCP to ensure that once points A and B share an entangled pair, they keep sharing a pair until the connection is established. So what I propose is that there always remain w parallel connections between the routers A and B. These are created with some gap in time, so that when A senses the first connection is about to expire, it starts establishing a new parallel connection. I think w=2 suffices for our case, but it might increase for complex topologies, or when a quantum internet is finally established.

Cool idea! Please implement it!

I believe there are classical approaches for comparing classical protocols. I think we can try to take inspiration from them?

Can you point us to some references?

BassemSafieldeen commented 4 years ago

@adityakrgupta25

Hi, this sounds nice to me. I realize that this is trying to improve the reliability of our network. You should check the transport layer protocols specific to the quantum network defined in the Transport Layer folder of the repo; they might help you come up with more interesting ideas.

Also, to provide more reliable communication with primitive hardware, with only a single pair allocated per link, I suppose we can combine protocol 1.1 and protocol 2. A caveat I can foresee with 1.1 is that it can only succeed if the total transmission time from transmitter to receiver, which would be link_creation_time + (entanglement_swap_time * number_of_hops), is less than link_lifetime. If we have way too many repeaters (i.e., a huge number of swaps), the protocol will fail. To solve this we can chunk out link creation and combine it with protocol 2.

Good observation!

Let's say we have 100 repeaters between A and B, and protocol 1.1 can only last until 10 entanglement swaps between the repeaters (assumption: we know this by calculating it beforehand based on the worst-case scenario). In such cases, we can link up the first 10 repeaters at once. Once we have the information qbit at the 10th repeater, we inform repeaters 11-20 to establish their links, and so on.

Please implement this protocol so that we can see how it handles this issue.

To relate this to @saum-g's idea of giving the repeaters the ability to maintain 4 links at the same time, I think having backup links would make the protocol you describe perform better. In fact, I think it would make any protocol perform better. So maybe you should collaborate on this.

I am curious how this protocol performs with long chains. So please test it on chains with 20 repeaters or something.

BassemSafieldeen commented 4 years ago

@saum-g

Hi, I have a doubt regarding the network stack we are using. Entanglement swapping is part of the network layer (establishing entanglement between far-away nodes), while the link layer is tasked with ensuring connections between nearby nodes.

I suggest we set the link creation probability to 1, or remove it entirely, since it is the link layer's job to ensure nearby nodes are entangled whenever we want them to be. What are your opinions?

I think that's a valid point. In your next pull request, feel free to set the link creation probability parameter to 1 throughout the code --- and maybe also remove it from function calls where possible. I wouldn't remove it as a class property just yet, because it might be useful for quick testing in the future.

saum-g commented 4 years ago

Hi, I have created a pull request with the link creation probability set to 1 throughout. @adityakrgupta25, could you remove version 2 of the repeaters and move the changes into the original class instead? Your version is only an improvement over the original, so it does not make sense to keep both. This way we will be able to merge different protocols easily.

BassemSafieldeen commented 4 years ago

Sorry, it's not clear to me how repeater_v2 is an improvement over the original. Could you elaborate?

saum-g commented 4 years ago

I may have misunderstood, but I think repeater_v2 only introduces repeated attempts to connect if the connection fails at some point. The number of tries can be set to 1 to recover the definition of simple_repeater. By improvement I meant more that version 2 is a superset of the first version.

Pardon me if I misunderstood something

adityakrgupta25 commented 4 years ago

@saum-g, I suppose we can move it. So, other than retrying, what repeater_v2 does is inform the parent_chain if there was a failure during link creation on its part (which amounts to informing all the repeaters involved in the current transmission by broadcasting).

Also, we can incorporate the usage of repeater_chain.is_active() into the other protocols to not transmit if the chain is not active.

What say, @BassemSafieldeen, should we do it?

BassemSafieldeen commented 4 years ago

@saum-g

Yeah, I guessed that the retries are what you meant. I am just not sure whether they belong inside the repeater class definition or in a protocol that uses the repeater. I feel everything would be more modular if we left it up to the protocol to decide how many times to attempt link creation, and didn't specify it in the repeater.

In our future models, the link layer will be responsible for determining the number of retries. But even when we incorporate the link layer in our simulations I think it will be more modular to not have the number of retries implemented in the repeater, and to instead have it determined by a link layer sub-protocol that is using an instance of the repeater class.

Does that make sense?

Also, I think repeater_v2 has a little bug:

def create_right_link(self, other, retry=True, retry_count=3):
    retry_count = retry_count if retry else 1
    for _ in range(retry_count):
        # Creating link takes time.
        if self.rightNode is None:
            success = self.create_right_link_helper(other)
            if success:
                return
            time.sleep(self.link_creation_time)  # I think this should go before
                                                 # "success = self.create...",
                                                 # because link creation takes
                                                 # time even if it will fail.
        self.parent_chain.indicate_link_failiure([self, other])

But that can be easily fixed.
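For concreteness, here is a runnable sketch of that fix. The stub classes are hypothetical scaffolding so the method can run standalone; only the method body mirrors the snippet above (including the repo's `indicate_link_failiure` spelling).

```python
import time

# Sketch of the fix: the creation delay moves before the success check,
# since link creation takes time even when it fails. The stub classes
# below are hypothetical scaffolding, not repo code.

class _ChainStub:
    def __init__(self):
        self.failures = []
    def indicate_link_failiure(self, pair):  # spelling kept from the repo snippet
        self.failures.append(pair)

class RepeaterSketch:
    def __init__(self, succeed_on_try):
        self.rightNode = None
        self.link_creation_time = 0.0        # keep the demo fast
        self.parent_chain = _ChainStub()
        self._succeed_on_try = succeed_on_try
        self._tries = 0

    def create_right_link_helper(self, other):
        self._tries += 1
        if self._tries >= self._succeed_on_try:
            self.rightNode = other
            return True
        return False

    def create_right_link(self, other, retry=True, retry_count=3):
        retry_count = retry_count if retry else 1
        for _ in range(retry_count):
            if self.rightNode is None:
                # Creating a link takes time even if it fails, so the
                # delay now comes before the attempt's success check.
                time.sleep(self.link_creation_time)
                if self.create_right_link_helper(other):
                    return
            self.parent_chain.indicate_link_failiure([self, other])

r = RepeaterSketch(succeed_on_try=2)
r.create_right_link("right-neighbour")
print(r._tries, len(r.parent_chain.failures))  # 2 tries, 1 reported failure
```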


@adityakrgupta25

Sorry, what do you mean by "transmit" in "to not transmit if the chain ... "? What are we transmitting?

adityakrgupta25 commented 4 years ago

Sorry, what do you mean by "transmit" in "to not transmit if the chain ... "? What are we transmitting?

I meant to stop any further entanglement swaps.


Thanks for pointing that out, will make those changes and create a PR.


In our future models, the link-layer will be responsible for determining the number of retries. But even when we incorporate the link layer in our simulations I think it will be more modular to not have the number of retries implemented in the repeater and to instead have it determined by a link-layer sub-protocol that is using an instance of the repeater class.

I would agree. I think it makes more sense to let the hardware handle the number of retries. I would say that the protocol should only be concerned with chain creation; link creation should be handled by the hardware (as part of the link layer, since link creation concerns only two local repeaters).

As you said, this should be a sub-protocol implemented over the base hardware we have defined. In the future, we might come up with better/different repeater definitions (like sharing multiple EPR pairs; that would go in as a newer hardware definition?). Plus, it would help us keep track of how our repeaters have evolved.

But that being said, we also need to figure out a way to streamline and test different protocols on multiple repeater definitions.

BassemSafieldeen commented 4 years ago

Sorry, what do you mean by "transmit" in "to not transmit if the chain ... "? What are we transmitting?

I meant to stop any further entanglement swaps.

Thanks for pointing that out, will make those changes and create a PR.

In our future models, the link-layer will be responsible for determining the number of retries. But even when we incorporate the link layer in our simulations I think it will be more modular to not have the number of retries implemented in the repeater and to instead have it determined by a link-layer sub-protocol that is using an instance of the repeater class.

I would agree. I think it makes more sense to let the hardware handle the number of retries. I would say that the protocol should only be concerned with chain creation; link creation should be handled by the hardware (as part of the link layer, since link creation concerns only two local repeaters).

I think the link-layer protocols will run locally on the hardware of each repeater. So in that sense the hardware will be handling the number of retries like you say should happen, no? I am just arguing from a code-structure point of view whether the protocols should be put inside the repeater class or outside.

As you said, this should be a sub-protocol implemented over the base hardware we have defined. In the future, we might come up with better/different repeater definitions (like sharing multiple EPR pairs; that would go in as a newer hardware definition?). Plus, it would help us keep track of how our repeaters have evolved.

But that being said, we also need to figure out a way to streamline and test different protocols on multiple repeater definitions.

I like the thing we have now, where we add improvements to repeaters by writing new classes that extend simpler classes and add new hardware features. I think that's a nice way to define new repeaters.

I think this whole disagreement essentially exists because it is not clear yet how the link-layer protocols are going to fit in the code we have now. I will try to write some exploratory code soon. Until then, I think we should clean our code a bit --- and add new features! --- but not remove anything.

So let's just continue as we were for a minute.

adityakrgupta25 commented 4 years ago

I think the link-layer protocols will run locally on the hardware of each repeater. So in that sense, the hardware will be handling the number of retries as you say should happen, no? I am just arguing from a code-structure point of view whether the protocols should be put inside the repeater class or outside.

Yes, I agree protocols should exist outside the repeater class. There will be a many-to-many relationship between the protocols and the repeaters, so it does not make sense to put protocols inside repeater classes.

What we can rather aim for is defining the compatibility between the protocols and repeaters for better code structure, so that we know which protocol is supposed to be tested on which hardware.

I think this whole disagreement essentially exists because it is not clear yet how the link-layer protocols are going to fit in the code we have now. I will try to write some exploratory code soon. Until then, I think we should clean our code a bit --- and add new features! --- but not remove anything. So let's just continue as we were for a minute.

Yes, this would definitely help and we can continue working with the existing structure for now.

adityakrgupta25 commented 4 years ago

Hi, I have run into a problem while trying to implement the following protocol:

Let's say we have 100 repeaters between A and B, and protocol 1.1 can only last until 10 entanglement swaps between the repeaters (assumption: we know this by calculating it beforehand based on the worst-case scenario). In such cases, we can link up the first 10 repeaters at once. Once we have the information qbit at the 10th repeater, we inform repeaters 11-20 to establish their links, and so on.

So what I realise now is that, despite creating the chains in chunks, it would still be impossible to transmit information along long chains. Say the original link lifetime is 10 sec. After swapping, we set the link lifetime between the two distant repeaters to the minimum of the remaining link expiry times of the repeaters in between. So according to the above protocol, suppose we have successfully created a link in the first subchain (0-10), we are now in the second subchain (11-20), and the swap at the 13th repeater happens at the 10.1th second (just when the link expires). At this point, repeater 0's right link is bound to expire and we lose any connection with it. Establishing a link with repeater 0 is then no longer possible.

I am not sure whether such scenarios can be eliminated at all without storing qubits, but then again, the no-cloning theorem comes into the picture.

I tried experimenting with protocols 1 and 2 on long chains (20), but all suffered the same fate, since the right link of repeater 0 is bound to expire after link_lifetime seconds (5 sec in our current code).

It seems to be a very fundamental problem with longer chains. Are chain lengths bound by the decoherence time? Or maybe I am missing something here?

Would you guys have any suggestions on how we can work around this?

saum-g commented 4 years ago

Hi, the protocol I suggested using multiple shared qubits tries to overcome the exact problem you mentioned @adityakrgupta25 . I am working on it right now

adityakrgupta25 commented 4 years ago

I am not sure that would help us solve the problem. The protocol would provide us with steady local links between two adjacent repeaters. That would have no effect on the distant entangled pairs (like between repeaters 0 and 13 in the above case), and in the long-chain scenario it is the decoherence between the particles at distant repeaters that is causing the issue.

BassemSafieldeen commented 4 years ago

What if we used this protocol:

r     r     r     r     r     r
r     r     r --- r     r     r
r     r --- r --- r --- r     r
       /---------------\
r     r     r     r     r     r
       /---------------\
r --- r     r     r     r --- r
 /---------------------------\
r     r     r     r     r     r

That way, any given qubit has to support a link for only a constant multiple of link_creation_time --- i.e., the waiting time does not depend on the length of the chain.

The network layer would be responsible for knowing which two users need a link, then sending signals to the repeaters to act as above --- with link creation starting from the middle repeater.
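A hypothetical helper can make the middle-out schedule in the diagram explicit: link creation starts at the middle repeater and the chain grows one step in each direction per round (the function and its output format are illustrative, not repo code).

```python
# Hypothetical helper illustrating the middle-out schedule: link creation
# starts at the middle repeater and the chain grows one step in each
# direction per round, so no single qubit has to keep a link alive for a
# number of rounds that grows with the chain length.

def middle_out_rounds(n_nodes):
    """Return, round by round, which links (i, i+1) get created when the
    chain grows outward from the middle node."""
    mid = n_nodes // 2
    left, right = mid - 1, mid
    rounds = []
    while left >= 0 or right < n_nodes - 1:
        this_round = []
        if left >= 0:
            this_round.append((left, left + 1))
            left -= 1
        if right < n_nodes - 1:
            this_round.append((right, right + 1))
            right += 1
        rounds.append(this_round)
    return rounds

for r in middle_out_rounds(6):
    print(r)
```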

adityakrgupta25 commented 4 years ago

Hi, so the problem I see is not with link_creation_time, it is with link_lifetime.

What I understand about the above protocol is that starting from the middle allows us to use both left and right links, rather than only right links as we had previously been doing. This would save us some multiples of link_creation_time?

I suppose this will still bind anything we can do in the chain to link_lifetime, and our problem remains unsolved.

for example,

       /---------------\
r     r     r     r     r     r

In such a case, what happens if the entangled qbits at the two ends decohere before they get to establish the link with their unlinked left and right repeaters, respectively?

We would lose the connection, right?

BassemSafieldeen commented 4 years ago

Ok, I see where the problem is. A link should expire when either of the qubits that support it decoheres. This was discussed in the comments on a closed pull request, and was supposed to be fixed in this pull request. But I see now that the implementation may not cover all cases.

@saum-g could you have a look at that? This is relevant to the code you are currently working on because it extends the old code.

The idea behind the protocol I suggested above was that the growing link in the diagrams has its expiry timer refreshed because new qubits support it at each iteration. This is an improvement over, say, the very first protocol because in that protocol the growing link is always supported by the leftmost repeater --- so for long chains the link doesn't make it.

@adityakrgupta25 does that make sense?

adityakrgupta25 commented 4 years ago

The idea behind the protocol I suggested above was that the growing link in the diagrams has its expiry timer refreshed because new qubits support it at each iteration.

I did not quite get why we would reset the timer with each iteration.

Let's say we have this repeater chain A--B--C. The qbits inside the link would look like A--[B1 B2]--C (let's call it state 1), where B1 is B's qbit linked with A, and B2 is B's qbit linked with C. So the aim of entanglement swapping would be to clone B1 on C's qbit. Now the qbit state would look like

 /---------------\
A     [x B2]     B1

(let's call it state 2).

So if the entanglement A--B1 in state 1 was bound to expire in 5 sec, resetting the timer would mean that the link A--B1 in state 2 is also going to last 5 sec after state 2 is achieved. But would it be safe to assume that?

IMO, the link in state 2 might still actually last less than 5 seconds (which is what we are doing right now).

BassemSafieldeen commented 4 years ago

Good question!

So let us first remember that in our models so far we have been assuming perfect gates. This assumption is stated at the top of the Protocols notebook. Perfect gates are gates that do not add noise to the states of the qubits in the repeaters.

Also let us remember that we have been assuming that links between repeaters have maximum fidelity --- i.e., two linked repeaters always share a maximally entangled state, not some low-quality entanglement. This assumption has been more implicit, I think.

We have also been making the implicit (but briefly discussed here) assumption that a link expires if and only if one of the qubits that support it decoheres. In our models, the qubits decohere a time link_lifetime after the link is established. Admittedly, link_lifetime may not have been the best name for the expiry time; decoherence_time might have been better, I think. Anyway, once a qubit supporting the link decoheres --- and that happens after link_lifetime --- the link expires. So far so good?

For our swap operation, these assumptions mean that links behave in the following manner. First we begin as usual:

A     [B1 B2] --- [C1 C2]     D

Qubits B2 and C1 are supporting a link, and so their decoherence timers are counting. Qubits A and D are not supporting links, so their decoherence timers are not counting yet.

Now the next step:

A --- [B1 B2] --- [C1 C2] --- D

Now A and D's timers have started. At this point B2 and C1's timers are old and the qubits are closer to decoherence --- and consequently, by assumption 3 above, the links they support are closer to collapse. In spite of that, by assumption 2, the links still have maximum fidelity.

Now we swap:

   / -----------------------\
A     [B1 B2]     [C1 C2]     D

The gates that constitute the swap operation, by assumption 1, did not add any noise to the shared state that is the link. And so the result is a perfect link supported on qubits A and D, which only relatively recently started their decoherence timers. And in that sense, the link's timer has been reset.
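This timer behaviour can be sketched in a few lines (the numbers and the helper below are illustrative assumptions under the perfect-gate model, not repo code): a link dies when the earliest-started of its supporting qubits decoheres, so moving the link onto younger qubits via a swap effectively refreshes its timer.

```python
# Minimal sketch of assumption 3 under perfect gates (numbers assumed):
# a link expires when the earliest-started of its supporting qubits
# decoheres, so a swap that moves the link onto younger qubits
# effectively refreshes the timer.

LINK_LIFETIME = 5.0  # assumed per-qubit decoherence time

def link_expiry(qubit_start_times):
    """A link expires when its first supporting qubit decoheres."""
    return min(qubit_start_times) + LINK_LIFETIME

# B2 and C1 started supporting links at t=0; A and D only joined at t=3.
print(link_expiry([0.0, 0.0]))  # the inner B2--C1 link expires at t=5
print(link_expiry([3.0, 3.0]))  # after the swap, the A--D link lasts until t=8
```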


A bit of a side point:

In your comment, you say "clone B1 on C's qbit". This is a bit of a problematic phrase. First of all, because of the no-cloning theorem, you should be very careful when putting the word "clone" before the word "qubit" (or "qbit"). It is generally impossible to clone qubits.

No cloning is involved in the swap operation. What happens is that we teleport the state of the qubit B1 to the qubit C1, and we teleport the state of the qubit C1 to the qubit D.


The assumptions above are of course not realistic at all. Real gates add noise to the states of the qubits. And links gradually lose fidelity because the qubits supporting them decohere gradually, not all at once after a fixed time as we have been assuming.

So I have no idea how well my proposed protocol will work when we make our simulations more realistic. But for the current model, under the used assumptions, it should out-perform many of the other protocols we have; especially ones that have the first qubit supporting a link from the very beginning till the very end.


Please let me know if you have more doubts. It helps me find holes in my thinking.

By the way, I mentioned earlier that I was going to write some exploratory code for where the link layer protocols (like the retries) should be placed. You can have a peek at the code here. It's not done yet but it should give you an idea of what I meant earlier. See the BB84.ipynb file in the Simulations folder and follow the imports through the layers.

And please feel free to let me know what you think. Once it's mature enough I think we will want to push it to master.

saum-g commented 4 years ago

@saum-g could you have a look at that? This is relevant to the code you are currently working on because it extends the old code.

I checked the expiry time code in the pull request but I am not able to think of a case that might have been missed. Could you point it out so I can correct it?

saum-g commented 4 years ago

I was just thinking about the protocol @BassemSafieldeen suggested, and I have a doubt there.

I considered decoherence as the state of the qubit changing to a fully mixed state. In other words, it is still a qubit like any other, but its state is more mixed than what we wanted. In that case, even if we teleport the qubit, its state will remain noisy and it is still decohered. If this were not the case, we could swap states between two qubits repeatedly and prevent decoherence altogether. Am I right?

saum-g commented 4 years ago

Hi, I had another idea in mind to minimise the time a created link has to wait. Instead of creating links left to right, we could use a recursive procedure to create links in the left half and the right half and then swap at the middle. I expect this to reduce our creation time logarithmically, though I am not sure. What do you think? I will post calculations soon if I am able to.

saum-g commented 4 years ago

So the recurrence for the time taken by our original approach is T(n) = T(n-1) + T(1), while for my previous comment it is T(n) = 2T(n/2). So it does not give us any advantage if we are going to wait for one link to be established before establishing the other.

This does bring to mind the question: why are we waiting for links to be established? Tell each node i to establish a link to node (i+1), and to do it at the same time for every i. We can then perform swaps by whatever strategy we like. This way we save on link creation time as well as avoid links expiring.

adityakrgupta25 commented 4 years ago

The gates that constitute the swap operation, by assumption 1, did not add any noise to the shared state that is the link. And so the result is a perfect link supported on qubits A and D, which only relatively recently started their decoherence timers. And in that sense, the link's timer has been reset.

Hi, thanks for this awesome explainer @BassemSafieldeen.

I would still agree with this assumption, though.

I considered decoherence as the state of the qubit changing to a fully mixed state. In other words, it is still a qubit like any other, but its state is more mixed than what we wanted. In that case, even if we teleport the qubit, its state will remain noisy and it is still decohered. If this were not the case, we could swap states between two qubits repeatedly and prevent decoherence altogether. Am I right?

In the state A --- [B1 B2] --- [C1 C2] --- D, since

Qubits B2 and C1 are supporting a link, and so their decoherence timers are counting.

So, the actual state would look like A --- [B1 (B2+noise)] --- [(C1+noise) C2] --- D

So the best we can teleport would be a somewhat noisy mixed state, so resetting the timers still doesn't seem like the right thing to do.

 /---------------------\
A   [x (B2+noise)]   [(B1+noise) C2] --- D

(B1+noise), because operations were performed on an already noisy qbit (C1+noise). This is under the assumption that the gates did not add any noise; the noise was already there.

Resetting timers would mean that we still have the original qbit states intact.

adityakrgupta25 commented 4 years ago

This does bring to mind the question, why are we waiting for links to be established? Tell each node i to establish a link to node (i+1) and to do it at the same time for each i. We can then perform swaps by whatever strategy we like. This way we save on link creation time as well as avoid links expiring.

I think this is similar to Protocol 1.1. The number of swaps is still bounded by the decoherence time even if you create all links at once.
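For what it's worth, a toy Monte Carlo illustrates the gap between the two strategies (all parameters here are hypothetical, not taken from the repo's hardware model):

```python
import random

def sequential_success(n_links, p, lifetime, rng):
    # Create links left to right, one attempt per time step;
    # a link expires `lifetime` steps after it was created.
    created_at, t = [], 0
    for _ in range(n_links):
        while True:
            t += 1
            if rng.random() < p:
                created_at.append(t)
                break
            if created_at and t - created_at[0] >= lifetime:
                return False  # the oldest link expired while we were still working
    return all(t - c < lifetime for c in created_at)

def parallel_success(n_links, p, lifetime, rng):
    # All links attempt simultaneously; each just has to succeed
    # within `lifetime` steps, independently of the others.
    return all(any(rng.random() < p for _ in range(lifetime))
               for _ in range(n_links))

rng = random.Random(7)
trials = 20000
seq = sum(sequential_success(4, 0.5, 8, rng) for _ in range(trials)) / trials
par = sum(parallel_success(4, 0.5, 8, rng) for _ in range(trials)) / trials
print(seq, par)  # the parallel strategy succeeds noticeably more often
```

The swap count is still limited by decoherence either way, but parallel link creation removes the "earlier links decay while later links are being built" failure mode.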

BassemSafieldeen commented 4 years ago

@adityakrgupta25 and @saum-g , I think you make a good argument against continuing to use the current model with its assumptions. Thank you.

I am trying to think about how to change it so that it's more realistic without involving actual quantum gates just yet.

Praveen91299 commented 4 years ago

Hey! I'm new here and thought I could contribute to this repository. I've read through this discussion and parts of the code, and, as @saum-g suggests, we can create entanglement between all adjacent nodes and then swap (measure) at all the intermediate nodes at the same time. It seems to work, and the same appears in the thesis that was shared. Am I missing something? The swaps can be performed at the same time afaik. All that needs to be done is to transmit the measurement outcomes to the end nodes so that they can correct to the proper Bell states at the end.

If you wish to make it realistic, you just need two steps: creating the entanglement between nodes, and swapping. Then there's a classical communication step where every node transmits its outcomes to the ends so that they can determine which Bell state(s) they have. Also, one can run multiple of these simultaneously so that we can distil at the end. I hope I didn't miss or misunderstand something; if so, please do let me know. Thanks!

Edit: I tried it out, and it works. In the circuit below, the first and last wires belong to Alice and Bob, and the intermediate pairs are repeaters: https://algassert.com/quirk#circuit={%22cols%22:[[%22H%22,1,%22H%22,1,%22H%22,1,%22H%22,1,%22H%22],[%22%E2%80%A2%22,%22X%22],[1,1,%22%E2%80%A2%22,%22X%22],[1,1,1,1,%22%E2%80%A2%22,%22X%22],[1,1,1,1,1,1,%22%E2%80%A2%22,%22X%22],[1,%22%E2%80%A6%22,%22%E2%80%A6%22,1,1,%22%E2%80%A6%22,%22%E2%80%A6%22,1,%22%E2%80%A2%22,%22X%22],[1,1,1,%22%E2%80%A6%22,%22%E2%80%A6%22,1,1,%22%E2%80%A6%22,%22%E2%80%A6%22],[1,%22%E2%80%A2%22,%22X%22],[1,1,1,%22%E2%80%A2%22,%22X%22],[1,1,1,1,1,%22%E2%80%A2%22,%22X%22],[1,1,1,1,1,1,1,%22%E2%80%A2%22,%22X%22],[1,%22H%22,1,%22H%22,1,%22H%22,1,%22H%22],[1,1,%22Measure%22,1,%22Measure%22,1,%22Measure%22,1,%22Measure%22],[1,1,%22%E2%80%A2%22,1,1,1,1,1,1,%22X%22],[1,1,1,1,%22%E2%80%A2%22,1,1,1,1,%22X%22],[1,1,1,1,1,1,%22%E2%80%A2%22,1,1,%22X%22],[1,1,1,1,1,1,1,1,%22%E2%80%A2%22,%22X%22],[1,%22%3C%3C9%22],[%22Chance2%22]]}

BassemSafieldeen commented 4 years ago

Hi @Praveen91299 , welcome to the repo!

I have been a bit too busy to think about @saum-g's protocol. But I have thought about it a bit just now after reading your comment, and it does indeed seem to be the best possible protocol on the simple simulation we have been using.

EDIT: After checking the code and reading the comments again, I think @adityakrgupta25 also proposed a similar protocol earlier. To avoid confusion with who gets credit for what, I have edited the rest of the post so that the "link all at once then swap all at once" protocol is referred to as The Protocol instead of someone's protocol. If the Nobel prize committee comes looking someday, they can figure out who did what from the commit history and the discussion threads.


Thanks for the circuit btw, it helped :) There was a little mistake that I had to fix before seeing your point, though. Some gates were missing: after the final Hadamards there should be measurements, with Z gates applied to the last qubit conditioned on the outcomes. You can see these gates in the circuit in the quantum teleportation Wikipedia article.

You couldn't see this mistake because you were looking at the expectation values of the Z operator and not at the density matrix, and you can't determine whether two qubits are entangled just by measuring in the Z basis and looking at the results. Here is an example: take either of the states

rho = 0.5 |00><00| + 0.5 |11><11|

and

sigma = 0.5 |00><00| + 0.5 |00><11| + 0.5 |11><00| + 0.5 |11><11| 
      =  0.5 (|00> + |11>)(<00| + <11|)

you have the same probability of measuring |00> and |11>, even though the first state has no entanglement at all and the second has maximal entanglement. So to know whether two qubits are entangled you have to look at the density matrix, not just the expectation values (of measurements in the Z basis alone).
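For anyone who wants to check this numerically, here is the same point in a few lines of numpy (the purity and partial-transpose checks are standard diagnostics, not something from this repo's code):

```python
import numpy as np

ket00 = np.array([1., 0., 0., 0.])
ket11 = np.array([0., 0., 0., 1.])

# rho: classical mixture of |00> and |11> -- no entanglement
rho = 0.5 * np.outer(ket00, ket00) + 0.5 * np.outer(ket11, ket11)

# sigma: the Bell state (|00> + |11>)/sqrt(2) -- maximal entanglement
bell = (ket00 + ket11) / np.sqrt(2)
sigma = np.outer(bell, bell)

# Z-basis outcome probabilities are the diagonals -- identical for both states
print(np.diag(rho))             # [0.5 0.  0.  0.5]
print(np.diag(sigma).round(3))  # [0.5 0.  0.  0.5]

# The density matrices themselves differ, e.g. in purity Tr(rho^2):
print(round(np.trace(rho @ rho), 6))      # 0.5 -- mixed
print(round(np.trace(sigma @ sigma), 6))  # 1.0 -- pure

def partial_transpose(m):
    # Transpose the second qubit of a two-qubit density matrix
    return m.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# Peres-Horodecki criterion: a negative eigenvalue after partial
# transposition certifies entanglement
print(round(np.linalg.eigvalsh(partial_transpose(sigma)).min(), 3))  # -0.5
print(np.linalg.eigvalsh(partial_transpose(rho)).min())  # ~0: no entanglement detected
```

The Z-basis statistics of the two states are indistinguishable; only a look at the full density matrix (here via purity or the partial transpose) separates them.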

Here is your circuit but in addition to looking at the expectation values we also look at the density matrix. You can see that the density matrix corresponds to the state rho above.

And here is the correct circuit, with the missing gates added. The density matrix corresponds to the state sigma above.
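For completeness, the corrected protocol can also be checked end to end with a small statevector simulation (plain numpy, written for this thread; the qubit layout and helper names are mine, not from the repo). Two repeaters Bell-measure their qubit pairs simultaneously, the outcome parities are XORed, and Bob applies X and then Z accordingly:

```python
import numpy as np

rng = np.random.default_rng(42)

H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def apply_1q(psi, gate, q, n):
    # Apply a single-qubit gate to qubit q of an n-qubit statevector
    psi = np.tensordot(gate, psi.reshape([2] * n), axes=([1], [q]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cnot(psi, c, t, n):
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[c] = 1                      # on the control=1 slice, flip the target
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=t - 1 if t > c else t)
    return psi.reshape(-1)

def measure(psi, q, n):
    # Projective Z measurement; the measured qubit is left in |outcome>
    psi = psi.reshape([2] * n)
    p1 = np.sum(np.abs(np.take(psi, 1, axis=q)) ** 2)
    out = int(rng.random() < p1)
    collapsed = np.take(psi, out, axis=q)
    collapsed = collapsed / np.linalg.norm(collapsed)
    new = np.zeros_like(psi)
    idx = [slice(None)] * n
    idx[q] = out
    new[tuple(idx)] = collapsed
    return new.reshape(-1), out

n_reps = 2                          # repeaters between Alice and Bob
n = 2 * n_reps + 2                  # qubit 0 = Alice, qubit n-1 = Bob
psi = np.zeros(2 ** n); psi[0] = 1.0

for q in range(0, n, 2):            # a Bell pair on every adjacent link
    psi = apply_1q(psi, H, q, n)
    psi = apply_cnot(psi, q, q + 1, n)

x_par = z_par = 0                   # simultaneous Bell measurements at the repeaters
for r in range(n_reps):
    a, b = 2 * r + 1, 2 * r + 2
    psi = apply_cnot(psi, a, b, n)
    psi = apply_1q(psi, H, a, n)
    psi, mz = measure(psi, a, n)
    psi, mx = measure(psi, b, n)
    z_par ^= mz; x_par ^= mx

if x_par: psi = apply_1q(psi, X, n - 1, n)   # classical corrections at Bob
if z_par: psi = apply_1q(psi, Z, n - 1, n)

def link_fidelity(psi, n):
    phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)   # |Phi+>
    m = np.moveaxis(psi.reshape([2] * n), n - 1, 1).reshape(4, -1)
    rho = m @ m.conj().T            # reduced state of (Alice, Bob)
    return float(np.real(phi @ rho @ phi))

print(round(link_fidelity(psi, n), 6))   # 1.0 -- Alice and Bob share a perfect Bell pair
```

Whatever the random measurement outcomes, the XORed parities are enough for Bob's corrections, so the end-to-end fidelity is 1 in this noiseless model.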


So I think we can start wrapping up this issue now. We can compare how The Protocol performs against the other protocols. I have written a simple BB84 benchmarking class that I will push to the master branch in a bit. Then we can use it to compare the protocols in terms of how many times we can create end-to-end entanglement in a given amount of time.


The next step is to make our simulation model more realistic. I am working on that now in a separate branch. If you have a look at the code, in particular the code in the Physical Layer folder, you will see that it uses actual quantum gates (implemented with QuTiP), and qubits decohere now too, which implies that link creation will become probabilistic again and we will have to do distillation and so on. There is also an optical fiber that carries photons between the repeaters, and the state of the photons deteriorates as they pass through the fibers, so that's another source of errors that we are not considering now.
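As a taste of the fiber-loss part, here is the standard toy formula (0.2 dB/km is a typical attenuation figure for telecom fiber at 1550 nm; the function names are mine, not the repo's):

```python
def transmission(length_km, alpha_db_per_km=0.2):
    # Probability that a photon survives the fiber
    return 10 ** (-alpha_db_per_km * length_km / 10)

def expected_attempts(length_km):
    # Mean number of heralded attempts before one photon gets through
    return 1 / transmission(length_km)

for L in (10, 50, 100):
    print(L, round(transmission(L), 3), round(expected_attempts(L), 1))
# At 10 km most photons survive; at 100 km only ~1% do,
# so each link needs on the order of 100 attempts on average.
```

This exponential loss with distance is precisely why repeaters and swapping are needed in the first place: direct transmission over long fibers becomes hopeless, while short segments stay cheap.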

In this more realistic model, it is not clear (at least to me) if the best protocol we have now will remain the best.

Thank you for your interest in contributing to the repository. I suggest you stand by for just a tiny bit longer until I push the new code to master. There is still a lot of work to be done there and contributions will be very much welcome!

Praveen91299 commented 4 years ago

@BassemSafieldeen Thanks for the correction to the circuit. Anyway, I'd love to contribute to the repo and will post in the relevant issue threads after going through the code and documents to date.