Closed by AhmedHanafy725 4 months ago
Update: Still trying to figure out what is causing this.
What I have found so far is that during an old tfchain migration, a state cleaning was performed to remove some bad IPs from the chain. This might have left GraphQL with more IPs than the chain currently holds (since the cleaning didn't emit any events). However, the OP reported fewer IPs in GraphQL, which points to a different issue at play.
I think we may also have a problem with deletion: some farms have IPs in the database but not on the chain. I noticed this on farms 138, 33, and 14.
@Omarabdul3ziz Yes, there is another issue, but it isn't with deletion; note what I said in my previous comment. There are indeed two issues in play here. The one you described stems from a previous chain migration in which we added a validation and removed invalid IP info from the chain state without propagating this to our processor via tracked events. A gateway always has to be public and on the same subnet as the IP address. In your case, these IPs show up in GraphQL but don't exist on the chain because they were invalid. A GraphQL fix should follow to remove these IPs from GraphQL as well.
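For clarity, the subnet rule mentioned above can be sketched like this. This is not the actual tfchain validation code; the helper names are hypothetical, and it only handles IPv4 CIDR notation:

```typescript
// Hypothetical sketch of the rule: an IP entry is valid only when its
// gateway lies inside the IP's subnet.

function ipv4ToInt(ip: string): number {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => isNaN(p) || p < 0 || p > 255)) {
    throw new Error(`invalid IPv4 address: ${ip}`);
  }
  // >>> 0 forces an unsigned 32-bit result.
  return ((parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]) >>> 0;
}

// `cidr` is like "1.1.1.1/24"; returns true when `gateway` is in the same subnet.
function gatewayInSubnet(cidr: string, gateway: string): boolean {
  const [ip, prefixStr] = cidr.split("/");
  const prefix = Number(prefixStr);
  if (!(prefix >= 0 && prefix <= 32)) return false;
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return (ipv4ToInt(ip) & mask) === (ipv4ToInt(gateway) & mask);
}

console.log(gatewayInSubnet("185.69.166.12/24", "185.69.166.1")); // true
console.log(gatewayInSubnet("185.69.166.12/24", "10.0.0.1"));     // false
```

IPs removed by the migration would be exactly the entries for which a check like this returns false.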
Update:
Farm 52 was created in block 1313143 with an empty set of IPs. It has been updated a handful of times in different extrinsics since then, but this one is the most interesting:
The same farm was updated about 15 times in the same batch extrinsic with different IP addresses each time.
The processor received all the farm-updated events. Because they were emitted from the same extrinsic and included in the same block, it processed the events in an incorrect order, resulting in an outdated set of IPs being persisted in the processor DB.
I would expect the processor to process them based on their index within the block, but it seems this is not enforced.
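The expected ordering can be sketched as follows. The event shape and field names here are assumptions for illustration, not the processor's real types:

```typescript
// Hypothetical event shape; field names are illustrative.
interface ChainEvent {
  blockNumber: number;
  indexInBlock: number;
  name: string;
}

// Process events deterministically: order by block first, then by the
// event's index inside the block, so the last update in a batch
// extrinsic is the one that ends up persisted.
function sortEvents(events: ChainEvent[]): ChainEvent[] {
  return [...events].sort(
    (a, b) => a.blockNumber - b.blockNumber || a.indexInBlock - b.indexInBlock
  );
}

const out = sortEvents([
  { blockNumber: 1313150, indexInBlock: 7, name: "FarmUpdated" },
  { blockNumber: 1313150, indexInBlock: 2, name: "FarmUpdated" },
]);
console.log(out.map((e) => e.indexInBlock)); // [2, 7]
```

If the processor applies updates in any other order within a block, the final stored IP set can correspond to an intermediate state rather than the last update.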
Update: Based on @AhmedHanafy725's feedback, this batch extrinsic was initiated from the dashboard by adding an IP range. Will try to reproduce.
Update: I wasn't able to reproduce the issue. I tried adding an IP range from the dashboard, and it seems that events from a batch call are processed with respect to the event index in the block, just as expected.
It would be very helpful if you could reproduce the issue and share the steps with us.
For now, what I can do is run a migration to remove the invalid IPs from GraphQL.
I tried it again, adding 5 IPs with a batch call, and nothing is showing in GraphQL, while they were added on the chain.
@AhmedHanafy725 Which network?
devnet
@AhmedHanafy725 There is a validation in the processor that prevents saving duplicate IPs, and all the IPs you tried to add were previously stored on other farms, so they weren't added to your farm.
You can verify this using:
```graphql
query MyQuery {
  publicIps(where: { ip_eq: "1.1.1.1/24" }) {
    ip
    id
    farm {
      id
      name
      farmID
      dedicatedFarm
      certification
      twinID
      pricingPolicyID
    }
    contractId
  }
}
```
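For convenience, a query like the one above can also be run from a small script (Node 18+ with built-in `fetch`). The endpoint URL below is an assumption for devnet; substitute the GraphQL URL of the network you are checking:

```typescript
// Assumed devnet GraphQL endpoint; replace with the actual network URL.
const GRAPHQL_URL = "https://graphql.dev.grid.tf/graphql";

// Parameterized version of the query from the comment above.
const query = `
  query ($ip: String!) {
    publicIps(where: { ip_eq: $ip }) {
      ip
      id
      contractId
      farm { id name farmID }
    }
  }
`;

// Returns the publicIps entries (if any) that already hold this IP.
async function findIpOwner(ip: string) {
  const res = await fetch(GRAPHQL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { ip } }),
  });
  const { data } = await res.json();
  return data.publicIps;
}
```

Running `findIpOwner("1.1.1.1/24")` would show which farm, if any, already owns the IP you tried to add.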
It appears that the validation was intended to prevent the addition of the same public IP address to different farms. However, this should have been implemented at the chain level to prevent such an occurrence.
How should we approach this? Do we need to alter this validation?
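The processor-side behavior described above can be sketched with a minimal in-memory model. The real processor uses its database; the names and the silent-skip behavior here are illustrative assumptions:

```typescript
// Hypothetical in-memory model of the duplicate-IP check: an IP already
// stored for another farm is rejected, so chain-accepted IPs can be
// silently absent from GraphQL.

const ipToFarm = new Map<string, number>(); // ip -> farmID that owns it

function tryAddPublicIp(farmID: number, ip: string): boolean {
  const owner = ipToFarm.get(ip);
  if (owner !== undefined && owner !== farmID) {
    // Duplicate across farms: skipped, even though the chain accepted it.
    return false;
  }
  ipToFarm.set(ip, farmID);
  return true;
}

console.log(tryAddPublicIp(33, "1.1.1.1/24")); // true: first insertion
console.log(tryAddPublicIp(52, "1.1.1.1/24")); // false: already owned by farm 33
```

Moving this uniqueness check to the chain level would make such silent divergence between chain state and GraphQL impossible.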
Update: After discussing this with @xmonader and @AhmedHanafy725, I suggested a solution here
Update: A PR handling this case (https://github.com/threefoldtech/tfchain_graphql/issues/157#issuecomment-2021848765) is ready for review.
env: devnet
farm: 52
Also, farm 4432 has the same issue.