Hello @MaxRos1234
What are the contents of the permissions config file?
Could you confirm what IPs you've set in there for the enodes please? I think from memory you need the pod's IP rather than the service IP, because the p2p layer uses the IP of the sender (the pod) in this case.
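For reference, a minimal node-permissions file of the kind being discussed might look roughly like this; the pod IPs and public keys are placeholders, and note that the 1.4.x line reads the nodes-whitelist key (later releases renamed it to nodes-allowlist):

# Illustrative permissions_config.toml - IPs and keys are placeholders only.
# Besu 1.4.x uses nodes-whitelist; newer releases use nodes-allowlist.
nodes-whitelist=[
  "enode://<validator1-pubkey>@10.244.0.11:30303",
  "enode://<validator2-pubkey>@10.244.0.12:30303",
  "enode://<validator3-pubkey>@10.244.0.13:30303"
]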
Cheers Josh
Hi Joshua. Thanks for the reply. Setting the pod IPs works. But the point is that with k8s this means I have to run all the validators on a single node and use Docker's IP; I can't spread the validators across a cluster. Are there alternative solutions?
thx, Massimo
Hi Massimo,
That's good to know :)
I don't quite follow the bit about validators being on a single node - could you give us a little more detail here please? Typically you have n nodes in a cluster that you deploy validators to.
To get IPs for them you can do a few things, for example reference each pod by its stable DNS name:
validator1-0.<kubernetes.cluster.tld>
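A rough sketch of how one might look these addresses up; the label selector and service names below are assumptions, not taken from the original manifests:

# Pod IPs (what the p2p layer sees as the sender, hence what belongs in the enode URLs):
kubectl get pods -o wide -l app=besu-validator

# If the validators run as a StatefulSet behind a headless Service, each pod also gets
# a stable DNS name of the form <pod>.<headless-service>.<namespace>.svc.cluster.local,
# e.g. validator1-0.besu-validators.default.svc.cluster.local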
Also note that this type of permissioning scheme applies only to the nodes on which you enable it, i.e. other nodes don't obey these rules, so you also have to think about how you want to update the lists when the network grows, for example.
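One option for updating the list at runtime is the PERM JSON-RPC API, roughly like this; the host, port and enode URL are placeholders, it assumes PERM is included in --rpc-http-api, and the method is named perm_addNodesToWhitelist on the 1.4.x line (perm_addNodesToAllowlist in later releases):

curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"perm_addNodesToWhitelist","params":[["enode://<new-node-pubkey>@10.244.0.21:30303"]],"id":1}' \
  http://validator1:8545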
Hope this helps
Cheers Josh
Hi Joshua, let me try to summarise the scenario. I would like a permissioned Besu network on Kubernetes. I need the enodes to use the IP addresses of the internal services (type = ClusterIP), plus the worker addresses of the NodePort services for access from the outside, so that other nodes can be hooked up and voted in as validators. In the validator YAML we have set the NAT mode to "auto". The only evidence we have is the following exception:
KubernetesNatManager | Starting kubernetes NAT manager.
2020-05-26 06:51:50.511+00:00 | main | DEBUG | KubernetesNatManager | Trying to update information using Kubernetes client SDK.
2020-05-26 06:51:52.926+00:00 | main | DEBUG | NatService | Caught exception while trying to start the manager or service.
org.hyperledger.besu.nat.core.exception.NatInitializationException: Failed update information using Kubernetes client SDK.
at org.hyperledger.besu.nat.kubernetes.KubernetesNatManager.doStart(KubernetesNatManager.java:85) ~[besu-nat-1.4.5-RC1.jar:1.4.5-RC1]
at org.hyperledger.besu.nat.core.AbstractNatManager.start(AbstractNatManager.java:90) ~[besu-nat-1.4.5-RC1.jar:1.4.5-RC1]
... ... ... I have nothing else to go on.
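For illustration, the Service split described above (ClusterIP for in-cluster enodes, NodePort so outside nodes can reach the worker) might look roughly like this; the names, selectors and labels are placeholders, not the actual manifests:

# Illustrative only: one validator exposed twice.
apiVersion: v1
kind: Service
metadata:
  name: besu-validator1            # in-cluster enodes point at this ClusterIP
spec:
  type: ClusterIP
  selector:
    app: besu-validator1
  ports:
    - name: rlpx
      port: 30303
      protocol: TCP
    - name: discovery
      port: 30303
      protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: besu-validator1-external   # external nodes reach the worker on the assigned NodePorts
spec:
  type: NodePort
  selector:
    app: besu-validator1
  ports:
    - name: rlpx
      port: 30303
      protocol: TCP
    - name: discovery
      port: 30303
      protocol: UDP
# kubectl get svc besu-validator1-external   shows the NodePorts Kubernetes assigned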
Massimo.
Hi Massimo,
Thanks for that. Could you share a little more info on your environment please? i.e.
Also, when the other Besu nodes join the chain, are they local to you / in the cloud (i.e. are they part of the same cloud provider / LAN or somewhere completely different)? The key question is whether they can route to your existing validator pool from the outside via TCP & UDP on 30303.
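A quick way to sanity-check that routing from the external machine, assuming a NodePort (or ingress) maps port 30303; <worker-ip> and <node-port> are placeholders:

nc -vz  <worker-ip> <node-port>    # TCP (RLPx)
nc -vzu <worker-ip> <node-port>    # UDP (discovery) - UDP probes with nc are best-effort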
I'm looping in @matkt who's the expert on the NAT manager in code to assist.
Cheers Josh
Hello Joshua.
I am on Linux Ubuntu 20.04 with minikube version v1.9.2. (I also tried Docker Desktop on macOS, where the problem seems to be the same.)
My intention is to launch your example on minikube in permissioned mode and then bring other nodes in from the host machine. Initially the nodes connect to each other (I'm trying with validators only at the moment), then they progressively disconnect until the peer count reaches 0.
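One way to watch this happening is to poll a validator over JSON-RPC; this assumes its HTTP RPC endpoint is reachable on 8545 (e.g. via kubectl port-forward) and, for the second call, that ADMIN is included in --rpc-http-api:

curl -s -X POST --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' http://localhost:8545

# admin_peers lists the enode URLs the node is actually connected to, which shows who drops whom:
curl -s -X POST --data '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}' http://localhost:8545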
So I stop before even adding more nodes from the host.
I forgot to mention: even in permissionless mode (the original example) I get the NAT exception, but everything seems to work correctly.
thx,
Massimo.
LOG FILE:
Setting logging level to DEBUG
2020-05-26 06:51:34.405+00:00 | main | INFO | AltBN128PairingPrecompiledContract | Using native alt bn128
2020-05-26 06:51:39.213+00:00 | main | INFO | SECP256K1 | Using native secp256k1
2020-05-26 06:51:39.219+00:00 | main | INFO | Besu | Starting Besu version: besu/v1.4.5-RC1/linux-x86_64/oracle_openjdk-java-11
2020-05-26 06:51:39.514+00:00 | main | DEBUG | ResourceLeakDetector | -Dio.netty.leakDetection.level: simple
2020-05-26 06:51:39.515+00:00 | main | DEBUG | ResourceLeakDetector | -Dio.netty.leakDetection.targetRecords: 4
2020-05-26 06:51:39.529+00:00 | main | DEBUG | InternalThreadLocalMap | -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2020-05-26 06:51:39.530+00:00 | main | DEBUG | InternalThreadLocalMap | -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2020-05-26 06:51:39.622+00:00 | main | DEBUG | MultithreadEventLoopGroup | -Dio.netty.eventLoopThreads: 2
2020-05-26 06:51:39.816+00:00 | main | DEBUG | NioEventLoop | -Dio.netty.noKeySetOptimization: false
2020-05-26 06:51:39.816+00:00 | main | DEBUG | NioEventLoop | -Dio.netty.selectorAutoRebuildThreshold: 512
2020-05-26 06:51:39.909+00:00 | main | DEBUG | PlatformDependent0 | -Dio.netty.noUnsafe: false
2020-05-26 06:51:39.909+00:00 | main | DEBUG | PlatformDependent0 | Java version: 11
2020-05-26 06:51:39.912+00:00 | main | DEBUG | PlatformDependent0 | sun.misc.Unsafe.theUnsafe: available
2020-05-26 06:51:39.913+00:00 | main | DEBUG | PlatformDependent0 | sun.misc.Unsafe.copyMemory: available
2020-05-26 06:51:39.914+00:00 | main | DEBUG | PlatformDependent0 | java.nio.Buffer.address: available
2020-05-26 06:51:39.915+00:00 | main | DEBUG | PlatformDependent0 | direct buffer constructor: unavailable
java.lang.UnsupportedOperationException: Reflective setAccessible(true) disabled
at io.netty.util.internal.ReflectionUtil.trySetAccessible(ReflectionUtil.java:31) ~[netty-common-4.1.42.Final.jar:4.1.42.Final]
at io.netty.util.internal.PlatformDependent0$4.run(PlatformDependent0.java:224) ~[netty-common-4.1.42.Final.jar:4.1.42.Final]
at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
at io.netty.util.internal.PlatformDependent0.
Hello, this stacktrace does not seem to me to be the cause of the issue that you currently have. It is a message which means that the automatic configuration of NAT has failed (to be able to use automatic detection you need a LoadBalancer-type service, but in your case it does not seem necessary because you provide the IPs manually). Since a recent update, when this happens we automatically switch to manual mode. You can test by switching from auto mode to none mode. You should no longer have the stacktrace, but in my opinion it will not resolve the issue.
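A minimal sketch of that change in the validator's Besu args (other flags omitted):

# before: automatic detection, which tries the Kubernetes client SDK
--nat-method=AUTO
# after: no NAT mapping; Besu just advertises the host/port it was configured with
--nat-method=NONE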
Could you try with 1.4.7-SNAPSHOT (the current master) and confirm that it still doesn't work?
Hi Massimo,
Right, I think I get what you're trying to do, but I'm not sure it'll work on minikube and it will probably require a distributed setup. So if I understand correctly (please correct me if I'm wrong), you have, let's say, 4 validators (v1, v2, v3, v4) on minikube and you want to connect a node on your host (h1), external to minikube, to them?
There are a few hiccups with this setup on minikube because of the way it runs locally and the routing can only do so much, but I'd suggest going about solving it like so:
As a side note: on a distributed setup in the cloud the above disappears, but you will likely need an ingress on each validator to get TCP & UDP working. I'm not quite sure I follow the use of permissioning for your use case though - if you use a node permissioning scheme, won't you need to update each validator as well when a new node joins? Perhaps an onchain solution is more effective for the validator pool, where you use a Dapp to allow/disallow nodes?
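For reference, contract-based node permissioning is switched on with flags along these lines; the contract address is a placeholder and must point at a deployed node-permissioning contract:

besu --permissions-nodes-contract-enabled \
     --permissions-nodes-contract-address=0x0000000000000000000000000000000000009999 \
     <other-options...>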
The NAT exception I'm not sure about - Karim is the best person to advise.
Cheers Josh
Hi Massimo,
I'm going to close this one. I'll see if we can create a permissioning example as well over the next few weeks. I believe we have a fix for the NAT message too - that will be in the next release. Please let us know if there is anything else we can help with.
Cheers
Using your scripts in permissionless mode everything works!
When I add two arguments for a permissioned network: --permissions-nodes-config-file-enabled=true --permissions-nodes-config-file=/mydata/permissions_config.toml
I get this error: Disconnecting from peer that is not permitted to maintain ongoing connection: org.hyperledger.besu.ethereum.p2p.rlpx.connections.RlpxConnection$RemotelyInitiatedRlpxConnection@63300ba4
Both networks use --bootnodes=enode://${VALIDATOR1_PUBKEY}@${BESU_VALIDATOR1_SERVICE_HOST}:30303