Open berelton opened 4 months ago
Hello @berelton ! Thanks for reporting this issue. I tried to reproduce it on my end but wasn't able to verify it. That doesn't mean it's a non-issue, though.
It would help if you could provide the node list via nmctl node list.
hello @NEETweeb !
Appreciate the swift reply!
Sure, here is the response:
$ nmctl node list
+--------------------------------------+-----------------+-----------+--------+-----------------------+-------+
| ID | ADDRESSES | NETWORK | EGRESS | REMOTE ACCESS GATEWAY | RELAY |
+--------------------------------------+-----------------+-----------+--------+-----------------------+-------+
| 02a36777-62b2-4ff9-8952-75ff3796a03f | 10.10.0.2/16 | test1 | false | false | false |
| 74e5e2f6-f5c5-4716-82a2-5371c2bef3ea | 192.168.10.1/24 | test2 | false | false | false |
| cd9a6669-8dd9-47a0-aff9-53324df69273 | 10.10.0.1/16 | test1 | false | false | false |
+--------------------------------------+-----------------+-----------+--------+-----------------------+-------+
If you could share your email and SSH key, I can give you access to the VMs to investigate, or we can have a quick call to debug together.
hello @NEETweeb !
I'd like to add more context about @berelton's netmaker setup.
This situation happens when deploying netmaker via Helm to a Kubernetes cluster with 2 replicas, as shown in the documentation examples. Most likely, the 2 requests to join the network landed on different replicas due to round-robin balancing at the gateway. It seems the problem may be related to synchronization between the netmaker servers. I hope this information helps in investigating the problem.
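The suspected race can be illustrated with a small sketch. This is an assumed simplification, not Netmaker's actual allocation code; the class and method names here are hypothetical:

```python
import ipaddress

# Hedged sketch of the suspected race: each replica serves a join request
# from its own cached view of allocated addresses rather than the shared
# database, so neither replica sees the address the other just handed out.
class Replica:
    def __init__(self, allocated):
        self.allocated = set(allocated)  # stale per-replica snapshot

    def next_free(self, cidr):
        for host in ipaddress.ip_network(cidr).hosts():
            addr = str(host)
            if addr not in self.allocated:
                self.allocated.add(addr)  # recorded locally only
                return addr

# Both replicas start from the same DB state (no addresses handed out yet).
replica_a, replica_b = Replica(set()), Replica(set())

# Two join requests arrive seconds apart and are round-robined
# to different replicas:
print(replica_a.next_free("192.168.10.0/24"))  # 192.168.10.1
print(replica_b.next_free("192.168.10.0/24"))  # 192.168.10.1 -- the collision
```

With a single replica (or a shared, synchronized allocation store) the second request would see the first grant and receive 192.168.10.2 instead.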
hello @NEETweeb , @afeiszli, any updates on that?
While using an HA setup, make sure caching is disabled: CACHING_ENABLED=false
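For reference, a minimal sketch of how that flag might be set via the server's environment in a Kubernetes deployment. The surrounding structure and resource names are assumptions, not taken from the Netmaker chart:

```yaml
# Assumed env-style fragment for the netmaker server container;
# adapt to your Helm values / ConfigMap layout.
env:
  - name: CACHING_ENABLED
    value: "false"
```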
Hello @abhishek9686 , it is set to false.
$ nmctl server config
{
"CoreDNSAddr": "SERVER_PUBLIC_IP",
"APIConnString": "api.id0.mydomain.com:443",
"APIHost": "api.id0.mydomain.com",
"APIPort": "8081",
"Broker": "",
"ServerBrokerEndpoint": "",
"BrokerType": "mosquitto",
"EmqxRestEndpoint": "",
"NetclientAutoUpdate": "enabled",
"NetclientEndpointDetection": "",
"MasterKey": "(hidden)",
"DNSKey": "(hidden)",
"AllowedOrigin": "*",
"NodeID": "netmaker-a77-0",
"RestBackend": "on",
"MessageQueueBackend": "",
"DNSMode": "on",
"DisableRemoteIPCheck": "off",
"Version": "v0.21.2",
"SQLConn": "",
"Platform": "linux",
"Database": "postgres",
"Verbosity": 1,
"AuthProvider": "",
"OIDCIssuer": "",
"ClientID": "",
"ClientSecret": "",
"FrontendURL": "",
"DisplayKeys": "on",
"AzureTenant": "",
"Telemetry": "on",
"HostNetwork": "",
"Server": "id0.mydomain.com",
"PublicIPService": "",
"MQPassword": "",
"MQUserName": "",
"MetricsExporter": "",
"BasicAuth": "",
"LicenseValue": "",
"NetmakerTenantID": "",
"IsEE": "no",
"StunPort": 3478,
"StunList": "",
"TurnServer": "",
"TurnApiServer": "",
"TurnPort": 0,
"TurnUserName": "",
"TurnPassword": "",
"UseTurn": false,
"UsersLimit": 0,
"NetworksLimit": 0,
"MachinesLimit": 0,
"IngressesLimit": 0,
"EgressesLimit": 0,
"DeployedByOperator": false,
"Environment": "",
"JwtValidityDuration": 43200000000000,
"RacAutoDisable": true,
"CacheEnabled": "",
"endpoint_detection": false,
"AllowedEmailDomains": ""
}
hello @NEETweeb , @afeiszli, any updates on that?
@berelton what is the rate at which you are joining clients to the network? are multiple clients joining the network at same time?
@abhishek9686 it is like 5 seconds difference, yes.
Is there a way to handle this?
Can you check the CacheEnabled config in the deployed ConfigMap?
Can you check if both VMs have the same MAC address?
To verify if both virtual machines have identical MAC addresses, you would need to check the network configuration settings of each VM. Typically, MAC addresses should be unique to each device to avoid network conflicts.
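One quick, portable way to read a machine's MAC address for comparison is Python's uuid.getnode(), which returns the MAC of one interface as a 48-bit integer (falling back to a random value when no hardware address is readable). Running this on each VM and comparing the outputs would answer the question:

```python
import uuid

# Read one interface's MAC as a 48-bit int, format as aa:bb:cc:dd:ee:ff.
mac = uuid.getnode()
print(":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8)))
```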
What happened?
Hello netmaker team, I hit an issue where 2 VMs got the same IP address.
We have 3 VMs:
main -> has the netmaker instance deployed
vm1 -> connecting to the mesh net
vm2 -> connecting to the mesh net
On the main VM we are creating 2 subnets:
On vm1:
On vm2:
So as you see, both VMs are getting the 192.168.10.1 IP, but they should have different IP addresses.
Version
v0.24.0
What OS are you using?
No response
Relevant log output
No response