EvernodeXRPL / evernode-host

Evernode host installer

Multiple reputation contracts / Failing to destroy instances #379

Closed · Sovelo closed this issue 4 months ago

Sovelo commented 4 months ago

I have multiple instances of the reputation contract running on one host. I have been unable to destroy either of the instances, and they both start back up after a reboot. It looks like after every check-in the reputation contract attempts to stop and restart the process, but it only stops one process while creating two new processes, resulting in lots of reputation processes left running and a constant climb in RAM usage. I have also had the same experience that others have mentioned about the reputation contract only checking in every other hour. Right now it is checking in almost every hour (and sometimes twice per hour), but before the second contract popped up it was only every other hour.

This is for address: ratL9W1nKcFTS75y8Dv5yvEB9wcnNoL7FJ and reputation address: rKdEx4U7UiVma88rApf5hQsC6hVhYs2wHB

Screenshots attached: Screenshot 2024-05-20 125953, Screenshot 2024-05-20 130341, Screenshot 2024-05-20 125827

Edit: I was able to destroy one of the instances after multiple attempts. But after running overnight it has gone from 1 active instance to 4 active instances, and the memory utilization continues to climb. All the instances are from the same reputation address.

I am also seeing the reputation contract being rejected, and even with 3-4 instances running the URITOKEN BUY is only happening every other hour. (Screenshot 2024-05-21 120455)

Edit 2: My Evernode host is losing 0.01 EVR per moment when it should be gaining 0.01 EVR per moment from the URITOKEN BUY.
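For anyone trying to confirm this behaviour, a minimal watch loop along these lines records the process count and total memory over time (an illustrative sketch, assuming the reputation processes run under the sashireputationd user mentioned later in this thread):

# Hypothetical monitoring loop; sashireputationd is the assumed service user.
while true; do
  count=$(pgrep -u sashireputationd | wc -l)
  rss_kb=$(ps -u sashireputationd -o rss= | awk '{sum+=$1} END {print sum+0}')
  echo "$(date -Is) processes=$count rss_kb=$rss_kb"
  sleep 60
done

A steadily climbing rss_kb alongside a growing process count would match the behaviour described above.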

chalith commented 4 months ago

Run the command evernode logs and send us the reputationd log section. Then run the evernode reputationd opt-out command; it'll stop the reputation contract until we figure out the issue. Do you see 4 active instances at once when you run the evernode list command?
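A minimal sketch of that sequence, using only the commands named above:

evernode logs                   # copy the reputationd log section from the output
evernode reputationd opt-out    # stops reputation participation until the issue is resolved
evernode list                   # check whether 4 instances show as active at once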

Sovelo commented 4 months ago

I see anywhere from 1 to 4 instances. It seems to fluctuate quite a bit, but they are always from the same reputation address. EvrLog.txt

chalith commented 4 months ago

Can you check how many instances there are when you run evernode list, and the Available lease offers count in the evernode status output?

And can you also send us the output of the ps -u sashireputationd -f command?
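Taken together, a quick snapshot of all three views might look like this (a sketch using only the commands requested above):

evernode list                  # running contract instances
evernode status                # note the "Available lease offers" count
ps -u sashireputationd -f      # full process listing for the reputationd service user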

chalith commented 4 months ago

Can you also run the following commands and send us the 1.log, 2.log, and 3.log files:

sudo -u sashimbxrpl bash -c 'journalctl --user -u sashimono-mb-xrpl' | grep "May 18" > 1.log &&
sudo -u sashimbxrpl bash -c 'journalctl --user -u sashimono-mb-xrpl' | grep "May 20" > 2.log &&
sudo -u sashimbxrpl bash -c 'journalctl --user -u sashimono-mb-xrpl' | grep "May 22" > 3.log

Also check the evernode list command and see whether any instances are running with the following names.

0E397991A664A030378E2AD9BD9F0AD26BF1EB0F1C594E05D070FF76DE873E5C
0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
3D6F3C682BCBA365EE05857F44154D00C54252F1FA8FCD2A62F3D0E8EDD9D2E2
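The same collection can be written as a loop, with a follow-up grep for the three instance names (a sketch; the grep assumes evernode list prints instance names, which matches its use elsewhere in this thread):

i=1
for day in "May 18" "May 20" "May 22"; do
  sudo -u sashimbxrpl bash -c 'journalctl --user -u sashimono-mb-xrpl' | grep "$day" > "$i.log"
  i=$((i+1))
done
evernode list | grep -E '0E397991|0E9B9F65|3D6F3C68'   # any of the three still running?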
metamorphosis143 commented 4 months ago

rUZz4hn1Dbh7kZnb3Q31F7iG2KjhW3kzmP

Mine has 3 instances left after transferring my host to another Xahau account. I'll transfer the host back to this address if the devs need the logs.

chalith commented 4 months ago

rUZz4hn1Dbh7kZnb3Q31F7iG2KjhW3kzmP

Mine has 3 instances left after transferring my host to another Xahau account. I'll transfer the host back to this address if the devs need the logs.

I take it this is your previous host address from before you transferred the host? Did you transfer the host using evernode transfer or the curl command?

Sovelo commented 4 months ago

Can you check how many instances there are when you run evernode list, and the Available lease offers count in the evernode status output?

And can you also send us the output of the ps -u sashireputationd -f command?

Decided to let it keep running, for science. It went inactive overnight. List and status both seem to fluctuate randomly as well.

1.txt

Sovelo commented 4 months ago

Can you also run the following commands and send us the 1.log, 2.log, and 3.log files:

sudo -u sashimbxrpl bash -c 'journalctl --user -u sashimono-mb-xrpl' | grep "May 18" > 1.log &&
sudo -u sashimbxrpl bash -c 'journalctl --user -u sashimono-mb-xrpl' | grep "May 20" > 2.log &&
sudo -u sashimbxrpl bash -c 'journalctl --user -u sashimono-mb-xrpl' | grep "May 22" > 3.log

Also check the evernode list command and see whether any instances are running with the following names.

0E397991A664A030378E2AD9BD9F0AD26BF1EB0F1C594E05D070FF76DE873E5C
0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
3D6F3C682BCBA365EE05857F44154D00C54252F1FA8FCD2A62F3D0E8EDD9D2E2

1.log 2.log 3.log

metamorphosis143 commented 4 months ago

rUZz4hn1Dbh7kZnb3Q31F7iG2KjhW3kzmP Mine has 3 instances left after transferring my host to another Xahau account. I'll transfer the host back to this address if the devs need the logs.

I take it this is your previous host address from before you transferred the host? Did you transfer the host using evernode transfer or the curl command?

curl -fsSL https://raw.githubusercontent.com/EvernodeXRPL/evernode-resources/main/sashimono/installer/evernode.sh | sudo bash -s transfer

Around 3+ of my previous nodes got the same problem; the rest are fine.
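For reference, that curl pipeline downloads evernode.sh and runs its transfer entry point via bash -s. On a host with the CLI already installed, the evernode transfer subcommand mentioned above should be the equivalent (an assumption based on this thread, not verified against the docs):

sudo evernode transfer    # assumed CLI equivalent of piping evernode.sh with "-s transfer"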

Sovelo commented 4 months ago

Hi! Just a quick update on the current state of this host. I have opted out of the reputation contract and attempted to delete the reputation instances, followed by a full reboot of the server. The host is active again, but it still only has 3/8 instances available. The reputation status says inactive; however, there are still two reputation instances running that won't stop.

5-23-24.txt

Zamolxis969 commented 4 months ago

Hello, I have been going through a lot of issues with this reputation contract as well. I can't understand why it created a different network, messing with my network config and adding an extra IPv6 address. It keeps trying to connect to wss:// myhost.mydomain.com. In DNS I don't have an A or CNAME record with this name, so obviously the connection fails. Pasting some of the log here:

May 23 17:30:37 rip1 node[132269]: 20240523 22:30:37 [dbg] Missing-connections timeout reached.
May 23 17:30:40 rip1 node[132269]: 20240523 22:30:40 [dbg] Preparing reputation contract for the Moment 3881...
May 23 17:30:40 rip1 node[132269]: 20240523 22:30:40 [dbg] Skipping acquire since there is already created instance for the moment 3881.
May 23 17:30:40 rip1 node[132269]: 20240523 22:30:40 [dbg] Waiting 0 milliseconds until other hosts are ready.
May 23 17:30:40 rip1 node[132269]: 20240523 22:30:40 [dbg] Deploying the reputation contract instance.
May 23 17:30:41 rip1 node[132269]: 20240523 22:30:41 [dbg] Placed contract content.
May 23 17:30:41 rip1 node[132269]: 20240523 22:30:41 [dbg] Prepared hp.cfg.override file.
May 23 17:30:41 rip1 node[132269]: 20240523 22:30:41 [dbg] Added prerequisite installer script.
May 23 17:30:41 rip1 node[132269]: 20240523 22:30:41 [dbg] Prepared contract bundle at /tmp/reputation-bundle-target-ticIZJ/bundle.zip
May 23 17:30:41 rip1 node[132269]: 20240523 22:30:41 [dbg] My public key is: edc43de75de728e4c491b252ebeccffa6109e1057ed4d80cdf0d23b4480f71d369
May 23 17:30:41 rip1 node[132269]: 20240523 22:30:41 [dbg] Connecting to wss:// myhost.mydomain.com:26202
May 23 17:30:44 rip1 node[132269]: 20240523 22:30:44 [dbg] Connecting to wss:// myhost.mydomain.com:26202
May 23 17:30:46 rip1 node[132269]: 20240523 22:30:46 [dbg] Missing-connections timeout reached.
May 23 17:30:48 rip1 node[132269]: 20240523 22:30:48 [dbg] Preparing reputation contract for the Moment 3881...
May 23 17:30:48 rip1 node[132269]: 20240523 22:30:48 [dbg] Skipping acquire since there is already created instance for the moment 3881.
May 23 17:30:48 rip1 node[132269]: 20240523 22:30:48 [dbg] Waiting 0 milliseconds until other hosts are ready.
May 23 17:30:49 rip1 node[132269]: 20240523 22:30:49 [dbg] Deploying the reputation contract instance.
May 23 17:30:49 rip1 node[132269]: 20240523 22:30:49 [dbg] Placed contract content.
May 23 17:30:49 rip1 node[132269]: 20240523 22:30:49 [dbg] Prepared hp.cfg.override file.
May 23 17:30:49 rip1 node[132269]: 20240523 22:30:49 [dbg] Added prerequisite installer script.
May 23 17:30:49 rip1 node[132269]: 20240523 22:30:49 [dbg] Prepared contract bundle at /tmp/reputation-bundle-target-6FkT8M/bundle.zip
May 23 17:30:49 rip1 node[132269]: 20240523 22:30:49 [dbg] My public key is: edc43de75de728e4c491b252ebeccffa6109e1057ed4d80cdf0d23b4480f71d369
May 23 17:30:49 rip1 node[132269]: 20240523 22:30:49 [dbg] Connecting to wss:// myhost.mydomain.com:26202
May 23 17:30:52 rip1 node[132269]: 20240523 22:30:52 [dbg] Connecting to wss:// myhost.mydomain.com:26202
May 23 17:30:54 rip1 node[132269]: 20240523 22:30:54 [dbg] Missing-connections timeout reached.
May 23 17:30:54 rip1 node[132269]: 20240523 22:30:54 [err] wss:// myhost.mydomain.com:26202 connection failed.
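Since the log shows the daemon dialing wss:// myhost.mydomain.com while no matching DNS record exists, a quick resolution check would confirm that side of the failure (a sketch; substitute the real hostname):

dig +short A myhost.mydomain.com       # should print the host's IPv4 address
dig +short AAAA myhost.mydomain.com    # and/or its IPv6 address
# Empty output from both would explain the repeated "connection failed" entries.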

Zamolxis969 commented 4 months ago

I can see a lot of junk in the logs. Pasting some of the log from a different host:

May 23 14:26:17 ivy1 node[122583]: 20240523 19:26:17 [dbg] Skipping reputation contract preparation since there's no universe info for the moment 3878.
May 23 14:27:15 ivy1 systemd[675]: home-sashi1716488816750628971-671F000DDAD8CAB95F726720FC783C987D159BF1EC1E02FEBA1F2386A59F866F-contract_fs-mnt.mount: Succeeded.
May 23 14:27:15 ivy1 systemd[675]: home-sashi1716488816750628971-671F000DDAD8CAB95F726720FC783C987D159BF1EC1E02FEBA1F2386A59F866F-ledger_fs-mnt.mount: Succeeded.
May 23 14:27:16 ivy1 systemd[675]: run-user-1003.mount: Succeeded.
May 23 15:25:05 ivy1 node[122583]: 20240523 20:25:05 [dbg] Reporting reputations at Moment 3878 Without scores...
May 23 15:25:11 ivy1 node[122583]: 20240523 20:25:11 [dbg] Transaction result: tesSUCCESS
May 23 15:26:15 ivy1 node[122583]: 20240523 20:26:15 [dbg] Preparing reputation contract for the Moment 3879...
May 23 15:26:27 ivy1 node[122583]: 20240523 20:26:27 [dbg] Waiting until 64 node is acquired...
May 23 15:30:05 ivy1 node[122583]: 20240523 20:30:05 [err] Maximum timeout reached for preparation
May 23 15:30:05 ivy1 node[122583]: 20240523 20:30:05 [dbg] Skipping since first host failed for the moment 3879.
May 23 16:25:05 ivy1 node[122583]: 20240523 21:25:05 [dbg] Reporting reputations at Moment 3879 Without scores...
May 23 16:25:12 ivy1 node[122583]: 20240523 21:25:12 [dbg] Transaction result: tecHOOK_REJECTED
May 23 16:25:12 ivy1 node[122583]: 20240523 21:25:12 [dbg] Reputation rejected by the hook.
May 23 16:26:14 ivy1 node[122583]: 20240523 21:26:14 [dbg] Skipping reputation contract preparation since there's no universe info for the moment 3880.
May 23 17:25:05 ivy1 node[122583]: 20240523 22:25:05 [dbg] Reporting reputations at Moment 3880 Without scores...
May 23 17:25:10 ivy1 node[122583]: 20240523 22:25:10 [dbg] Transaction result: tesSUCCESS
May 23 17:26:15 ivy1 node[122583]: 20240523 22:26:15 [dbg] Preparing reputation contract for the Moment 3881...
May 23 17:26:39 ivy1 node[122583]: 20240523 22:26:39 [dbg] Waiting until 128 node is acquired...
May 23 17:30:05 ivy1 node[122583]: 20240523 22:30:05 [err] Maximum timeout reached for preparation
May 23 17:30:05 ivy1 node[122583]: 20240523 22:30:05 [dbg] Skipping since first host failed for the moment 3881.
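The interesting lines in that output are the hourly Transaction result entries; filtering for them makes the tesSUCCESS / tecHOOK_REJECTED alternation easy to see (a sketch, assuming these messages reach the same sashimono-mb-xrpl unit queried earlier in the thread; the exact reputationd unit name may differ):

sudo -u sashimbxrpl bash -c 'journalctl --user -u sashimono-mb-xrpl' | grep -E 'Transaction result|Reputation rejected'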

metamorphosis143 commented 4 months ago

I can see a lot of junk in the logs. Pasting some of the log from a different host:

May 23 14:26:17 ivy1 node[122583]: 20240523 19:26:17 [dbg] Skipping reputation contract preparation since there's no universe info for the moment 3878.
May 23 14:27:15 ivy1 systemd[675]: home-sashi1716488816750628971-671F000DDAD8CAB95F726720FC783C987D159BF1EC1E02FEBA1F2386A59F866F-contract_fs-mnt.mount: Succeeded.
May 23 14:27:15 ivy1 systemd[675]: home-sashi1716488816750628971-671F000DDAD8CAB95F726720FC783C987D159BF1EC1E02FEBA1F2386A59F866F-ledger_fs-mnt.mount: Succeeded.
May 23 14:27:16 ivy1 systemd[675]: run-user-1003.mount: Succeeded.
May 23 15:25:05 ivy1 node[122583]: 20240523 20:25:05 [dbg] Reporting reputations at Moment 3878 Without scores...
May 23 15:25:11 ivy1 node[122583]: 20240523 20:25:11 [dbg] Transaction result: tesSUCCESS
May 23 15:26:15 ivy1 node[122583]: 20240523 20:26:15 [dbg] Preparing reputation contract for the Moment 3879...
May 23 15:26:27 ivy1 node[122583]: 20240523 20:26:27 [dbg] Waiting until 64 node is acquired...
May 23 15:30:05 ivy1 node[122583]: 20240523 20:30:05 [err] Maximum timeout reached for preparation
May 23 15:30:05 ivy1 node[122583]: 20240523 20:30:05 [dbg] Skipping since first host failed for the moment 3879.
May 23 16:25:05 ivy1 node[122583]: 20240523 21:25:05 [dbg] Reporting reputations at Moment 3879 Without scores...
May 23 16:25:12 ivy1 node[122583]: 20240523 21:25:12 [dbg] Transaction result: tecHOOK_REJECTED
May 23 16:25:12 ivy1 node[122583]: 20240523 21:25:12 [dbg] Reputation rejected by the hook.
May 23 16:26:14 ivy1 node[122583]: 20240523 21:26:14 [dbg] Skipping reputation contract preparation since there's no universe info for the moment 3880.
May 23 17:25:05 ivy1 node[122583]: 20240523 22:25:05 [dbg] Reporting reputations at Moment 3880 Without scores...
May 23 17:25:10 ivy1 node[122583]: 20240523 22:25:10 [dbg] Transaction result: tesSUCCESS
May 23 17:26:15 ivy1 node[122583]: 20240523 22:26:15 [dbg] Preparing reputation contract for the Moment 3881...
May 23 17:26:39 ivy1 node[122583]: 20240523 22:26:39 [dbg] Waiting until 128 node is acquired...
May 23 17:30:05 ivy1 node[122583]: 20240523 22:30:05 [err] Maximum timeout reached for preparation
May 23 17:30:05 ivy1 node[122583]: 20240523 22:30:05 [dbg] Skipping since first host failed for the moment 3881.

I also encountered it, and many others have too. So out of the many nodes I have, only one opted in for the reputation contract because of this issue.

chalith commented 4 months ago

Hi! Just a quick update on the current state of this host. I have opted out of the reputation contract and attempted to delete the reputation instances, followed by a full reboot of the server. The host is active again, but it still only has 3/8 instances available. The reputation status says inactive; however, there are still two reputation instances running that won't stop.

5-23-24.txt

Did you try destroying it using evernode delete nnn? Can you also try sashi destroy -n nnn?

Sovelo commented 4 months ago

Hi! Just a quick update on the current state of this host. I have opted out of the reputation contract and attempted to delete the reputation instances, followed by a full reboot of the server. The host is active again, but it still only has 3/8 instances available. The reputation status says inactive; however, there are still two reputation instances running that won't stop. 5-23-24.txt

Did you try destroying it using evernode delete nnn? Can you also try sashi destroy -n nnn?

Yes, I am pretty sure I ran them a few times a few days ago without any errors. Today I tried running them again, and both commands are giving the same destroy errors.

sudo evernode delete 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
Stopping the message board...
Deleting instance 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
Destroying the instance...
{ type: 'destroy_error', content: 'container_destroy_error' }
MB_CLI_EXITED
There was an error in deleting the instance.
Starting the message board...

sudo sashi destroy -n 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
{"type":"destroy_error","content":"container_destroy_error"}
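When re-running these, capturing the full JSON error alongside a timestamp makes it easier to correlate with the message-board log (a small sketch using the same commands as above):

sudo sashi destroy -n 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F 2>&1 | tee "destroy-$(date +%F-%H%M).log"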

chalith commented 4 months ago

rUZz4hn1Dbh7kZnb3Q31F7iG2KjhW3kzmP

We have found the issue here. We'll fix it. Are you going to install Evernode using the same account again?

chalith commented 4 months ago

Hi! Just a quick update on the current state of this host. I have opted out of the reputation contract and attempted to delete the reputation instances, followed by a full reboot of the server. The host is active again, but it still only has 3/8 instances available. The reputation status says inactive; however, there are still two reputation instances running that won't stop. 5-23-24.txt

Did you try destroying it using evernode delete nnn? Can you also try sashi destroy -n nnn?

Yes, I am pretty sure I ran them a few times a few days ago without any errors. Today I tried running them again, and both commands are giving the same destroy errors.

sudo evernode delete 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
Stopping the message board...
Deleting instance 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
Destroying the instance...
{ type: 'destroy_error', content: 'container_destroy_error' }
MB_CLI_EXITED
There was an error in deleting the instance.
Starting the message board...

sudo sashi destroy -n 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
{"type":"destroy_error","content":"container_destroy_error"}

We have found a possible cause. To verify, can you run sudo evernode delete and afterwards send us the output of the evernode log command?
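A compact way to run that verification and capture the output for the report (a sketch, reusing the stuck instance name from above; the thread uses both evernode log and evernode logs, so adjust to whichever your CLI version provides):

sudo evernode delete 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
evernode logs > evernode-after-delete.log    # attach this file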

Sovelo commented 4 months ago

Hi! Just a quick update on the current state of this host. I have opted out of the reputation contract and attempted to delete the reputation instances, followed by a full reboot of the server. The host is active again, but it still only has 3/8 instances available. The reputation status says inactive; however, there are still two reputation instances running that won't stop. 5-23-24.txt

Did you try destroying it using evernode delete nnn? Can you also try sashi destroy -n nnn?

Yes, I am pretty sure I ran them a few times a few days ago without any errors. Today I tried running them again, and both commands are giving the same destroy errors.
sudo evernode delete 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
Stopping the message board...
Deleting instance 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
Destroying the instance...
{ type: 'destroy_error', content: 'container_destroy_error' }
MB_CLI_EXITED
There was an error in deleting the instance.
Starting the message board...
sudo sashi destroy -n 0E9B9F65890D03BF27FB234C696506C829ACBC17DB25C6FAEE2BC0BBEAAE692F
{"type":"destroy_error","content":"container_destroy_error"}

We have found a possible cause. To verify, can you run sudo evernode delete and afterwards send us the output of the evernode log command?

Here are the logs from today: evernode.L78h7HlzS.log

chalith commented 4 months ago

Closing the issue since it is fixed with the patch.