r4yan2 / peaks

GNU Affero General Public License v3.0

Peaks does not sync with Hockeypuck #11

Open skasks opened 2 years ago

skasks commented 2 years ago

commit 07e24a1679175f59d8460aa42b21557bc42206bf (HEAD -> master, origin/master, origin/HEAD)
Author: Andrea Grazioso ray <grazioandre@gmail.com>
Date:   Sun Jun 5 12:25:11 2022 +0200

Building from source, this is what I get when I recon an empty DB against a hockeypuck server:

level=error msg="recon with 10.20.205.45:48724 failed" error="EOF\nhockeypuck/conflux/recon.ReadInt\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:137\nhockeypuck/conflux/recon.ReadLen\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:144\nhockeypuck/conflux/recon.ReadMsg\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:611\nhockeypuck/conflux/recon.(Peer).interactWithClient\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:798\nhockeypuck/conflux/recon.(Peer).Accept\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:536\nhockeypuck/conflux/recon.(Peer).Serve.func2\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:337\ngopkg.in/tomb%2ev2.(Tomb).run\n\t/root/HOST/hockeypuck/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163\nruntime.goexit\n\t/usr/lib/go-1.15/src/runtime/asm_amd64.s:1374\nhockeypuck/conflux/recon.ReadLen\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:146\nhockeypuck/conflux/recon.ReadMsg\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:611\nhockeypuck/conflux/recon.(Peer).interactWithClient\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:798\nhockeypuck/conflux/recon.(Peer).Accept\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:536\nhockeypuck/conflux/recon.(Peer).Serve.func2\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:337\ngopkg.in/tomb%2ev2.(Tomb).run\n\t/root/HOST/hockeypuck/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163\nruntime.goexit\n\t/usr/lib/go-1.15/src/runtime/asm_amd64.s:1374\nhockeypuck/conflux/recon.ReadMsg\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:613\nhockeypuck/conflux/recon.(Peer).interactWithClient\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:798\nhockeypuck/conflux/recon.(Peer).Accept\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:536\nhockeypuck/conflux/recon.(Peer).Serve.func2\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:337\ngopkg.in/tomb%2ev2.(Tomb).run\n\t/root/HOST/hockeypuck/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163\nruntime.goexit\n\t/usr/lib/go-1.15/src/runtime/asm_amd64.s:1374\nhockeypuck/conflux/recon.(Peer).interactWithClient\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:800\nhockeypuck/conflux/recon.(Peer).Accept\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:536\nhockeypuck/conflux/recon.(Peer).Serve.func2\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:337\ngopkg.in/tomb%2ev2.(Tomb).run\n\t/root/HOST/hockeypuck/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163\nruntime.goexit\n\t/usr/lib/go-1.15/src/runtime/asm_amd64.s:1374" label="serve :11370"

r4yan2 commented 2 years ago

That's weird; from the hockeypuck log you posted it's not clear what exactly the issue is. Do the peaks logs show anything meaningful? You can enable verbose logging with -d 7 (and optionally -s for console output). By the way, did you build the prefix tree with peaks build? It is required even with an empty DB.

edit 26-06-22 14:47: 7 is the most verbose level of logging

- -d 6
+ -d 7
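
For reference, a typical invocation with the flags mentioned above would look something like the following (the config path is only an example, adjust it to your setup):

peaks build
peaks -c /etc/peaks/peaks_config recon -d 7 -s

peaks build builds the prefix tree (required even with an empty DB), -d 7 sets the most verbose log level, and -s prints the log to the console.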
skasks commented 2 years ago

This is an empty database; there had been some pulls from the hockeypuck server after an import, but a git pull destroyed the DB and I have to import again.

This is the output of commit 2588c1f ("[FIX] forgot stats.tmpl file") with an empty DB:

peaks recon[5201]: starting gossip client
peaks recon[5201]: choose as partner: 10.20.205.44
peaks recon[5201]: Socket open ok
peaks recon[5201]: Connect ok
peaks recon[5201]: Sending Peer config
peaks recon[5201]: Receiving Peer config
peaks recon[5201]: received remote config
peaks recon[5201]: Sending Config ok
peaks recon[5201]: Config check ok
peaks recon[5201]: choosen partner 10.20.205.44
peaks recon[5201]: Receiving Recon Request Poly for tset::
peaks recon[5201]: handling message 0
peaks recon[5201]: Execute prepared query SELECT * FROM ptree WHERE node_key = (?) and key_size = (?)
peaks recon[5201]: Param 1 ->
peaks recon[5201]: Param 2 -> 0
peaks recon[5201]: Could not interpolate because size_diff (6422388) > size of values(6)!
peaks recon[5201]: current value of n: 0
peaks recon[5201]: handling message 6
peaks recon[5201]: current value of n: 0
peaks recon[5201]: Sending Full Elements
peaks recon[5201]: Receive operation would block, try to raise the timeout (11)
peaks recon[5201]: closing remote connection 9
peaks recon[5201]: Resource temporarily unavailable
peaks recon[5201]: going to sleep...
peaks recon[5201]: ...resuming gossip
peaks recon[5201]: starting gossip client
peaks recon[5201]: choose as partner: 10.20.205.44
peaks recon[5201]: Socket open ok
peaks recon[5201]: Connect ok
peaks recon[5201]: Sending Peer config
peaks recon[5201]: Receiving Peer config
peaks recon[5201]: received remote config
peaks recon[5201]: Sending Config ok
peaks recon[5201]: Config check ok
peaks recon[5201]: choosen partner 10.20.205.44
peaks recon[5201]: Receiving Recon Request Poly for tset::
peaks recon[5201]: handling message 0
peaks recon[5201]: Execute prepared query SELECT * FROM ptree WHERE node_key = (?) and key_size = (?)
peaks recon[5201]: Param 1 ->
peaks recon[5201]: Param 2 -> 0
peaks recon[5201]: Could not interpolate because size_diff (6422388) > size of values(6)!
peaks recon[5201]: current value of n: 0
peaks recon[5201]: handling message 6
peaks recon[5201]: current value of n: 0
peaks recon[5201]: Sending Full Elements
peaks recon[5201]: handling message 5
peaks recon[5201]: current value of n: 0
peaks recon[5201]: Should recover 0 elements, starting double check!
peaks recon[5201]: closing remote connection 9
peaks recon[5201]: going to sleep...
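
As an aside, the "Could not interpolate" line above looks like the expected fallback path rather than the failure itself: with an empty local DB the set difference (6422388, presumably roughly the size of the remote dataset) is far larger than the handful of polynomial sample points (6), so the node gives up on interpolation and offers full elements instead. A minimal sketch of that decision, written in Go purely for illustration (peaks itself is C++ and these names are hypothetical):

package main

import "fmt"

// reconcileNode sketches the fallback implied by the log above: interpolation
// over numSamples evaluation points can only resolve a small set difference,
// so a larger difference forces a full element exchange.
func reconcileNode(localSize, remoteSize, numSamples int) string {
	sizeDiff := remoteSize - localSize
	if sizeDiff < 0 {
		sizeDiff = -sizeDiff
	}
	if sizeDiff > numSamples {
		// Corresponds to "Could not interpolate because size_diff (...) > size of values(6)!"
		return "send full elements"
	}
	return "interpolate over sample points"
}

func main() {
	fmt.Println(reconcileNode(0, 6422388, 6)) // prints: send full elements
}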

skasks commented 1 year ago

The funny thing is that I had it running with the previous variant, commit https://github.com/r4yan2/peaks/commit/07e24a1679175f59d8460aa42b21557bc42206bf, insofar as it pulled data from hockeypuck but refused any saves because of "DRY RUN", although I have no dry-run setting anywhere.

I'm still trying to replicate it

Yes, I ran peaks build

r4yan2 commented 1 year ago

Sorry for the delay. Indeed, it's a regression in the reconciliation; the networking is currently poorly implemented. Anyway, while investigating and fixing that one I found another 2-3 issues to fix (including the DRYRUN being always active). I'll update the codebase ASAP after finishing some manual testing.
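
For context, the symptom described earlier (data pulled but every save refused as "DRY RUN" with no dry-run setting configured) is what you would see if the dry-run flag effectively defaults to on and the configured value is never applied. A hypothetical illustration in Go, not the actual peaks code:

package main

import "fmt"

// importer models a component that can optionally run in dry-run mode.
type importer struct {
	dryRun bool
}

// newImporter contains the illustrative bug: the configured value is ignored,
// so dryRun stays true regardless of what the user set.
func newImporter(configDryRun bool) *importer {
	i := &importer{}
	i.dryRun = true // bug: should be i.dryRun = configDryRun
	return i
}

func (i *importer) save(key string) {
	if i.dryRun {
		fmt.Println("DRY RUN: skipping save of", key)
		return
	}
	fmt.Println("saved", key)
}

func main() {
	imp := newImporter(false) // the user did not ask for a dry run
	imp.save("certificate")   // still prints the DRY RUN message
}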

r4yan2 commented 1 year ago

Hello, again sorry for the delay in dealing with those issues. With the latest changes, the discussed issues (and more) should have been addressed.

skasks commented 1 year ago

With the current version, 0efa560, I get this:

peaks build ; peaks -c /etc/peaks/peaks_config recon -d 7 -s

peaks recon[1446]: Execute prepared query SELECT * FROM ptree WHERE node_key = (?) and key_size = (?)
peaks recon[1446]: Param 2 -> 0
peaks recon[1446]: Execute prepared query SELECT * FROM ptree WHERE node_key = (?) and key_size = (?)
peaks recon[1446]: Param 2 -> 0
peaks recon[1446]: Could not interpolate because size_diff (6429143) > size of values(6)!
peaks recon[1446]: current value of n: 0
peaks recon[1446]: handling message 6
peaks recon[1446]: current value of n: 0
peaks recon[1446]: Sending Full Elements
peaks recon[1446]: handling message 5
peaks recon[1446]: current value of n: 0
peaks recon[1446]: Should recover 0 elements, starting double check!
peaks recon[1446]: closing remote connection 6
peaks recon[1446]: going to sleep...

and Hockeypuck bails out with:

time="2022-08-04T11:26:08+02:00" level=error msg="recon with 10.20.205.45:35560 failed" error="read length 50331648 exceeds maximum limit\nhockeypuck/conflux/recon.ReadLen\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:149\nhockeypuck/conflux/recon.ReadMsg\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:611\nhockeypuck/conflux/recon.(*Peer).interactWithClient\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:798\nhockeypuck/conflux/recon.(*Peer).Accept\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:536\nhockeypuck/conflux/recon.(*Peer).Serve.func2\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:337\ngopkg.in/tomb%2ev2.(*Tomb).run\n\t/root/HOST/hockeypuck/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163\nruntime.goexit\n\t/usr/lib/go-1.15/src/runtime/asm_amd64.s:1374\nhockeypuck/conflux/recon.ReadMsg\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/messages.go:613\nhockeypuck/conflux/recon.(*Peer).interactWithClient\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:798\nhockeypuck/conflux/recon.(*Peer).Accept\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:536\nhockeypuck/conflux/recon.(*Peer).Serve.func2\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:337\ngopkg.in/tomb%2ev2.(*Tomb).run\n\t/root/HOST/hockeypuck/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163\nruntime.goexit\n\t/usr/lib/go-1.15/src/runtime/asm_amd64.s:1374\nhockeypuck/conflux/recon.(*Peer).interactWithClient\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:800\nhockeypuck/conflux/recon.(*Peer).Accept\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:536\nhockeypuck/conflux/recon.(*Peer).Serve.func2\n\t/root/HOST/hockeypuck/src/hockeypuck/conflux/recon/peer.go:337\ngopkg.in/tomb%2ev2.(*Tomb).run\n\t/root/HOST/hockeypuck/pkg/mod/gopkg.in/tomb.v2@v2.0.0-20161208151619-d5d1b5820637/tomb.go:163\nruntime.goexit\n\t/usr/lib/go-1.15/src/runtime/asm_amd64.s:1374" label="serve :11370" 
skasks commented 1 year ago

Hm, with the current version of peaks, hockeypuck runs out of memory:

Aug  4 12:01:01 sv-2s127 hockeypuck[798]: fatal error: runtime: cannot allocate memory
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: runtime stack:
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: runtime.throw(0xbabec8, 0x1f)
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: #011/usr/lib/go-1.15/src/runtime/panic.go:1116 +0x72
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: runtime.persistentalloc1(0x4000, 0x0, 0x1081cb8, 0x3f)
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: #011/usr/lib/go-1.15/src/runtime/malloc.go:1376 +0x2e5
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: runtime.persistentalloc.func1()
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: #011/usr/lib/go-1.15/src/runtime/malloc.go:1330 +0x45
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: runtime.persistentalloc(0x4000, 0x0, 0x1081cb8, 0x12)
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: #011/usr/lib/go-1.15/src/runtime/malloc.go:1329 +0x85
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: runtime.(*fixalloc).alloc(0x107ef38, 0x7f622b0351d0)
Aug  4 12:01:01 sv-2s127 hockeypuck[798]: #011/usr/lib/go-1.15/src/runtime/mfixalloc.go:80 +0xf7
For reference, these are the process limits:

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            2147483648           2147483648           bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             31842                31842                processes 
Max open files            49152                49152                files     
Max locked memory         unlimited            unlimited            bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       31842                31842                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us