sahib / brig

File synchronization on top of ipfs with git like interface & web based UI
https://brig.readthedocs.io
GNU Affero General Public License v3.0

fetch: replay: type-conflict-remove: panic rollback: cannot put a ghost in a ghost #112

Open · hulkabob opened this issue 3 years ago

hulkabob commented 3 years ago

Tried using brig like-my-grandma-would-do, where you don't think too much about how files are stored. I generally have multiple directories of files that I find useful to share across multiple hosts, and I ran into a rather interesting (logical?) issue with how a file's history gets updated.

What did you do?

Imported a file, then found out that it was obsolete. I wanted to keep a copy of that file in HEAD, in an "old" directory, under the same name. So I moved the file from /dir/ to /dir/old, added a new file with content, and committed it. When trying to sync with a remote, it fails with the message below.
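
For reference, the sequence was roughly this (a sketch from memory; the paths and file names are illustrative, and the exact brig verbs may differ slightly):

$ brig stage file.dat /dir/file.dat          # import the (later obsolete) file
$ brig mv /dir/file.dat /dir/old/file.dat    # keep the old copy under /dir/old
$ brig stage file.dat /dir/file.dat          # add the new file with fresh content
$ brig commit -m "move old file aside, add new one"
$ brig sync some-remote                      # panics with the trace below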

What did you expect to see?

Sync with new files added and old ones shifted around.

What did you see instead?

This:

18.10.2021/23:27:15 ⚡ cmd/parser.go:543:  fetch: replay: type-conflict-remove: panic rollback: cannot put a ghost in a ghost; stack: goroutine 20675 [running]:
runtime/debug.Stack(0xc000732cf0, 0xc001424780, 0x12569d0)
        /usr/lib/go/src/runtime/debug/stack.go:24 +0x9d
github.com/sahib/brig/catfs/core.(*Linker).AtomicWithBatch.func1(0x1292720, 0xc0016f4360, 0xc001c48140, 0xc000575620)
        /home/sahib/dev/brig/catfs/core/linker.go:1741 +0xf8
panic(0xf5a4c0, 0x12569d0)
        /usr/lib/go/src/runtime/panic.go:522 +0x1b5
github.com/sahib/brig/catfs/nodes.MakeGhost(0x129d880, 0xc000732450, 0x99, 0x0, 0x0, 0x0)
        /home/sahib/dev/brig/catfs/nodes/ghost.go:30 +0x181
github.com/sahib/brig/catfs/core.Remove.func1(0xc0015ed570, 0x20, 0xc0015ed568)
        /home/sahib/dev/brig/catfs/core/coreutils.go:155 +0x259
github.com/sahib/brig/catfs/core.(*Linker).Atomic.func1(0x1292720, 0xc0016f4360, 0x1292720, 0xc0016f4360, 0xc001c48140)
        /home/sahib/dev/brig/catfs/core/linker.go:1726 +0x26
github.com/sahib/brig/catfs/core.(*Linker).AtomicWithBatch(0xc001c48140, 0xc0015ed630, 0x0, 0x0)
        /home/sahib/dev/brig/catfs/core/linker.go:1745 +0xcb
github.com/sahib/brig/catfs/core.(*Linker).Atomic(0xc001c48140, 0xc0015ed698, 0x129b740, 0xc000732450)
        /home/sahib/dev/brig/catfs/core/linker.go:1725 +0x53
github.com/sahib/brig/catfs/core.Remove(0xc001c48140, 0x129d880, 0xc000732450, 0x1290101, 0xc0000e6680, 0x0, 0x0, 0x3)
        /home/sahib/dev/brig/catfs/core/coreutils.go:143 +0x20e
github.com/sahib/brig/catfs/vcs.replayAddWithUnpacking(0xc001c48140, 0xc0004b6000, 0xb, 0x1157090)
        /home/sahib/dev/brig/catfs/vcs/change.go:133 +0x147
github.com/sahib/brig/catfs/vcs.(*Change).Replay.func1(0xc0015ed7d8, 0x20, 0xc0015ed7d0)
        /home/sahib/dev/brig/catfs/vcs/change.go:253 +0x122
github.com/sahib/brig/catfs/core.(*Linker).Atomic.func1(0x1292720, 0xc0016f4360, 0x1292720, 0xc0016f4360, 0xc001c48140)
        /home/sahib/dev/brig/catfs/core/linker.go:1726 +0x26
github.com/sahib/brig/catfs/core.(*Linker).AtomicWithBatch(0xc001c48140, 0xc0015ed898, 0x0, 0x0)
        /home/sahib/dev/brig/catfs/core/linker.go:1745 +0xcb
github.com/sahib/brig/catfs/core.(*Linker).Atomic(0xc001c48140, 0xc0015ed8d8, 0xb, 0xc0015ed958)
        /home/sahib/dev/brig/catfs/core/linker.go:1725 +0x53
github.com/sahib/brig/catfs/vcs.(*Change).Replay(0xc0004b6000, 0xc001c48140, 0x110ec5a, 0xb)
        /home/sahib/dev/brig/catfs/vcs/change.go:249 +0x61
github.com/sahib/brig/catfs/vcs.ApplyPatch(0xc001c48140, 0xc000732030, 0x0, 0x0)
        /home/sahib/dev/brig/catfs/vcs/patch.go:315 +0x147
github.com/sahib/brig/catfs.(*FS).ApplyPatch(0xc001d0a150, 0xc000c3c548, 0x2000, 0x2008, 0x0, 0x0)
        /home/sahib/dev/brig/catfs/fs.go:1879 +0x121
github.com/sahib/brig/server.(*base).doFetch.func1.1(0xc001d0a150, 0xc0024fc0c0, 0x28)
        /home/sahib/dev/brig/server/base.go:490 +0x1da
github.com/sahib/brig/server.(*base).withRemoteFs(0xc001338240, 0xc0024fc0c0, 0x28, 0xc0015edb50, 0xc000f80310, 0x67)
        /home/sahib/dev/brig/server/base.go:360 +0xac
github.com/sahib/brig/server.(*base).doFetch.func1(0xc0007cf400, 0xc001af4100, 0xc0024fc0c0)
        /home/sahib/dev/brig/server/base.go:464 +0x77
github.com/sahib/brig/server.(*base).withNetClient(0xc001338240, 0xc0024fc0c0, 0x28, 0xc0015edc58, 0x0, 0x0)
        /home/sahib/dev/brig/server/base.go:389 +0x15c
github.com/sahib/brig/server.(*base).doFetch(0xc001338240, 0xc0024fc0c0, 0x28, 0x28, 0x29)
        /home/sahib/dev/brig/server/base.go:463 +0x83
github.com/sahib/brig/server.(*base).doSync(0xc001338240, 0xc0024fc0c0, 0x28, 0x1, 0x0, 0x0, 0x28, 0x0, 0x0)
        /home/sahib/dev/brig/server/base.go:497 +0x132
github.com/sahib/brig/server.(*vcsHandler).Sync(0xc001ab8ad0, 0x128cfa0, 0xc001af4040, 0xc0007b8720, 0xc001660330, 0x800000068, 0x1, 0x3c, 0x0, 0xc0016603c0, ...)
        /home/sahib/dev/brig/server/vcs_handler.go:453 +0x14b
github.com/sahib/brig/server/capnp.API_Methods.func26(0x128cfa0, 0xc001af4040, 0xc0007b8720, 0xc001660330, 0x800000068, 0x1, 0x3c, 0x0, 0xc0016603c0, 0x8, ...)
        /home/sahib/dev/brig/server/capnp/local_api.capnp.go:15353 +0x12f
zombiezen.com/go/capnproto2/server.(*server).startCall.func1(0xc001ce60e0, 0xc0007b8720, 0xc0016603c0, 0x8, 0x1, 0xffffffffffffffff, 0x0)
        /home/sahib/go/pkg/mod/zombiezen.com/go/capnproto2@v2.17.0+incompatible/server/server.go:87 +0xd0
created by zombiezen.com/go/capnproto2/server.(*server).startCall
        /home/sahib/go/pkg/mod/zombiezen.com/go/capnproto2@v2.17.0+incompatible/server/server.go:86 +0x1e7

Do you still see this issue with a development binary?

No. (The dev binary doesn't even start the daemon, for some reason.)

Did you check if a similar bug report was already opened?

Yes.

System details:

go version: go version go1.17.1 linux/amd64
uname -s -v -m: Linux #1 SMP Wed Oct 13 20:59:13 EEST 2021 x86_64
IPFS config:
{ "API": { "HTTPHeaders": {} }, "Addresses": { "API": "/ip4/127.0.0.1/tcp/5001", "Announce": [], "Gateway": "/ip4/127.0.0.1/tcp/8080", "NoAnnounce": [], "Swarm": [ "/ip4/0.0.0.0/tcp/4001", "/ip6/::/tcp/4001", "/ip4/0.0.0.0/udp/4001/quic", "/ip6/::/udp/4001/quic" ] }, "AutoNAT": {}, "Bootstrap": [ "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb", "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt", "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ", "/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ", "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN", "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa" ], "DNS": { "Resolvers": {} }, "Datastore": { "BloomFilterSize": 0, "GCPeriod": "1h", "HashOnRead": false, "Spec": { "mounts": [ { "child": { "path": "blocks", "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2", "sync": true, "type": "flatfs" }, "mountpoint": "/blocks", "prefix": "flatfs.datastore", "type": "measure" }, { "child": { "compression": "none", "path": "datastore", "type": "levelds" }, "mountpoint": "/", "prefix": "leveldb.datastore", "type": "measure" } ], "type": "mount" }, "StorageGCWatermark": 90, "StorageMax": "10GB" }, "Discovery": { "MDNS": { "Enabled": true, "Interval": 10 } }, "Experimental": { "AcceleratedDHTClient": false, "FilestoreEnabled": false, "GraphsyncEnabled": false, "Libp2pStreamMounting": true, "P2pHttpProxy": false, "ShardingEnabled": false, "StrategicProviding": false, "UrlstoreEnabled": false }, "Gateway": { "APICommands": [], "HTTPHeaders": { "Access-Control-Allow-Headers": [ "X-Requested-With", "Range", "User-Agent" ], "Access-Control-Allow-Methods": [ "GET" ], "Access-Control-Allow-Origin": [ "*" ] }, "NoDNSLink": false, "NoFetch": false, "PathPrefixes": [], "PublicGateways": null, "RootRedirect": "", "Writable": false }, "Identity": { "PeerID": "12D3KooWDSeyMuCTka1RCvZw9A3YH7MFxR8fvcB4sa1FFbY5FPw4" }, "Ipns": { "RecordLifetime": "", "RepublishPeriod": "", "ResolveCacheSize": 128 }, "Migration": { "DownloadSources": [], "Keep": "" }, "Mounts": { "FuseAllowOther": false, "IPFS": "/ipfs", "IPNS": "/ipns" }, "Peering": { "Peers": null }, "Pinning": { "RemoteServices": {} }, "Plugins": { "Plugins": null }, "Provider": { "Strategy": "" }, "Pubsub": { "DisableSigning": false, "Router": "" }, "Reprovider": { "Interval": "1h", "Strategy": "all" }, "Routing": { "Type": "dht" }, "Swarm": { "AddrFilters": null, "ConnMgr": { "GracePeriod": "60s", "HighWater": 900, "LowWater": 600, "Type": "basic" }, "DisableBandwidthMetrics": false, "DisableNatPortMap": false, "EnableAutoRelay": true, "EnableRelayHop": false, "Transports": { "Multiplexers": {}, "Network": {}, "Security": {} } } }

brig client version: v0.4.1+68f8766 [build: 2019-03-31T00:10:39+01:00]
brig server version: v0.4.1+68f8766+68f8766fd9fe8929e8b3fc6cefca01454f380a5b
IPFS Version: 0.9.1+

sahib commented 3 years ago

Hello @hulkabob,

thanks for giving it a try and taking the time to write a good bug report. Also, that's a fun error message. Sadly, neither I nor @evgmik are investing time into brig at the moment, so I will just keep this issue open until either of us finds some time and motivation to continue here. Chances are that things are already fixed on develop. What issue with the daemon start did you see there?

Best, Chris

hulkabob commented 3 years ago

Hi Chris, pardon the late response. I can file another bug report for the daemon start, but in a nutshell: it acts as if BRIG_PASSWORD were already set to some random value. The daemon doesn't ask for a password on init and fails afterwards. I saw the same password behaviour with stable brig when the password was set via the environment.

Here's the log:

hulkabob@terminus-alpha ~/work/brig $ ~/go/bin/brig --repo ~/.brig_dev_test/ init hulkabob@test/dev
-- Guessed IPFS repository as /var/lib/ipfs/.ipfs
-- The API address of the repo is: /ip4/127.0.0.1/tcp/5001
-- IPFS Daemon does not seem to be running.
-- Will start one for you with the following command:
-- IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs daemon --enable-pubsub-experiment
-- Done waiting.                               
-- Started IPFS as child of this process.
-- Will set some default settings for IPFS.
-- These are required for brig to work smoothly.
-- The IPFS version is »0.9.1«.
  -- Setting config: IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs config --json Experimental.Libp2pStreamMounting true
  -- Setting config: IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs config --json Reprovider.Interval "1h"
  -- Setting config: IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs config --json Swarm.ConnMgr.GracePeriod "60s"
  -- Setting config: IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs config --json Swarm.EnableAutoRelay true
29.10.2021/00:12:04 ⚠ brig-repo/cmd/util.go:235: waiting a bit long for daemon to bootup...
29.10.2021/00:12:24 ⚡ brig-repo/cmd/parser.go:594: Unable to start daemon: Daemon could not be started or took to long
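
For completeness, a quick way to rule out a stale variable lingering in the shell environment (just a guess on my side):

$ printenv BRIG_PASSWORD || echo "not set"   # check whether the variable leaks in
$ unset BRIG_PASSWORD                        # make sure nothing is set
$ ~/go/bin/brig --repo ~/.brig_dev_test/ init hulkabob@test/dev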

It is truly a pity that brig is on the shelf now; it is a really fascinating piece of tech, and so is the idea behind it. And by the way, good paperwork in LaTeX. I hope that you and @evgmik stay safe and that someday a spark of interest lights up again, and brig will live and prosper :smile:

sahib commented 3 years ago

Hi @hulkabob

I can file another bug report for the daemon start, but in a nutshell: it acts as if BRIG_PASSWORD were already set to some random value. The daemon doesn't ask for a password on init and fails afterwards. I saw the same password behaviour with stable brig when the password was set via the environment.

I actually removed all of that password code some time ago (1bc74632811ed3067f16d5831a82d891fa45ca9e). What commit did you build from? (edit: never mind, you wrote that above)

It is truly a pity that brig is on the shelf now; it is a really fascinating piece of tech, and so is the idea behind it. And by the way, good paperwork in LaTeX.

Yes, it is a pity, and I feel bad about it from time to time. But then I remember that I already wrote well over 50k lines of code for that idea and nobody cared enough to develop it further. That's sad, but it would not be healthy to continue alone (@evgmik already helped a lot here). On my end, motivation is not the big problem; it's more the time factor. If there are developers for brig, then there will be development.

evgmik commented 3 years ago

Hi mates, let me echo @sahib. I am quite motivated to continue, but I have almost no time for this project right now. I am looking forward to the long December holidays to do another round of improvements (last year we made a huge push).

@hulkabob please use the 'develop' branch; it has a lot of bugs fixed. But note that the 'init' sequence changed a bit.
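
Building from develop goes roughly like this, assuming the repository root is still the main package (check the README if the build steps changed):

$ git clone https://github.com/sahib/brig && cd brig
$ git checkout develop
$ go install    # drops the binary into ~/go/bin/brig with a default GOPATH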

hulkabob commented 2 years ago

@evgmik, @sahib, pardon the long wait. I had a bunch of issues recently. I built the latest develop and it is not starting for some reason.

masterbob@terminus-alpha ~/work/brig/brig-repo $ brig init masterbob/personal-2
-- Guessed IPFS repository as /var/lib/ipfs/.ipfs
-- The API address of the repo is: /ip4/127.0.0.1/tcp/5001
-- IPFS Daemon does not seem to be running.
-- Will start one for you with the following command:
-- IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs daemon --enable-pubsub-experiment
-- Done waiting.                               
-- Started IPFS as child of this process.
-- Will set some default settings for IPFS.
-- These are required for brig to work smoothly.
-- The IPFS version is »0.12.2«.
  -- Setting config: IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs config --json Experimental.Libp2pStreamMounting true
  -- Setting config: IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs config --json Reprovider.Interval "1h"
  -- Setting config: IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs config --json Swarm.ConnMgr.GracePeriod "60s"
  -- Setting config: IPFS_PATH='/var/lib/ipfs/.ipfs' ipfs config --json Swarm.EnableAutoRelay true
01.06.2022/21:53:40 ⚠ brig-repo/cmd/util.go:235: waiting a bit long for daemon to bootup...
01.06.2022/21:54:00 ⚡ brig-repo/cmd/parser.go:594: Unable to start daemon: Daemon could not be started or took to long

hulkabob commented 2 years ago

And here is additional verification of the brig version:

masterbob@terminus-alpha ~/work/brig/brig-repo $ brig version
01.06.2022/21:59:01 ⚡ brig-repo/cmd/parser.go:594: Daemon not running
masterbob@terminus-alpha ~/work/brig/brig-repo $ brig -v
brig version v0.5.3-develop+6b7eccf [buildtime: 2022-06-01T20:45:09+03:00] (client version)

evgmik commented 2 years ago

Hi. If I remember correctly, this happens when your password manager has cleared the passphrase. Usually you have about 15 minutes, and then you need to retype it. But often there are no visible warnings (since everything is done outside of the user's view). If you are using a password manager, request any password via the usual means; that will grant you (or brig) a time window to communicate with the password manager.
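
For example, with a gpg-agent backed manager such as pass (illustrative only; the entry name and the exact commands depend on your setup):

$ pass show some/entry >/dev/null   # retype the passphrase when prompted
$ brig daemon launch                # retry while the agent's cache is still warm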