celestiaorg / celestia-node

Celestia Data Availability Nodes
Apache License 2.0

How to modify the storage path of celestia node? #1510

Closed LusWar closed 1 year ago

LusWar commented 1 year ago

When the Celestia node is initialized, it sets up some storage directories. For server security, I want to change these to directories that I specify, and I'd like to know which files and code should be modified so that the node runs without errors.

renaynay commented 1 year ago

There is a flag called --node.store available in the binary. Please let me know if this resolves this issue @LusWar

LusWar commented 1 year ago

This flag only solves the directory problem when the `celestia init` command is executed. When the `celestia start` command is executed, the keys directory shown in the log is still the original directory @renaynay

renaynay commented 1 year ago

Do you pass --node.store flag on both commands @LusWar ?

renaynay commented 1 year ago

@LusWar in order to use a node store that is not the default, you need to pass the `--node.store` flag with the path to the custom store on both `init` and `start` -- please let me know if this fixes the issue for you, as I cannot reproduce it myself.
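To make the workflow above concrete, here is a minimal sketch of the two invocations, assuming a full node on the mocha network and `/data/blockdata/tia/store` as the custom store path (taken from the logs in this thread; substitute your own path and node type):

```shell
# Initialize the node store at the custom location.
# This writes config.toml, the keys directory, etc. under the given path.
./celestia full init --node.store /data/blockdata/tia/store

# Start the node, pointing at the SAME custom store path.
# Omitting --node.store here makes the node fall back to the default
# store location, which is the behavior reported in this issue.
./celestia full start --node.store /data/blockdata/tia/store
```

The key point is that the flag is per-invocation, not persisted: every command that touches the store (`init`, `start`, and others) must receive the same `--node.store` value.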

LusWar commented 1 year ago

```
2022-12-22T09:38:24.772+0800 INFO node nodebuilder/init.go:19 Initializing Full Node Store over '/data/blockdata/tia/store'
2022-12-22T09:38:24.773+0800 INFO node nodebuilder/init.go:50 Saving config {"path": "/data/blockdata/tia/store/config.toml"}
2022-12-22T09:38:24.773+0800 INFO node nodebuilder/init.go:51 Node Store initialized
java_app@ip-172-30-147-243:/data/blockdata/tia/bin$ ./celestia full start --core.ip 172.30.147.243:9985 --node.store /data/blockdata/tia/store
2022-12-22T09:38:30.448+0800 INFO badger v2@v2.2007.4/levels.go:183 All 0 tables opened in 0s
2022/12/22 09:38:30 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2022-12-22T09:38:30.464+0800 INFO module/state state/keyring.go:46 NO KEY FOUND IN STORE, GENERATING NEW KEY... {"path": "/data/blockdata/tia/store/keys"}
2022-12-22T09:38:30.475+0800 INFO module/state state/keyring.go:52 NEW KEY GENERATED...

NAME: my_celes_key
ADDRESS: celestia1tl0rhr7agkfqcpgcr77jvz2a58p4ylv9pc9pra
MNEMONIC (save this somewhere safe!!!): wife ahead regret minute genuine question chimney service enact soul chuckle author cost chest donkey hollow dragon person liar stuff robust taste jungle stand

2022-12-22T09:38:30.480+0800 INFO module/state state/keyring.go:69 constructed keyring signer {"backend": "test", "path": "/data/blockdata/tia/store/keys", "key name": "my_celes_key", "chain-id": "mocha"}
2022-12-22T09:38:30.480+0800 INFO module/header header/config.go:54 No trusted peers in config, initializing with default bootstrappers as trusted peers
2022-12-22T09:38:30.947+0800 ERROR module/header header/constructors.go:101 initializing header store failed: failed to dial 12D3KooWKvPXtV1yaQ6e3BRNUHa5Phh8daBwBi3KkGaSSkUPys6D: * [/ip4/35.234.94.146/tcp/2121] failed to negotiate security protocol: read tcp4 172.30.147.243:2121->35.234.94.146:2121: read: connection reset by peer
2022-12-22T09:38:30.947+0800 WARN module/header header/module.go:72 Syncer running on uninitialized Store - headers won't be synced
2022-12-22T09:38:30.947+0800 INFO header/p2p p2p/server.go:59 server: listening for inbound header requests
2022-12-22T09:38:30.947+0800 WARN das das/daser.go:88 checkpoint not found, initializing with height 1
2022-12-22T09:38:30.947+0800 INFO das das/daser.go:101 starting DASer from checkpoint: SampleFrom: 1, NetworkHead: 1
2022-12-22T09:38:30.947+0800 INFO rpc rpc/server.go:81 server started {"listening on": "0.0.0.26658"}
2022-12-22T09:38:30.947+0800 INFO node nodebuilder/node.go:99
```

I ran the node and got the above log output, including an error @renaynay

renaynay commented 1 year ago

@LusWar the above error is not related to the storage path of celestia-node. Please open a new issue if you still experience what is documented above.