Closed by sync-by-unito[bot] 2 years ago
➤ Ganna Kulikova commented:
To test out current hardware requirements
➤ Ganna Kulikova commented:
Ivan Popovych, please document the options and proposals on Confluence.
➤ Ganna Kulikova commented:
Go forward with the scratch space solution and allocate disk storage to it.
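For illustration only, a minimal sketch of what "allocate disk storage" for a scratch space could look like, assuming a Linux host; the `SCRATCH_DIR` path and 50 GiB size are hypothetical placeholders, not values decided in this ticket:

```python
import os

# Hypothetical values; the real path and size were decided elsewhere, not here.
SCRATCH_DIR = "/var/lib/skale/scratch"   # assumed location on the attached storage
SCRATCH_BYTES = 50 * 1024**3             # assumed 50 GiB reservation

def reserve_scratch_space(path: str = SCRATCH_DIR, size: int = SCRATCH_BYTES) -> str:
    """Pre-allocate a fixed-size file so snapshot downloads cannot be starved of disk."""
    os.makedirs(path, exist_ok=True)
    stats = os.statvfs(path)
    free = stats.f_bavail * stats.f_frsize
    if free < size:
        raise RuntimeError(f"only {free} bytes free, need {size}")
    reservation = os.path.join(path, "reservation.bin")
    fd = os.open(reservation, os.O_CREAT | os.O_WRONLY, 0o600)
    try:
        os.posix_fallocate(fd, 0, size)  # reserve blocks up front, not a sparse file
    finally:
        os.close(fd)
    return reservation
```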
➤ Automation for Jira commented:
Corresponding Pull Request https://github.com/skalenetwork/skaled/pull/644
➤ Alex Danko commented:
Waiting for skale_admin with shared space to test this feature. Dmitry Tkachuk, please ping when it is deployed.
➤ Ganna Kulikova commented:
Unblocking
➤ Automation for Jira commented:
Corresponding Pull Request https://github.com/skalenetwork/skaled/pull/672
➤ Dima Litvinov commented:
Added 4 logs after implementation: one node tries four times to download from the other three nodes (seems to work correctly).
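For reference, a hedged Python sketch of the retry behavior described above: one node cycles through the other peers up to four times and only gives up, with the fatal message quoted in a later comment, once every attempt fails. The endpoint URL and function name are hypothetical; skaled's real implementation is C++:

```python
import logging
import urllib.request

log = logging.getLogger("snapshot")

def fetch_snapshot(peers: list[str], attempts: int = 4, dest: str = "snapshot.tar") -> None:
    """Try each peer up to `attempts` rounds before declaring total failure."""
    for attempt in range(1, attempts + 1):
        for peer in peers:
            url = f"http://{peer}/snapshot"  # hypothetical endpoint
            try:
                urllib.request.urlretrieve(url, dest)
                log.info("attempt %d: got snapshot from %s", attempt, peer)
                return
            except OSError as err:  # URLError is a subclass of OSError
                log.warning("attempt %d: %s failed (%s), trying next peer",
                            attempt, peer, err)
    # Every peer failed on every attempt -- the condition behind the fatal log line.
    raise RuntimeError("CRITICAL FATAL: tried to download snapshot from everywhere!")
```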
➤ Oleksandr Sydorenko commented:
Still reproducible on schain:3.7.4-develop.0
Steps to reproduce:
Actual state: "CRITICAL FATAL: tried to download snapshot from everywhere!" appears when the node tries to download a snapshot.
[^snapshot_occeupied_melodic-yildun.log] [^snapshot_occupied_tinkling-zibal.log]
➤ Ganna Kulikova commented:
Closing per discussion with Stan Kladko and Dima Litvinov
Issues are covered by other Jira tickets (see related)
The following approach is suggested:
a. Save data to a temporary space (reserved space inside the attached storage) without limiting the number of schains currently downloading snapshots (requires modifications in both the skale node components and skaled).
b. Send and receive snapshots using streams, without saving snapshots to any non-btrfs file/directory (requires only skaled changes); see the sketch below.
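A minimal sketch of the receiving side of approach (b), assuming the peer exposes an HTTP endpoint serving a `btrfs send` stream (the endpoint URL is an assumption): the response body is piped directly into `btrfs receive`, so the snapshot never touches a non-btrfs temporary file.

```python
import shutil
import subprocess
import urllib.request

def receive_snapshot_streamed(peer_url: str, subvolume_dir: str) -> None:
    """Pipe the HTTP body straight into `btrfs receive`, avoiding any
    intermediate copy outside the btrfs filesystem (approach b)."""
    # `btrfs receive` reconstructs the subvolume from a `btrfs send` stream.
    proc = subprocess.Popen(
        ["btrfs", "receive", subvolume_dir],
        stdin=subprocess.PIPE,
    )
    with urllib.request.urlopen(peer_url) as response:  # hypothetical endpoint
        shutil.copyfileobj(response, proc.stdin, length=1024 * 1024)  # 1 MiB chunks
    proc.stdin.close()
    if proc.wait() != 0:
        raise RuntimeError("btrfs receive failed")
```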
Attachments: 1_ok.txt | 2_ok.txt | 3_ok.txt | 4_fail.txt | snapshot_occeupied_melodic-yildun.log | snapshot_occupied_tinkling-zibal.log