lapineige opened 11 months ago
!testme
@oiseauroch do you know where I can find the files in the filesystem, to check whether adding a file actually worked?
You mean if you added some file with S3? The data folder is specified in the file garage.toml, which you can find in /var/www/garage/. But S3 is an object storage protocol: your file will be split into chunks, so you won't be able to find your data as a plain file.
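For instance, here is a rough way to check (a sketch only: the bucket name and the exact garage invocation are assumptions, the data path is the one used by this package):

garage bucket list                 # buckets known to the cluster
garage bucket info my-bucket       # object count and total size (bucket name is an example)
# The data directory should only contain chunk files, never your original file names:
ls -R /home/yunohost.app/garage/data/data | head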
Yes, and it points to /home/yunohost.app/garage/data/data, but this is empty. I thought I had misunderstood something, but if you confirm that's the folder… then nothing is being added :thinking:
From CI:
Your Garage node have been installed. You can now connect to other nodes with the following identifiers :
rpc_secret: 5278684b47304d494c42797665463436566b7538687a6a6b756e46614357656e
bootstrap_peers: __SELF_BOOTSTRAP_PEERS__
Current garage layout:
==== CURRENT CLUSTER LAYOUT ====
No nodes currently have a role in the cluster.
See `garage status` to view available nodes.
Current cluster layout version: 0
==== STAGED ROLE CHANGES ====
ID Tags Zone Capacity
3f7761b4e2cc889e sub.domain.tld sub.domain.tld 10 B
Error while trying to compute the assignment: The storage capacity of he cluster is to small. It is impossible to store partitions of size 1.
This new layout cannot yet be applied.
You can also revert all proposed changes with: garage layout revert --version 1
Which means we should give this node at least 1 GB of space?
ping @alexAubin
!testme
This is a changed behavior: capacity now has a unit. We should keep the values but adapt to the new behavior. I made a commit to fix this: https://github.com/YunoHost-Apps/garage_ynh/pull/13/commits/ee4f09ee549861cf0471e6731194fcd123d8bb2f
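For reference, a hedged sketch of the difference (node ID and zone are taken from the CI log above; exact flags may vary between Garage versions):

# Garage <= 0.8: capacity was a unitless relative weight
garage layout assign -z sub.domain.tld -c 10 3f7761b4e2cc889e
# Garage 0.9: capacity is a real size with a unit, so a bare "10" is parsed as 10 bytes,
# which explains the "10 B" and "too small" messages in the CI output above
garage layout assign -z sub.domain.tld -c 10G 3f7761b4e2cc889e
garage layout apply --version 1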
Great, perfect! :)
This little experiment helps us discover new issues, nice :D
!testme
The storage capacity of he cluster is to small
Oh, typo spotted :D
Still not working, it stays at 1 B 🤔 Uh wait, the fork is not synced properly 🤔
Closing because of this bug (?).
Reopening because of https://github.com/YunoHost-Apps/garage_ynh/pull/20#issuecomment-1862688241
Copy-pasting content from it:
It works with 1 GB, but not with 10 GB (#13).
32406 INFO WARNING - ./install: line 153: garage_command: unbound variable
32407 INFO DEBUG - + self_bootstrap_peers=
ping @alexAubin @oiseauroch: the parameter is still not set 🤔
Concerning $garage_command, it should be renamed to $garage. Concerning self_bootstrap_peers, I'm investigating.
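A hedged sketch of what that fix could look like in the install script (variable names are the ones from the log above; the real script and the exact garage invocation may differ):

# With bash's nounset (set -u), referencing the old, undefined $garage_command aborts
# the script at line 153 and leaves self_bootstrap_peers empty; the call should use $garage:
garage="/var/www/garage/garage -c /var/www/garage/garage.toml"   # illustrative wrapper
self_bootstrap_peers="$($garage node id -q)"                     # this node's identifier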
!testme
So… we're making progress!
The SELF_BOOTSTRAP_PEERS thing is still there. Reinstall after removal doesn't work (why? :thinking:). But install works!
!testme
It seems to be a removal issue.
255 INFO Removing garage...
616 WARNING ======== PANIC (internal Garage error) ========
618 WARNING panicked at 'failed printing to stdout: Broken pipe (os error 32)', library/std/src/io/stdio.rs:1008:9
619 WARNING Panics are internal errors that Garage is unable to handle on its own.
620 WARNING They can be caused by bugs in Garage's code, or by corrupted data in
620 WARNING the node's storage. If you feel that this error is likely to be a bug
620 WARNING in Garage, please report it on our issue tracker a the following address:
621 WARNING https://git.deuxfleurs.fr/Deuxfleurs/garage/issues
622 WARNING Please include the last log messages and the the full backtrace below in
622 WARNING your bug report, as well as any relevant information on the context in
622 WARNING which Garage was running when this error occurred.
623 WARNING GARAGE VERSION: v0.9.0 [features: k2v, sled, lmdb, sqlite, consul-discovery, kubernetes-discovery, metrics, telemetry-otlp, bundled-libs]
624 WARNING BACKTRACE:
625 WARNING 0: garage::main::{{closure}}::{{closure}}
626 WARNING 1: std::panicking::rust_panic_with_hook
626 WARNING 2: std::panicking::begin_panic_handler::{{closure}}
627 WARNING 3: std::sys_common::backtrace::__rust_end_short_backtrace
627 WARNING 4: rust_begin_unwind
627 WARNING 5: core::panicking::panic_fmt
627 WARNING 6: std::io::stdio::_print
628 WARNING 7: garage::cli::layout::cli_layout_command_dispatch::{{closure}}
628 WARNING 8: garage::cli_command::{{closure}}
628 WARNING 9: tokio::runtime::park::CachedParkThread::block_on
629 WARNING 10: tokio::runtime::context::runtime::enter_runtime
629 WARNING 11: tokio::runtime::runtime::Runtime::block_on
629 WARNING 12: garage::main
630 WARNING 13: std::sys_common::backtrace::__rust_begin_short_backtrace
630 WARNING 14: std::rt::lang_start::{{closure}}
630 WARNING 15: std::rt::lang_start_internal
630 WARNING 16: main
657 WARNING Unable to apply layout. No enough nodes
Is there an issue with the order of the removal operations?
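A hedged guess at the mechanism (the actual remove script may differ): the backtrace goes through cli_layout_command_dispatch and _print, so garage's stdout is probably piped into something that exits early, which triggers the broken-pipe panic; and removing the only node before applying the layout would also explain the final "Unable to apply layout" warning on a single-node install.

# Something along these lines in the remove script would reproduce both symptoms:
node_id="$($garage layout show | grep -o '[0-9a-f]\{16\}' | head -n1)"   # head exits early, garage gets EPIPE
$garage layout remove "$node_id"
$garage layout apply --version 2   # fails: no node left to hold the partitions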
!testme
I'm just triggering the CI for the no-replication (1 node) scenario, to see the result. This might be left open if that's a useful test scenario.
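For context, a sketch of what the 1-node case implies on the config side (assuming the garage.toml path mentioned earlier; replication_mode and data_dir are real Garage options, the values shown are only what one would expect here):

grep -E 'replication_mode|data_dir' /var/www/garage/garage.toml
#   replication_mode = "none"   # or "1": a single copy of each partition, so one node is enough
#   data_dir = "/home/yunohost.app/garage/data/data"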