visit1985 closed this issue 11 years ago
@xtfxme please update the AUR package.
This contains fixes for the problems reported by srf21c.
when you want it updated, can you send me a msg on here with the rev to push? that way if you ask me a couple times in rapid succession it'll be simpler to know which is the latest, and it will also help me not forget :)
i don't think the PKGBUILD pkg(ver|rel) was bumped... i'll let you go ahead and commit that as it seems like it may warrant a minor ver bump (in future i may not check this though, so you'll want to handle it as needed)
i should probably add a -git variant of this package too... for my other -git packages, i usually do a manual push when i make significant updates, as this updates that last-modified timestamp, and IIRC triggers some AUR helpers to update.
just drop me a PM once you bump the PKGBUILD and i'll push the AUR package (later tonight i can comment on your other tickets)
PKGBUILD is ok - AUR is on 0.4.0-2 and Git HEAD is at 0.4.1-1. You can push it, or go to 0.5.0 if you like.
If I ask you to push, I always refer to the HEAD revision of master.
Sorry for the mess of commits. GitHub adds all commits pushed to my fork's master branch to the latest open pull request. Now I know and will work on new features in a different branch next time...
I'm not sure how -git packages handle their pkgver/rel. How do AUR users recognize changes in -git packages?
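As a hedged illustration (not this package's actual code): -git PKGBUILDs commonly regenerate pkgver at build time from `git describe`, so a changed pkgver is what signals an update to users, while pkgrel usually stays at 1. The helper name and sed expression below are hypothetical, showing only the typical version transformation:

```shell
# Illustrative only: how a -git PKGBUILD's pkgver() typically turns
# `git describe --long --tags` output into a monotonically increasing pkgver.
describe_to_pkgver() {
    # e.g. "v0.4.1-13-g2916680" -> "0.4.1.r13.2916680"
    # (strip leading "v", turn the commit count into ".rN",
    #  append the abbreviated hash after a dot)
    echo "$1" | sed 's/^v//;s/-/.r/;s/-g/./'
}

describe_to_pkgver "v0.4.1-13-g2916680"   # -> 0.4.1.r13.2916680
```

Because the `.rN` commit count only ever grows, AUR helpers that compare versions can detect new commits without any manual pkgrel bump.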
you've got it a little mixed up; Github assigns one branch to each pull request... if you've got no other branches, it has no choice but to use master. what you want to do is create a new branch first, then Github will display an option to make that into a pull req... this is nice because you can then update/rebase/whatever everything individually, as needed (don't worry about this one).
tbh i don't have much to add here -- i think your analysis is pretty well spot on. this may eventually be baked into btrfs, but if you wanted to implement a "hot spare" feature, i think that would be OK to enable by default. in this scenario, the array would have a backup disk ready in the event a disk fails; once the array is repaired by adding the spare and removing the failed drive, it should be fully functional again.
Actually, raids provide redundancy in order to keep systems running even if some parts fail.
Of course that also includes parts failing while powered down. So the system should definitely come up degraded by default.
With old software raid, there was an issue with resyncs being very slow. Thus, a configurable timeout was and may still be appropriate (even setting it to -1) to allow avoiding unnecessary resyncs of large disks, but the raid implementation really shouldn't introduce any artificial system breakages/downtime by default.
@testbird, I'm 100% with you. But I read some posts online where people complain about Btrfs self-healing features destroying data in degraded mode. That's why we leave it disabled by default. So it's the user's choice to activate it and take that risk.
I see. A temporary measure, until btrfs is stable, that kindly allows the user to do a backup before proceeding, to be on the safe side. That is all right of course. And thanks! Hinting at the current degradation risks in a message is also good.
@testbird, I can't find the post I read 3 months ago. But there is a recent discussion in this thread: http://www.spinics.net/lists/linux-btrfs/msg30391.html
Though there are no explicit issues mentioned (maybe they are fixed already), the Btrfs devs are aware of our problem and will come up with a solution before Btrfs goes stable.
So our solution is temporary.
I pulled this in for discussion at 2916680cb700a3cb22cc8b89827e9918f0add035.
Recently I gave a lecture on the btrfs filesystem at our local LUG. One concern that came up in our discussion was not being able to auto-mount degraded volumes. So if you have a server out in a data-center using btrfs as root partition, you would not be able to boot after a power failure that causes a disk to fail. So I wanted to come up with a solution for that.
I tried to implement this using `btrfs filesystem show`, but afterwards I thought it would be better to not rely on the output of btrfs commands, because they are still in development. Also, there is no additional benefit to using this method, besides preventing the first unsuccessful mount and its error output.
In my opinion we need to get the explicit approval from a user to mount degraded volumes automatically, because it may do additional harm to the filesystem. That's why I implemented it disabled by default.
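A minimal sketch of that opt-in logic, with all names (`decide_mount`, the `allow_degraded` flag) being illustrative rather than taken from the actual implementation: the normal mount is tried first, and `mount -o degraded` is only attempted when the user has explicitly enabled it.

```shell
# Hypothetical decision logic for degraded mounts; the function and
# argument names are made up for illustration, not from the real code.
decide_mount() {
    normal_ok="$1"        # "yes" if the plain mount succeeded
    allow_degraded="$2"   # "1" only if the user explicitly opted in
    if [ "$normal_ok" = "yes" ]; then
        echo "mounted"
    elif [ "$allow_degraded" = "1" ]; then
        echo "retry-degraded"   # i.e. run: mount -o degraded <dev> <mnt>
    else
        echo "fail"             # refuse: degraded mounts may harm the fs
    fi
}

decide_mount no 0   # -> fail (the safe default)
decide_mount no 1   # -> retry-degraded
```

The point of the structure is that the failing path is the default: without the explicit opt-in, a degraded volume stays unmounted and the user sees the error.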