JustinTimperio / pacback

Advanced Version Control for Arch Linux
MIT License

Problems with big rollbacks #31

Open mirh opened 3 years ago

mirh commented 3 years ago

So, funnily enough it seems like I'm always gravitating around your program after big releases.

JustinTimperio commented 3 years ago

Yeah, this is something I have run into more than once. This mainly has to do with the fact that pacback can't detect when pacman runs into an error; it just sends the command and hopes for the best.

This is actually next on my feature path but is going to be an absolute bitch to add.

For one, Arch just hates doing big/long-distance downgrades. Like, I'm not sure I have ever gotten pacback to perform a downgrade farther back than 6 months, just due to how dependencies work. Pacvis is one of my favorite projects and it really shows how difficult circular dependencies, replaced packages, and new repos are to deal with. The fact that pacman can't even deal with these issues is a pretty good indication of how hard they are to solve in a programmatic way.

Second, a lot of times it's basically impossible to get certain downgrades to work. For instance, when you bumped from pacback 1.6.1 to 2.0.0, a shell script ran that upgraded the metadata files that comprise each restore point. If you then downgraded to pacback 1.6.1, none of the restore points you created would actually even work, because I never wrote an alpha-downgrade shell script. Not only that, I would have had to write that downgrade script when I wrote 1.6.1, somehow foreseeing all the changes I would make during 2.0.0. Most large upgrades, and even ones that seem minor (like kernel 5.7 -> 5.8), end up being massive and virtually 'un-downgradeable'. Even though from a versioning perspective 5.7 to 5.8 is a minor bump, it ended up being the single largest release in Linux history. In these cases pacback can really only provide a best-effort downgrade.
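
To make that restore-point failure mode concrete, here is a minimal sketch of the general pattern (all names are illustrative, this is not pacback's actual code): each version bump registers an upgrade migration for the metadata, and the matching downgrade migration can only exist if someone wrote it against changes that hadn't happened yet.

```python
# Illustrative only, not pacback's actual code: metadata migrations
# keyed by version bump. The upgrade half ships with the NEW release;
# the downgrade half would have to be written in the OLD release,
# before the changes it reverses even exist.
MIGRATIONS = {
    ("1.6.1", "2.0.0"): {
        "upgrade": lambda meta: {**meta, "format": 2},  # shipped with 2.0.0
        "downgrade": None,  # never written for 1.6.1, so old restore points break
    },
}

def migrate_metadata(meta, from_ver, to_ver):
    # Try the forward direction first, then the reverse direction.
    step = MIGRATIONS.get((from_ver, to_ver))
    if step and step["upgrade"]:
        return step["upgrade"](meta)
    step = MIGRATIONS.get((to_ver, from_ver))
    if step and step["downgrade"]:
        return step["downgrade"](meta)
    raise RuntimeError(f"no migration path from {from_ver} to {to_ver}")
```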

Ultimately, pacback is best when you are doing small/short-time-distance downgrades. While I think I can get pacback to detect old packages and conflicting dependencies and fix them on the fly, unfortunately it will always be relegated to best-effort downgrades.

I really do have to recommend timeshift for big downgrades, since it is able to store the whole state of your disk, avoiding all the issues that pacback inherently runs into.

mirh commented 3 years ago

Ehrm.. I'm not sure what the linux package has to do with anything; anyway, pacback version differences are not a problem here. I mean, yes, they may be, but I was just talking about rolling back from <…> to <2018/12/31> (which should not be that crazy with those little adjustments).

Anyway, if there's no way for pacback to know which packages will actually be installed/handled, I can't see a solution other than a "lookup table" across dates.

JustinTimperio commented 3 years ago

Sorry, I should have been more clear: the Linux kernel doesn't make a difference in this context; it just highlights a lot of the challenges of downgrading a system. Same with pacback: I'm just trying to illustrate that developers put a lot of work into making things upgrade, but not much, if any, into getting them to downgrade.

> Anyway, if there's no way for pacback to know which packages will actually be installed/handled, I can't see a solution other than a "lookup table" across dates.

So this is just a current limitation in the way I've written it. Right now it's just an `os.system('pacman -U somepkgs')` call. The output is super easy to capture, but the real work is processing the output and then passing it to an error handler in the event of an issue. I basically need to figure out all the most common errors (like conflicts with new dependencies) and then solve for them on the fly.
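
A minimal sketch of what that capture-and-handle flow could look like, assuming `subprocess` in place of `os.system()`, with `handle_pacman_error` as a purely hypothetical stand-in for the real handler:

```python
import subprocess

def run_pacman_downgrade(pkg_paths):
    # Run `pacman -U` on a list of package files, capturing stdout/stderr
    # instead of firing and forgetting via os.system().
    result = subprocess.run(
        ["pacman", "-U", "--noconfirm", *pkg_paths],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        handle_pacman_error(result.stdout + result.stderr)
    return result.returncode == 0

def handle_pacman_error(output):
    # Hypothetical handler: match the most common failure modes and
    # either fix them on the fly or surface them to the user.
    if "breaks dependency" in output or "conflicting files" in output:
        print("dependency/file conflict detected:\n" + output)
    else:
        print("pacman failed:\n" + output)
```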

Also not quite sure what you mean by a lookup table.

mirh commented 3 years ago

A list of dates that, when crossed, will cause "adjustments" to be made. For example, when you go from a date after 2019-02-12 to one before, you need to specifically uninstall systemd-libs and install libsystemd.
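
A rough sketch of what such a table could look like, with the 2019-02-12 systemd-libs/libsystemd switch as the only (illustrative) entry:

```python
from datetime import date

# Hypothetical lookup table: dates that, when crossed during a rollback,
# require extra package adjustments beyond plain downgrades.
ADJUSTMENTS = {
    date(2019, 2, 12): {
        "remove": ["systemd-libs"],
        "install": ["libsystemd"],
    },
}

def adjustments_for_rollback(from_date, to_date):
    # Collect the adjustments for every boundary crossed going backwards
    # from `from_date` to `to_date`.
    steps = []
    for boundary, change in sorted(ADJUSTMENTS.items(), reverse=True):
        if to_date < boundary <= from_date:
            steps.append(change)
    return steps

# Rolling back from 2021 to the end of 2018 crosses the 2019-02-12
# boundary, so the systemd-libs -> libsystemd swap gets scheduled.
print(adjustments_for_rollback(date(2021, 1, 1), date(2018, 12, 31)))
```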

> The output is super easy to capture, but the real work is processing the output and then passing it to an error handler in the event of an issue.

Relying on the fact that the program has to fail in the first place doesn't really sound like the neatest of designs.