libretro / RetroArch

Cross-platform, sophisticated frontend for the libretro API. Licensed GPLv3.
http://www.libretro.com
GNU General Public License v3.0

[Proposal] Make SRAM writes more opportunistic #16492

Closed luluco250 closed 4 months ago

luluco250 commented 4 months ago

Description

This is less a bug/feature request and more a proposal for a broader improvement to the save system in RetroArch. Currently RetroArch, especially on Android, seems to not write SRAM unless I close content or close RetroArch cleanly.

For example, forgetting to do so and just swiping RetroArch away from the app switcher will kill progress in games that have, in theory, already been saved.

I've tried using the "save every X seconds" option, set to 10 seconds, but that doesn't seem to work on my system: I've lost progress in SMB2 (USA) on bsnes-hd-beta even though I had saved in-game longer than that before exiting.

Additionally, it seems to me that writing to disk every N seconds/minutes is redundant for any game that doesn't actually abuse SRAM, especially games for systems like the PS1 where save actions are explicit. In those cases there's no real reason not to write to disk exactly when the game would save to the memory card.

Of course this may not be to everyone's personal preference (especially with how save states interact with SRAM), so this would be better as an option.

Here are a few cases I can think of where RetroArch could be more opportunistic about writing to disk:

  1. The application lost focus; we are already in the background, so why not take the chance to save?
  2. Non-clean exits; surely there must be ways to cleanly write and close file handles even on Android when the app is swiped away?
  3. The screen is locked or the system is suspended, if these are events we can catch somehow.
  4. The menu was opened (this would not make sense if pausing with the menu open is disabled, and could be undesirable when using save states, but otherwise I see it as another pause opportunity to save).
  5. Every time a core writes to SRAM, we apply a throttle-and-debounce algorithm: if it tries to write again very soon after, we only actually write to disk at a safer interval.
    • For example, suppose an NES game saves on every frame. We could throttle the disk writes to happen only every 10 seconds.
    • One side effect could be perceived latency to save, but that could be alleviated with a debounce algorithm that has a shorter interval.

While option 5 sounds complicated, it seems like the ideal way to let both on-demand saves and SRAM abusers be written to disk reliably without destroying the disk.
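As a rough illustration of the throttle in item 5, a frontend-side sketch might look like the following. All names and fields here are hypothetical, not actual RetroArch code; timestamps are assumed to come from the frontend's monotonic clock:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical throttle: allow a disk flush only if at least
 * min_interval_us microseconds have passed since the last one. */
typedef struct
{
   int64_t last_flush_us;   /* timestamp of the previous flush */
   int64_t min_interval_us; /* e.g. 10 seconds = 10000000 */
   bool    dirty;           /* core wrote to SRAM since last flush */
} sram_throttle_t;

/* Called whenever the core touches SRAM. */
static void sram_throttle_mark_dirty(sram_throttle_t *t)
{
   t->dirty = true;
}

/* Called once per frame; returns true when a flush should happen now. */
static bool sram_throttle_should_flush(sram_throttle_t *t, int64_t now_us)
{
   if (!t->dirty)
      return false;
   if (now_us - t->last_flush_us < t->min_interval_us)
      return false;
   t->dirty         = false;
   t->last_flush_us = now_us;
   return true;
}
```

Even if a game marks SRAM dirty every frame, disk I/O is then capped at one write per interval.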

I would be willing to write a PR for this, but I'd like to hear opinions first, to be sure my work can be accepted and is aligned with the developers and community.

sonninnos commented 4 months ago

The core needs to use the SRAM API for the save interval option to work, and that core does not do that, so everything will be futile. And the feature already makes sure it won't write data that is already saved.

luluco250 commented 4 months ago

@sonninnos thanks for the quick reply!

Good to know it's smart to not write unchanged data. Is there some kind of hint cores can give about whether SRAM is safe to write on demand?

For example, Dolphin or DuckStation always write to disk on save, except on RetroArch. Would it be unreasonable to have that behavior in RetroArch as well?

It's very convenient to have access to those emulators on platforms where only RetroArch can be used, but the loss of progress is painful.

sonninnos commented 4 months ago

It is completely up to the core if it won't use the common API.

luluco250 commented 4 months ago

I get that, so what you're saying is cores get direct access to disk I/O if they want, correct?

sonninnos commented 4 months ago

Frontend can't really prevent it, so of course.

luluco250 commented 4 months ago

So if my goal is to simply not lose progress when I save my games, what could I look at to try improving the current situation?

I understand in bsnes-hd-beta's case the core would need work done on its own, but it makes me less confident about other cores.

sonninnos commented 4 months ago

Fix the core to use the SRAM API, or force disk writes immediately and not on quit. Frontend does already what it can.

luluco250 commented 4 months ago

If a core uses the SRAM API do they have the option to write immediately?

sonninnos commented 4 months ago

Using the API means obeying the frontend option.

luluco250 commented 4 months ago

And is there a frontend option to write immediately, possibly as an override for some cores?

sonninnos commented 4 months ago

The minimum 1 second is pretty immediate enough, isn't it.

LibretroAdmin commented 4 months ago

Currently RetroArch, specially on Android, seems to not write SRAM unless I close content/close RetroArch cleanly.

This is done by design. And if you don't want it to behave like that, just use the automatic SRAM interval which will do what you want.

Constantly writing to disk incurs file I/O overhead and is very bad. It's also bad for frame timing and general performance, as well as SSD wear and tear.

I fail to see the issue here.

luluco250 commented 4 months ago

Well sure, I could set it to a 1 second interval for some cores, but it still means you can't be confident to close the app as soon as the game says it saved.

I understand it's an emulator, not original hardware, so one can't expect 1:1 behavior, especially when dealing with multiple cores, but as a user and programmer I can't help but feel the current situation is counter-intuitive.

It's not a problem I've ever had on desktop emulators, but something that consistently happens with RetroArch.

And yes, it's not really the frontend's fault that some cores may not save when you think they should or that they don't use the API, but it does seem like a pattern across libretro cores.

Constantly writing to disk incurs file I/O overhead and is very bad. It's also bad for frame timing and general performance, as well as SSD wear and tear.

But this is not what happens with emulators that deal with hardware that abuses SRAM.

It seems like this makes saving worse for all cores even though it's only a problem with certain cores and games.

And like I said, my proposal would not be to simply open the floodgates and allow SRAM write abuse, but rather to throttle it.

LibretroAdmin commented 4 months ago

And yes, it's not really the frontend's fault that some cores may not save when you think they should or that they don't use the API, but it does seem like a pattern across libretro cores.

It's not our problem or fault when some emulator decides on some kind of internal, illogical API that we then have to jump through hoops to integrate well with our own.

Previous bsnes versions did not have any of these issues. If later decisions were made to overcomplicate this or make that harder, then that is on them, not us.

Not having it write to disk constantly is a good thing, not a bad thing. Encouraging bad design by having it write to disk constantly is personally not something I'm interested in doing in cores. If other cores want to do that then that is their prerogative, but honestly it goes against recommended practices.

luluco250 commented 4 months ago

Not having it write to disk constantly is a good thing, not a bad thing. Encouraging bad design by having it write to disk constantly is personally not something I'm interested in doing in cores. If other cores want to do that then that is their prerogative, but honestly it goes against recommended practices.

In what situation would it be problematic to write to disk when a game has an option to "save game" explicitly?

LibretroAdmin commented 4 months ago

It seems like making saving worse for all cores even though it's only a problem with certain cores and games.

This makes no sense. There is literally 0 issue for the vast majority of ppl. I have never had an issue with saving personally. It seems like one of these things that ppl love blowing out of proportion or creating some kind of meme issue that the vast majority of ppl simply don't have.

How is it making saving 'worse'? If you're this pedantic about your saves then just use a savestate, it does exactly what you want.

luluco250 commented 4 months ago

This makes no sense. There is literally 0 issue for the vast majority of ppl. I have never had an issue with saving personally. It seems like one of these things that ppl love blowing out of proportion or creating some kind of meme issue that the vast majority of ppl simply don't have.

I'm not blowing it out of proportion; it's just an issue that has consistently happened to me, including with RetroArch's own settings, which are not saved unless done so explicitly or by quitting manually.

How is it making saving 'worse'? If you're this pedantic about your saves then just use a savestate, it does exactly what you want.

It makes saving in-game unreliable. Save states are not a replacement for saving games, especially when they can often corrupt in some cores.

LibretroAdmin commented 4 months ago

Not having it write to disk constantly is a good thing, not a bad thing. Encouraging bad design by encouraging it to write to disk constantly is personally not something I'm interested in doing in cores. If other cores want to do that then that is their pregorative, but it doesn't

In what situation would it be problematic to write to disk when a game has an an option to "save game" explicitly?

There are games that abuse SRAM constantly during gameplay, like Shiren The Wanderer.

And even for games that don't, whenever you write to disk you incur file I/O overhead, and it can cause microstutters, because you have to go through the file I/O API, which then goes through numerous other layers to save to disk. It is not done atomically most of the time, and it cannot be predicted at what point in time your save will be written to disk; it is up to your storage medium, its speed, and the various API layers and underlying implementations.
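For context on the atomicity point: the usual mitigation is the write-temp-then-rename pattern, since POSIX rename() replaces the target atomically when both names are on the same filesystem. A minimal sketch, not RetroArch's actual save path:

```c
#include <stdio.h>

/* Write len bytes of data to path via a temp file + rename, so a crash
 * mid-write can't leave a half-written save behind. Returns 0 on
 * success, -1 on error. A production version would also fsync() the
 * file (and its directory) before the rename for real durability. */
static int save_atomic(const char *path, const void *data, size_t len)
{
   char  tmp[4096];
   FILE *f;

   snprintf(tmp, sizeof(tmp), "%s.tmp", path);

   if (!(f = fopen(tmp, "wb")))
      return -1;
   if (fwrite(data, 1, len, f) != len)
   {
      fclose(f);
      remove(tmp);
      return -1;
   }
   fclose(f);

   /* Atomic on POSIX when tmp and path share a filesystem. */
   return rename(tmp, path) == 0 ? 0 : -1;
}
```

Note that on Windows, C's rename() fails if the destination already exists, so a portable implementation needs a platform-specific replace step there.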

LibretroAdmin commented 4 months ago

It makes saving in-game unreliable. Save states are not a replacement for saving games, specially when they can often corrupt in some cores.

Then implement the SRAM API. Of course saving will be unreliable if a bad core implements our interfaces improperly. Do it right then.

luluco250 commented 4 months ago

Not having it write to disk constantly is a good thing, not a bad thing. Encouraging bad design by having it write to disk constantly is personally not something I'm interested in doing in cores. If other cores want to do that then that is their prerogative, but honestly it goes against recommended practices.

In what situation would it be problematic to write to disk when a game has an option to "save game" explicitly?

There are games that abuse SRAM constantly during gameplay, like Shiren The Wanderer.

And even for games that don't, whenever you write to disk you incur file I/O overhead, and it can cause microstutters, because you have to go through the file I/O API, which then goes through numerous other layers to save to disk. It is not done atomically most of the time, and it cannot be predicted at what point in time your save will be written to disk; it is up to your storage medium, its speed, and the various API layers and underlying implementations.

Ok, that is a fair example! But in that case could the writes not be throttled/debounced to prevent abusers from destroying the disk?

This doesn't seem like it would affect cores for hardware that doesn't rely on cartridges like the PS1 or GC, so would there be a reason for those to not write immediately?

luluco250 commented 4 months ago

Let me clarify, I fully appreciate the work of developers, this is not an attempt to blame RetroArch or insult developers.

I like the project and wanna see it improve, even by contributing to it.

LibretroAdmin commented 4 months ago

Not having it write to disk constantly is a good thing, not a bad thing. Encouraging bad design by having it write to disk constantly is personally not something I'm interested in doing in cores. If other cores want to do that then that is their prerogative, but honestly it goes against recommended practices.

In what situation would it be problematic to write to disk when a game has an option to "save game" explicitly?

There are games that abuse SRAM constantly during gameplay, like Shiren The Wanderer. And even for games that don't, whenever you write to disk you incur file I/O overhead, and it can cause microstutters, because you have to go through the file I/O API, which then goes through numerous other layers to save to disk. It is not done atomically most of the time, and it cannot be predicted at what point in time your save will be written to disk; it is up to your storage medium, its speed, and the various API layers and underlying implementations.

Ok, that is a fair example! But in that case could the writes not be throttled/debounced to prevent abusers from destroying the disk?

This doesn't seem like it would affect cores for hardware that doesn't rely on cartridges like the PS1 or GC, so would there be a reason for those to not write immediately?

I'm not a fan of the way some standalone emulators for PS1/GC write to memory card constantly either, and it's also a source of overhead. For one, tons of constant polling gets done to detect memory card changes per frame in many of those PS1 emulators, and when something changes, it then writes to disk each and every time.

I think in a lot of the PS1 emulator cores we avoid this by using the SRAM API interface so that it only saves to disk at the frontend's behest (so according to the SRAM autosave interval or when the core gets closed). This is the proper way to do it and it avoids tons of issues. There might be a fallback option for some of those cores that do it the way a standalone emulator does by writing to a memory card file immediately, but honestly I consider it bad form, bad design and not something that should be encouraged, and when given the choice I'd always want to use the SRAM API method instead.

LibretroAdmin commented 4 months ago

Let me clarify, I fully appreciate the work of developers, this is not an attempt to blame RetroArch or insult developers.

I like the project and wanna see it improve, even by contributing to it.

I don't consider constantly writing to disk an improvement, nor doing an end-run around the SRAM API that has been with the project since its inception. These are bad, unwanted suggestions and something that should be strongly pushed back against.

luluco250 commented 4 months ago

Not having it write to disk constantly is a good thing, not a bad thing. Encouraging bad design by having it write to disk constantly is personally not something I'm interested in doing in cores. If other cores want to do that then that is their prerogative, but honestly it goes against recommended practices.

In what situation would it be problematic to write to disk when a game has an option to "save game" explicitly?

There are games that abuse SRAM constantly during gameplay, like Shiren The Wanderer. And even for games that don't, whenever you write to disk you incur file I/O overhead, and it can cause microstutters, because you have to go through the file I/O API, which then goes through numerous other layers to save to disk. It is not done atomically most of the time, and it cannot be predicted at what point in time your save will be written to disk; it is up to your storage medium, its speed, and the various API layers and underlying implementations.

Ok, that is a fair example! But in that case could the writes not be throttled/debounced to prevent abusers from destroying the disk? This doesn't seem like it would affect cores for hardware that doesn't rely on cartridges like the PS1 or GC, so would there be a reason for those to not write immediately?

I'm not a fan of the way some standalone emulators for PS1/GC write to memory card constantly either, and it's also a source of overhead. For one, tons of constant polling gets done to detect memory card changes per frame in many of those PS1 emulators, and when something changes, it then writes to disk each and every time.

I think in a lot of the PS1 emulator cores we avoid this by using the SRAM API interface so that it only saves to disk at the frontend's behest (so according to the SRAM autosave interval or when the core gets closed). This is the proper way to do it and it avoids tons of issues. There might be a fallback option for some of those cores that do it the way a standalone emulator does by writing to a memory card file immediately, but honestly I consider it bad form, bad design and not something that should be encouraged, and when given the choice I'd always want to use the SRAM API method instead.

That's totally reasonable, I'm not super familiar with the technical details of how those emulators write to disk.

From a user + developer perspective, it seems like we could just wait for the core to do its thing with the API through a debounce and only write when it's actually done.

luluco250 commented 4 months ago

Let me clarify, I fully appreciate the work of developers, this is not an attempt to blame RetroArch or insult developers. I like the project and wanna see it improve, even by contributing to it.

I don't consider constantly writing to disk an improvement, nor doing an end-run around the SRAM API that has been with the project since its inception. These are bad, unwanted suggestions and something that should be strongly pushed back against.

Sorry, I didn't mean to offend the project, but seeing how some things that were often troublesome (for me) have been improved over the years (like Esc now asking to quit by default instead of quitting immediately, the Ozone interface, and the new control mapping UI being so much nicer to use), it seems like not everything has to be set in stone from the beginning.

Again, this is not seeking to radically change the way RetroArch saves, just to be smarter about it, even with the existing SRAM API.

LibretroAdmin commented 4 months ago

So what essentially are you suggesting? What problem are you attempting to solve?

If you have issues with a few cores then I think instead of writing some catch-all solution for RetroArch it'd be better to just fix that core instead.

So basically I'm asking for a more summarized version of your initial post so that we don't talk past each other. I think we've already established where we stand with regards to the writing to disk issue. I'm of course still open to any kind of solution you propose as long as it isn't too controversial and it doesn't really try to undo what we consider to be a good practice. As long as it could remain an optional feature I'm generally fine with whatever contributors propose.

luluco250 commented 4 months ago

So what essentially are you suggesting? What problem are you attempting to solve?

My goal is simply not to lose progress because of a technical detail; work is already mentally taxing, so I just wanna relax and not think about things like these when I play games.

If you have issues with a few cores then I think instead of writing some catch-all solution for RetroArch it'd be better to just fix that core instead.

Yes, I agree with that, especially in bsnes-hd-beta's case.

So basically I'm asking for a more summarized version of your initial post so that we don't talk past each other. I think we've already established where we stand with regards to the writing to disk issue. I'm of course still open to any kind of solution you propose as long as it isn't too controversial and it doesn't really try to undo what we consider to be a good practice. As long as it could remain an optional feature I'm generally fine with whatever contributors propose.

That is fair, I can be bad at making concise points. I'll write a technical proposal in my next post.

hizzlekizzle commented 4 months ago

Yes, ultimately, the setup that RetroArch uses is very reliable as long as a core uses the libretro save interface, and if you set the interval to 1 second, it's essentially instantaneous as sonninnos mentioned.

The issue is that some cores don't use that interface, so they either need to be made to do so (preferred) or patched to write immediately on their own (less desirable).
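For reference, "using the libretro save interface" on the core side boils down to implementing two entry points; here is a minimal, self-contained sketch (the 8 KiB buffer size is a made-up example, and the constant is copied in so it compiles without libretro.h):

```c
#include <stddef.h>

/* Value from libretro.h, inlined so the sketch is self-contained. */
#define RETRO_MEMORY_SAVE_RAM 0

/* Hypothetical battery-backed cart SRAM owned by the core. */
static unsigned char sram[0x2000];

/* These two libretro entry points are the save interface: the frontend
 * reads this buffer on its own schedule (the autosave interval, or a
 * clean exit) and writes the .srm file itself. A core that exposes its
 * SRAM this way needs no disk I/O of its own. */
void *retro_get_memory_data(unsigned id)
{
   if (id == RETRO_MEMORY_SAVE_RAM)
      return sram;
   return NULL;
}

size_t retro_get_memory_size(unsigned id)
{
   if (id == RETRO_MEMORY_SAVE_RAM)
      return sizeof(sram);
   return 0;
}
```

If I recall correctly, the matching frontend-side interval is the `autosave_interval` setting (in seconds) in retroarch.cfg, though that name should be double-checked against the current source.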

luluco250 commented 4 months ago

Proposal

Adding an optional feature to the frontend that:

  1. Whenever a game saves to SRAM, we use a debounce to wait a few ms for another call to save, with a safe interval to be determined by checking what the worst case would be; I'd say 500-1500 ms would probably be fair for most games.
    • Example: if a PS1 emulator saves multiple times even though the game appears to be doing a single save, we just safely wait for it to finish its work first before writing to disk.
  2. To avoid never saving in games that constantly abuse SRAM, we use an additional throttle that ensures we do save at a given interval. This could probably reuse the existing SRAM write-to-disk interval setting.
    • Example: if a game like Shiren The Wanderer is writing on every frame or otherwise within very few ms, we only save every 10 seconds. But we do save within those intervals so that the user doesn't have to worry about whether or not their game saved. Losing 10 seconds of progress can suck, but it's much better than losing all progress.
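The two timers in the proposal could combine roughly as follows; this is a sketch under assumed names (a real patch would live alongside RetroArch's existing autosave task, not in isolation like this):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical combined debounce + throttle for SRAM flushes:
 * - debounce: wait until writes have been quiet for debounce_us
 *   (e.g. 500-1500 ms) so multi-part saves land in one flush;
 * - throttle: even under constant writes, force a flush at least
 *   every max_wait_us (e.g. 10 s) so progress is never lost. */
typedef struct
{
   int64_t first_dirty_us; /* when the pending burst started */
   int64_t last_write_us;  /* most recent SRAM write */
   int64_t debounce_us;
   int64_t max_wait_us;
   bool    dirty;
} sram_flush_sched_t;

/* Called whenever the core writes to SRAM. */
static void sram_flush_on_write(sram_flush_sched_t *s, int64_t now_us)
{
   if (!s->dirty)
      s->first_dirty_us = now_us;
   s->last_write_us = now_us;
   s->dirty         = true;
}

/* Called once per frame; true means "flush to disk now". */
static bool sram_flush_due(sram_flush_sched_t *s, int64_t now_us)
{
   bool quiet, overdue;
   if (!s->dirty)
      return false;
   quiet   = (now_us - s->last_write_us)  >= s->debounce_us;
   overdue = (now_us - s->first_dirty_us) >= s->max_wait_us;
   if (!quiet && !overdue)
      return false;
   s->dirty = false;
   return true;
}
```

An explicit save in a PS1-style game goes quiet quickly and hits the debounce path within a second or two, while a constant SRAM abuser is caught by the overdue path every max_wait_us.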

luluco250 commented 4 months ago

@LibretroAdmin please let me know if there's anything that needs clarification or you take issue with.

LibretroAdmin commented 4 months ago

As long as you make this an optional setting, and you're prepared to write all the code for it, then we'd be OK with it, of course. Hopefully the implementation is good and doesn't incur overhead when people don't use it (which is why it should be an optional setting to begin with); anyway, if all those things check out then sure, we're fine with whatever you propose.

LibretroAdmin commented 4 months ago

@sonninnos is not quite clear on this tho. He says it already works like this. So I'm not quite sure what you're proposing then.

Is what you need not achievable by just fixing whatever core you're using that doesn't currently function the way you expect it to? If so, why not do that?

hizzlekizzle commented 4 months ago

And nothing added to RetroArch will affect your experience with bsnes-hd-beta, which doesn't use the libretro save interface.

luluco250 commented 4 months ago

Is what you need not achievable by just fixing whatever core you're using that doesn't currently function the way you expect it to? If so, why not do that?

I'm trying to understand the technical details of it, because as far as the UI conveys, it would appear it just saves every once in a while at a given interval, with the additional context @sonninnos gave that it is smart enough not to save redundantly.

However, this would seem unnecessary for any core that does not suffer from SRAM abuse.

As a user, I'd expect it to take 1-2 seconds after I saved in, say, Castlevania: Symphony of the Night for the emulator to actually write to disk.

But if I were to play a game like Shiren The Wanderer, I REALLY would not like to write to disk every 2 seconds.

As far as I know, currently there's no way to satisfy both cases. Either you accept that it can take an unknown amount of time after saving in a game with explicit saves for it to actually reach disk (by using a large interval), OR you kill your disk by using a 1-2 second interval while playing a game that abuses SRAM.

luluco250 commented 4 months ago

And nothing added to RetroArch will affect your experience with bsnes-hd-beta, which doesn't use the libretro save interface.

That's fair, but I see that as a second step. If I can at least be confident in the way RetroArch handles the SRAM API first then I'll be open to contribute to that.

hizzlekizzle commented 4 months ago

Yes, you're correct. There's no free lunch. We could potentially add a way for a core to declare that it will never-ever-ever abuse SRAM, but all it takes is one homebrew game to make it a liar. I've heard (no proof of this and they provided no specifics) that at least one or more PS1 games does something similar with the memory card.

So, I think the current setup is safe and robust and can be customized to an individual's own concept of safety, which is nice.

The big problem now is just that not all cores play ball.

luluco250 commented 4 months ago

Yes, you're correct. There's no free lunch. We could potentially add a way for a core to declare that it will never-ever-ever abuse SRAM, but all it takes is one homebrew game to make it a liar. I've heard (no proof of this and they provided no specifics) that at least one or more PS1 games does something similar with the memory card.

So, I think the current setup is safe and robust and can be customized to an individual's own concept of safety, which is nice.

The big problem now is just that not all cores play ball.

Yeah, I'd agree that the current situation is perfectly good when it comes to not destroying disks; I just wanna try to make it even better.

luluco250 commented 4 months ago

I'll take a look at the code when I have time and if I come up with something I'll open a PR.

Thanks a lot for your input!

If there's anything else you feel warrants discussion, feel free to re-open/comment.