ABaumher / galaxy-integration-steam

Integration with Steam for Galaxy

completely rewrite GameCache #28

Closed don-de-marco closed 1 year ago

ABaumher commented 1 year ago

Ngl this is basically your branch now. I've unfortunately been dealing with more maintenance than writing any code - and a full-time job that's actually full time now (I miss the time before this project when I was on-call lol)

I'll get back to teardown when i can.

As for the games-installed issue - afaik we can update this in tick. Can we just time out our code after, say, 18 seconds and return what we have so the plugin doesn't get sent a shutdown? If we can cache those results, we can fill out the rest in subsequent calls.
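A minimal sketch of that idea, assuming a hypothetical `_size_cache` dict and a hypothetical `_fill_size_cache` coroutine that populates it one game at a time (neither exists in the plugin today):

    import asyncio
    from typing import Dict, List

    async def get_sizes_with_timeout(self, game_ids: List[str], timeout: float = 18.0) -> Dict[str, int]:
        # Give up just before Galaxy's own deadline and return whatever
        # has been cached so far; later tick() calls can finish the rest.
        try:
            await asyncio.wait_for(self._fill_size_cache(game_ids), timeout=timeout)
        except asyncio.TimeoutError:
            pass  # partial results are fine; the cache keeps what we got
        return dict(self._size_cache)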

don-de-marco commented 1 year ago

The game size import is handled by the plugin framework, not us. We just provide a way to get the game size for a given game id; the rest is out of our hands.

don-de-marco commented 1 year ago

The timeout comes from Galaxy, by the way; it's not triggered within our plugin. So as long as we can't tell Galaxy that we need more time to import the game sizes, there's nothing I know of that we can do about it.

don-de-marco commented 1 year ago

Feel free to merge this btw. I have no changes pending

ABaumher commented 1 year ago
    async def prepare_local_size_context(self, game_ids: List[str]) -> Dict[str, str]:
        library_folders = get_library_folders()
        app_manifests = list(get_app_manifests(library_folders))
        return {app_id_from_manifest_path(path): path for path in app_manifests}

is fully synchronous. Running it in an executor makes it awaitable, so the code wouldn't block the event loop. Granted, it might need to be done in a process pool instead of a thread pool if the work is blocked by the GIL.
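A sketch of what that could look like, assuming `_build_manifest_map` is a hypothetical helper wrapping the synchronous body quoted above:

    import asyncio
    from typing import Dict, List

    async def prepare_local_size_context(self, game_ids: List[str]) -> Dict[str, str]:
        loop = asyncio.get_running_loop()
        # None selects the default ThreadPoolExecutor, which is enough for
        # I/O-bound work; pass a ProcessPoolExecutor instead if the GIL bites.
        return await loop.run_in_executor(None, self._build_manifest_map)

    def _build_manifest_map(self) -> Dict[str, str]:
        library_folders = get_library_folders()
        app_manifests = list(get_app_manifests(library_folders))
        return {app_id_from_manifest_path(path): path for path in app_manifests}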

don-de-marco commented 1 year ago

This code just collects some file paths and is done quickly. Most of the work happens in get_game_size below it, where for each game the corresponding state file (the appmanifest_*.acf files in your steamapps folders) is parsed and the size returned. There's no heavy lifting going on, and the GIL is only a concern when there's lots of CPU-bound processing, which we don't have here either. IMO it's just the I/O that's hitting us hard, because we need to read a couple of hundred files. That simply takes time; using executors would make it even worse because of the overhead they introduce.
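For reference, a minimal sketch of that per-game parse, assuming the usual appmanifest layout with an `AppState`/`SizeOnDisk` entry:

    import vdf  # the same VDF parser the plugin already uses

    def get_game_size(manifest_path: str) -> int:
        # Parse one appmanifest_*.acf file and read the size Steam recorded.
        with open(manifest_path, encoding="utf-8", errors="replace") as f:
            manifest = vdf.load(f)
        return int(manifest["AppState"]["SizeOnDisk"])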

ABaumher commented 1 year ago

We're doing a bunch of file opens:

    return vdf.load(open(path, encoding="utf-8", errors="replace"), mapper=CaseInsensitiveDict)

Those files are never closed. That seems like it could be causing issues.
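A context manager would bound the handle's lifetime to the parse. A minimal sketch, assuming `CaseInsensitiveDict` is the mapper the plugin already imports:

    import vdf

    def load_manifest(path: str):
        # The with-block closes the file as soon as vdf.load returns,
        # instead of leaving the handle open until garbage collection.
        # CaseInsensitiveDict is assumed to be the plugin's existing mapper.
        with open(path, encoding="utf-8", errors="replace") as f:
            return vdf.load(f, mapper=CaseInsensitiveDict)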

don-de-marco commented 1 year ago

While I agree that not closing these files is bad style, it doesn't do us any harm here: open files only consume a bit of memory, not CPU.