Open qparis opened 5 years ago
A fourth approach would be to allow repositories to store resource files
I think I would prefer the 4th approach. It keeps our scripts very streamlined and you can easily plug e.g. a Lutris repository if you want to.
It would also help to add files like `.reg` files inside the repo
What kind of resource files do you mean?
Anything that the script might need
But don't we already support this?
We would just need to soften our `include` command.
I don’t think that we do support this right now
We have .reg files for some scripts.
They are included in the script source file, aren't they?
I think my understanding of what you want to do is wrong.
Let me recap my understanding of what we're currently doing:
I think we should not treat this problem as if we wanted to support other script languages. This is not the case. In fact, JSON/YAML are not script languages. They are not Turing-complete, they do not contain any complex constructs like conditions, loops, etc. They just contain a description of what needs to be done.
Therefore, they can be regarded as a data input (like a URL, or a game ID) and not as a script that needs to be executed. This makes a big difference in the approach. This is also why I think we can treat them as resources.
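To make the "data input, not script" view concrete, here is a hypothetical YAML description of this kind. All field names are made up for illustration; this is not an existing Phoenicis or Lutris format:

```yaml
# Hypothetical declarative install description: pure data, no control flow.
application: notepad_plus_plus
version: v7.2.2
steps:
  - download:
      url: https://example.org/npp.7.2.2.Installer.exe
  - run: npp.7.2.2.Installer.exe
  - registry: settings.reg   # a .reg resource shipped alongside the description
```

Nothing here is executed directly; an interpreter in the application walks the `steps` list and decides what each entry means.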
So in a nutshell, I think we have two things we need to treat separately:
Don't know if I'm clear or not
We can use this to store other resources as well (e.g. yaml).
Cool, so we already have that
Yep. The question is how we use it. The simplest approach would be to have 1 Lutris script and all yaml files in the resources. Then the script shows a list in the wizard. We could put this in a new category "special".
I don't like that. In terms of user experience, it doesn't make any sense
How would you want to handle it?
It makes more sense to concatenate the games with those of our repository
Yes but technically?
A plugin inside a repository
Or a custom repository implementation I see no other solution
With the plugin: Where would you store the resources? In one application or would you have one yaml per application?
They won't be stored, I suppose; the TreeDTO will just be hooked
Even in that case, they must be located somewhere.
Directly from the remote servers
Yes, I mean in the DTO tree.
At the moment, we are relying on jgit to manage the script caching. We should not do that. I think the script content should not be stored in the DTO tree by default. A component that manages the cache should do it. This component would get the script from the URL and store it inside the cache
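A minimal sketch of such a cache component, assuming the DTO tree only stores script URLs. The class and method names (`ScriptCache`, `getScriptContent`) and the pluggable `fetcher` are illustrative, not Phoenicis API; in real code the fetcher would be an HTTP download:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical cache component: resolves a script URL to its content,
// fetching it from the remote source once and serving later requests
// from the in-memory cache.
public class ScriptCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> fetcher; // stands in for an HTTP download

    public ScriptCache(Function<String, String> fetcher) {
        this.fetcher = fetcher;
    }

    public String getScriptContent(String url) {
        // computeIfAbsent: only hit the remote source on a cache miss
        return cache.computeIfAbsent(url, fetcher);
    }

    public static void main(String[] args) {
        // In-memory stand-in for a remote repository
        Map<String, String> remote = new HashMap<>();
        remote.put("https://example.org/notepadpp/script.js", "/* install script */");
        int[] fetchCount = {0};
        ScriptCache cache = new ScriptCache(url -> {
            fetchCount[0]++;
            return remote.get(url);
        });
        String first = cache.getScriptContent("https://example.org/notepadpp/script.js");
        String second = cache.getScriptContent("https://example.org/notepadpp/script.js");
        if (!first.equals(second) || fetchCount[0] != 1) {
            throw new AssertionError("expected a single remote fetch");
        }
        System.out.println("fetched once, served twice from cache");
    }
}
```

Because the cache is keyed by URL, it is agnostic of the repository type, which matters for the merging discussion below.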
@qparis what do you mean by relying on jgit to manage the script caching? I think we should change our `Repository` interface to contain three methods, which should be independent in their execution:
```java
List<Engine> getEngines(...);
List<CategoryDTO> getCategories(...);
ScriptDTO getScript(List<String> path);
```
When looking at our current `Repository` interface, I'm also not too happy with the `List<String> path` parameter we pass to nearly every method, especially because it's not really a path but more a concatenation of ids, which requires a lot of computation before the (id-)"path" can be matched. More natural in my opinion would be, for example, the following method signature:
```java
ScriptDTO getScript(ApplicationDTO application, String scriptId);
```
If I understand correctly, we consider that the scripts are available locally because they are stored in a local git repository, right?
When is the content of the script populated inside the DTO?
This basically happens in `LocalRepository#fetchScripts`, which is called in a nested/recursive fashion. Here is the call tree:

```
LocalRepository#fetchInstallableApplications
  LocalRepository#fetchTypes
    LocalRepository#fetchCategories
      LocalRepository#fetchApplications
        LocalRepository#fetchScripts
```
I propose to do it when the script is run
The question is then: how would you do the script lookup? To fetch all scripts belonging to an application, the following information is required:
```java
fetchScripts(String typeId, String categoryId, String applicationId, File applicationDirectory)
```

To fetch only a single script, you additionally need the `scriptId`.
I'm not sure you have this information during script execution.
You must have it because JavaFX has it, right?
No, the parameters are needed independent of whether we use JavaFX or not.
We require these parameters because of the way our file system repositories are structured and because of how we access entities via our `List<String> path` objects.
Let us assume we want to fetch the script located in `~/Applications/Development/Notepad++/v7.2.2/script.js`.
Each script has a unique "path", which is used to locate the script. For Notepad++ the path is `["applications", "development", "notepad_plus_plus", "v_7_2_2"]`. Please note that this path differs from the filesystem path! (This is the reason why the scripts are stored in the `ScriptDTO` class.)

When fetching the script for the above-mentioned path `["applications", "development", "notepad_plus_plus", "v_7_2_2"]`, the algorithm uses the following approach:
1. Search `~/` for a subfolder which contains a `type.json` file with the field `id: "applications"`. This folder is called the `type-folder` in the following.
2. Search the `type-folder` for a subfolder which contains a `categories.json` file with the field `id: "development"`. This folder is called the `category-folder` in the following.
3. Search the `category-folder` for a subfolder which contains an `application.json` file with the field `id: "notepad_plus_plus"`. This folder is called the `application-folder` in the following.
4. Search the `application-folder` for a subfolder which contains a `script.json` file with the field `id: "v_7_2_2"`. This folder is called the `script-folder` in the following.
5. Select the `script.js` file located in the `script-folder`.
The problem with this approach is that we need to do the `json` file lookups to match the paths.
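The per-level matching cost described above can be sketched with an in-memory tree. This is illustrative code, not Phoenicis code: each node stands for a folder whose `id` comes from its `type.json`/`categories.json`/`application.json`/`script.json` file, and every path segment requires scanning the children of the current level:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the id-path lookup: to resolve
// ["applications", "development", "notepad_plus_plus", "v_7_2_2"],
// every segment must be matched against the id of each subfolder.
public class PathLookup {
    public static class Node {
        public final String id; // the "id" field of the folder's json file
        public final List<Node> children = new ArrayList<>();

        public Node(String id) { this.id = id; }

        public Node child(String childId) {
            Node c = new Node(childId);
            children.add(c);
            return c;
        }
    }

    // Walk the tree level by level; at each level, scan all children
    // until one has the wanted id. This linear scan per segment is the
    // "lot of computation" mentioned above.
    public static Node find(Node root, List<String> path) {
        Node current = root;
        for (String segment : path) {
            Node next = null;
            for (Node child : current.children) {
                if (child.id.equals(segment)) { next = child; break; }
            }
            if (next == null) return null; // no folder with a matching id
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        Node root = new Node("~");
        root.child("applications")
            .child("development")
            .child("notepad_plus_plus")
            .child("v_7_2_2");
        Node script = find(root,
            List.of("applications", "development", "notepad_plus_plus", "v_7_2_2"));
        System.out.println(script != null ? script.id : "not found");
    }
}
```

A flat index keyed by script id (or the `getScript(ApplicationDTO, String scriptId)` signature proposed earlier) would avoid this walk entirely.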
I think to solve this problem we would need a different approach to lookup scripts.
And to add to this: The resources we were talking about have to be part of the repository DTO tree as well such that they can be used (at least as far as I'm aware).
I understand these steps.
I think we should change this so that:
Therefore, you can access the URL from the script path, right?
Same for the resources.
If we do it like this, you can only install Lutris if you are online. The approach cannot work for resources because there is no such thing as a URL for resources.
There is, everything is a URL. We can then implement a cache mechanism on top of that that works for every type of repository
I think I misunderstood you before. Will repository merging still work with that approach?
Why not? The only difference is that we consider the URL as the primary source to get the script content. Then, the caching mechanism is agnostic of the type of repository
So to summarize the plan: When loading the repository in the DTO tree, we only load the script URL instead of the script content. When a script is executed, the content is fetched from the URL (wherever that is).
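A hedged sketch of that plan, assuming a DTO that carries only the script URL and fetches the content lazily at execution time. The names (`LazyScriptDTO`, `run`) and the `fetcher` parameter are illustrative stand-ins for the real cache component:

```java
import java.util.function.Function;

// Hypothetical DTO: at repository load time only the URL is stored;
// the script content is fetched on first execution.
public class LazyScriptDTO {
    private final String scriptUrl;
    private String content; // null until the script is first run

    public LazyScriptDTO(String scriptUrl) {
        this.scriptUrl = scriptUrl;
    }

    // fetcher stands in for the cache component / HTTP download
    public String run(Function<String, String> fetcher) {
        if (content == null) {
            // fetched only now, not when the repository was loaded
            content = fetcher.apply(scriptUrl);
        }
        return content; // a real implementation would evaluate the script here
    }

    public static void main(String[] args) {
        LazyScriptDTO dto = new LazyScriptDTO("https://example.org/script.js");
        String result = dto.run(url -> "/* downloaded from " + url + " */");
        System.out.println(result);
    }
}
```

Repository merging is unaffected because the merged tree still contains one URL per script, wherever each script comes from.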
Now, what exactly does the populate hook do?
Better:
The populate hook will add scripts to the DTO tree dynamically
Even better: Implement several caching strategies for resources:
I think we should separate this issue from caching.
Yes
So:
The populate hook will add scripts to the DTO tree dynamically
I thought you wanted to use resources instead of scripts?
Script in a broad sense, i.e. script URL (note that several scripts can use the same URL), resources, metadata
Ok but if we only add resources, they still need to be attached to some application. Also I just realized that this approach will break the current `include` mechanism.
Don't think so. The full tree is hooked, so a plugin can dynamically add new applications
The only thing is that the plugins should not be stored behind a URL because they need to be always available
Almost nothing apart from:
This is the ideal case, but unfortunately, when we delegate our script to any program (wine, steam, gog, ...) this is no longer true
This is true only if we try to generate source files. If we evaluate the script commands at runtime, we have full control over the execution flow. Except for the 'exec' commands, which we can decide not to support in a first stage.
In addition, if you use a converter like you suggest, you may end up with thousands of generated scripts that will never be checked one by one, so the situation is almost the same.
This is why I suggest enforcing the creation of a new repository to prevent plugins from modifying an official repository. It keeps things clearly separated and we do not force users to subscribe to these new repositories.
I think that trying to generate source files almost always leads to problems. It is very hard to implement the right parsers, it can lead to security problems, and it requires a lot of effort to maintain. On the other hand, code that evaluates a JSON script can be made clear to read (this is what Lutris itself or wine do, after all)