PhoenicisOrg / phoenicis

Phoenicis PlayOnLinux and PlayOnMac 5 repository
https://phoenicis.org/
GNU Lesser General Public License v3.0

Create a new interface so that we can instantiate different kinds of repositories #1723

Open qparis opened 5 years ago

qparis commented 5 years ago

> @qparis I don't understand what you would like to populate a repository with?

Almost nothing apart from:

> @plata My idea of Phoenicis is that it should stay in control at all times. Ideally this means that every command executed inside a "script" should be checked/performed by Phoenicis.

Ideally, yes, but unfortunately, when we delegate our script to an external program (wine, steam, gog, ...), this is no longer true.

> The reason why I think this is that this approach leads to more security and control. Our scripts are so generic that you can do anything with them. This in turn makes it very difficult to find bugs later on. In addition, it makes it quite easy for script developers to inject malicious commands into their scripts.

This is true only if we try to generate source files. If we evaluate the script commands at runtime, we have full control over the execution flow, except for the `exec` commands, which we can decide not to support at a first stage.

In addition, if you use a converter like you suggest, you may end up with thousands of generated scripts that will never be checked one by one, so the situation is almost the same.

> In addition, I think Phoenicis should only support one type/format of script officially. Everything else should be done by the community, externally from Phoenicis. Otherwise it can become a nightmare to maintain.

This is why I suggest enforcing the creation of a new repository to prevent plugins from modifying an official repository. It keeps things clearly separated, and we do not force users to subscribe to these new repositories.

> This means that if there are people who want to have direct/native support of Lutris scripts in Phoenicis, they should write a solution outside of Phoenicis. This is also the reason for my idea of a converter for Lutris scripts, because such a converter could exist outside of Phoenicis.

I think that trying to generate source files almost always leads to problems. It is very hard to implement the right parsers, it can lead to security problems, and it requires a lot of effort to maintain. On the other hand, code that evaluates a JSON script can be kept clear to read (this is what Lutris itself or wine does, after all).

qparis commented 5 years ago

A fourth approach would be to allow repositories to store resource files

plata commented 5 years ago

I think I would prefer the 4th approach. It keeps our scripts very streamlined and you can easily plug e.g. a Lutris repository if you want to.

qparis commented 5 years ago

And it could also help to add files like .reg files inside the repo.

madoar commented 5 years ago

What kind of resource files do you mean?

qparis commented 5 years ago

Anything that the script might need

madoar commented 5 years ago

But don't we already support this? We would just need to soften our include command.

qparis commented 5 years ago

I don’t think that we do support this right now

plata commented 5 years ago

We have .reg files for some scripts.

qparis commented 5 years ago

They are included in the script source file, aren’t they?

madoar commented 5 years ago

I think my understanding of what you want to do is wrong.

Let me recap my understanding of what we're currently doing:

qparis commented 5 years ago

I think we should not treat this problem as if we wanted to support other script languages. This is not the case. In fact, JSON/YAML are not script languages. They are not Turing-complete and they do not contain any complex constructs like conditions, loops, etc. They just contain a description of what needs to be done.

Therefore, they can be regarded as data input (like a URL or a game ID) and not as a script that needs to be executed. This makes a big difference in the approach. This is also why I think we can treat them as resources.

So in a nutshell, I think we have two things we need to treat separately:

  1. Metadata: How a repository can read a remote source of scripts and expose them as a Phoenicis-native source of scripts.
  2. Script content: How we can automatically inject external input (a URL or game ID for GOG, JSON for Lutris, etc.) inside a pre-written JS script (like the ones I have made in my previous requests).

I don't know if I'm clear or not.
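For point 2, a minimal sketch of what such an injection could look like (class and placeholder names here are hypothetical, not the actual Phoenicis API): a generic JS installer is written and reviewed once, and the external data (a GOG game ID, a Lutris JSON, ...) is substituted in as plain input rather than generating new script source:

```java
import java.util.Map;

// Hypothetical sketch: a pre-written, reviewed JS installer template into which
// external data is injected as input. Not the actual Phoenicis API.
final class ScriptTemplate {
    private final String source; // generic JS installer, written and reviewed once

    ScriptTemplate(String source) {
        this.source = source;
    }

    // Replace ${key} placeholders with externally supplied input values.
    String withInput(Map<String, String> input) {
        String result = source;
        for (Map.Entry<String, String> entry : input.entrySet()) {
            result = result.replace("${" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }
}
```

In a real implementation the input would more likely be bound as a variable when the script is evaluated, which keeps Phoenicis in control of the execution flow; the string substitution above is only meant to show the separation between the reviewed script and the untrusted data.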

plata commented 5 years ago

https://github.com/PhoenicisOrg/scripts/tree/4672e1c56b79edf6e9280aaaa3cf350ad1863e3a/Applications/Games/Subnautica/resources

We can use this to store other resources as well (e.g. yaml).

qparis commented 5 years ago

Cool, so we already have that

plata commented 5 years ago

Yep. The question is how we use it. The simplest approach would be to have one Lutris script and all the YAML files in the resources. Then the script shows a list in the wizard. We could put this in a new category "special".

qparis commented 5 years ago

I don't like that. In terms of user experience, it does not make any sense.

plata commented 5 years ago

How would you want to handle it?

qparis commented 5 years ago

It makes more sense to concatenate the games with the ones from our repository.

plata commented 5 years ago

Yes but technically?

qparis commented 5 years ago

A plugin inside a repository

qparis commented 5 years ago

Or a custom repository implementation. I see no other solution.

plata commented 5 years ago

With the plugin: Where would you store the resources? In one application, or would you have one YAML per application?

qparis commented 5 years ago

They won't be stored, I suppose; the TreeDTO will just be hooked.

plata commented 5 years ago

Even in that case, they must be located somewhere.

qparis commented 5 years ago

Directly from the remote servers

plata commented 5 years ago

Yes, I mean in the DTO tree.

qparis commented 5 years ago

At the moment, we are relying on jgit to manage the script caching. We should not do that. I think the script content should not be stored in the DTO tree by default. A component that manages the cache should do it. This component would get the script from the URL and store it inside the cache.

madoar commented 5 years ago

@qparis what do you mean by relying on jgit to manage the script caching? I think we should change our Repository interface to contain three methods, which should be independent in their execution:

```java
List<Engine> getEngines(...);
List<CategoryDTO> getCategories(...);
ScriptDTO getScript(List<String> path);
```

When looking at our current Repository interface, I'm also not too happy with the List<String> path parameter we pass to nearly every method, especially because it's not really a path but rather a concatenation of ids, which requires a lot of computation before the (id-)"path" can be matched. More natural, in my opinion, would be for example the following method signature:

```java
ScriptDTO getScript(ApplicationDTO application, String scriptId);
```
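A hedged sketch of how that signature could be exercised (the DTOs are reduced to illustrative stubs; field names are assumptions, not the actual Phoenicis classes): the lookup becomes a direct scan of the application's scripts instead of an id-path match over the whole tree:

```java
import java.util.*;

// Simplified stand-ins for the Phoenicis DTOs (fields reduced for illustration).
final class ScriptDTO {
    final String id;
    final String scriptSource;
    ScriptDTO(String id, String scriptSource) { this.id = id; this.scriptSource = scriptSource; }
}

final class ApplicationDTO {
    final String id;
    final List<ScriptDTO> scripts;
    ApplicationDTO(String id, List<ScriptDTO> scripts) { this.id = id; this.scripts = scripts; }
}

// Proposed repository contract: look scripts up by application + script id
// instead of a List<String> id-path matched level by level.
interface Repository {
    ScriptDTO getScript(ApplicationDTO application, String scriptId);
}

class SimpleRepository implements Repository {
    @Override
    public ScriptDTO getScript(ApplicationDTO application, String scriptId) {
        return application.scripts.stream()
                .filter(script -> script.id.equals(scriptId))
                .findFirst()
                .orElseThrow(() -> new NoSuchElementException(scriptId));
    }
}
```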
qparis commented 5 years ago

If I understand well, we consider that the scripts are available locally because they are stored in a local git repository, right?

When is the content of the script populated inside the DTO?

madoar commented 5 years ago

This basically happens in LocalRepository#fetchScripts, which is called in a nested/recursive fashion. Here is the call tree:

  1. LocalRepository#fetchInstallableApplications
  2. LocalRepository#fetchTypes
  3. LocalRepository#fetchCategories
  4. LocalRepository#fetchApplications
  5. LocalRepository#fetchScripts
qparis commented 5 years ago

I propose to do it when the script is run

madoar commented 5 years ago

The question is then: how would you do the script lookup? To fetch all scripts belonging to an application, the following information is required:

```java
fetchScripts(String typeId, String categoryId, String applicationId, File applicationDirectory)
```

To fetch only a single script, you additionally need the scriptId. I'm not sure you have this information during script execution.

qparis commented 5 years ago

You must have it, because JavaFX has it, right?

madoar commented 5 years ago

No, the parameters are needed independently of whether we use JavaFX or not. We require these parameters because of the way our file system repositories are structured and because of how we access entities via our List<String> path objects.

Let's give an example.

Let us assume we want to fetch the script located in ~/Applications/Development/Notepad++/v7.2.2/script.js. Each script has a unique "path", which is used to locate the script. For Notepad++ the path is ["applications", "development", "notepad_plus_plus", "v_7_2_2"]. Please note that the used path differs from the filesystem path! (This is the reason why the scripts are stored in the ScriptDTO class.)

When fetching the script for the above-mentioned path ["applications", "development", "notepad_plus_plus", "v_7_2_2"], the algorithm uses the following approach:

  1. Search for the folder in ~/ which contains a type.json file with the field id: "applications". This folder is called the type-folder in the following steps
  2. Search in the found type-folder for a subfolder which contains a categories.json file with the field id: "development". This folder is called the category-folder in the following steps
  3. Search in the found category-folder for a subfolder which contains an application.json file with the field id: "notepad_plus_plus". This folder is called the application-folder in the following steps
  4. Search in the found application-folder for a subfolder which contains a script.json file with the field id: "v_7_2_2". This folder is called the script-folder in the following steps
  5. Return the script.js file located in the script-folder

The problem with this approach is that we need to do the JSON file lookups to match the paths.

I think that to solve this problem we would need a different approach to looking up scripts.
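The five steps above can be modelled with a toy in-memory tree (the real code walks the filesystem and parses the type.json/category.json/application.json/script.json files; here each folder simply carries its json id) to show the linear scan that has to happen at every level:

```java
import java.util.*;

// Toy model of the repository tree: each folder has a json "id" and subfolders.
// This is a simplified illustration, not the actual LocalRepository code.
final class Folder {
    final String id;                      // id field from the folder's .json file
    final Map<String, Folder> children = new LinkedHashMap<>();
    String scriptJs;                      // script.js content, only on script-folders
    Folder(String id) { this.id = id; }
    Folder add(Folder child, String folderName) { children.put(folderName, child); return child; }
}

final class PathLookup {
    // Walks the id-path: at every level, scan all subfolders for the one whose
    // json id matches. This per-level scan is the cost being discussed, because
    // the id-path ("notepad_plus_plus") differs from the folder name ("Notepad++").
    static String findScript(Folder root, List<String> idPath) {
        Folder current = root;
        for (String id : idPath) {
            current = current.children.values().stream()
                    .filter(folder -> folder.id.equals(id))
                    .findFirst()
                    .orElseThrow(() -> new NoSuchElementException(id));
        }
        return current.scriptJs;
    }
}
```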

plata commented 5 years ago

And to add to this: the resources we were talking about have to be part of the repository DTO tree as well, so that they can be used (at least as far as I'm aware).

qparis commented 5 years ago

I understand these steps.

I think we should change this so that:

Therefore, you can access the URL from the script path, right?

Same for the resources.

plata commented 5 years ago

If we do it like this, you can only install Lutris games if you are online. The approach cannot work for resources because there is no such thing as a URL for resources.

qparis commented 5 years ago

There is: everything is a URL. We can then implement a caching mechanism on top of that which works for every type of repository.

plata commented 5 years ago

I think I misunderstood you before. Will repository merging still work with that approach?

qparis commented 5 years ago

Why not? The only difference is that we consider the URL as the primary source to get the script content. Then, the caching mechanism is agnostic of the type of repository.
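A sketch of what such a repository-agnostic caching layer could look like (names are illustrative, not the actual Phoenicis classes): the DTO tree keeps only the script URL, and the content is fetched lazily on first use and memoized, with the actual fetcher (HTTP, local git checkout, plugin) injected:

```java
import java.net.URI;
import java.util.*;
import java.util.function.Function;

// Illustrative sketch of a repository-agnostic script cache. The DTO tree
// stores only the script URL; the content is fetched lazily and memoized.
// The fetcher is injected, so a local git checkout, an HTTP repository, or a
// plugin all look the same to the cache.
final class ScriptCache {
    private final Map<URI, String> cache = new HashMap<>();
    private final Function<URI, String> fetcher; // e.g. HTTP GET or local file read

    ScriptCache(Function<URI, String> fetcher) {
        this.fetcher = fetcher;
    }

    // Return the cached content, fetching it from the URL only on first access.
    String getScriptContent(URI scriptUrl) {
        return cache.computeIfAbsent(scriptUrl, fetcher);
    }
}
```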

plata commented 5 years ago

So to summarize the plan: When loading the repository in the DTO tree, we only load the script URL instead of the script content. When a script is executed, the content is fetched from the URL (wherever that is).

Now, what exactly does the populate hook do?

qparis commented 5 years ago

Better:

The populate hook will add scripts to the DTO tree dynamically.

qparis commented 5 years ago

Even better: Implement several caching strategies for resources:

plata commented 5 years ago

I think we should separate this issue from caching.

qparis commented 5 years ago

Yes

plata commented 5 years ago

So:

> The populate will add scripts to the DTO tree dynamically

I thought you wanted to use resources instead of scripts?

qparis commented 5 years ago

Script in a broad sense, i.e. script URL (note that several scripts can use the same URL), resources, metadata.

plata commented 5 years ago

OK, but if we only add resources, they still need to be attached to some application. Also, I just realized that this approach will break the current include mechanism.

qparis commented 5 years ago

I don't think so. The full tree is hooked, so a plugin can dynamically add new applications.

qparis commented 5 years ago

The only thing is that the plugins should not be stored behind a URL, because they need to be always available.