danixeee opened this issue 5 years ago
Sounds good. I always prefer convention over configuration. We could go even further by having the grammar .tx file always be at the same path, e.g. mydsl/grammar.tx. Also, the outline spec could be mydsl/outline.txol, where mydsl is specified in the DSL description header. We could then completely remove the paths section. If there is no need for any other section, we could move the elements of the general section directly under the dsl section.
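Just to make the convention concrete (a throwaway sketch on my side, none of this exists yet), the server could resolve every DSL artifact from the DSL name alone:

from pathlib import Path

def dsl_artifacts(project_root, dsl_name):
    # dsl_name would come from the DSL description header, e.g. "mydsl";
    # everything else is found by convention, so no paths section is needed
    dsl_dir = Path(project_root) / dsl_name
    return {
        "grammar": dsl_dir / "grammar.tx",
        "outline": dsl_dir / "outline.txol",
        "lang": dsl_dir / "lang.py",
    }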
What is left to be defined is how the decorators for registering classes and processors will work. They will need to know which language they apply to (probably from the location of the lang.py in which they are applied).
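One possible answer, purely as an assumption on my side (neither the decorator nor the registry below exists yet): the decorator could derive the language from the location of the lang.py that applies it, e.g.:

import inspect
from pathlib import Path

REGISTRY = {}  # language name -> {"classes": callable, ...}

def _infer_language():
    # frame 0 is this function, frame 1 the decorator, frame 2 the lang.py
    # applying it; e.g. .../mydsl/lang.py -> language "mydsl"
    caller_file = inspect.stack()[2].filename
    return Path(caller_file).parent.name

def classes(func):
    # register func as the custom-classes provider for the inferred language
    REGISTRY.setdefault(_infer_language(), {})["classes"] = func
    return func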
Should we have one DSL defined per TextXFile or should we support more?
I like the idea to place all those files under a directory and to avoid any unnecessary configuration.
That should be a good start for now.
Should we have one DSL defined per TextXFile or should we support more?
I would say one TextXFile per extension. A user can create an extension pack to bundle different extensions.
This is what I have thought of so far:
textX-LS-core will have several built-in languages (for now, the textX grammar language and Textxfile). Each of those languages will have its own lang.py (and other configuration files if needed).
Since there is a possibility that we will need to create a CLI for textX-LS-core (for using these functionalities with tools that don't support LSP), I think making all functions stateless will be useful. The core should not know about textX-LS-server or the CLI.
CLI example for getting completion list:
textx mydsl.tx example.mydsl completions 2:30
where 2:30 stands for line:col.
Each function will know which file it is called on, so, depending on the file extension, we will switch the metamodel (and lang.py configuration) to support different languages. We will probably have to pass the metamodel path for user-defined languages, which won't be needed for the built-in ones.
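Roughly what I have in mind (just a sketch, the registry, names and paths below are placeholders, not actual code):

from os.path import splitext
from textx import metamodel_from_file

# grammars bundled with textX-LS-core for the built-in languages (placeholder paths)
BUILTIN_GRAMMARS = {".tx": "builtin/textx.tx", ".txconfig": "builtin/txconfig.tx"}

_metamodels = {}  # cache: grammar path -> loaded meta-model

def get_metamodel(model_path, user_grammar=None):
    # built-in languages are resolved from the file extension;
    # user-defined ones have to pass their grammar path explicitly
    grammar = user_grammar or BUILTIN_GRAMMARS[splitext(model_path)[1]]
    if grammar not in _metamodels:
        _metamodels[grammar] = metamodel_from_file(grammar)
    return _metamodels[grammar]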
If I remember correctly, you need to reload a metamodel if parsing a model returned errors. I am not sure whether that is a textX bug or normal behavior, but in the old extension I was loading the metamodel each time I needed to parse a model.
I am also thinking about making textX-LS-core extensible with different plugins, but that should be a separate discussion since we are far from there (although we should set a good base :) ).
textx mydsl.tx example.mydsl completions 2:30
I would go with something like:
textx-ls completions example.mydsl 2:30
That is, textx-ls instead of textx, to avoid clashing with the current textx command. This might be unified down the road. The command (completions) should come first and the parameters should follow. And finally, there is no need to specify the meta-model .tx file, since for all registered languages the meta-model is known from the file extension. The last point needs some working out.
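Just to make the shape concrete, a quick argparse sketch (the textx-ls entry point is only a proposal at this point):

import argparse

parser = argparse.ArgumentParser(prog="textx-ls")
subparsers = parser.add_subparsers(dest="command", required=True)

completions = subparsers.add_parser("completions")
completions.add_argument("model_file")  # e.g. example.mydsl
completions.add_argument("position")    # line:col, e.g. 2:30

args = parser.parse_args(["completions", "example.mydsl", "2:30"])
line, col = map(int, args.position.split(":"))
# the meta-model is looked up from the language registered for the .mydsl
# extension, so no .tx file appears on the command line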
If I remember correctly, you need to reload a metamodel if parsing a model returned errors. I am not sure whether that is a textX bug or normal behavior, but in the old extension I was loading the metamodel each time I needed to parse a model.
That shouldn't be done. Meta-models can be reused. If you had some issues in the past, that must have been some old bug that is resolved by now.
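For example, in plain textX (real API, placeholder file names) one meta-model instance can parse any number of models, including ones with errors:

from textx import metamodel_from_file
from textx.exceptions import TextXError

mm = metamodel_from_file("mydsl/grammar.tx")  # load the meta-model once

for path in ("ok.mydsl", "broken.mydsl", "another.mydsl"):
    try:
        model = mm.model_from_file(path)
    except TextXError as e:
        print(f"{path}: {e}")  # a failed parse does not require reloading mm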
I am also thinking about making textX-LS-core extensible with different plugins, but that should be a separate discussion since we are far from there (although we should set a good base :) ).
I agree. You can take a look at the textx-tools project, which is designed to be extended by language and generator projects/plugins.
If we have a metamodel referencing another metamodel only, without having access to its grammar (because the other metamodel comes from a library), then we will not be able to provide LSP support with the current design. Am I right?
Maybe we must redesign the referencing-metamodel feature...
The old textX-languageserver does not have support for that. I think it was developed before textX's multi meta-model feature.
This project is at the very beginning and we should discuss how we plan to support all of the new features (multi meta-model, scoping, etc.).
Our last idea was to have a combination of a Textxfile and Python decorators to register what we need (see https://github.com/textX/textX-LS/issues/4#issue-396315437). For more complex meta-model loading, maybe we can introduce another decorator and load all the necessary stuff with Python, e.g.:
from textx import metamodel_from_str   # import needed for the calls below
from textx_ls_core import metamodel    # the proposed (not yet existing) decorator

@metamodel()
def load_mm():
    # grammarA and grammarBWithImport are grammar strings defined elsewhere
    mm_A = metamodel_from_str(grammarA)
    mm_B = metamodel_from_str(grammarBWithImport, referenced_metamodels=[mm_A])
    return mm_B
or something similar. What do you think?
P.S.: The link to test_multi_metamodel_refs.py is not pointing to a valid location (http://textx.github.io/textX/stable/multimetamodel/#use-case-meta-model-referencing-another-meta-model).
The link seems to be already fixed (via https://github.com/textX/textX/issues/95).
Your work is very good. I appreciate the allocation of responsibilities to the individual artifacts: e.g., "The Textxfile is responsible for identifying the grammar of an extension XY". And that is exactly what you have done so far. Good work.
I was just wondering whether the referenced_metamodels feature somehow does not fit into our plans so far. The referenced_metamodels feature allows referencing objects of which we only know that they exist (not how they are concretely modeled). In order to let a metamodel A reference such elements from a metamodel B, you just have to specify the metamodel B (Python object) containing the class to be referenced, without needing access to its grammar.
The fact that a metamodel references another metamodel is not directly visible inside grammar A. Concretely, grammar A references some unknown element type. Trying to load the grammar (= load the metamodel) without specifying the metamodel B Python object (via the referenced_metamodels option) yields an error.
Thus, the referenced_metamodels feature is metamodel-centric and not grammar-centric.
Rationale for not using the grammar but the metamodel Python object of the referenced DSL: I think this feature (referenced_metamodels) is an essential feature of textX, since it allows reusing a metamodel from some third-party library (e.g. a data modeling DSL) and creating a custom DSL/metamodel which can reference elements from this third-party language (e.g. a data flow DSL). See the example at https://github.com/textX/textX/blob/master/tests/functional/test_metamodel/test_multi_metamodel_refs.py.
Note: If this referenced_metamodels feature causes problems for an LSP implementation, we should also think about how the referenced_metamodels feature could be changed. Let's leave this open for now and see how things evolve...
I am looking forward to seeing how the textX-LS project evolves.
Thank you, and textX is progressing quite fast thanks to Igor and you. I haven't had a chance to get my hands on the scoping and multi meta-model stuff yet, but I will have to before I start implementing support for it in our language server.
I am currently working on model validation for the built-in languages (Textxfile and textx for now) and I plan to finish it in a few days. Validation should work the same way independently of the language we are trying to validate; in other words, we will use different meta-models based on the file extension. If you have any ideas you want to try out, you are welcome to join this project as well. Since I won't be working until the beginning of March, let me know one of these days and I can explain how to set up and start the project, add you on Azure DevOps, etc.
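Regarding the validation part, this is roughly the idea (a sketch only, the names are placeholders), with the meta-model picked by file extension as described above:

from textx.exceptions import TextXError

def validate(model_path, metamodel):
    # returns a list of (line, col, message) diagnostics for model_path
    try:
        metamodel.model_from_file(model_path)
    except TextXError as e:
        return [(e.line, e.col, str(e))]
    return []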
Thank you. I am quite busy at the moment. So let's leave everything as it is.
First of all, I would like to change the configuration file name (currently .txconfig), since on Unix-like systems any file that starts with a dot is treated as a hidden file. Textxfile came to my mind, obviously because of Dockerfile and Jenkinsfile. :)
Below is the example of old .txconfig file:
I personally think that mydsl/lang.py:get_classes is a little bit confusing (we are pointing to a function inside a Python file, which is not a path like mydsl/simple.tx a few rows above), and since we already have the lang.py file, we can implement a Python API to register those meta-model custom arguments. So, the new configuration will contain just the paths,
and the lang.py will look something like the sketch below, but this will require a naming convention or a path to this file, so it can be dynamically loaded.
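Something along these lines (all names are placeholders and the exact registration API is open for discussion):

# all names below are placeholders; designing the actual registration API is
# exactly the open question
from textx_ls_core import language  # hypothetical module

class Point:
    # custom class for the Point rule of the grammar
    def __init__(self, parent=None, x=0, y=0):
        self.parent, self.x, self.y = parent, x, y

@language.classes
def get_classes():
    return [Point]

@language.object_processor("Point")
def process_point(point):
    # runs for every Point instance after a model is parsed
    ...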
Thoughts?