[x] As user, I can find all information about possible commands and expected input for the LSP console application in the README of the project (currently working, but will need to be extended for new commands).
[x] As developer, I can trigger a build in GitHub actions that compiles the current version of the Mo|E server via a commit to the main branch and runs a test suite, so that I can be sure that my changes do not introduce errors. (kinda working, but no tests with actual models)
[x] As developer, I can run a test suite through gradle that contains unit and/or integration tests for all major features, so that I have examples for how the project should be used and can quickly find bugs that I introduced.
I can't remember where I heard this tip, but I find that a good guideline is that each unit test should be a hypothesis about errors in your code, something like: "function `f` will fail if I call it with a list that is not fully sorted as its first parameter." You should be able to formulate such a hypothesis as a short sentence for each test you write, and actually write that hypothesis down as a comment in the test code. This helps you to define meaningful test cases and to quickly identify which new hypotheses you haven't covered yet.
Suggested test hypotheses:
- Mo|E will fail to report non-existing top-level classes.
- Mo|E will fail to report a non-existing class within an existing top-level class.
- Mo|E will fail to report the use of a variable that was never defined.
- The Mo|E server process will not shut down when it receives a shutdown request.
- When two instances of the Mo|E server are started and initialized on the same machine, they will share an OMC instance (which may lead to unexpected behavior).
- ... and many more regarding much more fine-grained issues within single methods in the code.
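The hypothesis style above can be sketched as a unit test. This is a minimal illustration, not the project's actual test code: `load_model` and `known_classes` are hypothetical stand-ins for the real Mo|E/OMC call, and the docstring carries the hypothesis verbatim.

```python
import unittest

def load_model(class_name, known_classes):
    # Hypothetical stand-in for the Mo|E loadModel call; only the
    # test structure matters here, not the lookup logic.
    if class_name.split(".")[0] not in known_classes:
        raise ValueError(f"class {class_name} not found")

class LoadModelHypotheses(unittest.TestCase):
    def test_reports_missing_top_level_class(self):
        """Hypothesis: Mo|E will fail to report non-existing top-level classes."""
        with self.assertRaises(ValueError):
            load_model("FooBar", known_classes={"Modelica"})
```

Each test then documents exactly one falsifiable claim about the code, which makes gaps in coverage easy to spot.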
[x] As Modelica programmer, I can use an LSP client to check a class in a Modelica-Project for errors, so I can identify bugs quickly and without switching tools. (#19)
We do not need the full feature set here, but only the following sub-tasks:
- Report errors as `Diagnostic`, which carries additional information such as the file name and line numbers. NOTE: I previously assumed that `ResponseError` was the correct class for this, but this seems to be wrong.
- Report errors that can occur during `loadModel`:
  - loading a non-existing top-level class, such as `FooBar`
  - loading a non-existing class within an existing top-level class, such as `Modelica.FooBar`
  - syntax errors, such as a missing semicolon
- Report errors that can occur during `checkModel`:
  - variable resolution errors, such as using a variable in an equation that has no definition
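For reference, the `Diagnostic` shape from the LSP specification can be sketched as plain JSON. The file URI and message text below are made-up examples; only the structure (zero-based `range`, `severity`, per-file `textDocument/publishDiagnostics` notification) comes from the spec.

```python
def make_diagnostic(line, start_col, end_col, message):
    """Build an LSP Diagnostic payload; positions are zero-based per the spec."""
    return {
        "range": {
            "start": {"line": line, "character": start_col},
            "end": {"line": line, "character": end_col},
        },
        "severity": 1,  # DiagnosticSeverity.Error
        "source": "Mo|E",
        "message": message,
    }

# Diagnostics reach the client grouped per file via a notification:
notification = {
    "jsonrpc": "2.0",
    "method": "textDocument/publishDiagnostics",
    "params": {
        "uri": "file:///tmp/FooBar.mo",  # example URI
        "diagnostics": [make_diagnostic(3, 0, 10, "class FooBar not found")],
    },
}
```

This also shows why `ResponseError` was the wrong class: a `ResponseError` answers a single failed request, while diagnostics are pushed as a notification independent of any request.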
[x] As user, I can use the LSP console application to list all classes that are defined in a Modelica file, so that I can choose freely which of these classes I want to investigate further with other LSP commands. (This is related to the last checkbox in #19.)
[x] As user, I can check the correctness of all classes within the CSchoel/hh-modelica project with `loadModel` and `checkModel` (if applicable). (part of #20, mostly working but some errors persist and needs to be checked with error reporting enabled)
[x] As user, I can use the LSP console application to shut down the server gracefully, so that all processes on the client and server side are stopped without any error messages or warnings. (Ignoring warnings does not count 😜)
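Graceful shutdown in LSP is a two-step handshake, which is worth keeping in mind when debugging leftover processes: the client sends a `shutdown` request that the server must answer (with result `null`), and only the subsequent `exit` notification terminates the process. A sketch of the raw messages (the `id` value is arbitrary):

```python
import json

shutdown_request = {"jsonrpc": "2.0", "id": 42, "method": "shutdown"}
shutdown_response = {"jsonrpc": "2.0", "id": 42, "result": None}
exit_notification = {"jsonrpc": "2.0", "method": "exit"}  # notification: no id, no reply

def frame(message):
    """Frame a JSON-RPC message with the Content-Length header used over stdio."""
    body = json.dumps(message)
    return f"Content-Length: {len(body)}\r\n\r\n{body}"
```

A server that exits on `shutdown` alone, or never exits after `exit`, will produce exactly the client-side warnings this story wants to eliminate.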
[x] As developer, I can look up the additional JSON-RPC calls provided by the Mo|E server in a precise but human-readable format, so that I can easily implement Mo|E clients without needing to look at the code.
[x] As Modelica programmer, I can use the experimental VS code client to request and receive model errors and code completion events, so that I can get the benefits of Mo|E by using standard commands in a familiar text editor.
[x] As Modelica programmer, I can request an HTML rendering of the Documentation of a Modelica class via LSP, so I can check my own documentation for errors and explore unknown library classes (#22)
For this prototype it is ok, if the documentation is simply reported as an HTML string upon a custom request (no onHover or anything).
[x] As Modelica programmer, I can use a new `testClass` command via LSP that checks "everything that can be checked" of a Modelica class, i.e. it uses `loadModel`, `checkModel`, and `instantiateModel` successively, but only if this is sensible for the given class type (e.g. `partial` models cannot be instantiated, `package` classes can probably(?) not be checked), so that I just need one command to do all checks without needing to remember how far a particular class can be checked.
We need small issues for the following tasks.