ansys / pymapdl

Pythonic interface to MAPDL
https://mapdl.docs.pyansys.com
MIT License

refactor: clean mapdl inprocess and move mute to MapdlCore #3220

Open Gryfenfer97 opened 6 days ago

Gryfenfer97 commented 6 days ago

Description

Cleanup of MapdlInProcess and light refactor

This PR is a follow-up to the suggestion made in #3198.

ansys-reviewer-bot[bot] commented 6 days ago

Thanks for opening a Pull Request. If you want to perform a review write a comment saying:

@ansys-reviewer-bot review

codecov[bot] commented 6 days ago

Codecov Report

Attention: Patch coverage is 40.00000% with 3 lines in your changes missing coverage. Please review.

Project coverage is 84.27%. Comparing base (7c1eb1e) to head (3adb0df). Report is 26 commits behind head on main.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main    #3220      +/-   ##
==========================================
- Coverage   86.63%   84.27%   -2.36%
==========================================
  Files          52       53       +1
  Lines        9550     9618      +68
==========================================
- Hits         8274     8106     -168
- Misses       1276     1512     +236
```
Gryfenfer97 commented 6 days ago

@germa89 Some comments:

koubaa commented 6 days ago

@Gryfenfer97 you can test it by mocking the backend in a test

germa89 commented 6 days ago

I don't know how to test this backend as it is only callable by MAPDL

We are deploying v251 (see #3210). I can also add the daily build (tag latest_daily) docker image. Then I guess you could test it with mapdl.input_strings or whatever you are using at the moment. We might need to design a testing framework for this (a set of helper functions for testing *PYTHON).

If you are strictly working with MAPDL (starting MAPDL without the -grpc flag), then I would set up something different. We can use the MAPDL CLI (ansys251 -b -i input.inp -o output.out). We will have to create a fixture that creates the input file from the commands given (you can add all the needed MAPDL or Python boilerplate) and runs it using the mentioned command. It should also read the output file to check the values.

If the values are meant to be stored in MAPDL memory, we can save an RST file and read it later. If the values are in the Python memory, we can just export the results as CSV or pickle files.

For this, I think the best is to run everything inside the MAPDL docker image; for that you will have to wait for the Ubuntu docker version (that task is on me then). The CentOS image might work too, but we had problems installing Python. You can try though.
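A rough sketch of how that fixture could look (the fixture and helper names are mine, just to illustrate; only the ansys251 command comes from above):

```python
import subprocess
from pathlib import Path

import pytest


@pytest.fixture
def run_mapdl_batch(tmp_path: Path):
    """Write the given MAPDL commands to a file, run them in batch mode,
    and return the text of the output file."""

    def _run(commands: str) -> str:
        input_file = tmp_path / "input.inp"
        output_file = tmp_path / "output.out"
        input_file.write_text(commands)
        # 'ansys251' is assumed to be on the PATH (e.g. inside the MAPDL docker image).
        subprocess.run(
            ["ansys251", "-b", "-i", str(input_file), "-o", str(output_file)],
            cwd=tmp_path,
            check=True,
        )
        return output_file.read_text()

    return _run
```

A test could then append its *PYTHON block to the commands it passes in, and either grep the returned output or read back an RST/CSV file written by the run to check the expected values.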

this is incompatible with the name being defined in MapdlCore

name is dependent on each implementation and it should be implemented in all of them. I don't see why the current implementation is incompatible. But I agree with you: let's define name completely in MapdlCore (name just returns _name), and then set _name in each subclass constructor.
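Something along these lines (just a sketch of the pattern; the constructor signatures are simplified, not the real ones):

```python
class MapdlCore:
    @property
    def name(self) -> str:
        # Defined once here; every subclass is responsible for setting
        # self._name in its own constructor.
        return self._name


class MapdlInProcess(MapdlCore):
    def __init__(self, backend):
        self._backend = backend
        self._name = "MapdlInProcess"


class MapdlGrpc(MapdlCore):
    def __init__(self, ip: str, port: int):
        self._name = f"GRPC_{ip}:{port}"
```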

the section_id is only used for the grpc backend but is referenced...

Assuming you mean _session_id, I'm Ok with the approach you propose.

@koubaa Mocking might not be the best for this case if we are expecting tight coupling; maintaining the mocked version might be quite a burden.

germa89 commented 6 days ago

Furthermore... the grpc server can be activated from inside MAPDL using an MAPDL command (ask Fred about it, and let me know).

germa89 commented 6 days ago

I guess we might have issues when executing an /input file if the gRPC server is not active. Can the gRPC server run at the same time as the in-memory backend?

Gryfenfer97 commented 6 days ago

I guess we might have issues when executing an /input file if the gRPC server is not active. Can the gRPC server run at the same time as the in-memory backend?

If having the gRPC server running doesn't block MAPDL, there is no reason to have a problem with both of them running at the same time.

germa89 commented 6 days ago

I guess we might have issues when executing an /input file if the gRPC server is not active. Can the gRPC server run at the same time as the in-memory backend?

If having the gRPC server running doesn't block MAPDL, there is no reason to have a problem with both of them running at the same time.

Good. Test that.

Remember also, that there is another...

There is another backend... the console one. I never used it, but I think it worked by directly typing the commands in the MAPDL terminal (sort of). We do not test it, nor develop against it, so it is going to be fun if we have to activate it.

koubaa commented 6 days ago

I guess we might have issues when executing an /input file if the gRPC server is not active. Can the gRPC server run at the same time as the in-memory backend?

Why would we have this issue? It can run at the same time, but I don't think we should, at least not only for the purpose of testing PyMAPDL.

koubaa commented 6 days ago

@koubaa Mocking might not be the best for this case if we are expecting tight coupling; maintaining the mocked version might be quite a burden.

I disagree. Here's how it could work:

def test_mapdl_c_backend():
    result_container = [""]

    class MockBackend:
        # Fake backend: always return the canned response stored in result_container.
        def run_command(self, command, verbose=False, mute=False):
            return result_container[0]

    mock_backend = MockBackend()
    mapdl = MapdlInProcess(mock_backend)

    result_container[0] = "ENTERING PREP7 PROCESSOR"
    mapdl.prep7()

I don't see why this would be hard to maintain, and it can give 100% code coverage of all the logic we add to the MapdlInProcess class.
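For instance, the mock could also record what it receives (a sketch, assuming the backend protocol only needs run_command), so the test can assert on what MapdlInProcess actually sends:

```python
def test_mapdl_in_process_sends_prep7():
    class RecordingBackend:
        def __init__(self):
            self.commands = []

        def run_command(self, command, verbose=False, mute=False):
            # Record every command so the test can assert on it afterwards.
            self.commands.append(command)
            return "ENTERING PREP7 PROCESSOR"

    backend = RecordingBackend()
    # MapdlInProcess imported from the module under test (import path omitted here).
    mapdl = MapdlInProcess(backend)
    mapdl.prep7()

    # The exact command string is an implementation detail; only check that a
    # PREP7 command reached the backend.
    assert any("PREP7" in cmd.upper() for cmd in backend.commands)
```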

germa89 commented 5 days ago

I guess we might have issues when executing an /input file if the gRPC server is not active. Can the gRPC server run at the same time as the in-memory backend?

Why would we have this issue? It can run at the same time, but I don't think we should, at least not only for the purpose of testing PyMAPDL.

How are we sending the /INPUT command if the gRPC server is not active? It must then be through batch mode (-b), which is OK, but not super convenient.

germa89 commented 5 days ago

@koubaa Mocking might not be the best for this case if we are expecting tight coupling; maintaining the mocked version might be quite a burden.

I disagree. Here's how it could work:

def test_mapdl_c_backend():
    result_container = [""]

    class MockBackend:
        # Fake backend: always return the canned response stored in result_container.
        def run_command(self, command, verbose=False, mute=False):
            return result_container[0]

    mock_backend = MockBackend()
    mapdl = MapdlInProcess(mock_backend)

    result_container[0] = "ENTERING PREP7 PROCESSOR"
    mapdl.prep7()

I don't see why this would be hard to maintain, and it can give 100% code coverage of all the logic we add to the MapdlInProcess class.

I think the method you proposed makes sense when the client does some postprocessing (the PyMAPDL postprocessing module, for instance) or preprocessing (PyHPS building the REST request, for instance) of the server response. We fake the server response, pre/post-process it normally, and compare it with something.

But I see no reason to fake the server response when there is not much pre/post-processing. Since we are testing the backend, we need to make sure we can send commands, and that the server can process them and give something correct back. What PyMAPDL is doing with the server response is not important in testing the backend.

This approach could be interesting for testing the post-processing module. We can have MAPDL command responses and make sure that PyMAPDL processes them correctly.
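For example (the listing and the tiny parser below are made-up stand-ins defined inside the test, not PyMAPDL's actual post-processing API):

```python
def parse_node_listing(listing: str) -> dict:
    """Stand-in parser: map node number -> value from a two-column listing."""
    values = {}
    for line in listing.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].isdigit():
            values[int(parts[0])] = float(parts[1])
    return values


def test_parse_node_listing():
    # Canned MAPDL-style response; no running MAPDL instance is needed.
    fake_response = """
    NODE   UX
       1   0.1000
       2   0.2000
    """
    assert parse_node_listing(fake_response) == {1: 0.1, 2: 0.2}
```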

koubaa commented 11 hours ago

I guess we might have issues when executing an /input file if the gRPC server is not active. Can the gRPC server run at the same time as the in-memory backend?

Why would we have this issue? It can run at the same time, but I don't think we should, at least not only for the purpose of testing PyMAPDL.

How are we sending the /INPUT command if the gRPC server is not active? It must then be through batch mode (-b), which is OK, but not super convenient.

How it is done is not the concern of mapdl_in_process.py. Its job is to serve the PyMAPDL frontend using whatever backend protocol it is given.
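In other words, something like this (a sketch; the protocol in this PR may have a different name or more methods):

```python
from typing import Protocol


class MapdlBackend(Protocol):
    """Minimal contract MapdlInProcess relies on. How a command is actually
    executed (gRPC, in-memory, console, batch, ...) is up to the backend."""

    def run_command(self, command: str, verbose: bool, mute: bool) -> str:
        ...
```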

koubaa commented 11 hours ago

@koubaa Mocking might not be the best for this case if we are expecting tight coupling; maintaining the mocked version might be quite a burden.

I disagree. Here's how it could work:

def test_mapdl_c_backend():
    result_container = [""]

    class MockBackend:
        # Fake backend: always return the canned response stored in result_container.
        def run_command(self, command, verbose=False, mute=False):
            return result_container[0]

    mock_backend = MockBackend()
    mapdl = MapdlInProcess(mock_backend)

    result_container[0] = "ENTERING PREP7 PROCESSOR"
    mapdl.prep7()

I don't see why this would be hard to maintain, and it can give 100% code coverage of all the logic we add to the MapdlInProcess class.

I think the method you proposed makes sense when the client does some postprocessing (the PyMAPDL postprocessing module, for instance) or preprocessing (PyHPS building the REST request, for instance) of the server response. We fake the server response, pre/post-process it normally, and compare it with something.

But I see no reason to fake the server response when there is not much pre/post-processing. Since we are testing the backend, we need to make sure we can send commands, and that the server can process them and give something correct back. What PyMAPDL is doing with the server response is not important in testing the backend.

This approach could be interesting for testing the post-processing module. We can have MAPDL command responses and make sure that PyMAPDL processes them correctly.

What do you mean by server? There is no server.