iqm-finland / KQCircuits

KLayout Python library for integrated quantum circuit design.

Implement simulation object validation checks #94

Closed: rmoretti9 closed this pull request 1 month ago

rmoretti9 commented 1 month ago

This pull request addresses issue https://github.com/iqm-finland/KQCircuits/issues/91

cla-bot[bot] commented 1 month ago

Thank you for your pull request and welcome to our community.

We require contributors to sign our Contributor License Agreement (CLA), and it seems that the following contributors have not yet signed it: @rmoretti9.

In order for us to review and merge your code, please sign the CLA at meetiqm.com.

Once you have signed the CLA, write a comment here and we'll verify. Please note that it may take some time for us to verify it.

rmoretti9 commented 1 month ago

I have now signed the CLA

qpavsmi commented 1 month ago

Oh hi :D Thanks for the submission! It will take me some time to receive confirmation that your CLA form went through, but in the meantime I will go through the code and test that it works.

rmoretti9 commented 1 month ago

I introduced the function `validate_simulation` in `klayout_package/python/kqcircuits/simulations/export/simulation_export.py`, so that both `export_elmer` and `export_ansys` perform some checks before doing any other operations. For the moment, I implemented the checks suggested in the "Concrete validation scenarios to start with" part of the issue description, and tested them with the script `klayout_package/python/scripts/simulations/swissmon_fluxline_sim.py` by commenting out different parts of `SwissmonFluxlineSim` to trigger the error messages.

I'm currently trying to figure out how to simplify the introduction of new validation checks and make them less prone to nested if statements. For instance, `port_and_simulation_type_compatibility` and `flux_integration_layer_exists_if_needed` are fairly simple functions, but perhaps not that intuitive at first.
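To make the intent concrete, here is roughly the shape I'm aiming for: a flat registry of check functions that `validate_simulation` iterates over (a simplified sketch, not the exact code in this pull request; the `ports_exist_if_required` check and the `requires_ports` attribute are hypothetical):

```python
# Simplified sketch of a flat validation registry; not the exact code in this PR.

class ValidateSimError(Exception):
    """Raised when a simulation/solution configuration fails a validation check."""


def ports_exist_if_required(simulation, solution):
    # Hypothetical check: some solution types need at least one port defined.
    if getattr(solution, "requires_ports", False) and not simulation.ports:
        raise ValidateSimError(f"Simulation '{simulation.name}' has no ports.")


# Adding a new validation check means appending one function here, with no nested ifs.
VALIDATION_CHECKS = [
    ports_exist_if_required,
    # port_and_simulation_type_compatibility, flux_integration_layer_exists_if_needed, ...
]


def validate_simulation(simulation, solution):
    """Run every registered check; export_ansys and export_elmer call this first."""
    for check in VALIDATION_CHECKS:
        check(simulation, solution)
```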

qpavsmi commented 1 month ago

@cla-bot check

cla-bot[bot] commented 1 month ago

The cla-bot has been summoned, and re-checked this pull request!

rmoretti9 commented 1 month ago

Thank you @qpavsmi, I think I have implemented your suggestions now, but I'm a bit confused about the unit tests since I have very little experience with them. I made a test module in `KQCircuits\tests\simulations\simulation_validate\test_simulation_validate.py`, but I'm not sure whether it works properly or not. When I run `pytest --cov --cov-report term-missing -rf`, this test fails for `EmptySimulation` because it tries to match it with `AnsysHfssSolution`.

qpavsmi commented 1 month ago

> Thank you @qpavsmi, I think I have implemented your suggestions now, but I'm a bit confused about the unit tests since I have very little experience with them. I made a test module in `KQCircuits\tests\simulations\simulation_validate\test_simulation_validate.py`, but I'm not sure whether it works properly or not. When I run `pytest --cov --cov-report term-missing -rf`, this test fails for `EmptySimulation` because it tries to match it with `AnsysHfssSolution`.

Took me some time to realise this, but it's actually not your test that's failing: your pull request breaks one of the existing tests, `test_export_produces_output_files`. It creates a completely empty simulation (no ports either) and checks that it can be exported without causing errors. If the solution type is not specified, it defaults to `AnsysHfssSolution`. But now you validate that there must be a port if such a solution type is used, hence the test fails.

It's best to modify this test so that it sets the solution type to something that passes this check. One way to do that is to fix the fixtures we defined for simulation tests. Notice that the failing unit test takes `perform_test_ansys_export_produces_output_files` as an argument and then calls it like a function. Where does that come from? In `tests/conftest.py` we define the fixture, which is a function that returns another function, `perform_test_ansys_export_produces_output_implementation`.

As this is getting very complicated, I'll just say exactly what to do to fix this test. We would like to be able to set which solution type to use by calling `perform_test_ansys_export_produces_output_files(EmptySimulation, ansys_solution=AnsysSolution())` in `test_export_produces_output_files`. To do that, we modify the fixture in `tests/conftest.py`. The fixture itself takes other fixtures as arguments: `tmp_path` is a built-in fixture that gives a path to a temporary directory, and `get_simulation` is defined above it. But we should instead introduce the new `ansys_solution` argument into the inner function `perform_test_ansys_export_produces_output_implementation` that the fixture returns. So that piece of code would look something like:

```python
@pytest.fixture
def perform_test_ansys_export_produces_output_files(tmp_path, get_simulation):
    def perform_test_ansys_export_produces_output_implementation(cls, ansys_solution=None, **parameters):
        simulation = get_simulation(cls, **parameters)
        # Pass a (simulation, solution) pair when a solution type is given,
        # otherwise fall back to the previous behaviour with the bare simulation.
        bat_filename = export_ansys([(simulation, ansys_solution) if ansys_solution else simulation], path=tmp_path)
        assert Path(bat_filename).exists()

    return perform_test_ansys_export_produces_output_implementation
```

After this change the test should pass. Consider making a similar change for the existing elmer test fixtures that we have.
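For reference, a sketch of what the analogous elmer fixture could look like (assuming that `export_elmer`, like `export_ansys`, accepts `(simulation, solution)` pairs and returns the path of the generated script):

```python
@pytest.fixture
def perform_test_elmer_export_produces_output_files(tmp_path, get_simulation):
    def perform_test_elmer_export_produces_output_implementation(cls, elmer_solution=None, **parameters):
        simulation = get_simulation(cls, **parameters)
        # Same pattern as the ansys fixture: pair the simulation with a solution
        # only when one is given.
        script_filename = export_elmer(
            [(simulation, elmer_solution) if elmer_solution else simulation], path=tmp_path
        )
        assert Path(script_filename).exists()

    return perform_test_elmer_export_produces_output_implementation
```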

I would recommend studying the fixtures, since that could help you structure the tests you are writing better. Generally it's good practice to write each test function in such a way that it can be run independently, which means it sets up everything needed for the test (and tears it down if needed). Of course it gets annoying when the same setup is copy-pasted for each unit test. This is where fixtures are useful, and pytest makes sure that the fixture is reconstructed from scratch for every unit test so that tests do not depend on their order of execution. So in your example the initialisation of MockSimulation could be made into a fixture.
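For example, a minimal sketch (assuming `MockSimulation` can be constructed without arguments; adjust to whatever it actually needs):

```python
import pytest


@pytest.fixture
def mock_simulation():
    # A fresh MockSimulation for every test keeps tests independent of each
    # other and of their execution order.
    return MockSimulation()


def test_some_validation_check(mock_simulation):
    # pytest injects the fixture when it is named as an argument.
    assert mock_simulation is not None
```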

You will also need to test the negative cases: with this simulation and that solution, I expect validation to fail. You can do this using a `with pytest.raises(ValidateSimError) as e:` block; google for examples or see `test_chip_coordinate_conversion.py`.
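A minimal sketch of a negative test, reusing the `mock_simulation` fixture sketched above (the import paths and the `validate_simulation(simulation, solution)` signature are assumptions based on this pull request; adjust them to the actual module layout):

```python
import pytest

# Import paths are assumptions; adjust to the actual module layout.
from kqcircuits.simulations.export.simulation_export import ValidateSimError, validate_simulation
from kqcircuits.simulations.export.ansys.ansys_solution import AnsysHfssSolution


def test_hfss_solution_requires_ports(mock_simulation):
    # A simulation without ports should be rejected for an HFSS-type solution.
    with pytest.raises(ValidateSimError) as e:
        validate_simulation(mock_simulation, AnsysHfssSolution())
    # Asserting on the message pins down which check fired.
    assert "port" in str(e.value)
```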

qpavsmi commented 1 month ago

Remember to run `python -m black -l 120 -t py39 .` once you are close to being done.

rmoretti9 commented 1 month ago

Hi @qpavsmi, thanks for the useful suggestions. I fixed the `perform_test_ansys_export_produces_output_files` fixture and implemented a few tests (two for every simulation validation check). Let me know if I can improve it further.

qpavsmi commented 1 month ago

I think after addressing this round of suggestions we can approve and get this merged :)

rmoretti9 commented 1 month ago

> I think after addressing this round of suggestions we can approve and get this merged :)

Thanks @qpavsmi, I squashed my commits and marked this Pull Request as ready for review

qpavsmi commented 1 month ago

> Thanks @qpavsmi, I squashed my commits and marked this Pull Request as ready for review

I'm not sure if it's visible to you, but there are many pending review points that I outlined based on the code changes: [screenshot of pending review comments]

These would be good to address before we merge

rmoretti9 commented 1 month ago

> > Thanks @qpavsmi, I squashed my commits and marked this Pull Request as ready for review
>
> I'm not sure if it's visible to you, but there are many pending review points that I outlined based on the code changes: [screenshot of pending review comments]
>
> These would be good to address before we merge

It seems like I cannot access these comments (or I don't know how to do that), but I modified the code based on the screenshot's suggestions, except for the one about adding the empty new line; I'm not sure where it should be added.

qpavsmi commented 1 month ago

> It seems like I cannot access these comments (or I don't know how to do that), but I modified the code based on the screenshot's suggestions, except for the one about adding the empty new line; I'm not sure where it should be added.

You should be able to see the suggestions now I think: https://github.com/iqm-finland/KQCircuits/pull/94#pullrequestreview-2094009560

rmoretti9 commented 1 month ago

I applied all the suggested changes. Maybe the usage of `@pytest.mark.parametrize` in the tests is a bit redundant as it is now, but it allows adding new tests quite easily.
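For reference, the pattern is roughly the following (a simplified sketch; the simulation and solution names are placeholders, not the exact cases in the test module):

```python
import pytest


@pytest.mark.parametrize(
    "simulation_cls, solution_cls, should_pass",
    [
        # One row per validation scenario; new checks just add rows here.
        (MockSimulationWithPorts, AnsysHfssSolution, True),
        (MockSimulationWithoutPorts, AnsysHfssSolution, False),
    ],
)
def test_simulation_validate(simulation_cls, solution_cls, should_pass, get_simulation):
    simulation = get_simulation(simulation_cls)
    if should_pass:
        validate_simulation(simulation, solution_cls())  # should not raise
    else:
        with pytest.raises(ValidateSimError):
            validate_simulation(simulation, solution_cls())
```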