BlueBrain / BluePyOpt

Blue Brain Python Optimisation Library
https://bluepyopt.readthedocs.io/en/latest/

Arbor cable cell exporter and backend #393

Closed lukasgd closed 1 year ago

lukasgd commented 2 years ago

I'm working on an Arbor cable cell exporter for BluePyOpt optimized cell models (related to https://github.com/arbor-sim/arbor/issues/1839) and looking for feedback here.

I've added a create_acc module analogous to create_hoc in bluepyopt.ephys. It works with the simplecell and l5pc examples. Using generate_acc.py in the corresponding example folder, you can output a cell with optimized parameters to three files (plus the morphology): a JSON file with meta-information that references the morphology file, and two ACC files - one for the decor and one for the label dictionary. The output of running

$ BluePyOpt/examples/simplecell/generate_acc.py -o <output-dir>

is

$ tree <output-dir>
<output-dir>
├── simple_cell_decor.acc
├── simple_cell.json
├── simple_cell_label_dict.acc
└── simple.swc

0 directories, 4 files

with file contents

Once a cable cell is constructed in Arbor, it's possible to write it to ACC (arbor.write_component) and visually inspect it in the Arbor GUI. For the l5pc example, the ACC output is in l5pc_full_no_mechs.zip (mechanisms removed from the decor) and looks as follows:

(screenshot: the full l5pc cell rendered in the Arbor GUI)
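For reference, a minimal sketch of such an ACC round trip with Arbor's Python API (the decor file name is taken from the simplecell export above; the round-trip output name is hypothetical):

import arbor

# Load a serialized component from ACC and write it back out, e.g. for
# inspection in the Arbor GUI. load_component returns a cable_component
# whose payload is accessible via .component.
decor = arbor.load_component("simple_cell_decor.acc").component
arbor.write_component(decor, "simple_cell_decor_roundtrip.acc")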

I'm also working on Arbor simulation examples that use these two exported cells.

For the code below, I'd like to mention that I slightly refactored create_hoc by adding two new function calls inside the create_hoc function, so that code from the existing module can be reused in create_acc (using _generate_template_params) and control flow is similar between the two (create_hoc._read_template/create_acc._read_templates). The logic/instructions in create_hoc are unchanged.

The tests in test_create_hoc.py still succeed and I've added analogous ones for the JSON/ACC output in test_create_acc.py.

If you have feedback, I'm interested to hear from you.

wvangeit commented 2 years ago

Ok, thank you, that already looks nice. One thing I think would be useful is to have a test that actually runs arbor on the generated code and compares the output with a NEURON simulation.

lukasgd commented 2 years ago

Thank you @wvangeit for the review! The tests should pass now. Please let me know if there are open issues. I'll have to prepare an example comparing Arbor with NEURON simulation output (analogous to the Allen tutorial mentioned above).

wvangeit commented 2 years ago

@AurelienJaquier @DrTaDa , after merging master into this PR, some tests failed. But I think it's something we recently added?

DrTaDa commented 2 years ago

@wvangeit Indeed, I hardcoded a value for a CMA test, but it seems that I should use an rtol or atol in the test instead of an equality: diff: 0.10525059698894752 != 0.10525059698894745
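A tolerance-based check along those lines could use numpy.testing, for instance (a sketch with the values from the diff above):

import numpy.testing as npt

# Compare with a relative tolerance instead of strict equality.
npt.assert_allclose(0.10525059698894752, 0.10525059698894745, rtol=1e-12)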

lukasgd commented 2 years ago

> Ok, thank you, that already looks nice. One thing I think would be useful is to have a test that actually runs arbor on the generated code and compares the output with a NEURON simulation.

Arbor simulation results are validated against those obtained from Neuron for the simple cell example in simplecell_arbor.ipynb. It demonstrates how to run protocols on a cable cell in Arbor, analyzes the obtained voltage traces with eFEL and Arbor's internal spike detector, and compares them to Neuron. They agree well both with and without axon replacement. Note that the simulations with axon replacement depend on features that are not yet in an official Arbor release.
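As an illustration of the eFEL side of that comparison, a minimal sketch with a synthetic trace (the notebook of course uses the recorded Arbor/Neuron traces; the getFeatureValues interface shown is the eFEL API current at the time):

import efel
import numpy as np

# Build a toy voltage trace with a single 1 ms depolarization to +15 mV.
t = np.linspace(0.0, 200.0, 8001)             # time in ms
v = -65.0 + 80.0 * (np.abs(t - 120.0) < 0.5)  # membrane voltage in mV

trace = {'T': t, 'V': v, 'stim_start': [100.0], 'stim_end': [150.0]}
features = efel.getFeatureValues([trace], ['Spikecount'])
print(features[0]['Spikecount'])  # -> [1]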

lukasgd commented 2 years ago

In f2ca1ed, I've added support for inhomogeneous parameters, which was the last missing feature of the cable cell exporter that I'm aware of.

> One thing I think would be useful is to have a test that actually runs arbor on the generated code and compares the output with a NEURON simulation.

What do you think about examples/simplecell/simplecell_arbor.ipynb? Would you like to have tests following this approach for this and further mechanisms? And are there any further features needed for this PR?

lukasgd commented 2 years ago

In e68599075fa, I've added a functional test based on the deviation of voltage traces in the L1-norm between Arbor and Neuron (relative error < 5% with dt = 0.025 gives a pass). It runs on the unicompartmental simple-cell morphology, inherits all mechs/params from the L5PC example and is run both without and with axon replacement. The setup is analogous to L5PC.ipynb, so there's a notebook where voltage traces can be inspected visually. It's parameterized with papermill and, using the associated script, it's also possible to run this with the L5PC mechs/params from other regions re-mapped to the soma, or with subselections thereof (e.g. to narrow down the source of an error). I can do some of these runs and upload them to a separate repo.

The notebook also gives a better overview of how Arbor could fit the object model of BluePyOpt (without going into all details, for brevity).

Finally, there's a small change to the Arbor imports suggested by @thorstenhater.

lukasgd commented 2 years ago

I've uploaded a set of validation runs here and added some analysis.

lukasgd commented 2 years ago

So, maybe I should add a bit of explanation regarding the validation runs and analysis.

The filename schema for the output notebooks is l5pc_soma_arbor_<region>[_<mech1>_<mech2>...].ipynb, where <region> describes the L5PC section from which mechanisms were taken and remapped to the soma of the test morphology and the <mech> options specify which mechs were used from that region (if not present, all mechs were used). Global, all and region-specific parameters as well as the pas channels were used in every run. If you're unsure what was used in a particular case, it's possible to open the corresponding notebook and look at cell [2]. In the referenced folder, you'll find validation runs with all subsets of mechs up to size 3 as well as the full set of mechs for each region.

The analysis notebook classifies these runs as OK vs. ERROR depending on whether the L1-norm of the difference between the Arbor and Neuron voltage traces exceeds 5 % of the area between the Neuron voltage trace and its minimum. A third outcome, N/A, occurs exclusively when CVode in Neuron fails to advance time integration and the notebook raises an exception (verified at the end of the analysis notebook; this happens predominantly with Ca-mechanisms).
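A sketch of that OK/ERROR criterion on a common time grid (assuming equally sampled traces as numpy arrays):

import numpy as np

def rel_l1_error(t, v_arbor, v_neuron):
    """L1-norm of the trace difference, normalized by the area between
    the Neuron trace and its minimum (the criterion described above)."""
    diff = np.trapz(np.abs(v_arbor - v_neuron), t)
    ref = np.trapz(v_neuron - v_neuron.min(), t)
    return diff / ref

# A run is classified as OK if rel_l1_error(t, v_arbor, v_neuron) < 0.05.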

In the analysis notebook, there is one plot per region that lists the L1-metric as a function of the set of mechanisms used for both standard dt (0.025 ms, blue) and fine dt (0.001 ms, orange), sorted from high to low (high error on the left). We can see that this metric is < 1 % for all subsets of mechanisms from the all, apical and basal regions and improves nicely with finer time stepping. These voltage traces are highly consistent.

The regions somatic and axonal are more complicated. Whereas all somatic mechs reach a level < 5 % with fine time stepping, for the axonal region there are a few configurations where the metric is close to 1 irrespective of the time step (even though the complete axonal configuration only has a deviation of ~2 %). To narrow the source down to individual mechanisms, the evaluated configurations are ranked and a mean rank (normalized to [0,1]) is calculated for each mechanism per region/dt setting from the configurations it is involved in. This is listed in the data frames under regional_rank_stats. You can see that, e.g. for the axonal region, K_Tst has the lowest mean rank in both dt settings and in fact it can be found in all of the configurations with high deviation (the second lowest is K_Pst).
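In pseudo-data form, the mean-rank statistic works roughly like this (illustrative values, not the actual results):

import numpy as np

# Rank configurations by L1-metric, high error first, then score each
# mechanism by the normalized mean rank of the configurations it is in.
configs = [({"K_Tst"}, 0.98), ({"K_Tst", "K_Pst"}, 0.95),
           ({"K_Pst"}, 0.40), ({"K_Tst", "SKv3_1"}, 0.02)]
ranked = sorted(configs, key=lambda c: c[1], reverse=True)
n = len(ranked)
mean_rank = {
    mech: np.mean([i / (n - 1) for i, (ms, _) in enumerate(ranked) if mech in ms])
    for mech in set().union(*(ms for ms, _ in ranked))
}
print(mean_rank)  # the most implicated mechanism has the lowest mean rank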

If you're interested in the specific behavior of a run, you can open the corresponding notebook and inspect the voltage traces, e.g. l5pc_soma_arbor_axonal_Ca_LVAst_K_Tst.ipynb. You'll see that the Arbor/Neuron voltage traces agree closely up to the point where the membrane voltage reaches a certain value, beyond which they diverge: for a short pulse (bAP) Arbor "depolarizes" to over +50 mV while Neuron repolarizes relatively quickly, whereas for longer pulses (Step1 and 3) Neuron continues to "depolarize" to > +100 mV while Arbor's membrane voltage remains bounded. These may not represent physiological conditions, but it is still noteworthy that the same behavior can be observed when using K_Tst alone. Another case where this happens, but the voltage traces sometimes rejoin, is the combination of Ca_HVA, K_Tst and NaTa_t. On the other hand, in all cases where both K_Tst and SKv3_1 appear (e.g. l5pc_soma_arbor_axonal_Ca_LVAst_SKv3_1_K_Tst.ipynb), the traces never reach a depolarization level where they diverge, but stay significantly below 0 mV and actually agree closely.

Do you have an idea what causes this effect for K_Tst? We've tested different discretization policies and using the Nernst method for the K-reversal-potential in Arbor as well as turning off adaptive time stepping in Neuron - none of them made it disappear. I'd also be interested in any other feedback (e.g. regarding the other mechs).

codecov-commenter commented 2 years ago

Codecov Report

Merging #393 (70fddf8) into master (38ba649) will decrease coverage by 3.03%. The diff coverage is 75.63%.

@@            Coverage Diff             @@
##           master     #393      +/-   ##
==========================================
- Coverage   89.69%   86.66%   -3.04%     
==========================================
  Files          46       50       +4     
  Lines        3397     4281     +884     
==========================================
+ Hits         3047     3710     +663     
- Misses        350      571     +221     
Impacted Files                                    Coverage Δ
bluepyopt/deapext/optimisations.py                92.50% <ø> (ø)
bluepyopt/deapext/optimisationsCMA.py             81.75% <ø> (ø)
bluepyopt/ephys/parameters.py                     88.46% <ø> (ø)
bluepyopt/ephys/protocols.py                      61.37% <33.84%> (-22.08%) ↓
bluepyopt/ephys/simulators.py                     73.85% <37.50%> (-13.11%) ↓
bluepyopt/ephys/stimuli.py                        87.34% <51.85%> (-7.90%) ↓
bluepyopt/ephys/locations.py                      86.47% <69.56%> (-8.46%) ↓
bluepyopt/ephys/acc.py                            77.14% <77.14%> (ø)
bluepyopt/ephys/models.py                         87.39% <85.41%> (+1.02%) ↑
bluepyopt/ephys/parameterscalers/acc_iexpr.py     85.71% <85.71%> (ø)
... and 8 more


lukasgd commented 2 years ago

I've added support for cell optimization with Arbor in 6aa15de.

In the L5PC example, to determine a single location for the extra recordings in the bAP protocol, I've used the Arbor GUI. In particular, I'm restricting the set of locations distal to the proximal end of the apical region to those that are proximal of a specific apical branch (bright green point below). This gives unique locations for the recordings of dend1 (red) and dend2 (violet), shown below without and with axon replacement.

(figure: bAP extra-recording locations on the l5pc morphology)

(figure: the same locations with axon replacement)
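For readers unfamiliar with Arbor's label DSL, a selection of this kind can be expressed with region/locset expressions roughly as follows (a sketch; the branch id 42 and the label names are hypothetical - the actual point was picked in the Arbor GUI):

import arbor

labels = arbor.label_dict({
    # apical region by SWC tag
    'apic': '(tag 4)',
    # everything distal of the proximal end of the apical region,
    # intersected with everything proximal of a chosen apical branch point
    'candidates': '(intersect'
                  ' (distal-interval (proximal (region "apic")))'
                  ' (proximal-interval (location 42 0.0)))',
})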

lukasgd commented 2 years ago

Commit c425101 allows using external catalogues (i.e. ones other than the built-in catalogues) in Arbor exports/optimizations. This makes it possible, e.g., to build the catalogue at examples/thalamocortical-cell/mechanisms by running arbor-build-catalogue tc . in that folder (after making some changes to the NMODL files, cf. the guidelines). The tc-catalogue.so compiled in this manner can be loaded in BluePyOpt using ArbSimulator(ext_catalogues={'tc': 'mechanisms/'}) when working in the examples/thalamocortical-cell folder.
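In other words (a sketch, assuming the working directory is examples/thalamocortical-cell and tc-catalogue.so has been built as described):

from bluepyopt import ephys

# Load the externally compiled 'tc' catalogue alongside Arbor's built-ins;
# the mapping is catalogue name -> mechanisms folder, as described above.
sim = ephys.simulators.ArbSimulator(ext_catalogues={'tc': 'mechanisms/'})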

To run the full thalamo-cortical cell example, support for custom ASC tags in Arbor morphology loaders is required (to be handled under https://github.com/arbor-sim/arbor/issues/1981), so this example is left for a future addition.

lukasgd commented 2 years ago

Thanks for the review. To the first comment - thank you for your interest in the implementation of the JSON/ACC exporter.

I think if you really want to understand the internals, it will be helpful to study the Arbor docs in depth. You need to know about cable cells and how they're constructed from labels, the decor and the morphology. Mechanism catalogues are also discussed there. I believe the tutorials section and the concepts overview will be helpful as well. Feel free to reach out to the Arbor developers on Gitter with questions.

For create_acc, we can add some debug logging for the local variables you cite if you'd like, though at the cost of a lot of noise when running cell optimization. But things won't become simpler by introducing classes (rather, the amount of code will explode), nor will the fundamental data structures change (a class instance is a dict, and the many functions already ensure encapsulation). Changing the create_acc implementation also won't make a difference to users, as these are library internals (I rather fear I'll add unnecessary clutter to the module). The variable names may not be completely straightforward, but they come from the shared start of create_acc and create_hoc and should, therefore, be more easily accessible than if I strictly went with Arbor terminology.

I've factored out ACC descriptor generators where meaningful: locations, parameter-scalers, morphology tags/labels, stimuli descriptors.

> Modern IDEs and static code analysis tools can detect possible bugs before runtime.

This is applicable to statically typed languages (like C++), not Python which is dynamically typed. In Python a misconstructed program will just crash at runtime or deliver faulty results irrespective of how many functions/classes there are.

anilbey commented 2 years ago

> This is applicable to statically typed languages (like C++), not Python which is dynamically typed. In Python a misconstructed program will just crash at runtime or deliver faulty results irrespective of how many functions/classes there are.

That is no longer true. Even pylint detects type errors statically. Modern IDEs will underline the code where you are passing an object of a different type.

anilbey commented 2 years ago

> I think if you really want to understand the internals, it will be helpful to study the Arbor docs in depth. You need to know about cable cells and how they're constructed from labels, the decor and the morphology. Mechanism catalogues are also discussed there. I believe the tutorials section and the concepts overview will be helpful as well. Feel free to reach out to the Arbor developers on Gitter with questions.

Thanks for the info, it will be useful.

> For create_acc, we can add some debug logging for the local variables you cite if you'd like, though at the cost of a lot of noise

I don't agree here. Just because we need to produce a nested JSON in the end, we don't have to treat that data as nested dictionaries in the program. We can have our Python objects represent the logic and make them produce the nested JSON at the end. This is called serialization.
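A toy illustration of that idea (hypothetical types, not BluePyOpt's actual ones): typed objects inside the program, nested JSON only at the serialization boundary.

import json
from dataclasses import dataclass, asdict

@dataclass
class MechParam:
    name: str
    value: float

@dataclass
class Region:
    label: str
    params: list

# The program manipulates typed objects; nesting appears only on export.
region = Region(label='soma', params=[MechParam('gnabar', 0.12)])
print(json.dumps(asdict(region), indent=2))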

> (a class instance is a dict and the many functions already ensure encapsulation)

An object constructor fails if you pass it a parameter that is not defined as an argument. Dicts never fail! That's the problem. One has to debug all the time to figure out what is going on in the nested dicts. With classes, you don't even need to run the code to tell the structure.

> Changing the create_acc implementation also won't make a difference to users as these are library internals

But in the future, I am afraid, we (the maintainers) will have to maintain this code and therefore introduce updates to it. We cannot ask you to help us each time.

lukasgd commented 2 years ago

> For create_acc, we can add some debug logging for the local variables you cite if you'd like, though at the cost of a lot of noise

> I don't agree here. Just because we need to produce a nested JSON in the end, we don't have to treat that data as nested dictionaries in the program. We can have our Python objects represent the logic and make them produce the nested JSON at the end. This is called serialization.

The create_acc module already contains types (namedtuples) that represent intermediate state during serialization (it's not all dicts/lists of primitive types, you can see that by logging the arguments to the Jinja template render function). In parts these namedtuples are the same as for the create_hoc module (from which this design was inherited), in parts they're newly added. Maybe your IDE doesn't recognize them as types? We can replace them all e.g. by dataclasses with type hints if that is what you're looking for, but I'm not aware of the use of type hints anywhere in BluePyOpt other than in a few examples and the change would equally affect both create_hoc and create_acc.

Also, the result of create_acc is not a single JSON file. That one is the master file that ties everything together. There are up to 4 more files (label-dict, decor, axon-replacement morphology, modified morphology), plus the original morphology to characterize the exported model. There will be a tutorial in an upcoming Arbor release that explains this file structure.

> (a class instance is a dict and the many functions already ensure encapsulation)

> An object constructor fails if you pass it a parameter that is not defined as an argument. Dicts never fail! That's the problem. One has to debug all the time to figure out what is going on in the nested dicts. With classes, you don't even need to run the code to tell the structure.

namedtuples fail when you don't supply all fields, cf. the comment above. Maybe type hints for some functions may help, but in the end I think there's no way around learning (the relevant part of) Arbor and doing some runtime inspection if you need to modify create_acc.
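For instance (a toy namedtuple, not one of create_acc's actual types):

from collections import namedtuple

MechParam = namedtuple('MechParam', ['location', 'name', 'value'])

MechParam(location='soma', name='gnabar', value=0.12)  # ok
# MechParam(location='soma', name='gnabar')   # TypeError: missing 'value'
# {'location': 'soma', 'nmae': 'gnabar'}      # a dict accepts the typo silently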

> Changing the create_acc implementation also won't make a difference to users as these are library internals

> But in the future, I am afraid, we (the maintainers) will have to maintain this code and therefore introduce updates to it. We cannot ask you to help us each time.

You can ask on Gitter or file an issue if you encounter a bug, see here. Arbor has an active community and it shouldn't take too long to get help.

lukasgd commented 2 years ago

I've re-run the Arbor-Neuron validation of the L5PC model here - the findings remain the same as above.

anilbey commented 2 years ago

Hi @lukasgd, sorry, yesterday was busy and I couldn't get back to you.

> The create_acc module already contains types (namedtuples) that represent intermediate state during serialization (it's not all dicts/lists of primitive types

That's good. They are immutable and they clearly show the structure of objects. I still want to see if something can be done about the nestedness of the dicts though :)

> We can replace them all e.g. by dataclasses with type hints if that is what you're looking for

No need to do that in this PR.

> Maybe type hints for some functions may help

Ok, can you either add docstrings or type hints to some parameters and return types of the create_acc functions where you feel it's informative? That would really help.

lukasgd commented 2 years ago

I've added some more docstrings to create_acc.py - please let me know if you have remaining questions.

anilbey commented 2 years ago

There's one part I don't understand in create_acc. This function returns a single tuple inside a list, not a list of tuples. However it is consumed in a for loop as if it is expected to return a list of tuples.

def _arb_pop_global_properties(loc, mechs):
    """Pops global properties from a label-specific dict of mechanisms
    ...
    Returns:
        A list of (mech, params) tuples with Arbor global properties
    """
    global_properties = []
    local_properties = []
    if None in mechs:
        for param in mechs[None]:
            if _arb_is_global_property(loc, param):
                global_properties.append(param)
            else:
                local_properties.append(param)
        mechs[None] = local_properties
    return [(None, global_properties)]

The code consuming it in _arb_convert_params_and_group_by_mech_local is these 2 lines below. Here the mech value will always be None and the for loop will iterate only once. The global_properties dict just has one key and one value; the key is None.

        for mech, props in _arb_pop_global_properties(loc, mechs):
            global_properties[mech] = global_properties.get(mech, []) + props

All the tests pass when I remove that for loop and change the return value of _arb_pop_global_properties to return (None, global_properties).

        _, props = _arb_pop_global_properties(loc, mechs)
        global_properties[None] = global_properties.get(None, []) + props

Is this a bug or is the loop unnecessary?

lukasgd commented 2 years ago

I don't see why this would be a bug. I think the question is whether the loop is necessary or not. If you know the internals of _arb_pop_global_properties, you could change it as you suggest. But the point is that this is not the interface around which _arb_pop_global_properties has been designed, which is detailed in the docstring.

def _arb_pop_global_properties(loc, mechs):
    """Pops global properties from a label-specific dict of mechanisms

    Args:
        loc (): An Arbor label describing the location
        mechs (): A mapping of mechanism name to list of parameters in
        Arbor format (None for non-mechanism parameters) from which
        Arbor global properties will be removed.

    Returns:
        A list of (mech, params) tuples with Arbor global properties
    """

The idea here is that at some point in the future there will possibly be some mechs (not just parameters) that need to be promoted to the default level to be handled correctly. Then you would only need to change _arb_pop_global_properties and the code below it.

lukasgd commented 1 year ago

Hi @anilbey, thanks for the additional tests and identifying corner cases. This pointed out where the test suite for the JSON/ACC exporter was lacking. I've added a test that checks the ACC-export for the l5pc, expsyn and simplecell examples to improve the robustness. It works with axon-replacement in 2 out of 3 cases, so the l5pc and simplecell tests won't execute unless you use the development version of Arbor (soon to be released as Arbor v0.8).

Your contribution is split across very many commits, which made it difficult for me to build on them within a reasonable time. Please consolidate your changes into fewer commits in the future (moving large code blocks in a separate commit is helpful, but tens of commits with self-explanatory few-line changes are not). The changes also introduced several subtle regressions (e.g. in the global properties of the L5PC model), which is why I've integrated your ideas in a new commit built on 4148974 (your code is still available under the branch arbor_integration_anil_review). I've also re-run the larger set of validation tests we developed earlier in this PR.

There are a few things where I deviated:

An idea to consider for the future:

Apart from this, I'm done with the changes for this PR from my side and it's ready to merge. Please let me know if there's anything more to address.

wvangeit commented 1 year ago

@lukasgd could you please restore this PR to the state after @anilbey's latest changes, or create a new PR based on that state? It seems you have overwritten the history and squashed some commits. We should keep the history of this branch intact. I also think having atomic commits like @anilbey used is a good idea. As a maintainer, he will have to maintain the code in the future, so he can decide how he prefers to apply his commits.

anilbey commented 1 year ago

Hi @lukasgd

> which is why I've integrated your ideas in a new commit built on https://github.com/BlueBrain/BluePyOpt/commit/4148974fe3273270f3b7b74577b43c2b7fb5aa12

Thanks for taking care of the code, but we have to respect the principles of this repository (and, in general, OSS development expects the community to respect the maintainers' process). The PR author makes suggestions and the maintainers decide whether to apply them. Each organisation has different ways of working; you cannot force them to change their principles. People could not maintain large repositories if each PR author introduced their own rules.

> (I don't understand how your import function makes a difference and importing Arbor in the bluepyopt

Here making it a function makes a difference: it prevents side effects caused by a global-scope import. Before, the import was in the global scope, meaning it was executed whenever someone imported anything from the module. E.g. if someone writes from utils import NPEncoder in a repository, they expect only NPEncoder to be imported, not some global-scope code to be executed. This effect should be avoided.
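A sketch of the pattern (the helper name is hypothetical): the module-level namespace stays free of the optional dependency, which is only resolved on first use.

def _import_arbor():
    """Import Arbor lazily so that importing this module has no side effects."""
    import arbor
    return arbor

def make_decor():
    arbor = _import_arbor()  # dependency loaded here, not at module import
    return arbor.decor()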

> Before making changes to one, in my opinion a strategy is needed about how to evolve both (e.g. with a common exporter framework).

Having an interface that both hoc and acc adhere to would be good indeed, but the code was not there. Before making changes, this code must first be readable, maintainable and unit-testable. I had made some changes to start us along that path, but squashing them has made the maintenance side of things more difficult - for instance, the CI is failing and it's difficult to track down why. The last state of the repository I left was "good enough" for me to merge for the time being. It means that, for now, this much structuring is enough for the code to be readable and maintainable. In the future, when new features come and BluePyOpt becomes more complex, more refactorings will be needed, but for now it's okay.

This being said, future improvements are welcome but please respect the style of the repository and the maintainer's time.

Thus, please strive for:

> The code you submitted reintroduced some old bugs there.

Although the unit test coverage increased, it is of course possible that my changes might have introduced bugs. If there's a bug, ideally the tests should be catching it, not visual inspection. That's why the unit test coverage must be high. Please inform me about the bug and how to reproduce it so that I can test and fix it.

> You can of course start with the ACC exporter, but it seems inconsistent to only apply it to part of the codebase.

In the future I would like to apply the same exception-raising strategy (if a Python built-in exception like TypeError fits, use it; if it's something very specific, define one) in the other modules as well. However, it's not within the scope of this already large PR.

> Your contribution is split across very many commits, which made it difficult for me to build on them within a reasonable time. Please consolidate your changes into fewer commits in the future (moving large code blocks in a separate commit is helpful, but tens of commits with self-explanatory few-line changes are not).

It's the maintainers who decide this. Some large repositories even demand machine-readable commit messages that include certain keywords and structure. When I make a PR to a large code base, they have much stricter rules than this and I have to abide by their rules because it's their system.

By the way, the small-commits idea is not our invention. It comes from the literature and is widely applied. Kent Beck, Martin Fowler, Bob Martin etc. have expressed this idea of small, testable, revertible steps (commits) in various books, articles and talks. They are interesting resources.

I once again would like to thank you for your valuable contributions. Apologies if this comment seems over the top; I wanted to clarify our responsibilities so we can collaborate better.

I created #422 from the previous state of the code. The tests pass there; they currently fail here with the newest changes. I think #422 is ready to be merged. You mentioned a bug - what is it? With that fix I can merge #422. Is there something critical you think should be added to #422? For the newer features we can of course open new PRs.

brenthuisman commented 1 year ago

Hi @anilbey thanks for the review! I'm probably not going to get round to addressing it before the break, so I plan to get started first thing next year.

lukasgd commented 1 year ago

Thanks for the review @anilbey. Please double-check the Arbor import mechanism at bluepyopt/ephys/acc.py. You may want to move it to bluepyopt/ephys/__init__.py or similar according to the previous discussion.

Also, the L5PC create_acc test with axon replacement is currently failing for me again due to a difference in the 6th digit after the decimal point in the axon-replacement coordinates. This happens when the entire test suite is run with tox, not when the create_acc tests are run in isolation (then they all pass).

anilbey commented 1 year ago

Thanks @lukasgd for the changes. I am okay with the import mechanisms.

> Also, the L5PC create_acc test with axon replacement is currently failing for me again due to a difference in the 6th digit after the decimal point in the axon-replacement coordinates. This happens when the entire test suite is run with tox, not when the create_acc tests are run in isolation (then they all pass).

I see, this happens often with NEURON. One test leaves some global state and the other tests are affected by that state. In tests using NEURON we tolerate it because we know that it is due to NEURON's design. I guess it's also ok for Arbor tests to differ in precision at the 6th digit when not isolated.

This issue may be more important for Arbor than for us, i.e. if independent runs are not expected to affect each other, maybe it is something to look at. I don't know.

Like @brenthuisman, I will also leave for the holidays - let's continue finishing this next year. Happy holidays :)

lukasgd commented 1 year ago

Thanks for the feedback, @anilbey. The test suite is fixed now - the altered coordinates in the L5PC morphology with axon replacement instantiated with Neuron were introduced by a version bump of Neuron (v8 to v9), causing test_write_acc_l5pc to fail. I've updated the test reference data to Neuron v9.

The main remaining point now that I'm aware of is to integrate the recent updates from the master branch. If the above things need any more work, please let me know.

lukasgd commented 1 year ago

I'm uploading a proposal for merging master here. There are a few things that I came across.

First, setting tox's minversion to 4 causes this error for me:

dev/BluePyOpt$ tox
ERROR: venv '.tox' in /home/lukasd/src/arbor/dev/BluePyOpt would delete project
Traceback (most recent call last):
  File "/home/lukasd/src/arbor/dev/venv/bin/tox", line 8, in <module>
    sys.exit(cmdline())
  File "/home/lukasd/src/arbor/dev/venv/lib/python3.8/site-packages/tox/session/__init__.py", line 44, in cmdline
    main(args)
  File "/home/lukasd/src/arbor/dev/venv/lib/python3.8/site-packages/tox/session/__init__.py", line 69, in main
    exit_code = session.runcommand()
  File "/home/lukasd/src/arbor/dev/venv/lib/python3.8/site-packages/tox/session/__init__.py", line 186, in runcommand
    provision_tox_venv = self.getvenv(self.config.provision_tox_env)
  File "/home/lukasd/src/arbor/dev/venv/lib/python3.8/site-packages/tox/session/__init__.py", line 137, in getvenv
    raise tox.exception.ConfigError("envdir must not equal toxinidir")
tox.exception.ConfigError: ConfigError: envdir must not equal toxinidir

I didn't set any of these dirs - how do I fix this?

Regarding the updates from master I found the following:

AurelienJaquier commented 1 year ago

@lukasgd about tox, I usually get this error when I have a tox version < 4 installed while minversion is set to 4 in the tox.ini. Have you tried updating tox to the latest version?

lukasgd commented 1 year ago

Thanks, @AurelienJaquier - indeed, upgrading tox solved it. I've uploaded new validation runs here. Note that I've used different random parameter samples compared to the ones before, but the findings remain the same.

lukasgd commented 1 year ago

The tests should pass now, as far as my internet connection allows me to check. From my side, the PR is ready. Is there anything more that needs to be addressed?

wvangeit commented 1 year ago

Thank you @lukasgd, the tests are indeed working.

@thorstenhater @brenthuisman, one last check: are you ok with the current implementation? Then I'll merge the PR.

brenthuisman commented 1 year ago

If you are happy and all breakage is addressed, you have my blessing :) Thanks everyone!

wvangeit commented 1 year ago

Finally merged :-)

Thank you for all the hard work @lukasgd et al.

lukasgd commented 1 year ago

Thank you for accepting it @wvangeit and thanks to everyone for their contribution and guidance!