Closed slinlee closed 2 years ago
@brettskymind fyi- you'll see there are a bunch of new conflicts. This is from running Black, which adds standard Python code formatting. This was done in #438
Hopefully it's not too bad resolving the conflicts since it's just code formatting.
@brettskymind I recommend resolving the conflicts before adding more features and fixes in this same branch 😄
Oh, I was running local pathmind.Simulation tests that needed these changes. I thought fixing the bugs was the first to-do.
conflicts resolved, what's next?
@brettskymind cool, thanks! Now you can see the code linting results here on the 'Checks' tab https://github.com/SkymindIO/nativerl/pull/437/checks?check_run_id=3931000288
wow, this automatic code linting check is great.
@slinlee I'm less sure about this error. It looks like it fixed some formatting. https://github.com/SkymindIO/nativerl/pull/437/checks?check_run_id=3931115134
@brettskymind ah yeah. That means it checked the Python code formatting. Since the check made changes, the checked-in code doesn't match the standard formatting.
Here are instructions for how to apply the formatting locally. Run these commands and then commit the changes. https://github.com/SkymindIO/nativerl#to-run-code-syntax-formatting-locally
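For reference, the local steps look roughly like this (a sketch only; the repo README linked above is authoritative, and the exact install command may differ):

```shell
# Install Black (the repo's dev requirements file, if any, takes precedence)
pip install black

# Reformat all Python files in place
black .

# Review the formatting-only changes, then commit them
git diff
git commit -am "Apply Black code formatting"
```

Running `black --check --diff .` first will show what would change without modifying any files.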
@slinlee this is passing the tests now, although I'm afraid the tests may not cover cases where auto-normalization is applied. If they don't, do you have an idea of how we should test for that?
Even though it was carefully designed, there's a chance we'll uncover further errors once we deploy to test and try auto-normalization. Would writing more tests or deploying to test be a better use of time?
@brettskymind We'll need a python example that has reward terms to add to the test, right? Or is that not expected to work yet?
@brettskymind the latest two changes I'm proposing will also resolve the merge conflicts we currently see
@brettskymind is this the pr you said you were working on this morning?
Adding a quick note for myself: there seem to be duplicates of the new test model in the test folder now.
I'm now working on support for balancing Pathmind simulations that don't intrinsically have the auto-balancing attributes, so that they can instead weight and normalize reward terms via run flags.
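As a rough illustration of that idea (all names and flags here are hypothetical, not the actual nativerl API), weighting and normalizing reward terms via run flags might look like:

```python
import argparse


def combine_reward_terms(terms, weights, normalizers):
    """Weighted sum of reward terms, each scaled by a normalizer.

    A normalizer of 0 leaves the raw term unscaled, to avoid division by zero.
    """
    return sum(
        w * (t / n if n else t)
        for t, w, n in zip(terms, weights, normalizers)
    )


# Hypothetical run flags for a simulation without built-in auto-balancing.
parser = argparse.ArgumentParser()
parser.add_argument("--reward-weights", type=float, nargs="+", default=[1.0, 1.0])
parser.add_argument("--reward-normalizers", type=float, nargs="+", default=[1.0, 1.0])
args = parser.parse_args([])  # empty list: use the defaults for this sketch

# With default weights/normalizers this is just the plain sum of the terms.
print(combine_reward_terms([2.0, 4.0], args.reward_weights, args.reward_normalizers))
```

The real implementation would also need a story for where the normalizer values come from (fixed constants vs. running statistics), which is exactly the part auto-balancing handles for AnyLogic models.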
@brettskymind be wary of adding too much more to this PR. We need to be able to test and release the reward-term enhancements for AnyLogic models.
@slinlee I think it's finally in a good place to experiment with, both for AnyLogic models and Pathmind simulations
@brettskymind @ejunprung @SaharEs @maxpumperla I've merged this to deploy on test.devpathmind.com; it will be available to test soon. Since you can choose the nativerl version there, you'll be able to compare different versions. This one is 1.8.1. https://github.com/SkymindIO/nativerl/pull/471
This restores https://github.com/SkymindIO/nativerl/pull/396 since it needs more fixes. We had reverted that code on test in #436
@ejunprung recommends using PyCharm and testing locally 👍🏽