Closed AJue3101 closed 2 years ago
Hi @AJue3101, I just opened https://github.com/brightway-lca/brightway2-calc/issues/43 to report the problem, I'm seeing the same issue
You can try downgrading scipy to an older version; I suspect that scipy API changes are currently breaking brightway2:

```
conda install scipy=1.7
```
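If you're not sure which scipy you have, a quick version comparison tells you whether the downgrade applies. A minimal sketch; the 1.8.0 cutoff comes from this thread, and the helper names are made up:

```python
def parse_version(version_string):
    """Turn a version string like '1.7.3' into (1, 7, 3) for comparison."""
    return tuple(int(part) for part in version_string.split(".")[:3])

def scipy_needs_downgrade(version_string):
    """scipy >= 1.8.0 triggers the bw2calc breakage discussed in this thread."""
    return parse_version(version_string) >= (1, 8, 0)

print(scipy_needs_downgrade("1.7.3"))  # False: this version is fine
print(scipy_needs_downgrade("1.8.0"))  # True: downgrade suggested
```

In practice you would pass `scipy.__version__` to the helper.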
Hi @haasad, thanks for the quick feedback - downgrading scipy works
@haasad, thanks for being on top of this! Is this different from #703?
Can we fix this from our side by pinning scipy=1.7?
@marc-vdm Yes, it's different from #703 (even if the error message is the same). Scipy 1.8.0 was released yesterday and changed some APIs. Ideally this should be fixed upstream in bw2calc, but we could add a temporary pin for scipy until then. However, somebody would need to address the failing tests, otherwise the release pipeline won't run through.
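For reference, a temporary pin would look something like this in a conda environment file (the file contents around the pin are illustrative, not taken from the repository):

```yaml
# environment.yml (sketch)
name: activity-browser-dev
dependencies:
  - python=3.9
  - scipy >=1.5,<1.8  # temporary: scipy 1.8 breaks bw2calc, drop once fixed upstream
```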
@nabilahmed739 @bsteubing Could we have a look at the testing stuff soon? I want to learn how to fix these issues.
@marc-vdm @nabilahmed739 @haasad I have fixed 2 of the reasons why the tests were failing:
But there is another failure, related to matplotlib on macOS, that I don't understand: https://github.com/LCA-ActivityBrowser/activity-browser/runs/5127971434?check_suite_focus=true
Solutions are appreciated...
@bsteubing please review #711. I'll open a separate issue for fixing the dependencies; I couldn't easily get bw2calc 1.8.1 to work.
@haasad strange, it worked in your pull request, but now after merging it does not work... https://github.com/LCA-ActivityBrowser/activity-browser/runs/5144321235?check_suite_focus=true It looks like we are again a step further, as it passed for MacOS_38 but not for MacOS_39.
I restarted the workflow twice, now it ran through :man_shrugging:
I skipped one more test: https://github.com/LCA-ActivityBrowser/activity-browser/commit/d6e29cc4b370da20d1f6105fd032ff3ab37a34bc
Really not sure what causes this; somebody with a Mac would need to run the tests locally. It's really hard to debug when you have to wait for a 10-minute pipeline run after every change.
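For the record, skipping a test on one platform usually looks like this. A stdlib `unittest` sketch with a placeholder test name and body; the actual commit may use pytest's `skipif` instead:

```python
import sys
import unittest

class PlotTests(unittest.TestCase):
    # Skip only on macOS, where the matplotlib-related test misbehaves in CI.
    @unittest.skipIf(sys.platform == "darwin", "fails on macOS CI, cause unknown")
    def test_plot_rendering(self):
        self.assertEqual(1 + 1, 2)  # placeholder body
```

The skip reason string then shows up in the test report, so it's easy to find and remove once the underlying problem is understood.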
If there is anything I can do to help let me know.
I just tried to re-run tests on PRs that don't do anything critical, but they still break. Both my PR #710 and #708 have problems: #710 fails the macOS test for 3.9, and #708 fails all py3.9 tests. The py3.8 tests seem to get cancelled, which I also don't understand?
Do I need to pull/merge the current master and push again for those new tests to work? PR #711 was not merged into either branch yet, as my PRs existed before we fixed the issue.
@marc-vdm Your PRs run against the github actions defined in your branch, not the master branch. I also find this a bit confusing, but at least it lets you test changes to the workflows.
Not sure how comfortable you are with more "advanced" git operations, but this is how I would do it:
```
git fetch upstream
git checkout <your-feature-branch>
git rebase upstream/master
git push --force
```

(If you'd rather not rewrite history, `git merge upstream/master` instead of the rebase also works and avoids the force-push.)
If you push the changes to your fork, the workflows should run again and (hopefully) succeed.
Regarding the cancelled tests: we could/should add `fail-fast: false` to the `strategy` section for the tests in the workflow (see e.g. here). Without it the pipeline stops if one entry of the "matrix" fails, i.e. if the 3.8 tests fail, the 3.9 tests are cancelled and vice versa.
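In workflow terms, that would look roughly like this (the job name and matrix values are illustrative, not copied from the repository):

```yaml
jobs:
  tests:
    strategy:
      fail-fast: false  # keep 3.9 running even if 3.8 fails, and vice versa
      matrix:
        python-version: ["3.8", "3.9"]
        os: [ubuntu-latest, macos-latest]
```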
Thanks! I rebased (I already had upstream set up in PyCharm) and all the checks completed and passed. So the solution seems to work well.
> I restarted the workflow twice, now it ran through 🤷‍♂️
Thanks @haasad ... seems like black magic at times... at least we have a running development version out now. I guess for the stable version I will wait until we have figured out the dependencies... or do you and @marc-vdm think the way this is solved is fine for now?
These kinds of problems are honestly out of my comfort zone. I'm happy to help indicate or diagnose problems where I can, but I don't have the knowledge to fix them or tell you whether these fixes are good enough unfortunately.
@haasad @marc-vdm @bsteubing This should be fixed in the upstream bw2calc and brightway2 releases. Please close if you agree.
Going to keep this open for a little longer (sorry all <3): @haasad this macOS test failed twice; do I just keep re-running the test, or is there a way to find out whether it is actually my code that is failing?
I opened #723 for the segfaulting tests on OSX. The original problem in this issue is addressed by #716 and #722.
Hi,
I got this problem when starting AB. The error had also occurred before, but downgrading bw2calc to version 1.8.0 had always fixed it. And I have always worked with a Python version >= 3.8 (now with 3.9.10).
However, it doesn't work now; I get the error message again: `AttributeError: module 'bw2calc' has no attribute 'ComparativeMonteCarlo'`. The same happens with the stable version.
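One way to make this failure less cryptic while versions are in flux is to probe for the attribute before using it. A self-contained sketch, using a stand-in namespace instead of the real module so it runs anywhere; the class name comes from the error message above, everything else is made up:

```python
from types import SimpleNamespace

# Stand-in for `import bw2calc`; a compatible bw2calc would expose the class.
bw2calc = SimpleNamespace()

if hasattr(bw2calc, "ComparativeMonteCarlo"):
    mc_class = bw2calc.ComparativeMonteCarlo
else:
    # Fall back, or raise a clearer error than a deep AttributeError would be.
    mc_class = None

print(mc_class)  # None with this stand-in: signals an incompatible bw2calc
```

In the real application, the `else` branch could tell the user which bw2calc version is installed and which one is expected.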