Closed: thedrow closed this 3 years ago
According to my search, there is no other mention of `model.state`.
~~My investigation indicates that `nodes` is `{'C_anchor', 'D', 'A'}`. I'm seeing these anchors for actual states in my own diagram as well.~~
~~Strangely, when running only `test_roi`, the test passes.~~ There may be more than one, apparently.
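For context, here is my reading of what the ROI ("region of interest") assertions expect: the graph should contain the current state's outgoing transitions plus the transition that was just taken. This is a library-free sketch of that idea, not the actual GraphMachine code; the helper name and data shapes are my own.

```python
def roi_edges(declared, current_state, taken=None):
    """Return the transitions whose source is the current state, plus the
    transition that was just taken (even an auto-transition such as to_B,
    which is not normally drawn). Illustration only, not transitions' API."""
    edges = [t for t in declared if t[1] == current_state]
    if taken is not None and taken not in edges:
        edges.append(taken)
    return edges

# The three transitions declared in test_roi, as (trigger, source, dest):
declared = [
    ("to_state_A", "B", "A"),
    ("to_state_C", "B", "C"),
    ("to_state_F", "B", "F"),
]

# Before any trigger: state 'A', nothing taken -> no edges, matching g1.
print(len(roi_edges(declared, "A")))                       # 0
# After m.to_B(): the auto-transition A->B was taken, state is 'B'.
print(len(roi_edges(declared, "B", ("to_B", "A", "B"))))   # 4
```

Under this reading, the observed `3 != 4` failure means the previously taken auto-transition edge is the one missing from the generated graph.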
These tests absolutely fail on master as well. Let's create an issue to investigate?
git branch
* master
patch-4
transitions on master [!] via 🐍 v3.9.0 (transitions)
❯ tox -e py39
GLOB sdist-make: /home/thedrow/Documents/Projects/transitions/setup.py
py39 inst-nodeps: /home/thedrow/Documents/Projects/transitions/.tox/.tmp/package/1/transitions-0.8.7.zip
py39 installed: apipkg==1.5,attrs==20.3.0,coverage==5.4,dill==0.3.3,execnet==1.8.0,graphviz==0.16,iniconfig==1.1.1,mock==4.0.3,packaging==20.9,pluggy==0.13.1,py==1.10.0,pycodestyle==2.6.0,pygraphviz==1.7,pyparsing==2.4.7,pytest==6.2.2,pytest-cov==2.11.1,pytest-forked==1.3.0,pytest-runner==5.2,pytest-xdist==2.2.0,six==1.15.0,toml==0.10.2,transitions @ file:///home/thedrow/Documents/Projects/transitions/.tox/.tmp/package/1/transitions-0.8.7.zip
py39 run-test-pre: PYTHONHASHSEED='2891939929'
py39 run-test: commands[0] | pytest -nauto --doctest-modules
================================================================================== test session starts ===================================================================================
platform linux -- Python 3.9.1, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
cachedir: .tox/py39/.pytest_cache
rootdir: /home/thedrow/Documents/Projects/transitions, configfile: pytest.ini
plugins: cov-2.11.1, xdist-2.2.0, forked-1.3.0
gw0 [2317] / gw1 [2317] / gw2 [2317] / gw3 [2317] / gw4 [2317] / gw5 [2317] / gw6 [2317] / gw7 [2317]
.................................................................................................................................................................................. [ 7%]
.................................................................................................................................................................................. [ 15%]
.................................................................................................................................................................................. [ 22%]
................................................................................................................................................................................... [ 30%]
.................................................................................................................................................................................. [ 38%]
................................................................................................................................................................................... [ 46%]
.................................................................................................................................................................................. [ 53%]
.................................................................................................................................................................................. [ 61%]
................................................................................................................................................................................... [ 69%]
.................................................................................F...................................................................F............................. [ 76%]
.................................................................................................................................................................................. [ 84%]
.................................................................................................................................................................................. [ 92%]
................................................................................................................................................................................. [100%]
======================================================================================== FAILURES ========================================================================================
________________________________________________________________________________ PygraphvizTest.test_roi _________________________________________________________________________________
[gw1] linux -- Python 3.9.1 /home/thedrow/Documents/Projects/transitions/.tox/py39/bin/python
self = <tests.test_pygraphviz.PygraphvizTest testMethod=test_roi>

    def test_roi(self):
        m = self.machine_cls(states=['A', 'B', 'C', 'D', 'E', 'F'], initial='A')
        m.add_transition('to_state_A', 'B', 'A')
        m.add_transition('to_state_C', 'B', 'C')
        m.add_transition('to_state_F', 'B', 'F')
        g1 = m.get_graph(show_roi=True)
        self.assertEqual(len(g1.edges()), 0)
        self.assertEqual(len(g1.nodes()), 1)
        m.to_B()
        g2 = m.get_graph(show_roi=True)
>       self.assertEqual(len(g2.edges()), 4)
E       AssertionError: 3 != 4

tests/test_pygraphviz.py:87: AssertionError
_____________________________________________________________________________ TestPygraphvizNested.test_roi ______________________________________________________________________________
[gw0] linux -- Python 3.9.1 /home/thedrow/Documents/Projects/transitions/.tox/py39/bin/python
self = <tests.test_pygraphviz.TestPygraphvizNested testMethod=test_roi>

    def test_roi(self):
        class Model:
            def is_fast(self, *args, **kwargs):
                return True

        model = Model()
        m = self.machine_cls(model, states=self.states, transitions=self.transitions, initial='A', title='A test',
                             use_pygraphviz=self.use_pygraphviz, show_conditions=True)
        model.walk()
        model.run()
        g1 = model.get_graph(show_roi=True)
        _, nodes, edges = self.parse_dot(g1)
>       self.assertEqual(len(edges), 4)
E       AssertionError: 0 != 4

tests/test_graphviz.py:302: AssertionError
==================================================================================== warnings summary ====================================================================================
tests/test_async.py::TestAsync::test_new_state_in_enter_callback
tests/test_async.py::AsyncGraphMachine::test_new_state_in_enter_callback
/home/thedrow/Documents/Projects/transitions/tests/test_core.py:1178: RuntimeWarning: coroutine 'AsyncEvent.trigger' was never awaited
machine.to_B()
tests/test_async.py::TestHierarchicalAsync::test_new_state_in_enter_callback
tests/test_async.py::AsyncHierarchicalGraphMachine::test_new_state_in_enter_callback
/home/thedrow/Documents/Projects/transitions/tests/test_core.py:1178: RuntimeWarning: coroutine 'HierarchicalAsyncMachine.trigger_event' was never awaited
machine.to_B()
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================================ short test summary info =================================================================================
FAILED tests/test_pygraphviz.py::PygraphvizTest::test_roi - AssertionError: 3 != 4
FAILED tests/test_pygraphviz.py::TestPygraphvizNested::test_roi - AssertionError: 0 != 4
====================================================================== 2 failed, 2315 passed, 4 warnings in 10.97s =======================================================================
ERROR: InvocationError for command /home/thedrow/Documents/Projects/transitions/.tox/py39/bin/pytest -nauto --doctest-modules (exited with code 1)
________________________________________________________________________________________ summary _________________________________________________________________________________________
ERROR: py39: commands failed
@aleneum Can you please merge this and release a new version? If you'd like, I can release this bugfix version myself. I need a released version that works in order to release Jumpstarter 0.1.
Hello @thedrow,
unfortunately the tests still don't pass. I haven't had time to check out why.
These tests absolutely fail on master as well.
I cannot verify this. The last build on Travis passed, and my checked-out master also passes without that issue. There is a test on master that does not await a trigger (you mentioned that already somewhere else, didn't you?), but that's all I can see on master right now.
I hope I can investigate things this weekend (late Friday to Sunday) but I do not want to give promises I cannot keep. I noted that this is urgent for you and will do my best to not keep you waiting for longer than necessary.
I'm guessing that the tests fail because of libgraphviz-dev version updates in Focal. Travis CI is still using Xenial, which is very old. That's why we see behavioural changes and failing tests.
I'm using graphviz from linuxbrew which is even newer:
brew info graphviz
graphviz: stable 2.46.0 (bottled), HEAD
Graph visualization software from AT&T and Bell Labs
https://www.graphviz.org/
/home/linuxbrew/.linuxbrew/Cellar/graphviz/2.46.0 (475 files, 12.3MB) *
Poured from bottle on 2021-02-01 at 10:58:21
From: https://github.com/Homebrew/linuxbrew-core/blob/HEAD/Formula/graphviz.rb
License: EPL-1.0
==> Dependencies
Build: autoconf ✔, automake ✔, bison ✔, pkg-config ✔, flex ✔, groff ✔, byacc ✘, ghostscript ✔
Required: gd ✔, gts ✔, libpng ✔, librsvg ✔, libtool ✔, pango ✔
==> Options
--HEAD
Install HEAD version
==> Analytics
install: 247 (30 days), 581 (90 days), 3,383 (365 days)
install-on-request: 156 (30 days), 330 (90 days), 2,206 (365 days)
build-error: 0 (30 days)
I think we're safe to release here since most environments will still work as expected. Do we want an issue on the graphviz updates?
@aleneum ping?
0.8.7 has been released, as you probably have seen. There might be an issue but I cannot reproduce it. Neither Travis on master nor my local tests (WSL2; Debian; 2.40.1-6) failed. I fear that this might be timing-related, since it happened on Travis' very old Xenial installation and your recent Homebrew setup. Right now I cannot really investigate that issue. If you have a reliable way to reproduce it, feel free to file an issue.
Considering the pretty old Travis setup: I opened an issue (#513), since the Travis OS setup is not just outdated but the queuing time has also become awfully long. Is there a CI/testing service you'd recommend instead?
Yes, use GitHub Actions.
This is indeed an issue, and it seems to screw with the tests. I need to have a closer look at why this messes with the graph generation. Maybe there are more spots where `model.state` needs to be replaced.
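To illustrate the kind of spot that can break: any code that hardcodes `model.state` fails as soon as the machine is configured with a custom `model_attribute`. A minimal, library-free sketch of the safer lookup (the `FakeMachine` class is a stand-in I made up, not transitions' actual Machine):

```python
class FakeMachine:
    """Stand-in for a Machine configured with a custom model attribute."""
    def __init__(self, model_attribute="state"):
        self.model_attribute = model_attribute

class Model:
    pass

machine = FakeMachine(model_attribute="status")
model = Model()
model.status = "B"

# Fragile: model.state would raise AttributeError here, because the
# state is stored under 'status', not 'state'.
# Robust: ask the machine which attribute holds the state.
current = getattr(model, machine.model_attribute)
print(current)  # B
```

Searching for remaining literal `model.state` accesses in the graphing code and replacing them with this `getattr` pattern is what the comment above suggests.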