Closed: jklynch closed this pull request 3 years ago.
Merging #92 into master will decrease coverage by 0.39%. The diff coverage is 93.54%.
```diff
@@            Coverage Diff             @@
##           master      #92      +/-   ##
==========================================
- Coverage   57.56%   57.17%   -0.40%
==========================================
  Files          12       13       +1
  Lines         912      892      -20
==========================================
- Hits          525      510      -15
+ Misses        387      382       -5
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| nslsii/tests/test_kafka_publisher.py | 92.53% <92.53%> (ø) | |
| nslsii/__init__.py | 37.98% <95.65%> (+7.05%) | :arrow_up: |
| nslsii/tests/conftest.py | 100.00% <100.00%> (ø) | |
| nslsii/iocs/tests/test_epstwostate_ioc.py | | |
| nslsii/tests/temperature_controllers_test.py | 68.75% <0.00%> (+2.08%) | :arrow_up: |
| nslsii/_version.py | 44.80% <0.00%> (+2.20%) | :arrow_up: |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f4b0d29...3d8b559.
I finally found a configuration parameter that fixed a problem: `auto.commit.interval.ms`. It is relevant to CI testing and may turn out to be important elsewhere. I was having an issue with this Kafka test receiving double the expected number of documents from Kafka. The extra documents were left over from the previous test run (so the very first time I ran the test there was no issue) because the consumer had not signaled to the broker that it had processed them, even though it had. By default, consumers can wait up to 5 seconds before signaling to the broker that they have processed a message. This test ends very shortly after the Kafka messages are consumed, so the consumer never signaled that it had processed them, until I shortened `auto.commit.interval.ms` to 100. I could have put a delay in the test, but I would rather the test end as soon as possible.
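As a rough sketch of the fix described above (the broker address and group id below are placeholders, not values from this PR, and the real test's full consumer configuration is not reproduced here), the change amounts to overriding one key in the consumer configuration:

```python
def make_consumer_config(bootstrap_servers="localhost:9092"):
    """Build a Kafka consumer configuration dict.

    Hypothetical example only: "localhost:9092" and the group id are
    placeholders, not the values used in this PR's tests.
    """
    return {
        "bootstrap.servers": bootstrap_servers,
        "group.id": "test-consumer-group",  # placeholder group id
        "enable.auto.commit": True,
        # The default auto-commit interval is 5000 ms, so committed offsets
        # can lag consumption by up to 5 s. A short-lived test may therefore
        # re-read the same messages on its next run. Shortening the interval
        # makes the consumer signal the broker almost immediately.
        "auto.commit.interval.ms": 100,
    }
```

With `confluent_kafka`, a dict like this would be passed to the `Consumer` constructor; the essential change is only the `auto.commit.interval.ms` override.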
The latest CI failure is only on python 3.6.
That's a good question. I have not thought about it. In `test_publisher_with_no_broker` there should be no time penalty related to Kafka, since the producer is never used. So I ran three quick and dirty tests with `RE(count([det2]))`, `RE(count([det2], num=10))`, and `RE(count([det2], num=100))`. They ran in 5.01s, 5.03s, and 5.2s respectively. So this seems ok, other than that there appears to be a constant ~5s overhead.
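For reference, those quick-and-dirty measurements can be made with a small timing helper like the sketch below (`RE`, `count`, and `det2` come from the bluesky/ophyd test environment and are not defined here; the helper itself is generic):

```python
import time


def time_call(fn, *args, **kwargs):
    """Time a single call, returning (result, elapsed_seconds).

    In the experiment above, fn would be something like
    lambda: RE(count([det2], num=10)); here it accepts any callable
    so the helper stays self-contained.
    """
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed
```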
I also ran those tests under pyinstrument and it says:
```
Program: /home/jlynch/local/beamlines/rsoxs/nxsas-worker/venv/bin/pytest -s nslsii/tests/test_kafka_publisher.py

5.031 test_publisher_with_no_broker  test_kafka_publisher.py:131
└─ 5.031 __call__  bluesky/run_engine.py:700
      [6 frames hidden]  bluesky, threading
         5.031 wait  threading.py:270

time for count: 5.031
```
I think I'll have to talk to Tom or Dan to figure that out.
I will create an issue about performance. I would like to address it with a test if possible in a later PR.
After some more thought, it seems like the performance issue should be tested in bluesky-kafka. What do you think @gwbischof?
Seems good to me, yea I think it makes sense to test the performance issue in bluesky-kafka.
Last time the tests ran 3.6 failed, now 3.7 is failing. Am I correct about that?
What do you think about also moving the other kafka tests to bluesky-kafka? Also subscribe_kafka_publisher might be useful to have in bluesky-kafka. That could be something that we do later.
I want to have tests here for `subscribe_kafka_publisher` even though they are similar to the tests in bluesky_kafka. These tests can be more focused on how we work with Kafka at the beamlines. That's my intention, anyway.
I ran the python 3.7 build again and it passed.
This PR adds a function to subscribe Kafka Publishers to the RunEngine using a RunRouter.
A test for this function is also added; it requires a running Kafka broker. The broker for testing is set up in the same way as for bluesky-kafka.
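A minimal sketch of the factory pattern such a function follows (hypothetical names throughout; the real implementation uses bluesky's `RunRouter` and a bluesky_kafka `Publisher`, neither of which is imported here):

```python
def make_kafka_publisher_factory(make_publisher):
    """RunRouter-style factory sketch: when a run starts, build one
    publisher for that run and return it as the run's document callback.

    make_publisher is a stand-in for constructing a bluesky_kafka
    Publisher (topic, bootstrap servers, etc. are omitted here).
    """
    def factory(name, start_doc):
        publisher = make_publisher()
        # RunRouter contract: a factory returns (callbacks, subfactories);
        # the router then feeds every document of this run to each
        # returned callback.
        return [publisher], []

    return factory
```

In the real code, a `RunRouter` wrapping a factory like this would be subscribed to the RunEngine, so each run gets its own publisher.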