TouchTheFishy opened this issue 2 years ago
After further investigation I think it may be related to issue #270. Even though the fetch functions of the two cameras are handled in two separate threads, it seems that they use all the CPU time, blocking the serial communication thread from getting any resources.
I also noticed that the fetch function times out while it didn't before the update. I have tried to use try_fetch instead, but it doesn't seem to do the trick.
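To illustrate what I mean by starvation, here is a plain-Python sketch (stdlib only, no Harvester API; all names are illustrative): a polling loop that sleeps between non-blocking probes leaves CPU time for other threads, whereas the same loop without the sleep pins a core.

```python
import threading
import time
from queue import Empty, Queue


def fetch_loop(frames: Queue, stop: threading.Event, cycle_s: float, out: list):
    """Poll for frames without blocking; the sleep yields CPU to other threads."""
    while not stop.is_set():
        try:
            out.append(frames.get_nowait())  # non-blocking, in the spirit of try_fetch
        except Empty:
            pass
        time.sleep(cycle_s)  # drop this sleep and the loop spins at 100% CPU


frames: Queue = Queue()
stop = threading.Event()
results: list = []
t = threading.Thread(target=fetch_loop, args=(frames, stop, 0.01, results))
t.start()
for i in range(5):
    frames.put(i)       # stand-in for the camera delivering a buffer
    time.sleep(0.03)
stop.set()
t.join()
print(results)
```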
As a reminder, my software used to run on 1.3.4 but then stopped working after updating to 1.3.7. Did something change in that function between those two versions? And do you have any insight on why rolling back to 1.3.4 did not solve the issue?
Thanks for the help,
Fishy
Hello @TouchTheFishy. I work with a camera FX10 and I had a similar problem. Try saving the frames not with self.data_matrix[y] = _1d, but into some kind of database, and then stitching them afterwards (for example, I am using an h5py file).
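A minimal sketch of what I mean (the dataset name, dimensions, and dtype are made up; only the h5py usage follows the suggestion): each fetched line goes straight to an on-disk dataset instead of a large in-RAM matrix.

```python
import os
import tempfile

import h5py
import numpy as np

HEIGHT, WIDTH = 100, 2048  # assumed scan dimensions
path = os.path.join(tempfile.gettempdir(), "scan.h5")

with h5py.File(path, "w") as f:
    # one dataset on disk instead of a big in-RAM matrix
    dset = f.create_dataset("frames", shape=(HEIGHT, WIDTH), dtype=np.uint8)
    for y in range(HEIGHT):
        _1d = np.full(WIDTH, y % 256, dtype=np.uint8)  # stand-in for a fetched line
        dset[y] = _1d  # instead of self.data_matrix[y] = _1d

with h5py.File(path, "r") as f:
    stitched = f["frames"][:]  # "stitch": read the full image back when needed
print(stitched.shape)
```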
@TouchTheFishy Hi,
> it seems that they use all the CPU time
If you begin the image acquisition process by calling the start method with run_as_thread=True, then you can manage the probing frequency by passing the cycle_s=<seconds> option to the fetch method call.
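In plain Python, the pacing idea boils down to this (the names start, run_as_thread, and cycle_s come from the comment above; Acquirer is only a stand-in, not a Harvester class): sleeping cycle_s seconds between probes caps how often the background thread wakes up.

```python
import threading
import time


class Acquirer:
    """Minimal stand-in for an image acquirer with a paced background loop."""

    def __init__(self):
        self._stop = threading.Event()
        self._thread = None
        self.probes = 0

    def _loop(self, cycle_s: float):
        while not self._stop.is_set():
            self.probes += 1     # a real acquirer would probe for a buffer here
            time.sleep(cycle_s)  # pacing: larger cycle_s -> lower CPU load

    def start(self, run_as_thread: bool = False, cycle_s: float = 0.05):
        if run_as_thread:
            self._thread = threading.Thread(target=self._loop, args=(cycle_s,))
            self._thread.start()

    def stop(self):
        self._stop.set()
        if self._thread:
            self._thread.join()


ia = Acquirer()
ia.start(run_as_thread=True, cycle_s=0.02)
time.sleep(0.2)
ia.stop()
print(ia.probes)  # roughly 10 probes in 0.2 s at a 0.02 s cycle
```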
> Did something change in the memory management that would not be affected by library rollback?
Though I have to admit I could be mistaken, at least no change was supposed to affect memory consumption. If the issue remains after the rollback, then to be honest it would confuse me much more...
Hi Kazunari,
Sorry for answering this late; I was not able to access the device that uses your library for a while. Making it run as a thread does not seem to work: I have tried a range from 0.01 seconds up to 2 seconds.
And I do confirm that the issue stays after rolling back. Let me know if you have any other insight; otherwise I'll keep updating until the issue resolves itself the same way it appeared.
Thanks for the help,
@TouchTheFishy Hi, excuse me for having kept you waiting. A possible approach that I can think of is to run the script in debug mode and detect the line that bloats the memory consumption by executing the script line by line. It may take some time, but that should be practical enough if you do not have any profiling tool.
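For example, Python's built-in tracemalloc module can already point at the allocating line without any external profiler (bloat here is just a stand-in for the suspect line):

```python
import tracemalloc


def bloat():
    """Stand-in for the code suspected of bloating memory."""
    return [bytes(1000) for _ in range(1000)]


tracemalloc.start()
snap_before = tracemalloc.take_snapshot()
data = bloat()
snap_after = tracemalloc.take_snapshot()

# compare_to lists file/line pairs ordered by how much they allocated
# between the two snapshots; the leaking line should float to the top.
top_stats = snap_after.compare_to(snap_before, "lineno")
for stat in top_stats[:3]:
    print(stat)
```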
Hi @kazunarikudo. This time it's my turn to apologize for the late reaction; I did not have the device at hand. Now I have something new. When the scan is about to start, I get this error:
2022-12-13 15:10:47,482 :: harvesters.core :: ERROR :: GenTL exception: Given handle does not support the operation. (Message from the source: Invalid data stream handle) (ID: -1006)
Traceback (most recent call last):
File "/home/bota/.local/lib/python3.10/site-packages/harvesters/core.py", line 2266, in _fetch
manager.update_event_data(self.timeout_period_on_update_event_data_call)
File "/home/bota/.local/lib/python3.10/site-packages/genicam/gentl.py", line 1458, in update_event_data
return _gentl.EventManagerNewBuffer_update_event_data(self, timeout)
_gentl.InvalidHandleException: GenTL exception: Given handle does not support the operation. (Message from the source: Invalid data stream handle) (ID: -1006)
[ERROR]:Emergency stop
2022-12-13 15:10:47,785 :: harvesters.core :: ERROR :: GenTL exception: Given handle does not support the operation. (Message from the source: Invalid data stream handle) (ID: -1006)
Traceback (most recent call last):
File "/home/bota/.local/lib/python3.10/site-packages/harvesters/core.py", line 2266, in _fetch
manager.update_event_data(self.timeout_period_on_update_event_data_call)
File "/home/bota/.local/lib/python3.10/site-packages/genicam/gentl.py", line 1458, in update_event_data
return _gentl.EventManagerNewBuffer_update_event_data(self, timeout)
_gentl.InvalidHandleException: GenTL exception: Given handle does not support the operation. (Message from the source: Invalid data stream handle) (ID: -1006)
Does it ring any bell? I'll try to dig for a solution on my side.
EDIT: here's a different one I got as well:
2022-12-13 17:21:42,231 :: harvesters.core :: ERROR :: GenTL exception: Given handle does not support the operation. (Message from the source: Invalid data stream handle) (ID: -1006)
Traceback (most recent call last):
File "/home/bota/.local/lib/python3.10/site-packages/harvesters/core.py", line 2266, in _fetch
manager.update_event_data(self.timeout_period_on_update_event_data_call)
File "/home/bota/.local/lib/python3.10/site-packages/genicam/gentl.py", line 1458, in update_event_data
return _gentl.EventManagerNewBuffer_update_event_data(self, timeout)
_gentl.InvalidHandleException: GenTL exception: Given handle does not support the operation. (Message from the source: Invalid data stream handle) (ID: -1006)
[ERROR]:Emergency stop
2022-12-13 17:21:42,526 :: harvesters.core :: ERROR :: GenTL exception: Tried to call a function but the library has been closed. Call GCInitLib first. (ID: -10002)
Traceback (most recent call last):
File "/home/bota/.local/lib/python3.10/site-packages/harvesters/core.py", line 2266, in _fetch
manager.update_event_data(self.timeout_period_on_update_event_data_call)
File "/home/bota/.local/lib/python3.10/site-packages/genicam/gentl.py", line 1458, in update_event_data
return _gentl.EventManagerNewBuffer_update_event_data(self, timeout)
_gentl.ClosedException: GenTL exception: Tried to call a function but the library has been closed. Call GCInitLib first. (ID: -10002)
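EDIT 2: the second traceback (ClosedException: "Call GCInitLib first") reads as if the fetch thread were still polling after the GenTL producer had already been closed. This is only a guess on my side, but the usual cure would be strict shutdown ordering: signal the fetch thread, join it, and only then release the library. A generic stdlib sketch of what I mean (no Harvester API involved):

```python
import threading
import time


class Producer:
    """Stand-in for a GenTL producer that raises once it has been closed."""

    def __init__(self):
        self.closed = False

    def fetch(self):
        if self.closed:
            raise RuntimeError("library has been closed")
        return object()

    def close(self):
        self.closed = True


producer = Producer()
stop = threading.Event()
errors = []


def fetch_loop():
    while not stop.is_set():
        try:
            producer.fetch()
        except RuntimeError as exc:
            errors.append(exc)  # this is what the traceback above looks like
            return
        time.sleep(0.005)


t = threading.Thread(target=fetch_loop)
t.start()
time.sleep(0.05)

# Correct order: stop and join the fetch thread *before* closing the producer.
stop.set()
t.join()
producer.close()
print(len(errors))  # 0: no fetch ever runs against a closed handle
```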
Hi Kazunari, it's me again.
I am using your library on a computer operated by a client of mine. Today I went there to update to v1.3.7 in order to fix the binning issues I told you about last time.
Alas, after upgrading to the latest version, my program doesn't work anymore. I could find the root cause: the harvesters thread that fetches the buffer takes so much memory that the other thread the software needs times out. This other thread periodically reads (usually at 20 Hz) the PC's serial port to check some safety flags sent by a motor controller.
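For context, the safety loop is essentially this shape (the serial read is mocked here; in the real code it is a 20 Hz read from the serial port, and a starved process shows up as missed deadlines):

```python
import time

CYCLE_S = 0.05      # 20 Hz safety polling
MAX_LATE_S = 0.03   # tolerated jitter before declaring a timeout


def read_safety_flags() -> bytes:
    """Stand-in for reading the motor controller's flags from the serial port."""
    return b"\x00"


timeouts = 0
next_deadline = time.monotonic()
for _ in range(20):  # about one second of polling
    time.sleep(max(0.0, next_deadline - time.monotonic()))
    read_safety_flags()
    if time.monotonic() > next_deadline + MAX_LATE_S:
        timeouts += 1  # a CPU-starved process misses its 20 Hz slot here
    next_deadline += CYCLE_S
print(timeouts)
```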
I tried to roll back to v1.3.4, which used to work perfectly fine, but it didn't help.
Here's the part of the code that seems to slow everything down:
Did something change in the memory management that would not be affected by a library rollback? (I used the command
pip3 install harvesters==1.3.4
.) I also tried to uninstall and reinstall the package completely, as well as deleting the files manually, but nothing did the trick. Thanks for your time.
I don't have an error report to show since the error is thrown by the other thread.
I can show a piece of code that demonstrates the reported phenomenon:
Configuration
Reproducibility
This phenomenon can be stably reproduced:
[x] Yes
[ ] No
[x] I've read the Harvester FAQ page.