The Galicaster Project is an open initiative to provide flexible, state-of-the-art solutions for recording educational multimedia content such as lectures and conferences.
On one capture agent (CA) with an IP camera, we intermittently get an error when a recording stops; it appears that the GStreamer pipeline cannot be shut down properly. We're not sure of the cause.
The following is logged to the console (stdout or stderr):
Traceback (most recent call last):
  File "/usr/share/galicaster/galicaster/scheduler/scheduler.py", line 122, in __stop_record
    self.recorder.stop()
  File "/usr/share/galicaster/galicaster/recorder/service.py", line 184, in stop
    self.__close_mp()
  File "/usr/share/galicaster/galicaster/recorder/service.py", line 192, in __close_mp
    self.current_mediapackage.status = mediapackage.RECORDED
AttributeError: 'NoneType' object has no attribute 'status'
The auto-recover mechanism then handles the mediapackage, which is added back to the repository with a "Recovered" prefix.
gstreamer logs:
2:18:46.087645621 4131 0x3034b70 WARN v4l2bufferpool gstv4l2bufferpool.c:912:gst_v4l2_buffer_pool_stop:<gc-v4l2-src:pool:src> some buffers are still outstanding
2:19:12.421106747 4131 0x7fa968001850 WARN rtspsrc gstrtspsrc.c:2664:on_timeout:<gc-rtp-src> source 27867261, stream 27867261 in session 0 timed out
galicaster logs:
galicaster 2016-09-09 08:49:59,057 INFO scheduler/scheduler Timeout to stop record 56467754
galicaster 2016-09-09 08:49:59,077 INFO recorder/service Stopping the capture
galicaster 2016-09-09 08:49:59,077 DEBUG recorder/recorder Stopping recorder, sending EOS event to sources
galicaster 2016-09-09 08:50:29,277 ERROR recorder/recorder Timeout trying to receive EOS message
galicaster 2016-09-09 08:50:29,277 ERROR recorder/service Handle error (Timeout trying to receive EOS message)
galicaster 2016-09-09 08:50:29,642 INFO mediapackage/repository Trying to recover the crashed recording
galicaster 2016-09-09 08:50:29,757 INFO mediapackage/repository Copying file /usr/share/galicaster-repository/gc_menz10_20160718T09h55m24_37/org.opencastproject.capture.agent.properties to /usr/share/galicaster-repository/rectemp/org.opencastproject.capture.agent.properties
galicaster 2016-09-09 08:50:29,876 INFO mediapackage/repository Repository temporal files moved to /usr/share/galicaster-repository/gc_menz10_20160718T09h55m24_37
galicaster 2016-09-09 08:50:30,490 INFO mediapackage/repository Crashed recording added to the repository
galicaster 2016-09-09 08:50:30,491 INFO mediapackage/repository Copying file /usr/share/galicaster-repository/gc_menz10_20160718T09h55m24_37/org.opencastproject.capture.agent.properties to /usr/share/galicaster-repository/gc_menz10_20160718T09h55m24_37/org.opencastproject.capture.agent.properties
galicaster 2016-09-09 08:50:30,537 DEBUG recorder/service Connecting recover recorder callback
galicaster 2016-09-09 08:50:30,642 INFO opencast/service Set status unknown to server
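For context on the "Timeout trying to receive EOS message" error: the usual GStreamer shutdown pattern is to send an EOS event into the pipeline and then wait on the bus for the EOS message with a timeout. The sketch below illustrates that pattern and how a stalled source would produce this error. It is our reading of the logs, not Galicaster's actual code; the test pipeline and the 30-second timeout are assumptions.

    # Minimal sketch of the EOS shutdown pattern suggested by the logs above.
    # Not Galicaster's code; the pipeline and the 30 s timeout are assumptions.
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    # Hypothetical pipeline standing in for the CA's RTSP/V4L2 sources.
    pipeline = Gst.parse_launch('videotestsrc num-buffers=100 ! fakesink')
    pipeline.set_state(Gst.State.PLAYING)

    # To stop, push EOS into the pipeline so the sinks can finalise their output...
    pipeline.send_event(Gst.Event.new_eos())

    # ...then wait (here 30 s) for the EOS message to come back on the bus.
    bus = pipeline.get_bus()
    msg = bus.timed_pop_filtered(30 * Gst.SECOND,
                                 Gst.MessageType.EOS | Gst.MessageType.ERROR)
    if msg is None:
        # This matches the logs: if a stalled source (e.g. the rtspsrc that
        # timed out above) never forwards EOS, the wait expires and the
        # recording is treated as crashed.
        print('Timeout trying to receive EOS message')

    pipeline.set_state(Gst.State.NULL)

If a source such as rtspsrc has stopped delivering data, EOS never propagates to the sinks, the bus wait times out, and the error path takes over, which is presumably where current_mediapackage ends up as None.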
We also see the following on the console (stdout or stderr), though we're not sure whether it's related:
Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused
We're not sure of the underlying cause, but it seems the Galicaster traceback, at least, should be fixed.
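As a sketch of what that fix might look like (based only on the traceback above; we haven't checked the surrounding code in recorder/service.py), __close_mp could simply guard against current_mediapackage being None, which presumably happens because the EOS-timeout error path has already handled the recording:

    # Hypothetical guard for __close_mp in galicaster/recorder/service.py,
    # sketched from the traceback alone; everything beyond the
    # current_mediapackage assignment is an assumption.
    def __close_mp(self):
        if self.current_mediapackage is None:
            # The error path (e.g. the EOS timeout) has apparently already
            # handled and cleared the mediapackage; nothing left to close.
            return
        self.current_mediapackage.status = mediapackage.RECORDED
        # ... rest of the original method unchanged ...

That would avoid the secondary crash, though the underlying pipeline-shutdown problem would of course still need investigating.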