The following error is logged by the celeryBroker worker at the end of the pipeline's _chainfileprocessing() function, which currently runs the pipeline batch jobs:
[2022-07-21 18:36:38,556: ERROR/ForkPoolWorker-8] Task celeryBroker._chainfileprocessing[36b544fe-dead-428a-9031-42a44eef178a] raised unexpected: TypeError("unsupported operand type(s) for |: 'chain' and 'NoneType'")
Traceback (most recent call last):
File "/home/joel/.local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/joel/.local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/home/joel/Documents/pipeline/celeryBroker.py", line 53, in _chainfileprocessing
response = chain( intakejob.intakejob(), cleanjob.cleanjob(), keywordjob.keywordjob()).apply_async()
File "/home/joel/.local/lib/python3.8/site-packages/celery/canvas.py", line 898, in __new__
return reduce(operator.or_, tasks, chain())
TypeError: unsupported operand type(s) for |: 'chain' and 'NoneType'
This error occurs at the end of the pipeline, once the final job, keywordjob.py, has resolved. It does not appear to shut down the Celery task queue, nor does it affect the Flask API route that triggers the task queue. It looks to be related to the return value being passed into chain().
At the moment, because the async call to the pipeline chain, all of the pipeline jobs, and the API connection are working as expected, I'm treating this error as low priority. Until the pipeline is refactored or developed further to make it more robust, I don't see a need to dig into the error too deeply.
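For reference, the traceback points at celery.canvas, where chain.__new__ folds its arguments together with the | operator (reduce(operator.or_, tasks, chain())); if any argument is None, that fold raises exactly this TypeError. One plausible cause (an assumption, not confirmed here) is that a task function with no return value is being called directly, so its None return value is passed to chain() instead of a signature built with .s(). The sketch below reproduces the mechanism with a stand-in Sig class instead of Celery, so it runs with no broker; the broken_job function is hypothetical:

```python
from functools import reduce
import operator


class Sig:
    """Stand-in for a Celery signature: supports chaining via | like celery.canvas."""

    def __init__(self, names):
        self.names = list(names)

    def __or__(self, other):
        if not isinstance(other, Sig):
            # Python turns NotImplemented into the familiar
            # "unsupported operand type(s) for |" TypeError.
            return NotImplemented
        return Sig(self.names + other.names)


def make_sig(name):
    return Sig([name])


def broken_job():
    """A task function with no return statement yields None when called."""
    pass


# Chaining signatures works as expected:
ok = reduce(operator.or_, [make_sig("intake"), make_sig("clean"), make_sig("keyword")])
print(ok.names)

# Passing the *result of calling* a job (None) into the fold reproduces the error:
try:
    reduce(operator.or_, [make_sig("intake"), broken_job()])
except TypeError as e:
    print("reproduced:", e)
```

If that is what is happening here, it would also explain why the error surfaces only after keywordjob has resolved: calling intakejob.intakejob(), cleanjob.cleanjob(), and keywordjob.keywordjob() runs each task synchronously while the chain() arguments are being evaluated, and chain() only fails afterwards. The usual Celery pattern is to pass signatures, e.g. chain(intakejob.intakejob.s(), cleanjob.cleanjob.s(), keywordjob.keywordjob.s()).apply_async(), assuming those are registered Celery tasks.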