abelBEDOYA opened 1 year ago
@abelBEDOYA The `services` queue is the default queue for running the pipeline controller itself. When using `set_default_execution_queue()`, you are setting the queue on which the pipeline steps will run. To control where the controller itself runs, use the `pipeline_execution_queue` parameter of `PipelineDecorator.pipeline`.
Does this help?
@ainoam Thanks for your answer! Providing the `pipeline_execution_queue` argument to `PipelineDecorator.pipeline` was the key point. However, now I'm encountering an issue where the pipeline is stuck in a pending state and never starts running. I've taken some screenshots for reference.
It is pending, but it has been placed in a queue with a worker, so it should start running.
@abelBEDOYA what is the queue with the worker?
@jkhenning the queue with the worker is `colaVolumen`. It shows a new experiment in the Next Experiment section, but it doesn't start running.
OK, can you share the console output or log of the agent listening to this queue?
@jkhenning I've created an agent by running:
`clearml-agent daemon --queue colaVolumen --docker`
It works properly for normal tasks, but pipelines always stay pending. How can I view the console output logs of the agent? I haven't found any command on the official website.
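One way to inspect the agent's console output is to run the daemon in the foreground and capture its stdout/stderr to a file. This is a sketch assuming `clearml-agent` is on the PATH and the queue name from this thread; `agent.log` is just an illustrative filename:

```shell
# Stop any previously started daemon for a clean restart
# (|| true so this is a no-op if nothing is running).
clearml-agent daemon --stop || true

# Re-run the agent in the foreground; its console output is shown
# in the terminal, and tee also keeps a copy in agent.log.
clearml-agent daemon --queue colaVolumen --docker 2>&1 | tee agent.log
```

The agent's log should show whether it ever pulls the pending controller task from the queue.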
Thanks for the help.
Describe the bug
I am trying to write a .py script in which a ClearML pipeline is defined using Python decorators. I would like to run it remotely using the ClearML queue system. To do so, I've added:
`PipelineDecorator.set_default_execution_queue('queue1')`
and this error shows up:
I have made sure a `queue1` queue exists. Besides, I have not written "services" as a queue.
To reproduce
Expected behaviour
It should run the pipeline remotely using `queue1`.