Closed: iguberman closed this issue 4 years ago.
As in Issue 43, I would suggest adding declare statements for each of these features. Since some of the annotations you proposed are Condor-specific, they will probably have to be ignored by other execution engines (like local or Hadoop).
In order to continue on that one, I need a list of declarations and how they are to appear in the Condor submit file.
Maybe a lazy approach? Just take them and put them in the generated submit file as-is, without trying to validate them on the Cuneiform side. If they are invalid, the HTCondor error will surface at submit time; that way Cuneiform doesn't have to keep track of new or deprecated Condor features and keywords. For example:
We're trying to declare custom submit key-value pairs for the task join-files (sorry about the terrible syntax, I am just using this to demonstrate my point):
```
declare htcondor_submit : join-files
universe=gorilla
blah=whatever
```
This will fail with an HTCondor error when the task gets submitted, because there is no such universe `gorilla` and no such keyword `blah`.
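To make the pass-through concrete, here is roughly what the generated submit file could look like for one join-files invocation. The executable/arguments/output/error/log lines are placeholders for whatever Cuneiform writes itself; only the last two key-value lines come from the declaration:

```
# placeholder for the lines Cuneiform generates on its own
executable = cf_worker.sh
arguments  = join-files_0001
output     = join-files_0001.out
error      = join-files_0001.err
log        = join-files_0001.log
# passed through verbatim from 'declare htcondor_submit : join-files'
universe = gorilla
blah     = whatever
queue
```

condor_submit then rejects the file on its own (unknown universe, unrecognized command), so Cuneiform never has to validate the keys.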
But this should work (note that I made one declaration for generic key-values that apply to all tasks and a separate one specific to the join-files task, which needs 256M of memory):
```
declare htcondor_submit :
universe=docker
docker_image=dockerimage
notification=Error
notify_user=support@a.com

declare htcondor_submit : join-files
RequestMemory=256M
```
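With the lazy pass-through, the merge for a join-files task could then look roughly like this (the executable/output/error/log lines are again placeholders for what Cuneiform generates itself):

```
# placeholder for the Cuneiform-generated lines
executable = cf_worker.sh
output     = join-files_0001.out
error      = join-files_0001.err
log        = join-files_0001.log
# from the generic 'declare htcondor_submit :'
universe     = docker
docker_image = dockerimage
notification = Error
notify_user  = support@a.com
# from the task-specific 'declare htcondor_submit : join-files'
RequestMemory = 256M
queue
```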
And this should probably produce a Cuneiform warning, because `executable` and `output` will be overwritten by Cuneiform's own values:
```
declare htcondor_submit : join-files
RequestMemory=256M
output=output.txt
executable=my.exe
```
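In that case Cuneiform's own values would have to win, e.g. something like the following (warning behavior and placeholder names are just a suggestion):

```
# Cuneiform keeps its own values for the keys it controls;
# the user-supplied executable=my.exe and output=output.txt are dropped with a warning.
executable = cf_worker.sh
output     = join-files_0001.out
# user-supplied key that survives unchanged
RequestMemory = 256M
queue
```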
Also, I just realized that
```
declare htcondor_submit : gen-xx-sequence
universe=local
```
would take care of the inline suggestion I made in the other issue, but only for the Condor backend.
Another thought: supply default Condor settings on the command line, to avoid cluttering the .cf file? E.g.:

```
cuneiform -p htcondor --htcondor_universe docker --htcondor_docker_image docker_image --htcondor_requestCpus 2
```
We dropped HTCondor support.
This is not a real issue, but a real limitation in CF as I see it. HTCondor is a very powerful tool for matching different types of jobs to different types of machines, but with a submit file generated purely from inside CF all that power is lost, along with the power to control the universe (this sounds funny ;)). For example, what if I want to run inside the "docker" universe, or run some tiny shell scripts inside the "local" universe to avoid scheduler overhead for such small jobs?

It would be nice if, on the Condor platform, tasks were allowed to take a number of parameters that would be added to the generated CF submit file, e.g. the universe, file transfer parameters, and worker match parameters like memory and CPU. In this case the DAG would be generated as it currently is, but the provided fields would overwrite the default ones.

Another approach: a complete submit file template as one of the inputs to the task? If it's a template, Cuneiform will use it but overwrite the Cuneiform-important settings: command, inputs, outputs (if file transfer is YES), etc. (a sketch follows below).

I would be happy to work on this one if you agree it is important and should be part of Cuneiform's functionality.
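A sketch of what such a template might look like (all names and the @...@ placeholder convention are invented here; the point is only that scheduling policy stays with the user while Cuneiform owns the per-task keys):

```
# user-controlled scheduling policy
universe        = docker
docker_image    = my/worker:latest
request_cpus    = 2
request_memory  = 4G
requirements    = (OpSys == "LINUX")
notification    = Error
notify_user     = support@a.com

# filled in (or overwritten) by Cuneiform for each task
executable            = @CF_EXECUTABLE@
arguments             = @CF_ARGUMENTS@
should_transfer_files = YES
transfer_input_files  = @CF_INPUT_FILES@
output                = @CF_OUTPUT@
error                 = @CF_ERROR@
log                   = @CF_LOG@
queue
```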