OK, so each image spec should take exactly four inputs:
text prompts
style prompts
number of iterations per epoch
number of epochs
AND THAT'S IT! Hide everything else behind defaults. Then we want to store the resulting images against the generation request. How, and what kind of model would this be? Ideally you'd store them against the image spec itself, so it might be worth making the image spec an actual model; the generation spec would then have to be a model too. That's fine, and probably worth doing anyway.
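The shape described above could be sketched as two plain data classes before committing to real ORM models. This is a minimal sketch under stated assumptions: the names `ImageSpec` and `GenerationSpec`, and storing generated images as a list of paths on the spec, are all hypothetical choices, not anything decided yet.

```python
from dataclasses import dataclass, field

# Hypothetical sketch -- names and fields are assumptions, not final design.
@dataclass
class ImageSpec:
    # The only four user-facing inputs; everything else is hidden defaults.
    text_prompts: list[str]
    style_prompts: list[str]
    iterations_per_epoch: int
    epochs: int
    # Resulting images stored against the spec itself (e.g. file paths or IDs).
    generated_images: list[str] = field(default_factory=list)

@dataclass
class GenerationSpec:
    # Owns its image specs; would become a DB model alongside ImageSpec.
    image_specs: list[ImageSpec] = field(default_factory=list)
```

If these do become ORM models, `generated_images` would likely turn into a foreign-key relation from an image record back to its `ImageSpec` rather than an inline list.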
As part of this work, it would be good to generate the video as well, one clip for each "step" in the video spec. We can move that to another ticket if need be.
It would also be good to get progress persisted.
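Persisted progress could be as simple as a counter keyed to the two numbers the spec already exposes (iterations per epoch, epochs). A hedged sketch, assuming a hypothetical `GenerationProgress` record where the real version would save to the DB instead of just mutating a field:

```python
from dataclasses import dataclass

# Hypothetical sketch -- in a real model, step() would persist to the DB.
@dataclass
class GenerationProgress:
    iterations_per_epoch: int
    epochs: int
    completed_iterations: int = 0

    def step(self) -> None:
        # Called once per iteration; the persistence point.
        self.completed_iterations += 1

    def fraction_done(self) -> float:
        # Overall progress across all epochs, in [0.0, 1.0].
        return self.completed_iterations / (self.iterations_per_epoch * self.epochs)
```

Keeping total work derivable from the spec means the UI can show a percentage without any extra bookkeeping.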