jg10545 opened 10 months ago
For now I think I'd like to maintain the current functionality of pipeline stages and `Pipeline` objects, where you can pass a tensor to `forward()` and get a result, e.g.

```python
result = pipeline(mytensor)
```

But if you pass a dictionary...

```python
result_dict = pipeline({"patch": mytensor})
```

...`result_dict` could contain other information. So each stage would need to:
* check in `forward()` whether the input was a dict
* a kornia wrapper from `KorniaAugmentationPipeline` that manages affine transforms only, and applies the sampled transformation to any coordinates passed in the input dictionary.

Alternate idea: have every pipeline stage's `forward()` method accept a `kwargs` dictionary, and change

```python
return x
```

to

```python
return x, kwargs
```
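The kornia-wrapper bullet above says the sampled affine transform should also be applied to any coordinates passed in the input dictionary. A framework-agnostic sketch of what that means, using plain Python in place of kornia (both helper names here are hypothetical):

```python
import math

def make_affine(angle_deg, tx, ty):
    """Build a 2x3 affine matrix (rotation then translation), row-major.
    Stands in for the parameters a kornia augmentation would sample."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    return [[c, -s, tx],
            [s,  c, ty]]

def transform_coords(coords, M):
    """Apply the same sampled affine transform to a list of (x, y) points,
    keeping coordinate annotations consistent with the warped image."""
    return [(M[0][0] * x + M[0][1] * y + M[0][2],
             M[1][0] * x + M[1][1] * y + M[1][2]) for (x, y) in coords]

# rotate 90 degrees and shift right by 5: (1, 0) -> (5, 1)
M = make_affine(90.0, 5.0, 0.0)
warped = transform_coords([(1.0, 0.0)], M)
```

Whatever stage samples the transform would call something like `transform_coords` on the coordinate entries of the input dict before returning it.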
So every stage could read from, add to, or manipulate whatever's in that dictionary.

Next question is whether `x` has to be a tensor or if it can be a dictionary or something. And how much I need to micromanage the different pipeline stages to handle this.
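A minimal sketch of the `return x, kwargs` idea, with toy stages instead of real tensor ops (all class names here are hypothetical, not the library's API):

```python
class Stage:
    """Base stage: forward() takes the working object x plus a shared
    kwargs dict it can read from, add to, or modify before passing on."""
    def forward(self, x, **kwargs):
        return x, kwargs

    def __call__(self, x, **kwargs):
        return self.forward(x, **kwargs)

class Doubler(Stage):
    def forward(self, x, **kwargs):
        kwargs["doubled"] = True           # write into the shared dict
        return [v * 2 for v in x], kwargs

class Offset(Stage):
    def forward(self, x, **kwargs):
        shift = kwargs.get("shift", 0)     # read from the shared dict
        return [v + shift for v in x], kwargs

class Pipeline:
    """Threads (x, kwargs) through each stage in order."""
    def __init__(self, *stages):
        self.stages = stages

    def __call__(self, x, **kwargs):
        for stage in self.stages:
            x, kwargs = stage(x, **kwargs)
        return x, kwargs

pipeline = Pipeline(Doubler(), Offset())
out, info = pipeline([1, 2, 3], shift=10)  # out == [12, 14, 16]
```

The appeal is that no stage needs to know what's in the dict except for the keys it cares about; everything else just flows through.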
Stages that could accept a dict for `x` and output a corresponding dict:

* `PatchSaver`
* `PatchResizer`
* `PatchStacker`
* `PatchTiler`
* `PatchScroller`
* `HighPassFilter`
* `SoftProofer`
* `SpectrumSimulationAttack`

Stages that take a tensor `x` and output a tensor:

* `RectanglePatchImplanter`
* `FixedRatioRectanglePatchImplanter`
* `ScaleToBoxRectanglePatchImplanter`
* `WarpPatchImplanter`
* `PatchWrapper`
* `Pipeline.initialize_patch_params()`
OK, the implant part is annoying. Thoughts on how to implement an API for implanting multiple patches:

* have `forward()` implant sequentially, adding one patch at a time; this complicates `forward()` since it will sometimes be implanting in a target that already has a patch in it, and complicates the `lastsample` dict
* a `MultiPatchImplanter` class that wraps a dictionary of implanter objects, one per patch; during `forward()`, each implanter's output feeds the `forward` call of the next implanter

Thoughts on this:
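One way the `MultiPatchImplanter` option could look, sketched with a toy stand-in implanter (the class shape and the `lastsample` bookkeeping here are assumptions, not the existing API):

```python
class ToyImplanter:
    """Stand-in for a real implanter: 'implants' by appending the patch."""
    def __init__(self, tag):
        self.tag = tag

    def forward(self, target, patch):
        # a real implanter would composite the patch into an image tensor;
        # it must tolerate targets that already contain earlier patches
        return target + [patch], {"tag": self.tag}

class MultiPatchImplanter:
    """Wraps a dict of implanter objects, one per named patch."""
    def __init__(self, implanters):
        self.implanters = implanters
        self.lastsample = {}   # per-patch record of the last sampled params

    def forward(self, target, patches):
        # chain: each implanter's output is the next implanter's input
        for name, implanter in self.implanters.items():
            target, sample = implanter.forward(target, patches[name])
            self.lastsample[name] = sample
        return target

mpi = MultiPatchImplanter({"a": ToyImplanter("a"), "b": ToyImplanter("b")})
out = mpi.forward([], {"a": "patch_a", "b": "patch_b"})
```

Keeping `lastsample` keyed by patch name sidesteps the bookkeeping problem from the sequential option, since each wrapped implanter only ever sees its own patch.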
Possible implementation: start from `WarpPatchImplanter` and add a couple lines of code to wrap bare input tensors in a dict. This way it's easy to capture information (like model output details) you may need later.
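The wrapping step could be as small as a helper like this (the function name and the `"patch"` key are assumed from the example at the top of the issue):

```python
def _as_dict(x, key="patch"):
    """Wrap a bare input in a dict; pass dicts through unchanged,
    so downstream stages can stash extra info (e.g. model outputs)."""
    return x if isinstance(x, dict) else {key: x}

wrapped = _as_dict("mytensor")                                   # gets wrapped
passed_through = _as_dict({"patch": "mytensor", "coords": [(0, 0)]})  # unchanged
```

Every stage's `forward()` would call `_as_dict` on its input first, so tensor-in/tensor-out callers keep working while dict callers get the richer behavior.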
example workflows to eventually enable