Open bergfried opened 10 years ago
Does this have to do with the `fill` parameter? I'm digging into the `_transform2` function. It seems that `fill` is passed to the function, but passed as 1 to `ImagingTransformAffine`, `ImagingTransformPerspective` and `ImagingTransformQuad`. I lose track of `fill` in the generic transform function `ImagingTransform`, and I don't know enough C to tell what the line `if (fill) memset(out, 0, imOut->pixelsize)` is doing there. That aside, I might be able to submit a pull request to at least support transparent fill, if not extrapolation.
Edit: Reading more about it, it looks like `memset(out, 0, imOut->pixelsize)` writes a pixel's worth of zeros (black?).
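If that reading is right, the line simply zero-fills one pixel's worth of bytes at the output pointer. A minimal Python sketch of that behaviour (the 4-byte RGBA layout here is an assumption for illustration, not taken from the C source):

```python
# Sketch of `if (fill) memset(out, 0, imOut->pixelsize)`: write
# pixelsize zero bytes at the output pixel. Assuming a 4-byte RGBA pixel:
pixel = bytearray((255, 128, 64, 255))   # some existing RGBA pixel
pixel[:] = bytes(len(pixel))             # equivalent of memset(out, 0, 4)
print(tuple(pixel))                      # (0, 0, 0, 0): black, alpha 0
```

So zeros mean black in RGB modes, and fully transparent black when an alpha channel is present.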
Transparent fill should already be the behaviour.
from PIL import Image, ImageTransform
im = Image.open("Tests/images/hopper.png").convert("RGBA")
transform = ImageTransform.ExtentTransform((0, 0, 150, 150))
im.transform((100, 100), transform).show()
gives this image (it's hard to see because it is transparent, but there is transparency to the right and bottom of the image).
Of course, it's only transparent if the image has an alpha channel. Otherwise it's black.
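That contrast can be checked without hopper.png; here is a sketch using a solid-colour stand-in image (the colour, sizes, and sample coordinates are arbitrary choices for illustration):

```python
from PIL import Image, ImageTransform

# Sample a 150x150 extent from a 100x100 source; anything past
# x=100 or y=100 lies outside the image and gets the fill value.
transform = ImageTransform.ExtentTransform((0, 0, 150, 150))

rgba = Image.new("RGBA", (100, 100), (255, 0, 0, 255))  # stand-in image
out = rgba.transform((100, 100), transform)
print(out.getpixel((10, 10)))  # maps inside the source: (255, 0, 0, 255)
print(out.getpixel((90, 90)))  # maps outside: (0, 0, 0, 0), transparent

rgb = Image.new("RGB", (100, 100), (255, 0, 0))
out = rgb.transform((100, 100), transform)
print(out.getpixel((90, 90)))  # no alpha channel: (0, 0, 0), opaque black
```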
Also, there are two different parameters to some of the methods: `fill` and `fillcolor`. Some of the methods take one, or the other, or both, so maybe that is something to make more consistent.
Whenever a function needs to extrapolate values of non-existing pixels (i.e. pixels outside of the image), it should be possible to specify the extrapolation method to use.
The extrapolation methods I consider to be most useful are:
See the introduction to http://docs.opencv.org/modules/imgproc/doc/filtering.html for further explanation and other extrapolation methods. There are several Pillow functions where this comes in handy. For example, `Image.transform` might produce a rotated and scaled version of the source image on top of a transparent background, so you can easily paste the transformed image into other images. Another example: `Image.filter` might produce a blurred image that you can use for a tiled wallpaper without ugly tile borders.
If you need to prioritize, I consider `Image.transform` to be more important than `Image.filter`. (This is because it is quite easy to work around this issue for `Image.filter` as soon as it is implemented for `Image.transform`.)
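The `Image.filter` workaround is presumably wrap-around padding done by hand: tile the image, blur, and crop the centre tile, so its border pixels are blurred against wrapped neighbours. A sketch under that assumption (the helper name, tile colour, and blur radius are all made up for illustration):

```python
from PIL import Image, ImageFilter

def seamless_blur(im, radius=4):
    """Blur with wrap-around extrapolation by tiling 3x3 and cropping."""
    w, h = im.size
    big = Image.new(im.mode, (3 * w, 3 * h))
    for i in range(3):
        for j in range(3):
            big.paste(im, (i * w, j * h))
    blurred = big.filter(ImageFilter.GaussianBlur(radius))
    # The centre tile saw wrapped neighbours on every side.
    return blurred.crop((w, h, 2 * w, 2 * h))

tile = Image.new("RGB", (64, 64), (200, 50, 50))
out = seamless_blur(tile)
print(out.size)  # (64, 64)
```

A more frugal version would pad by only the kernel radius rather than a full 3x3 tiling, but the idea is the same.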