lily-mara opened this issue 2 years ago
Gah, sorry this got lost in the shuffle, and thank you for all the detail.
There is definitely something fishy going on. For sizes less than 16x16, coregraphics is consistently returning garbage from CGBitmapContextGetData. I'll dig around a bit and see if I can figure out what's going on.
ah okay, this is our fault I think: when we call CGBitmapContextCreate we are passing NULL for data and 0 for bytesPerRow, which means coregraphics chooses these values for us. In this case that means it is choosing a row size of 64 bytes (so 16 RGBA pixels), which we aren't handling correctly.
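For reference, the call in question is roughly this (a sketch using the Rust core-graphics bindings, not the actual piet source; the alpha flag here is illustrative):

```rust
use core_graphics::{
    base::kCGImageAlphaPremultipliedLast,
    color_space::CGColorSpace,
    context::CGContext,
};

// Passing None for data and 0 for bytes_per_row asks CoreGraphics to
// allocate the backing store and pick the row stride itself. For a 1x1
// image it picks 64 bytes per row rather than the "tight" 4 bytes.
let (width, height) = (1, 1);
let ctx = CGContext::create_bitmap_context(
    None,                               // data: CoreGraphics allocates
    width,
    height,
    8,                                  // bits per component
    0,                                  // bytes_per_row: CoreGraphics chooses
    &CGColorSpace::create_device_rgb(),
    kCGImageAlphaPremultipliedLast,
);
```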
I'll have a patch up shortly.
okay actually this is slightly more nuanced; I haven't spent much time in this codebase in a while.
most bitmap image storage APIs have some concept of 'stride', which is the number of bytes used to store a row of image data. Per my very loose understanding, it is useful to do things like have the number of bytes in a row be a multiple of the platform word size. Likely this makes operations like copying more efficient; it may have other benefits I don't know about.
I also do not know all the various considerations that go into choosing a stride, but it is enough for our purposes to know that this is a thing.
In practice, what this means is that the storage that contains your image on the platform can potentially be larger than the actual size of your image.
To convert the raw pixel data stored by the platform into a compact vec, you need to copy the bytes, using the stride to determine the starting position of each row of pixels.
We do currently handle this, but we only do so in piet_common, for instance in cg_back.rs:
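The copy is conceptually something like this (a sketch of the idea with illustrative names, not the actual cg_back.rs source):

```rust
const PIXEL_SIZE: usize = 4; // RGBA, one byte per channel

/// Copy a `width` x `height` image out of `raw`, whose rows are laid out
/// `stride` bytes apart, into a tightly packed width * height * 4 buffer.
fn compact_pixels(raw: &[u8], width: usize, height: usize, stride: usize) -> Vec<u8> {
    assert!(stride >= width * PIXEL_SIZE);
    let mut out = Vec::with_capacity(width * height * PIXEL_SIZE);
    for row in 0..height {
        // The stride, not width * 4, determines where each row begins.
        let start = row * stride;
        out.extend_from_slice(&raw[start..start + width * PIXEL_SIZE]);
    }
    out
}
```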
Piet was designed with the intention of rendering graphics to a display, not to an image, and rendering to an image is not well supported. This is a limitation that we could address, but it would probably require a different API for each backend, because each backend sets up its drawing context differently; a unified API would be possible, but a hassle.
It is true that the test-picture examples are currently not correctly extracting pixels. These files are intended to be used in testing, and are not intended as a guide for how to use piet to generate bitmaps. I think this is an oversight that we should fix at some point.
So: maybe we should do that at some point? For the time being, however, I do think that using piet-common is a reasonable solution. If you aren't using piet-common, then currently you have to handle all of this stuff yourself; the easy solution would be to use piet-common directly, instead of piet-coregraphics.
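Roughly like this (a sketch from memory of the piet-common API; exact names and signatures, e.g. copy_raw_pixels, may differ between piet versions):

```rust
use piet_common::{kurbo::Rect, Color, Device, ImageFormat, RenderContext};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (width, height) = (1, 1);
    let mut device = Device::new()?;
    // The BitmapTarget owns the backing store and knows its stride.
    let mut target = device.bitmap_target(width, height, 1.0)?;
    {
        let mut ctx = target.render_context();
        ctx.fill(
            Rect::new(0.0, 0.0, width as f64, height as f64),
            &Color::WHITE,
        );
        ctx.finish()?;
    }
    // copy_raw_pixels does the stride-aware row copy for you, so `buf`
    // comes back tightly packed: width * height * 4 bytes.
    let mut buf = vec![0u8; width * height * 4];
    target.copy_raw_pixels(ImageFormat::RgbaPremul, &mut buf)?;
    Ok(())
}
```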
In summary:

- if you want to render to a bitmap and extract the pixels, use piet_common, which handles stride for you;
- the test-picture.rs files in the various backends should not crash just because the provided size does not fulfill some undocumented constraints.

I am not going to address these issues myself in the near-term unless they are blocking someone else's work, but if someone else is interested in fixing them I would be happy to review patches and answer questions.
Sorry for the run-around!
I'm trying to replicate the results of the example file in piet-coregraphics. I'm unable to render an empty PNG file. I'm using these crates:
Running the above example, I get the following output:
The PNG crate expects the image buffer to be 4 bytes long, which makes sense considering it's an RGBA image with 4 bytes per pixel, and it contains only one pixel. I'm not sure why piet is returning a 64-byte image buffer, and I don't really know enough about image formats to make a good guess. I opened the raw buffer in a hex editor, and it contains 0xff 0xff 0xff 0xff followed by 60 bytes of 0x00. Is there some trimming that I'm supposed to do, or is there a settings mismatch in my code between png and piet?
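The png side of my code is roughly this (a simplified sketch, not my exact code; names follow the current png crate API):

```rust
use std::{fs::File, io::BufWriter};

fn write_png(buffer: &[u8]) -> Result<(), Box<dyn std::error::Error>> {
    let file = File::create("out.png")?;
    // A 1x1 RGBA image, so png expects exactly 1 * 1 * 4 = 4 bytes of data.
    let mut encoder = png::Encoder::new(BufWriter::new(file), 1, 1);
    encoder.set_color(png::ColorType::Rgba);
    encoder.set_depth(png::BitDepth::Eight);
    let mut writer = encoder.write_header()?;
    // This is where it fails: the buffer piet hands back is 64 bytes.
    writer.write_image_data(buffer)?;
    Ok(())
}
```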
EDIT: it seems there may be some minimum image size that I'm not exceeding. If I change the size to 100x400, it seems to work just fine. Is this restriction coming from piet, coregraphics, or png?