andlabs closed this 6 years ago.
I'd like to second this request.
This is actually one of the only features I'd need before I could start using this, both for my personal projects and in production at $DAYJOB.
I took a stab at implementing something that can render images into an Area. I'm not making a pull request yet because I haven't looked at Windows support (and building on Windows makes me want to drink), but I'd like to request comments on a proposed interface.
The libui branch is here: https://github.com/art4711/libui/tree/draw-pixmap
The Go ui binding is here: https://github.com/art4711/ui/tree/draw-pixmap
And a test program (with an equivalent GTK program) is here: https://github.com/art4711/fract-ui
The reasoning is explained here: https://github.com/art4711/libui/blob/draw-pixmap/doc/pixmap.md
Please let me know if it looks reasonable and I should dig into Windows support too, or if it's too far off from a good API and I shouldn't waste my time.
Your code is close to being what I would have done, and I could probably use some of it too. The two main differences would be that
I'll probably drop this in alongside Grid and Table after the next packaged release. I'll read your reasoning file and comment on that soon.
I suspect pre-creating the objects would actually make more sense for other applications; my use case is a blank canvas of pixel-sized pixels, so I just want the library to render things and not get involved otherwise.
I'll look into how much work it'd take to do this. The big question is scaling. The way the API is designed, things are supposed to be as unaware of pixels as possible. If Cairo, Quartz, and Windows can scale for us, this should be easy. If not, it will take a while.
The byte order part: as I wrote, Cairo expects host-endian only, no options there. Quartz can specify endianness. So any explicitly big- or little-endian format would require the Cairo backend to manually convert each uint32 when copying the pixmap, which is significantly slower. Right now, all the conversion needed (flipping the image upside down for Quartz) is one memcpy per row of data, which is as cheap as it can get.
It actually becomes easier to efficiently populate the pixmap in host-endian mode: `a << 24 | r << 16 | g << 8 | b`, as long as we treat each pixel as a uint32. With each channel as a separate byte, you'd have to write one byte at a time, which is much less efficient (especially in Go, which is horribly slow at manipulating byte arrays).
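To make the comparison concrete, here is a minimal Go sketch of the host-endian packing described above (the helper name is mine, not part of any binding):

```go
package main

import "fmt"

// packARGB packs four 8-bit channels into a single host-endian uint32,
// following the a<<24 | r<<16 | g<<8 | b layout discussed above.
// One 32-bit store per pixel instead of four byte stores.
func packARGB(a, r, g, b uint8) uint32 {
	return uint32(a)<<24 | uint32(r)<<16 | uint32(g)<<8 | uint32(b)
}

func main() {
	px := packARGB(0xFF, 0x12, 0x34, 0x56)
	fmt.Printf("%08X\n", px) // FF123456
}
```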
Right. The problem is actually Go, where image byte data is always in R, G, B, A order regardless of host endianness. I was thinking we swizzle before handing the data to the OS, but I don't know which option is better.
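For reference, the swizzle from Go's fixed R,G,B,A byte order into host-endian 0xAARRGGBB words would look something like this (a sketch; the function name is hypothetical):

```go
package main

import "fmt"

// swizzleRGBAtoARGB converts Go's byte-ordered R,G,B,A pixel data
// (as stored in image.RGBA.Pix) into host-endian 0xAARRGGBB words,
// one uint32 per pixel.
func swizzleRGBAtoARGB(pix []byte, dst []uint32) {
	for i := range dst {
		r, g, b, a := pix[4*i], pix[4*i+1], pix[4*i+2], pix[4*i+3]
		dst[i] = uint32(a)<<24 | uint32(r)<<16 | uint32(g)<<8 | uint32(b)
	}
}

func main() {
	pix := []byte{0x12, 0x34, 0x56, 0xFF} // one R,G,B,A pixel
	dst := make([]uint32, 1)
	swizzleRGBAtoARGB(pix, dst)
	fmt.Printf("%08X\n", dst[0]) // FF123456
}
```

This is one extra pass over the data, which is the cost being weighed against forcing an explicit byte order on the C side.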
Scaling and pixel size compared to point size is something I still haven't figured out. I suppose I could provide two drawing operations, DrawNative and DrawStretch, with the latter filling a given point rectangle and stretching with either nearest-neighbor or linear interpolation (other interpolation modes are only available starting with Windows 8, IIRC). DrawNative would always draw 1:1 with screen pixels and there would be a separate function that would give you the equivalent point rect.
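As a sketch of what the 1:1 drawing path implies, here is the point-rect calculation the separate function would perform. All names and types here are hypothetical illustration, not a proposed libui signature:

```go
package main

import "fmt"

// Image is a stand-in for an image with a fixed pixel size.
type Image struct{ pxW, pxH int }

// NativeSize returns the point rectangle an image covers when drawn
// 1:1 with screen pixels on a display with the given scale factor,
// i.e. the rect a DrawNative-style operation would occupy.
func (img Image) NativeSize(scale float64) (w, h float64) {
	return float64(img.pxW) / scale, float64(img.pxH) / scale
}

func main() {
	img := Image{pxW: 32, pxH: 32}
	w, h := img.NativeSize(2) // on a 2x high-DPI display
	fmt.Println(w, h)         // 16 16
}
```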
I currently have no plans to support loading external resources or decoding image formats in libui itself.
Ah, of course, I hadn't thought about the image libraries in Go. That's another thing I should have taken into account. This is all messy. Standards are great; everyone should have one. You might be right, and the only doable thing is to swizzle.
The thing that annoys me about this is that Cairo isn't the final format of the image either, so it will also do a transformation of its own. In the end you copy and transform the same data half a dozen times before it actually hits the hardware. That actually leads me to believe the most correct way to do this is to support all possible formats and have a way to tell the user which format is the most efficient: something I started doing but then threw away.
Let's see if I can find some time today to hack up something that works.
New version up. I implemented a uiImage that can be drawn into a draw context.
I went through a dozen experiments, gave up, and implemented image/draw.Image in the Go wrapper, which basically led to a completely different API for libui. It's still not right; I especially started to dislike the format flags I for some reason decided to shove into one uint32. There's also a first stab at an efficient data loader (a remnant of an earlier experiment), but I'm not sure it needs to be there now.
Does this new API make more sense? I'll see if I can find some time to look at Windows over the next few days.
I've now started a slightly different image API, based on the fact that UI elements needing to render on high-DPI systems use separate image file assets. Right now, this is going to be used in uiTable for image cell parts (don't worry about what this means just yet; just know that multiple parts of different types can make one cell). Not sure if a uiImageView will be provided in alpha 4 or not.
The byte order is native with B in the lowest byte (bits 7-0) and A in the uppermost byte (bits 31-24). A uiImage has a floating-point size that will be used for actually sizing content and a series of representations of integral size, and either package ui or the underlying OS (whichever decides to take responsibility) will choose the best one for the display.
So for example:
- floating-point size: 16 × 16
- image 0: 16px × 16px
- image 1: 32px × 32px
On a high-DPI system with 2x scaling compared to normal, the 32x32 image will be drawn.
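The selection step could be sketched as follows. This is an illustration of the "choose the best one for the display" logic under assumed names, not libui's actual implementation:

```go
package main

import (
	"fmt"
	"math"
)

// rep is a hypothetical integral-size representation held by a uiImage.
type rep struct{ pxW, pxH int }

// bestRep picks the representation whose pixel width is closest to the
// image's point width multiplied by the display's scale factor.
func bestRep(reps []rep, pointW, scale float64) rep {
	want := pointW * scale
	best := reps[0]
	for _, r := range reps[1:] {
		if math.Abs(float64(r.pxW)-want) < math.Abs(float64(best.pxW)-want) {
			best = r
		}
	}
	return best
}

func main() {
	reps := []rep{{16, 16}, {32, 32}}
	// On a 2x high-DPI display, a 16-point image wants 32 pixels.
	fmt.Println(bestRep(reps, 16, 2)) // {32 32}
}
```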
Hey all, any progress on this? I could help implement it too. Is anyone working in their own private branch? Would be happy to help.
I've been busy with IRL work stuff, sorry.
Just wanted to add a +1 to the use case @art4711 gave. I'm not interested in loading images, but I do want a pixbuf I can draw into manually and then get into the UI. I'd like to try out libui for my Deluxe Paint-inspired paint program. (It's in C++ and currently uses Qt, but I find that a bit bulky and heavyweight for my taste.)
Just want to chime in: displaying images is the only thing holding me back from being able to use this. I typically write photography-related code.
I, too, would use such functionality. Low latency would be key as well, since I'd need it to display video.
Merged with #318.
migrated from https://github.com/andlabs/ui/issues/100
Simple image-displaying widget.
Need to figure out scaling, scrolling, etc.
Need to figure out image loading.
uiArea might obviate the need for this, but putting it up anyway.