jgodfrey closed this 3 years ago
Note, while the "portrait-pairs" feature added here looks nice, it doesn't handle the paired image's text. That is, it only displays the requested text for the single, originally selected image and doesn't include the text from the image that was secondarily "paired".
Jeff, thank you very much for all that; I've pulled it into my local develop branch and will have a look a.s.a.p. It's good to see that the switch to an open-ended bit-mask style for SHOW_TEXT has already been utilized for the addition of 'folder'.
Paddy
Jeff, the double image idea looks pretty good, thanks for that. I've had to put a few extra bits in to cope with exif orientations 6 and 8 (270 and 90 degrees) and slicing [pic_num+1:] generates an error on the last in the list (though it's already in a try: block so it only matters infrequently).
I found that running the picture frame on my small laptop screen the text sometimes wrapped round and went off the bottom so I've made the y position adjust itself so the bottom of the text stays in the same place.
As I said, the whole program needs to be refactored, so it's mainly a case of getting the code in and working - it can be tidied up later. (Isn't that what "technical debt" is all about!)
Paddy
Jeff, the double image idea looks pretty good, thanks for that. I've had to put a few extra bits in to cope with exif orientations 6 and 8 (270 and 90 degrees) and slicing [pic_num+1:] generates an error on the last in the list (though it's already in a try: block so it only matters infrequently).
Argh... I had the 2 "requires rotation" EXIF orientations on my list to handle at one point and lost track of them I guess. Regarding the slice error, I will admit that, although I'm a software engineer by day, I'm not a Python programmer. Playing with the list reordering code I used in an online sandbox, I decided that out-of-range indices were already handled gracefully by Python itself so I didn't find a reason to handle them explicitly. I guess I missed something there, though I still don't see a problem in the online sandbox...
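For illustration, the slicing behaviour being discussed here can be checked in a few lines: Python clamps out-of-range slice bounds rather than raising, so a slice like `[pic_num+1:]` on the last element just yields an empty list, whereas plain indexing past the end does raise.

```python
# Slices clamp out-of-range bounds instead of raising, so
# files[pic_num + 1:] is safe even when pic_num is the last index.
files = ["a.jpg", "b.jpg", "c.jpg"]
last = len(files) - 1

print(files[last + 1:])  # -> []  (empty list, no exception)

# Plain indexing, by contrast, does raise:
try:
    files[last + 1]
except IndexError:
    print("indexing past the end raises IndexError")
```

This matches what the online sandbox showed, so the error Paddy saw presumably came from something other than the slice itself.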
I found that running the picture frame on my small laptop screen the text sometimes wrapped round and went off the bottom so I've made the y position adjust itself so the bottom of the text stays in the same place.
Yeah, good idea. I didn't experiment with anything that would cause a wrap, but your solution sounds great.
As I said, the whole program needs to be refactored, so it's mainly a case of getting the code in and working - it can be tidied up later. (Isn't that what "technical debt" is all about!)
Yeah, 100% agree.
Yes, iterating with wrapping along a list is always a bit fraught, especially with popping and pushing to change the indexation! I'm also thinking about how that might work with the image records in a db. Hmm, maybe there should be an extra field for tagging which images are the "second half" of a pair. I will almost certainly revise my rather messy hacks later.
The other thing I encountered running the picture frame for real with mixed image sizes is that the cropping to the smallest image causes a very odd effect where there is a 1020-pixel image next to a 3840-pixel image. So I have made the wider image the same width as the narrower one.
Yes, iterating with wrapping along a list is always a bit fraught, especially with popping and pushing to change the indexation!
Right - fiddling with the order / content of a collection that's being iterated on can certainly be fraught with danger... ;)
Hmm, maybe there should be an extra field for tagging which images are the "second half" of a pair. I will almost certainly revise my rather messy hacks later.
While I certainly haven't given it much thought, it seems to me that simply having a field that represents "already shown" would suffice. That way, when the paired image is found, it just needs to be marked as shown and wouldn't need to be "moved". Then, the image iterator would just need to skip "already shown" images for the current pass.
Then, at the end of a full cycle, the "already shown" flags could be cleared to start again (or, whatever is appropriate based on config options).
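The "already shown" idea above can be sketched in a few lines. Everything here is hypothetical (the `Pic` class and field names are illustrative stand-ins, not the project's actual code): each record carries a `shown` flag, the iterator skips flagged entries, and the flags are cleared at the end of a full pass.

```python
# Hypothetical sketch of the "already shown" flag idea.
class Pic:
    def __init__(self, fname):
        self.fname = fname
        self.shown = False  # set True once displayed (alone or paired)

def next_unshown(i_files, start):
    """Return the index of the next not-yet-shown picture, or None
    when the pass is complete."""
    for i in range(start, len(i_files)):
        if not i_files[i].shown:
            return i
    return None  # end of pass: caller resets the flags

def reset_pass(i_files):
    """Clear all flags so the next full cycle can begin."""
    for pic in i_files:
        pic.shown = False
```

With this, a paired image is simply marked `shown = True` when it's consumed; nothing in the list ever needs to be moved.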
The other thing I encountered running the picture frame for real with mixed image sizes is that the cropping to the smallest image causes a very odd effect where there is a 1020-pixel image next to a 3840-pixel image. So I have made the wider image the same width as the narrower one.
Yeah, I fiddled with various ways of concatenating two dissimilarly sized images and settled for what I committed. But my images were certainly closer in height than your example. Your method sounds reasonable to me. Here's a related article, complete with small Python functions for multiple concatenation methods.
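As a small illustration of Paddy's approach (scale the wider image down to the narrower one's width, preserving aspect ratio, before placing the pair side by side), the size arithmetic can be isolated into a helper. This is a sketch, not the project's actual code; the resulting sizes would then be passed to whatever resize call the image library provides.

```python
def paired_sizes(size_a, size_b):
    """Given (width, height) of two portrait images, return the scaled
    sizes when the wider image is shrunk to the narrower one's width,
    preserving each image's aspect ratio."""
    (wa, ha), (wb, hb) = size_a, size_b
    w = min(wa, wb)  # target width: the narrower of the two
    return ((w, round(ha * w / wa)), (w, round(hb * w / wb)))

# A 3840-wide image paired with a 1020-wide one: the big image is
# scaled down rather than cropped, avoiding the odd mismatch.
print(paired_sizes((3840, 5760), (1020, 1800)))
# -> ((1020, 1530), (1020, 1800))
```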
While I certainly haven't given it much thought, it seems to me that simply having a field that represents "already shown" would suffice. That way, when the paired image is found, it just needs to be marked as shown and wouldn't need to be "moved". Then, the image iterator would just need to skip "already shown" images for the current pass.
Related, it'd be super easy to create a dynamic View that contained all of the "portrait images that haven't yet been shown" (for example), making the task of finding the next image to pair absolutely trivial...
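The "dynamic View" suggestion could look roughly like this with the standard-library `sqlite3` module. The schema and names here are hypothetical, purely to show the shape of the idea, not the project's actual database:

```python
# Hypothetical sketch: a SQL view of portrait images not yet shown.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE pics (
    id INTEGER PRIMARY KEY,
    fname TEXT,
    is_portrait INTEGER,
    shown INTEGER DEFAULT 0)""")
con.execute("""CREATE VIEW unshown_portraits AS
    SELECT * FROM pics WHERE is_portrait = 1 AND shown = 0""")

con.executemany("INSERT INTO pics (fname, is_portrait) VALUES (?, ?)",
                [("a.jpg", 1), ("b.jpg", 0), ("c.jpg", 1)])

# Finding the next image to pair becomes a single query:
row = con.execute("SELECT id, fname FROM unshown_portraits LIMIT 1").fetchone()
print(row)  # -> (1, 'a.jpg')
con.execute("UPDATE pics SET shown = 1 WHERE id = ?", (row[0],))
```

Marking an image as shown automatically removes it from the view, so the pairing search never has to re-filter the whole list by hand.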
OK, I've pushed the changes to the develop branch. I re-did the pop/insert system in favour of a 'shown' flag. (In preparation for using a database-type record, I made iFiles a list of Pic objects with properties accessed using dot style.) There are two issues:
- In order to get the orientation and aspect, the get_exif_info() function has to be called, so if the portrait images are sparse it could cause a little delay. The info is stored in the list, so it saves processing later.
In the original implementation, I attempted to make it work whether the exif data was pre-loaded or not (via --delay_exif). If the exif is loaded at the start, it's all available, and the new "aspect" member is accurate for every image.

If the exif wasn't pre-loaded, then as soon as the first portrait image is found, the search for a second one will fail (since the aspect is recorded when the exif is loaded, which happens when the image is displayed). So, that first image would display by itself. When the 2nd portrait image is discovered, its search for another one would find the only other one with a portrait aspect ratio (the first image that was displayed). Now, 2 are available to be paired with the 3rd one when it's found. So, in this case, additional portrait pairs are "discovered" on-the-fly as further images are displayed, and the "mix" improves as the slide show runs.

While that's not "ideal", I didn't want to force the exif to be preloaded for a bunch of images simply for the pairing feature. I even considered having the new "portrait_pairs" option force "delay_exif" to FALSE, to ensure the exif was preloaded.
With your current implementation, it looks like the exif will be force-loaded (if necessary) during the search for the next portrait image - up to the point where the next portrait image is found or the remaining exif is loaded. I guess that's a middle ground between the two extremes of the original implementation.
Another difference in the current code is that it only searches the remaining list for portrait matches. In my original implementation, it searched the entire list, starting at the image beyond the "current" one. While that could recycle an already-seen image (from earlier in the list), it ensures that a pair is generated (assuming there is more than 1 portrait image available). In your case, the last portrait image in the list will be displayed as a single image - not paired. I can see either implementation as valid.
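The "load exif on demand while searching" middle ground described above can be sketched as follows. The `Pic` class and `get_exif_info` signature here are simplified stand-ins for the project's real code:

```python
# Hypothetical sketch: scan forward, loading exif lazily, until a
# portrait image turns up.
class Pic:
    def __init__(self, fname, aspect=None):
        self.fname = fname
        self.aspect = aspect  # None until the exif has been read

def find_next_portrait(i_files, start, get_exif_info):
    """Return the index of the first portrait image (aspect < 1) at or
    after `start`, loading exif on demand; None if there is none."""
    for i in range(start, len(i_files)):
        pic = i_files[i]
        if pic.aspect is None:
            pic.aspect = get_exif_info(pic.fname)  # load and cache
        if pic.aspect < 1.0:
            return i
    return None
```

The cost is bounded: the scan stops as soon as a portrait is found, and every aspect it does load is cached on the record, so it is never computed twice.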
- Flagging images as 'shown' prevents them from being displayed as the pointer iterates over them. However, it also stops the MQTT 'back' function from working.
Hmmm.. Yeah, that's kind of nasty, and complicated by some of the current tech-debt...
Flagging images as 'shown' prevents them from being displayed as the pointer iterates over them. However, it also stops the MQTT 'back' function from working.
I haven't given it much thought, but what if the 'shown' member stored the index of the image it was paired with (instead of being a simple bool)? That might provide enough info to do something sensible when navigating via MQTT. And, might allow the same images to be "re-paired" in a MQTT forward/back scenario.
But, as we've already discussed, this would certainly be much easier to do if it were thought about more holistically.
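The index-instead-of-bool suggestion is small enough to sketch directly. As before, names here are hypothetical stand-ins, not the project's actual code:

```python
# Hypothetical sketch: store the partner's index rather than a bool,
# so MQTT back/next can re-pair the same two images.
class Pic:
    def __init__(self, fname):
        self.fname = fname
        self.shown_with = None  # None = not yet paired; int = partner index

def mark_paired(i_files, a, b):
    """Record that images at indices a and b were displayed together."""
    i_files[a].shown_with = b
    i_files[b].shown_with = a

def partner_of(i_files, i):
    """Return the index this image was shown with, or None."""
    return i_files[i].shown_with
```

A simple `shown` check still falls out for free (`shown_with is not None`), so the iterator logic barely changes.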
Ah yes, that makes sense (about the images repeating to start with). I was trying to avoid that, but maybe it's not so bad. I will think some more about it.
Using the pic_num of the second image as a flag is a really good idea. Python can search through 100,000 records in a list very fast (even faster using sqlite). I might patch that in as a stand-in while thinking about the best way to do the whole thing.
I made iFiles a list of Pic objects with properties accessed using dot style)
Forgot to comment on this. I'm a HUGE fan. I hated all those "magic number" array references... ;)
Yes, if this was C or Rust it would be a nice simple struct - python dict will do but dot notation is much easier to read than all those square brackets and apostrophes.
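For anyone following along, a `dataclass` is a compact way to get that struct-like dot access in Python. The fields below are illustrative, not the project's actual `Pic` definition:

```python
# Illustrative sketch of a struct-like record with dot access.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pic:
    fname: str
    orientation: int = 1      # exif orientation tag
    aspect: float = 1.5       # width / height
    shown_with: Optional[int] = None  # partner index when paired

# Readable, and a typo raises AttributeError instead of silently
# hitting the wrong "magic number" slot of a list or tuple.
pic = Pic("holiday_01.jpg", orientation=6)
print(pic.orientation)  # -> 6, vs. the old i_files[i][3]-style lookup
```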
I've converted Pic.shown to Pic.shown_with and made it None, or else an int key to the portrait image it was shown with: https://github.com/pi3d/pi3d_demos/commit/6a7c6048ee9a676d9fcaf54e0e6aeb6375f88dc2 . It seems to work much of the time, though it's still possible to go next/back in certain combinations and end up with different portraits together.
I will think a little about the text for combined portraits but that might be just too messy to squeeze in.
Then I will probably draw a line under any further alterations until the whole thing is refactored as multiple files and classes.
I will think a little about the text for combined portraits but that might be just too messy to squeeze in.
Yeah, this is definitely messy. I guess the easy (but again, messy) thing to do would be to have tex_load() return a list of indices belonging to the texture(s) it just loaded. Then, those indices could be used to look up the appropriate image info when building the text.
But, having that info come back from tex_load() feels odd. In a refactor, it'd seem better for the image(s) to be identified prior to that call, and then just tell tex_load() what to grab, rather than it being responsible for the decision...
One more thought for the "pairing" feature - when the refactoring happens. I'd think it'd be just as valid to pair landscape images together for those using the software on a portrait oriented photo frame. So, it'd be nice to allow the pairing in both cases.
Added a number of features and bug-fixes, including: