Open - jeffalperin opened this issue 3 years ago

I haven't changed anything other than beginning to load it up with 4k .jpg images. PictureFrame2020 displays maybe a dozen or so images and then quits with a segmentation fault. How do I start troubleshooting? I've invoked it with
export DISPLAY=:0 && python3 /home/pi/pi3d_demos/PictureFrame2020.py
and it's running on an RPi4 (8GB).
--Jeff
That sounds a bit worrying. Are the images all the same dimensions, or might there be especially large ones that cause the seg fault? I'm thinking maybe something gets written beyond where it should go. This is the joy of C and attempting to access it from Python with ctypes! I will see if I can reproduce the error here if you give me details of the images involved.
Paddy
There are about 88 files, ranging in file size from 1 to 13 MB; the average is 5-6 MB. They're all 3840 x 2160 (though there may be a few which are 2160 x 3840 because they're vertical in orientation).
I took the 6 largest files, ranging from 10 to 13 MB, put them in a separate folder, and then re-started PictureFrame2020 pointed at just that folder (no subfolders). It ran for hours overnight with no issues.
When I had the problems yesterday, I had the following directory structure for --pic_dir = /home/pi/4kpix:

/4kpix/
    /travel/
        /Canadian_Rockies
        /Europe_2019

I'm testing now with --pic_dir = /home/pi/4kpix/travel (only one level of subdirectory). I wonder if having too many levels of subdirectories is the problem.
--Jeff
So much for that theory: with just one level of subdirectory, it ran for about 15 minutes and then crashed with a segmentation fault.
It crashes with a segmentation fault when pointed at a single directory with 41 images and no subdirectories.
Jeff, it does point towards the problem being associated with something in one (or more) of the images, though. I'm not sure how you could pin it down - maybe if you set --verbose true and --shuffle false you would get an indication of approximately where in the sequence it happened, then select a smaller range, etc.
Maybe the EXIF reading is the issue - it's all done with memory offsets. Or something else - I really don't know.
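For example, something along these lines (I'm quoting the flag syntax from memory, so check it against PictureFrame2020config.py):

```
export DISPLAY=:0 && python3 /home/pi/pi3d_demos/PictureFrame2020.py --pic_dir /home/pi/4kpix/travel --verbose True --shuffle False
```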
Paddy: I've identified the issue. I have one image with dimensions 2971 x 1671; all the others are 3840 x 2160. The segmentation fault occurs each time the smaller image is about to be displayed, but only when --auto_resize=False. If --auto_resize is set to True, the image displays with no problem. I've got --auto_resize set to False because I'm using a 4k display and the other images are cropped to 4k size (i.e. 3840 x 2160). So, is this a bug or expected behavior?
--Jeff
Jeff, well done, I hope it wasn't too tedious to find. I'm not entirely surprised that there are some issues with image dimensions. In the early days of GPUs the widths had to be powers of 2, then by the time of the Broadcom GPU and driver on the earlier RPis there was a strange set of widths that seemed to work OK https://github.com/tipam/pi3d/blob/master/pi3d/Texture.py#L25 which I had to discover basically by trial and error! However, entering a Texture with a wrong width simply resulted in scrambled lookup values. With the 4k capability I tried a few other widths and the new driver seemed to be able to cope with them, so I thought it would be safe... Oh well.
If I get a chance I'll try writing a program to test different widths on the RPi4 and see if there is a set of magic values. It might be a long-winded process if the majority cause seg faults! In the meantime it's probably best to operate on the basis that --auto_resize=False should only be used on sets of images that have been externally sized to match the display dimensions exactly.
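The sort of test I have in mind is roughly this - just a sketch from memory, so the pi3d argument names (particularly automatic_resize) need checking against the source, and the stripe pattern is only there to make scrambled lookups easy to spot:

```python
import numpy as np
from PIL import Image
import pi3d

# Probe a range of texture widths. The width is printed before the Texture is
# created, so the last value printed before any seg fault identifies the
# problem width.
DISPLAY = pi3d.Display.create(frames_per_second=2)
CAMERA = pi3d.Camera(is_3d=False)
shader = pi3d.Shader("uv_flat")
widths = iter(range(3796, 3841, 4))      # sample of widths to try

while DISPLAY.loop_running():
    try:
        w = next(widths)
    except StopIteration:
        break
    print("testing width", w, flush=True)
    img = np.zeros((2160, w, 3), dtype=np.uint8)
    img[:, ::16, :] = 255                # vertical stripes make scrambling obvious
    tex = pi3d.Texture(Image.fromarray(img), mipmap=False,
                       automatic_resize=False)    # argument name from memory - check it
    sprite = pi3d.ImageSprite(tex, shader, camera=CAMERA,
                              w=DISPLAY.width, h=DISPLAY.height, z=5.0)
    sprite.draw()                        # shown on the next loop_running() call

DISPLAY.destroy()
```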
Paddy
It was pretty easy to find after I followed your suggestion to turn --shuffle off and added this debugging line:
```python
# set the file name as the description
print(iFiles[pic_num][0]) # <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< ja debugging
if config.SHOW_TEXT > 0 or paused: #was SHOW_TEXT_TM > 0.0
```
I'm still puzzled, though, why the config file comments say to set --auto_resize=False in order to use 4k resolution on the RPi4. Seems like it's still possible to output to a 4k display regardless of the --auto_resize setting?
The rendering works by drawing to a rectangle (actually two triangles) which is made to fit the screen, though it can be bigger or smaller (such as the 'background' grey strip behind the text, or the text itself). The shader then fills in the rectangle according to the code https://github.com/pi3d/pi3d_demos/blob/master/shaders/blend_new.fs#L7 where you can see it looks up the RGB values for each pixel using texture2D(), which takes the values from the supplied Texture object.
That Texture can be bigger or smaller than the number of pixels being rendered on the screen, and the GPU has algorithms (mipmaps) for scaling depending on whether you want speed or smooth interpolation. So if you scaled your texture down to 16x9 pixels it would render fine onto your 3840x2160 screen, but you would lose much of the detail.
If you scale your 3840 pixels down to 1920 then 99% of the loss of resolution won't be apparent; however, where you have fine grainy textures that vary from one pixel to the next (grass, fine twigs in a bush, the weave of some cloth or gravure-style printing, say) you might notice the difference. Or, to put it another way, you're losing the benefit of a 4k monitor if you're going to display images limited to 1920 pixels.
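In code it amounts to something like this - a minimal sketch using the stock uv_flat shader rather than blend_new (which additionally mixes two textures during the fade), and the file path is just a placeholder:

```python
import pi3d

DISPLAY = pi3d.Display.create(frames_per_second=20)
CAMERA = pi3d.Camera(is_3d=False)        # 2D camera, no perspective
shader = pi3d.Shader("uv_flat")          # PictureFrame2020 uses blend_new instead

# The rectangle (two triangles) sized to fill the screen.
slide = pi3d.Sprite(camera=CAMERA, w=DISPLAY.width, h=DISPLAY.height, z=5.0)
slide.set_shader(shader)

# The Texture the shader samples for each screen pixel; it can be larger or
# smaller than the 3840x2160 being drawn, and mipmap=True lets the GPU use
# pre-scaled copies when interpolating.
tex = pi3d.Texture("/home/pi/4kpix/travel/some_image.jpg",  # placeholder path
                   blend=True, mipmap=True)
slide.set_textures([tex])

while DISPLAY.loop_running():
    slide.draw()
```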
But two things occur to me. a) In pi3d.constants.init() the code determines whether it is running on an 'old' RPi and sets pi3d.PLATFORM to PLATFORM_PI, so it should be possible to automatically add the extra permitted widths to the list https://github.com/tipam/pi3d/blob/master/pi3d/Texture.py#L25 and always auto_resize. b) Maybe switching mipmap off for the 4k screens would speed up the fragment shader. I will try testing that...
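For a) and b) I mean something along these lines (attribute and argument names recalled from memory, so check them against the pi3d source before relying on them):

```python
import pi3d

# a) pi3d.constants sets PLATFORM when it detects the old Broadcom GPU/driver,
#    so the resize-to-safe-widths behaviour could be kept only for that case.
on_legacy_pi = (pi3d.PLATFORM == pi3d.PLATFORM_PI)

# b) for 4k screens, try mipmap=False so the GPU samples the full-size image
#    directly instead of generating and filtering scaled-down copies.
tex = pi3d.Texture("/home/pi/4kpix/travel/some_image.jpg",   # placeholder file
                   blend=True,
                   mipmap=False,
                   automatic_resize=on_legacy_pi)   # argument name from memory
```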