Open sapnho opened 1 year ago
Hi, it certainly would be good. I think that the video would have to be rendered to a window using a video player (I think the new Pi uses wayland rather than the X server, but I think the X commands are translated and work. I should probably ask one of the many people that seem to have a copy of the RPi 5 for review if they could test pi3d, though I don't think the OS will be released until mid October), then the frame and blending in/out images rendered over the top using window transparency, i.e. with the GLX option.
I think I have tried something along these lines before but can't remember if there were major roadblocks or I just got bogged down. People had managed to render to screen using the 'new' mesa drivers without the X server running but the code was horrible looking C and the prospect of making a python module and integrating it with pi3d seemed too daunting. But a few years have passed and I expect there's lots more work available for us to 'just use'.
I'm away for a few days (Berlin then Copenhagen) but when I'm back I will spend some time on this (there have already been a couple of fixes related to the latest versions of numpy and pillow dropping functionality, and I expect more will creep out with the new RPi)
All the best (were you cycling in Portugal?)
Paddy
PS there's a suggestion here https://forums.raspberrypi.com/viewtopic.php?t=273270 that it's possible to build SDL2 without needing X, or, presumably, wayland. I did make a version of pi3d using rust https://github.com/paddywwoof/rust_pi3d/ so if I look at what I did there, maybe it would be possible to get python pi3d to use SDL2 as an option. I will look at this as well when I get back.
The Pi 5 should be available in a few weeks, and so should the latest OS, Bookworm. It would be great to have a general update of Pi3D for the latest OS. I'm not sure to what extent matting would also be affected; after all, the color scheme will have to come from a video and not from a photo.
(Portugal was just beaching and hiking. The country has non-stop hills that make biking, especially with the heat, quite demanding!)
Looks like I was too optimistic regarding the availability of the Pi5. Let's hope it'll be available to mere mortals in November then.
Rashly I've pre-ordered a 4GB; it should be delivered in the week from 23rd Oct!
Cool, where did you order it?
https://shop.pimoroni.com/products/raspberry-pi-5 they're a company in Sheffield quite near to where I live and have a nice sense of humour so I tend to always use them. Don't know if they would send to Germany...
I think the new Pi uses wayland rather than X server but I think the X commands are translated and work..
From what I can read, Wayland seems to be really powerful. Maybe there is indeed a new way of translating the old Pi3D.
Here's a little test of using python as a video renderer. I've not looked at audio, and probably won't do. I've tested the enclosed videos (one is 1280x720 another is same but scaled to 640x360 and third is reduced FPS as well) on RPi4 and they all tick along at 50+FPS (the actual video only runs at 24 so there's a bit of capacity spare, but obviously not 4k) On RPi3 the larger frame size drops to 11FPS and the smaller ones to 22 or so.
So I think having videos play in picframe is quite feasible given that a) we're not attempting audio and b) people will have to curate their own content with respect to making content length fit the slide-show time, scaling video to a reasonable size or FPS.
If you want to try this out, download the sample from my google drive (you can probably just curl that link) and put it on a Raspberry Pi you've set up with bookworm and pi3d v2.51. Stop picframe so you just have the command line and mkdir video_test. It might be better to download and unzip onto a proper computer then scp * pi@192.168.0.XXX:/home/pi/video_test
On the RPi you will need to
sudo apt install ffmpeg
cd video_test
source /home/pi/venv_picframe/bin/activate
nano VideoPlayer.py
# comment out the line "import demo"; on later runs, edit the different video file names and sizes and make sure they match
python VideoPlayer.py
For some reason I found the terminal needed to be reset in order to see output again. Obviously the proper system would find the dimensions from the video file etc. There are a couple of lines in the comments with ffmpeg command line settings to scale and change fps.
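For anyone curious about the mechanism, the core of a player like this can be sketched roughly as below: ffmpeg decodes the file to raw rgb24 frames on stdout, and each frame's bytes are read from the pipe to be pasted into the pi3d Texture each draw loop. This is a hedged illustration, not the actual VideoPlayer.py; the file name and sizes in the usage comment are placeholders.

```python
# Rough sketch (not the actual VideoPlayer.py): ffmpeg decodes to raw
# rgb24 frames on stdout and we read them back one frame at a time.
import subprocess

def frame_nbytes(width, height, bytes_per_pixel=3):
    """Size of one raw rgb24 frame: one byte each for R, G and B."""
    return width * height * bytes_per_pixel

def open_video_stream(path, width, height, fps):
    """Spawn ffmpeg writing raw rgb24 frames to a pipe."""
    cmd = ["ffmpeg", "-i", path, "-f", "rawvideo", "-pix_fmt", "rgb24",
           "-vf", f"scale={width}:{height}", "-r", str(fps), "-"]
    return subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)

def next_frame(pipe, width, height):
    """Read one frame's worth of bytes; None at end of stream."""
    want = frame_nbytes(width, height)
    raw = pipe.stdout.read(want)
    return raw if len(raw) == want else None

# usage (on a machine with ffmpeg; "ocean.mp4" is a placeholder name):
#   pipe = open_video_stream("ocean.mp4", 640, 360, 24)
#   while (frame := next_frame(pipe, 640, 360)) is not None:
#       ...  # copy frame into the pi3d Texture each draw loop
```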
Neat! I tried your demo, and it worked.
Then I uploaded a 640 x 360 video of my own, but that gave a distorted result, probably because I hadn't set the bytes per pixel correctly. I can't see where I can find this information in my video file. If it's the color profile, as RGB would suggest, it is "BT.2020 HLG (9-18-9)", but that is not a color profile I have ever heard of.
Played around with Handbrake and exported a 1280 x 720 video which is shown cropped but works playback-wise. You can download it here for testing.
I think ffmpeg will try to convert to 24 bits, i.e. one byte each for R, G and B.
There is an app, ffprobe, that will give lots of info on file contents, which I will use to get the dimensions rather than having to hard-code them.
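A sketch of what that ffprobe step could look like (the flags and JSON field names are real ffprobe output options, mirroring the JSON form used later in this thread; the wrapper functions are my own illustration):

```python
# Hedged sketch: ask ffprobe for stream info as JSON and pick out the
# first video stream. These helpers are hypothetical, not code from
# the scripts in this thread.
import json
import subprocess

def extract_stream_info(probe_json):
    """Return (width, height, fps) from parsed `ffprobe -of json` output."""
    for stream in probe_json.get("streams", []):
        if "width" in stream:  # audio streams carry no width
            num, den = stream.get("avg_frame_rate", "0/1").split("/")
            fps = int(num) / int(den) if int(den) else 0.0
            return stream["width"], stream["height"], fps
    return None  # no video stream found

def probe(path):
    cmd = ["ffprobe", "-v", "error",
           "-show_entries", "stream=width,height,avg_frame_rate",
           "-of", "json", path]
    return extract_stream_info(json.loads(
        subprocess.check_output(cmd, text=True)))
```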
It would be great if we could have something where people can upload any video and it converts automatically to what is possible on the frame.
Or, we add a script that checks a folder "video_new" and, whenever a new video is uploaded, converts it to a format that works with Pi3D and then adds it to the Pictures/videos folder. I'll try something in that regard.
God bless AI-assisted coding!
This is a working script that does the following: it checks a folder "video_new" for any new videos that have been uploaded. It waits until the upload is ready ("stable") before it proceeds. (BTW, remember the error message in PictureFrame whenever new photos were uploaded? The reason was that the analysis started before the upload was complete. It might make sense to include something like this in PictureFrame as well.)
The script will then convert all videos to a resolution of 640x360, remove the audio track, and save the converted file in .mp4 format. After confirming the conversion is complete and stable, it moves the .mp4 file to the /home/pi/Pictures directory and deletes the original video file. I took the last step because of the above issue.
I tried it with a few videos and conversion seems to work just fine.
The idea is to make sure that Pi3D always gets the right format and doesn't choke on some speciality.
Here is the script:
#!/usr/bin/env python3
# Required packages:
# watchdog: For monitoring directory changes (Install via pip with 'pip3 install watchdog')
# ffmpeg: For processing and converting video files (Install via apt with 'sudo apt-get install ffmpeg')

import os
import subprocess
from time import sleep
from shutil import move
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler


class VideoConverterHandler(FileSystemEventHandler):
    def wait_until_stable(self, filename, stability_duration=3, check_interval=1):
        last_size = -1
        stable_for = 0
        while stable_for < stability_duration:
            try:
                size = os.path.getsize(filename)
                if size == last_size:
                    stable_for += check_interval
                else:
                    stable_for = 0
                    last_size = size
            except OSError:
                break
            sleep(check_interval)

    def process_video(self, filename):
        file, ext = os.path.splitext(filename)
        if ext.lower() in ['.mp4', '.mov', '.avi', '.mkv']:
            print(f"Processing video: {filename}.")
            try:
                probe_cmd = f'ffprobe -v error -select_streams v:0 -show_entries stream=width,height,avg_frame_rate,codec_name,color_space -of default=noprint_wrappers=1 "{filename}"'
                probe_result = subprocess.check_output(probe_cmd, shell=True, text=True)
                video_info = self.parse_video_info(probe_result)
                print(f"Video '{filename}' has the following properties:")
                print(f"Frame Rate: {video_info['fps']}\nFormat: {video_info['format']}\nColor Profile: {video_info['color']}\nResolution: {video_info['resolution']}")
                converted_filename = f"{os.path.basename(file)}_converted.mp4"
                converted_filepath = os.path.join('/home/pi/videos_converted', converted_filename)
                # Specify the codec as libx264 to ensure compatibility with QuickTime Player.
                convert_cmd = f'ffmpeg -i "{filename}" -vf scale=640:360 -c:v libx264 -preset slow -crf 18 -an "{converted_filepath}"'
                subprocess.run(convert_cmd, shell=True)
                print(f"Video '{filename}' converted and saved as '{converted_filepath}'")
                self.wait_until_stable(converted_filepath)
                final_path = os.path.join('/home/pi/Pictures', converted_filename)
                move(converted_filepath, final_path)
                print(f"Converted video moved to '{final_path}'.")
                os.remove(filename)
                print(f"Original video '{filename}' has been deleted.")
            except subprocess.CalledProcessError as e:
                print(f"An error occurred while processing the video: {e}")

    def on_created(self, event):
        if not event.is_directory:
            print(f"New video detected: {event.src_path}. Verifying upload completion.")
            self.wait_until_stable(event.src_path)
            self.process_video(event.src_path)

    @staticmethod
    def parse_video_info(probe_result):
        lines = probe_result.split('\n')
        video_info = {
            'resolution': 'Unknown',
            'fps': 'Unknown',
            'format': 'Unknown',
            'color': 'Unknown'
        }
        width = height = None  # defensive defaults in case probe output is incomplete
        for line in lines:
            if 'width=' in line:
                width = line.split('=')[1]
            elif 'height=' in line:
                height = line.split('=')[1]
                video_info['resolution'] = f"{width}x{height}"
            elif 'avg_frame_rate=' in line:
                fps_values = line.split('=')[1].split('/')
                fps = str(int(fps_values[0]) / int(fps_values[1]))
                video_info['fps'] = fps
            elif 'codec_name=' in line:
                video_info['format'] = line.split('=')[1]
            elif 'color_space=' in line:
                video_info['color'] = line.split('=')[1]
        return video_info


if __name__ == "__main__":
    path = 'video_new'
    converted_path = '/home/pi/videos_converted'
    if not os.path.exists(converted_path):
        os.makedirs(converted_path)
    # convert anything already waiting in the folder before watching for new files
    for filename in os.listdir(path):
        full_path = os.path.join(path, filename)
        if os.path.isfile(full_path):
            handler = VideoConverterHandler()
            handler.wait_until_stable(full_path)
            handler.process_video(full_path)
    event_handler = VideoConverterHandler()
    observer = Observer()
    observer.schedule(event_handler, path, recursive=False)
    observer.start()
    try:
        while True:
            # Run indefinitely until a keyboard interrupt.
            pass
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
Remarkable that complicated code like that just works!
I suppose this functionality would belong in the database monitoring section, where there are supposed to be semaphore systems to split that out from the pi3d stuff.
@sapnho My RPi 5 arrived and it does run quite a bit faster; the original ocean.mp4 video will run at 60fps (pi3d doesn't go higher than that because of the graphics pipeline). I will look to see what the minimum changes are to run video in picframe. At the moment we merge each image with the border before generating the pi3d.Texture from it, and then render that each frame with varying opacity. The video playing system updates the whole Texture from the ffmpeg stream directly, so either the border would need to be drawn as another object (like the text or clock) or just left off. At the moment I will either just render the video without a border or render to a different pi3d Shape.
@paddywwoof Any particular observations about the Pi5? Does it get warmer than the Pi 4 running Pi3D?
None of my retailers seem to have the Pi 5 in stock, nor do they have any idea when it will be back on the shelves, so I am curious! :-)
I guess the most important thing is to get the screensaver in Pi3D working. I am still amazed that they made it so hard.
I've not done a full comparison but, running picframe or just the desktop, the processor is warm but not hot. It feels much more like a laptop to use; I can look things up with the browser, whereas with previous versions I tended to use my phone when I needed to check something on the internet! So, following that comparison, it might be reasonable to expect people to use the full desktop version rather than mess around with the lite image and install the missing dependencies. I tried it first with the SD card (64 bit lite) I'd been testing on the Pi3 and 4 and it seemed to run OK. Early days but hopeful.
I've put the screensaver to one side now. I thought I might have made a breakthrough when I was editing the .config/wayfire.ini file: as soon as I changed the timeouts from -1 to 5 and 10 and saved the file, they came into effect. I had assumed that wayfire would need to be restarted, but it doesn't. However, this doesn't work if the file is changed by a python program, or even from another terminal by SSH. It's usable if the brightness is turned down to zero, but not ideal as it probably doesn't save much power.
PS I tested the video resizing program written by AI and it's amazingly good. The only improvements I noticed were a. using the json format for the video data to save having to parse it b. there needs to be a sleep() in the while True loop (better to use signal.pause) c. if we're emptying a specific folder of pending video files we can probably do without watchdog, just check if there's anything there every now and then. The method of checking the file size to ensure write operations have finished is definitely worth using in the picframe database building routine.
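Point b. above could look like this minimal sketch (my illustration of the suggested change, not actual converter or picframe code):

```python
# Instead of `while True: pass`, which spins a CPU core doing nothing,
# block the main thread until a signal arrives. Sketch of the
# suggestion above, not the actual script.
import signal

def wait_for_interrupt():
    try:
        signal.pause()  # sleeps with zero CPU until any signal is delivered
    except KeyboardInterrupt:
        pass  # Ctrl-C falls through here so the caller can clean up

# usage: call wait_for_interrupt() in place of the busy `while True` loop
```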
Paddy
I've posted the first edition of the video-playing-picframe to branch video, which includes the mods for branch sdl2. In this version it only looks for .mp4 extensions (but I think ffmpeg can cope with pretty much anything) and it just fills the screen with the video rather than trying to fit to a border or letterbox etc. If you try using an unchanged phone video in portrait mode it will be scrambled.
I will post a revised version of video_converter.py with a scaling that crops to screen height, i.e.
ffmpeg -i video.mp4 -vf "scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720" video_crop.mp4
Although this could run on the RPi in the background, I think it makes more sense to run it on a laptop and copy the resized videos to the RPi. ffmpeg seems capable of doing enough processing to make the RPi5 shut down, even with a 4A USB supply, so it might compete with itself for processing power if it's streaming the video at the same time as scaling other ones and writing them to disk. I might do some more tests if I can find a beefier USB supply! I've only tested on the RPi5 as yet; I will try on the 4 and 3 later today.
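For what it's worth, the arithmetic that force_original_aspect_ratio=increase performs can be written out as a small helper (my own illustration, not code from the converter): scale uniformly until both dimensions cover the target, then the crop step trims the overhang.

```python
# Illustration of the scale-then-crop maths: find the smallest uniform
# scale where both dimensions reach the target box, which is what
# ffmpeg's force_original_aspect_ratio=increase does before crop runs.
def cover_scale(src_w, src_h, dst_w, dst_h):
    """Scaled size before cropping; both dimensions >= the target."""
    s = max(dst_w / src_w, dst_h / src_h)
    return round(src_w * s), round(src_h * s)

# a 16:9 source fits the 1280x720 target exactly; a portrait phone
# video ends up 1280 wide but far too tall, and crop trims the height
```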
I'm not sure which is the tidiest way for others to test this. Either pip uninstall picframe then pip install git+https://github.com/helgeerbe/picframe@video with the existing venv active. Or create a new venv just for video testing and do the same install from github, but then uninstall pi3d and pip install git+https://github.com/tipam/pi3d@sdl2. I think pip is clever enough to spot if this is a repeat install and will use a local cache, so this latter route might be pretty quick.
If you get a chance to test this let me know what you find
Paddy
PS I'm testing on a full desktop setup as I can just type picframe in a terminal, and stop it with ESC, which is much easier.
This is the script I've run for converting videos. So far only on my laptop, but I will try it on an RPi later.
#!/usr/bin/env python3
# Required packages:
# ffmpeg: For processing and converting video files (Install via apt with 'sudo apt-get install ffmpeg')

import os
import sys
import subprocess
import json
from time import sleep


class MAX:  # get from pi3d.Display? or set manually here
    width = 1280
    height = 720

SOURCE = "/home/patrick/Pictures/videos/source/"
DEST = "/home/patrick/Pictures/videos/dest/"  # these will then be moved to suitable directories accessed by picframe
STABILITY_DURATION = 3
STABILITY_CHECK_INTERVAL = 1
FOLDER_CHECK_INTERVAL = 20
VIDEO_EXTENSIONS = ['.mp4', '.mov', '.avi', '.mkv']


class VideoConverterHandler:
    def __init__(self):
        try:
            while True:
                pending_files = os.listdir(SOURCE)
                if len(pending_files) > 0:  # still something to do
                    for filename in pending_files:
                        full_path = os.path.join(SOURCE, filename)
                        if os.path.isfile(full_path):
                            self.wait_until_stable(full_path)
                            self.process_video(full_path)
                else:
                    sleep(FOLDER_CHECK_INTERVAL)
        except KeyboardInterrupt:
            print("converter stopped by user break")

    def wait_until_stable(self, full_path):
        last_size = -1
        stable_for = 0
        while stable_for < STABILITY_DURATION:
            try:
                size = os.path.getsize(full_path)
                if size == last_size:
                    stable_for += STABILITY_CHECK_INTERVAL
                else:
                    stable_for = 0
                    last_size = size
            except OSError:
                break  # TODO shouldn't happen so log this
            sleep(STABILITY_CHECK_INTERVAL)

    def process_video(self, full_path):
        (file, ext) = os.path.splitext(full_path)
        if ext.lower() in VIDEO_EXTENSIONS:
            print(f"Processing video: {full_path}")
            try:
                probe_cmd = f'ffprobe -v error -show_entries stream=width,height,avg_frame_rate -of json "{full_path}"'
                probe_result = subprocess.check_output(probe_cmd, shell=True, text=True)
                video_info_list = [vinfo for vinfo in json.loads(probe_result)['streams'] if 'width' in vinfo]
                if len(video_info_list) > 0:
                    video_info = video_info_list[0]  # use first if more than one!
                    print(f"Video '{full_path}' has the following properties:")
                    print(video_info)
                    converted_filename = f"{os.path.basename(file)}_converted.mp4"
                    converted_filepath = os.path.join(DEST, converted_filename)
                    # Specify the codec as libx264 to ensure compatibility with QuickTime Player.
                    #convert_cmd = f'ffmpeg -i "{full_path}" -vf scale={new_width}x{new_height} -c:v libx264 -preset slow -crf 18 -an "{converted_filepath}"'
                    convert_cmd = f'ffmpeg -i "{full_path}" -vf "scale={MAX.width}:{MAX.height}:force_original_aspect_ratio=increase,crop={MAX.width}:{MAX.height}" -c:v libx264 -preset slow -crf 18 -an "{converted_filepath}"'
                    subprocess.run(convert_cmd, shell=True)
                    print(f"Video '{full_path}' converted and saved as '{converted_filepath}'")
                    self.wait_until_stable(converted_filepath)
                    os.remove(full_path)
                    print(f"Original video '{full_path}' has been deleted.")
                else:
                    print(f"can't get dimensions of video {full_path}")
            except subprocess.CalledProcessError as e:
                print(f"An error occurred while processing the video: {e}")


if __name__ == "__main__":
    if not os.path.exists(DEST):
        os.makedirs(DEST)
    VideoConverterHandler()  # this will loop on __init__ until Ctrl-C
Thanks, Paddy. Will have a chance to test on the weekend!
Wolfgang, I've just found a problem with running out of memory when I run this on the RPi4. I will have a look and try to fix it later.
Ok, I probably won't be able to check this weekend anyway, so no hurry. Thanks!
There were two causes of the memory leak. One I've fixed here https://github.com/helgeerbe/picframe/commit/25c5fbbf894fb2a40dd8b0c5316d132cb668b7c3 (essentially using a proper file reading syntax: with Popen(...) as pipe:), and the other was a bug in pcmanfm, a component of wayland on bookworm (on the RPi at least).
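The with Popen(...) pattern mentioned above can be sketched like this (my reconstruction of the idea, not the actual commit):

```python
# Reconstruction of the described fix: using Popen as a context
# manager closes the pipe and reaps the child process when the stream
# ends, so file handles don't accumulate across repeated videos.
import subprocess

def count_frames(cmd, frame_bytes):
    """Read fixed-size frames from a subprocess until EOF."""
    frames = 0
    with subprocess.Popen(cmd, stdout=subprocess.PIPE) as pipe:
        while len(pipe.stdout.read(frame_bytes)) == frame_bytes:
            frames += 1
    return frames  # leaving the `with` block released the pipe
```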
The former you should be able to fix by switching on the venv then python3 -m pip install git+https://github.com/helgeerbe/picframe@video --no-dependencies --force. The latter will need sudo apt update, sudo apt upgrade.
Paddy
PS I've noticed that, rather nicely, when you click on the 'see the current image' link on the http interface, it links to the mp4 file, which plays as a video on your phone!
Hi both,
Just installing 'yet another' instance of your brilliant solution after Nixplayer decided to start strangling their customers with commercially-driven limitations...! <goodbye Nixplayer!>
I'm planning to use one of these neat USB C portable 16" monitors that should give me an ultra-thin frame mount and allow single cable connection using USB C to the RPI 5 - hopefully resulting in an actually neater solution than the commercial frames...!
Firstly, can I check from the chain if you've both got RPi 5s now? (I'd be delighted to send you one each for free if not, for all the brilliant work you've done so far!) I ordered a few on release, so have got some in drawers.... ;)
Secondly, can I check if I should still install Legacy Debian Buster as per this page: https://www.thedigitalpictureframe.com/how-to-set-up-your-raspberry-pi-for-your-digital-picture-frame/#Installation_of_Raspberry_Pi_OS
Or if I should install the newest version through RPi imager?
Hi Katie, out of interest, what changes did Nixplayer introduce? The Pi 5 requires the Bookworm OS, so you can't use Buster here. However, if you are planning to use a 16'' monitor only, the Pi 3 is perfectly fine; only for a 4k display will you need the Pi 4. The problem with Bookworm remains the screen on/off control.
Hey, they introduced a limit of 100 photos for their frames (which was previously free/unlimited), then backtracked after a big backlash, but limited to only one frame per customer having the unlimited amount.
https://www.reddit.com/r/nixplay/comments/17hr0cp/change_what_album_is_synced/
RE the Rpi5/4/3 - I was hoping to use a RPi5 - partly as it's got proper USB C, so I can use a single cable to power and connect to the monitor (neater), plus I'm keen to have it forward-compatible so when you boys introduce video capability as per this thread, then I won't need to replace 8+ frames' hardware (yes, I'm going full-bore on these frames.... ;) )
To double check, are you saying that this just won't work with Rpi5 or is it a workaround? PS what exactly is the issue with the on/off control?
Hey, they introduced a limit of 100 photos for their frames (which was previously free/unlimited), then backtracked after a big backlash, but limited to only one frame per customer having the unlimited amount.
That's really super bad customer service policy!
Regarding USB C: I don't have a Pi 5 yet but Paddy has one (I want to wait until the Flirc case is available.) But I guess you will still need two USB C cables, one for the power supply and one for the display - or did I misunderstand you?
With a Pi 5 you can't control the screen yet.
But a great advert for you guys! ;) I think if you could package this all up raspberry pi imager style, you'd have a huge number of users - it really is a great set up - I've been using it with Home Assistant and Syncthing for ages now and it's rock solid...!
Re USB C - oh, that's a bit rubbish; I just plugged my monitor into my laptop and the screen just popped up (both power and video out down one cable). As you say, not actually too big a problem, just use an extra short HDMI or USB C - it's really the extra transformers/power blocks that introduce bulk to photo frames. As things stand I reckon the frame will be able to be <8mm thick with a stand 1cm thick (that I can hide the RPi 5 inside).
Planning to make a nice 3d printed frame stand to make it really easy for folks to put it all together.
Sorry - back to my question (as I'm in the progress of installing onto RPi5) - will it work on Bookworm and what's the on/off problem you describe?
Hi, I'm really curious to hear about piping video through USB C, I'd never really thought of doing that and I'd love to know what's involved and how you get on.
The video works reasonably well on the RPi5 up to moderate frame sizes, and I think it would work on earlier versions of the OS; there isn't really anything changed in pi3d, you just need a little process running ffmpeg to keep pasting video frames into the OpenGL frame buffer.
Making a debugged, "certified", working SD image with all required dependencies might not be a bad idea. Something to think about (one of the others probably has more clue about that kind of thing).
The issue with bookworm is that it no longer seems to cope with the screen being turned off using tvservice or vcgencmd or xset or xrandr or wlr-randr. There is a screensaver system that wayland can use which does seem to switch the screen off when it's toggled on. However, if the screensaver is then toggled off, wayland won't turn the screen back on again unless there is keyboard or mouse activity. So far I haven't been able to spoof the keyboard or mouse system! I've been running picframe for a while now at home on bookworm and it seems to be fine with the provisos: 1. on RPi <= 3 I don't enable wayland, just let SDL2 use the legacy X11 functionality; 2. when I would have turned the screen off (I have a PIR sensor connected to a GPIO pin) I just turn the brightness down to zero via the HTML interface.
Paddy
PS Kate do you have any refs for getting video output from the RPi over USB?
Yes of course, the end state is brilliant (i.e. these frames become a case of getting a 180 degree connector and become super low profile with just a single cable); however, it just seems the RPi 5 engineers have implemented pretty limited support! Worst case, I can work around it though.
Good to hear video is coming along, this would be really nice - I'm not even sure I'd use it much for long videos; really slowed-down videos that look like photos moving very slowly would be a neat effect for photo frames!
Re SD image - yes exactly - think this would lower the entry bar significantly for folks!
Re screen turn off - ah, got you, OK will carry on with the RPi5 install - I can live with that for now and can help muse approaches - could even just run a hardware solution (e.g. a smart switch) or trigger from home assistant, or as you say use a PIR - I assume you've looked at things like xdotool or xte (from the xautomation package)?
Hi, yes I tried a couple (I think xdotool was one) but wayland isn't really X11, just has ways to convert xserver function calls into things it understands. Hopefully something will turn up sooner or later. There was an issue (which I mention earlier in this thread) where a process involved with the video rendering wasn't releasing memory. It was more obvious playing video but it still happened with still images, just took longer. I think that bug was patched in the Dec 5th release but I imagine there will be quite a lot of minor bugs and quality of life improvements over the next few months.
PS Kate do you have any refs for getting video output from the RPi over USB?
Not really - it's generally still a bit of a muddle it seems, lots of references around the place - even for the RPi 4, folks have used it for power and data, eg: https://forums.raspberrypi.com/viewtopic.php?t=356922&start=100
Just going to start testing myself over the next few days - have loads of cables etc! Worst case I've seen someone confirm that USB C to HDMI works, so could use that.
As an aside, am happy to make a bit of a 'how to' video to walk through the whole process of setting up your frames if you think that'd be helpful too?
This is the monitor I've bought: https://www.amazon.co.uk/dp/B093GCL18V?ref=ppx_yo2ov_dt_b_product_details&th=1
Works really nicely on my laptop - decent screen, sharp, matte screen (which is nice for photo frames).
Also got my eye on a 16:10 ratio 2K one, eg: https://www.lg.com/uk/laptops/16mr70/?gad_source=1&gclid=Cj0KCQiAsvWrBhC0ARIsAO4E6f_vuajrxDhQN3IBrJdHaVJbx9D7H1tw9PpLYP_XNB1Fz98w2MxGfIUaAlbqEALw_wcB&gclsrc=aw.ds
Will keep you posted... ;)
If and when you get something up and running then a walk through would be fantastic.
PS I just did a quick re-test with ydotool and IT WORKS. So thank you so much for that final nudge.
What I did was (mainly written up here for the benefit of @helgeerbe with whom I was discussing this before)
cd ~
git clone https://github.com/ReimuNotMoe/ydotool
cd ydotool
sudo apt install cmake
mkdir build
cd build
cmake ..
make -j `nproc`
Unfortunately ydotool needs access to /dev/uinput, which requires root permission. It needs to have a server running, so I made a little script:
import os
import time

os.system("/home/pi/ydotool/build/ydotoold &")
xmove = 5
while True:
    time.sleep(60)
    xmove *= -1
    os.system(f"/home/pi/ydotool/build/ydotool mousemove -x {xmove} -y 0")
which has to be started with sudo. This is the functionality that would be turned on and off in order to allow the screensaver to blank the screen or to turn it back on again.
In /home/pi/.config/wayfire.ini there needs to be a screensaver timeout a bit longer than the mousemove frequency. Alternatively the screensaver would be turned on and off using the SDL2 functionality (now added to pi3d.Display) and the script above would be run as a one-off after turning the screensaver back on, in order to 'wake it up'.
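For reference, the relevant wayfire.ini section might look like this (the [idle] section and key names come from wayfire's idle plugin; the values are illustrative assumptions chosen to sit just above the 60 s mousemove interval in the script above):

```ini
[idle]
# fire the screensaver only if the fake mouse jiggle (every 60 s) stops
screensaver_timeout = 90
# turn the display off shortly after the screensaver starts
dpms_timeout = 120
```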
So it's possible but a lot more complicated than before, and it has the disadvantage of needing to be run by root.
Ah, brilliant! That sounds like a very sensible workaround - i.e. enough to tackle it for now, and by your description it should be fixed eventually in a patch...
Cool - will have a look at pulling something together, think it'd be great to make this as accessible as possible to people as it's a really great solution!
PS I just did a quick re-test with ydotool and IT WORKS. So thank you so much for that final nudge.
Wow, that would be amazing to solve the missing link!
As an aside, am happy to make a bit of a 'how to' video to walk through the whole process of setting up your frames if you think that'd be helpful too?
Of course! I am too tied up with work these days to do videos and would be happy to repost and link to them.
Cool - I'll have a think about the best approach - I'd like to cover the end to end if possible - it's so powerful when you stitch HA + Syncthing + PhotoFrame together - leave with me!
Pi3D PictureFrame is still the best software for displaying photos on a picture frame! It's got the best transitions, a perfect matting algorithm, and a great integration with Home Assistant.
But as I was strolling on a Portuguese beach this week, I had to think of a request that I often receive from lovers of Pi3D: Integration of videos. And since I review commercial frames occasionally that can display photos and videos in one playlist, I wondered if mixing the two media formats in Pi3D might be possible after all.
@paddywwoof, what would it take to achieve this? Especially with the recent announcement of the Pi 5 and more horsepower under the hood. It would be absolutely thrilling.