DariusMofakhami opened 4 years ago
Indeed the resolution is capped by the native resolution of the tablet's screen: what you see is a pixel-by-pixel reproduction of the tablet's screen buffer.
Here's the technical challenge: to have arbitrary scalability, one would need to stream the vector data as it is written by the reMarkable's (closed-source) UI program; then one would need to write a renderer for that vector data by reverse-engineering the format. This is roughly what the LiveView (beta) feature of reMarkable's apps does (although it sends the data via the cloud). In my experience, LiveView is inefficient and unreliable, and I found it completely unusable in a streaming context.
On one hand, I already wrote an experimental renderer in the reMy tool using the same Qt framework rmView is based on: you could play with it and see whether implementing this alternative to rmView is feasible. On the other hand, maintaining the renderer is difficult and reproducing the exact output (fast) presents some challenges; moreover, the UI on the tablet does not seem to have an API in place for picking up the vector-data diffs externally. I think we would end up with a clone of LiveView that is just as inefficient and less maintainable, with the only advantage of not relying on the cloud.
My personal assessment is that the amount of effort (and the fragility of the solution) does not justify the size of the improvement. Personally I find the resolution perfectly adequate; I wonder if you are experiencing rendering issues?
Thanks for this comprehensive answer! I totally get your point, and your VNC solution is definitely much faster and more reliable than the current rM beta LiveView. I really enjoy the quality when giving lectures. I was only asking about the case of more professional YouTube videos, where a slightly better rendering would be appreciated. Thanks again for your great work!
I am very happy this app is useful. I'd definitely help out if anybody attempted to implement the "vector streaming".
In principle one could run `inotifywait` on the tablet, watching `xochitl`'s notebooks folder for changes, transfer the modified files via `rsync` (to avoid resending the whole thing), then send a signal to the computer to reload and render the file with the custom renderer.
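Without a tablet at hand, the change-detection half of that pipeline can be sketched locally in Python by polling modification times (a rough, hypothetical stand-in for `inotifywait`; the function names below are made up for illustration):

```python
import os


def snapshot_mtimes(root):
    """Map every file under root to its last-modified time."""
    mtimes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtimes[path] = os.stat(path).st_mtime
    return mtimes


def changed_files(before, after):
    """Return paths that are new or whose mtime changed between snapshots."""
    return [p for p, t in after.items() if before.get(p) != t]
```

In a real setup the watcher would run on the tablet itself (where `inotifywait` avoids polling entirely) and the changed paths would be fed to `rsync` and the renderer.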
Beware: the artistic brushes are not currently rendered accurately by reMy (especially the pencil).
Ok, I see how it would work, and the refresh speed would necessarily be impacted, while it's so smooth right now. Maybe, just in the continuity of improving the rendering: do you think it would be possible to make the gray color appear better? Currently it shows as a canvas of diffuse black-and-white star-like pixels, while the highlighter is displayed in a nice uniform gray (which would actually be great for the gray pens). I'm pretty sure this is also related to the different ways the rM tablet treats the two tools, but do you think there could be an easy way to transfer the highlighter's nice gray rendering to the gray pens? Thanks a lot anyway!
Yes, the gray is rendered like that on the tablet too, and it appears uniform only because of the way your eye perceives the e-ink display. In fact, if you click on "actual size" you get an effect that looks similar to the effect on the rM.
To obtain a nice gray when zooming in, one could try some kind of convolution as a post-processing step, for example a mean filter. Another option is non-linear filtering, such as painting a pixel gray if its neighbours form a chequerboard pattern. You cannot do something as simple as "gray if the majority of neighbours are black", otherwise you would also modify the genuinely black areas. One would need to experiment a bit.
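The non-linear variant can be illustrated with a toy sketch on a plain 2D list (an assumption for illustration: 0 = black, 1 = white, and only exact 3x3 chequerboard neighbourhoods are smoothed):

```python
def is_checkerboard(img, r, c):
    """True if the 3x3 window around (r, c) alternates like a chequerboard."""
    base = img[r][c]
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            expected = base if (dr + dc) % 2 == 0 else 1 - base
            if img[r + dr][c + dc] != expected:
                return False
    return True


def smooth_gray(img):
    """Replace chequerboard-dithered pixels with mid-gray (0.5).

    Genuinely solid black or white areas fail the chequerboard test
    and are left untouched, unlike a naive majority filter.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if is_checkerboard(img, r, c):
                out[r][c] = 0.5
    return out
```

A real implementation would of course operate on the framebuffer bytes rather than nested lists, but the selectivity argument (dithered areas change, solid areas don't) is the same.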
I have been trying to add a filter to the image to make gray appear better in the viewer, but with my very limited knowledge of Python and Qt I am not sure how to get it running. I am using PIL (Pillow) for the filter (I thought it might be nice to have it as an optional setting).
So far I tried inserting

```python
from PIL import ImageFilter, Image, ImageQt
```

as well as importing `QBuffer` from `QtCore`, and

```python
img = pixmap.toImage()
buffer = QBuffer()
buffer.open(QBuffer.ReadWrite)
img.save(buffer, "PNG")
im = Image.open(io.BytesIO(buffer.data()))
im2 = im.filter(ImageFilter.Kernel(
    (3, 3), (0.75/8.0, 1.0/8.0, 0.75/8.0,
             1.0/8.0,  1.0/8.0, 1.0/8.0,
             0.75/8.0, 1.0/8.0, 0.75/8.0)))
pixmap = QPixmap.fromImage(ImageQt.ImageQt(im2))
```

in `setImage()` of `viewer.py`, but this results in a segmentation fault as soon as the viewer starts. I am not sure what causes it; any ideas?
If I add the code to the screenshot function you can see the result: the grey is displayed smoothly, at the cost of some sharpness. It should be possible to find a better kernel that is a bit sharper; it is just important that the corners plus the middle have the same total weight as the four sides, to produce an even gray.
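That balance condition can be checked numerically for the 3x3 kernel used in the snippet above:

```python
# The 3x3 kernel from the ImageFilter.Kernel call, flattened row by row.
kernel = (0.75/8.0, 1.0/8.0, 0.75/8.0,
          1.0/8.0,  1.0/8.0, 1.0/8.0,
          0.75/8.0, 1.0/8.0, 0.75/8.0)

# Corners + centre land on one colour of a 50% chequerboard,
# the four sides land on the other.
corners_plus_middle = kernel[0] + kernel[2] + kernel[6] + kernel[8] + kernel[4]
sides = kernel[1] + kernel[3] + kernel[5] + kernel[7]

# Each group contributes 0.5 and the kernel sums to 1, so a chequerboard
# maps to uniform mid-gray without changing overall brightness.
```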
Screenshot before:
Screenshot after:
I would strongly advise against round-tripping the image through PNG encoding just to use the PIL functions. The blurring should be applied on the fly in memory if you want something that performs well. Blurring is simple enough not to need the PIL library: you can either implement it directly on the pixmap, or use Qt's `QGraphicsBlurEffect` class (see here).
For example, you could insert after line 94 of `viewer.py` (inside the `else` branch) the code

```python
effect = QGraphicsBlurEffect()
effect.setBlurRadius(1.5)
self._pixmap.setGraphicsEffect(effect)
```

to blur the image (and change the import to `from PyQt5.QtWidgets import *` at the beginning of the file as well).
You can see that performance degrades. The limitation of the `QGraphicsBlurEffect` class is that it does not allow you to customise the kernel.
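If a custom kernel really is needed, a small hand-rolled convolution avoids both PIL and `QGraphicsBlurEffect`. A plain-Python sketch on a grayscale 2D list (hypothetical names; in rmView one would read and write the `QImage` pixel data instead):

```python
def convolve3x3(img, kernel):
    """Apply a flattened 3x3 kernel to a grayscale image (2D list, 0..1).

    Border pixels are left unchanged for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    k = [kernel[0:3], kernel[3:6], kernel[6:9]]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            acc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    acc += k[dr + 1][dc + 1] * img[r + dr][c + dc]
            out[r][c] = acc
    return out
```

Pure Python would be slow on a full tablet frame; a real version would use the same loop structure over the raw image bytes (or numpy) for speed.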
Hi, thanks for your great work! I enjoyed your script a lot when it was using lz4, but it's much faster with VNC now and makes it possible to really stream my writing. I'm planning to use it for remote lessons in this pandemic time... However, I still find the resolution quite poor. Sadly, I think this is the native resolution of the rM PNG export... but do you think there could be a way to get a better rendering? More like that of the SVG and PDF export files, where you don't see the pixels and everything is smooth thanks to SVG-like paths?