SeedSigner / seedsigner

Use an air-gapped Raspberry Pi Zero to sign for Bitcoin transactions! (and do other cool stuff)
MIT License

1 Million Sat Bounty for Touchscreen Demo #150

Open · SeedSigner opened this issue 2 years ago

SeedSigner commented 2 years ago

We are offering a 1 million sat bounty for a technical demo using this screen:

https://www.waveshare.com/2.8inch-DPI-LCD.htm

The demo should allow for direct pixel mapping/drawing using PIL/Pillow (Python Imaging Library) in the same way that we currently use that library to render graphics to SeedSigner's screen. Please note that "mirroring" to this screen, similar to how a Pi desktop is displayed, or using a "windowing" solution won't suffice for our purposes -- we need to be able to directly manipulate pixels. I'd anticipate that this may require a custom driver / Python library. The requested technical demo should also incorporate touch recognition.
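To make that concrete, here is a rough sketch of the kind of rendering we do today: compose a full frame with Pillow, then push the raw pixel bytes to the panel. The display object and its show_bytes call are hypothetical placeholders (not an existing API), and the 480x640 resolution is just an assumption:

from PIL import Image, ImageDraw

# compose an entire frame off-screen with Pillow
image = Image.new("RGB", (480, 640))  # panel resolution is an assumption
draw = ImageDraw.Draw(image)
draw.rectangle(((0, 0), (480, 100)), fill="orange")
draw.text((10, 30), "SeedSigner", fill="black")

# ...then blit the raw pixel bytes straight to the display;
# display.show_bytes is a hypothetical stand-in for whatever driver the demo provides
# display.show_bytes(image.tobytes())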

Please feel free to ask any questions under this issue if anything isn't clear -- thank you!

mutatrum commented 2 years ago

I have a basic test app working which uses the framebuffer. It's based on https://github.com/robertmuth/Pytorinox/blob/master/framebuffer.py, with a different pixel order. Would this be an acceptable route?

I haven't looked at touch interfacing yet.

SeedSigner commented 2 years ago

I am most curious about the performance impact of the framebuffer on the Zero 1.3. I would assume it wouldn't be a problem on the Zero 2W, but ideally the 2W wouldn't be required to use this (or any other) touch screen. I am guessing that the framebuffer also opens up the possibility of supporting other screens, aside from this specific one?

mutatrum commented 2 years ago

Code is running at ~7 fps on the Waveshare 2.8" hat (640x480):

[image: waveshares — test app running on the 2.8" hat]

Framebuffer class, based on the code above, stripped down and with the pixel ordering fixed:

from PIL import Image
import numpy

class Framebuffer(object):
    def __init__(self, device_no: int):
        self.path = "/dev/fb%d" % device_no
        config_dir = "/sys/class/graphics/fb%d/" % device_no
        # read the framebuffer geometry that the kernel exposes via sysfs
        self.size = tuple(_read_config(config_dir + "virtual_size"))
        self.stride = _read_config(config_dir + "stride")[0]
        self.bits_per_pixel = _read_config(config_dir + "bits_per_pixel")[0]
        # sanity check: one row of pixels must exactly fill one stride
        assert self.stride == self.bits_per_pixel // 8 * self.size[0]

    def __str__(self):
        args = (self.path, self.size, self.stride, self.bits_per_pixel)
        return "%s  size:%s  stride:%s  bits_per_pixel:%s" % args

    def show(self, image: Image.Image):
        assert image.size == self.size

        # view the RGBA bytes as one 32-bit little-endian word per pixel
        # (0xAABBGGRR on the Pi)
        flat = numpy.frombuffer(image.tobytes(), dtype=numpy.uint32)
        # swap the R and B bytes (RGBA -> BGR0); the alpha byte is zeroed
        out = (((flat >> 16) | (flat << 16)) & 0x00ff00ff) | (flat & 0x0000ff00)

        with open(self.path, "wb") as fp:
            fp.write(out.tobytes())

    def on(self):
        pass

    def off(self):
        pass

def _read_config(filename):
    # sysfs values are comma-separated integers, e.g. "640,480"
    with open(filename, "r") as fp:
        content = fp.readline()
        tokens = content.strip().split(",")
        return [int(t) for t in tokens if t]

The test runner below needs Roboto-Regular.ttf in the same folder:

#!/usr/bin/env python3

import time
from PIL import Image, ImageDraw, ImageFont
from framebuffer import Framebuffer

def text_size(draw, text, fnt):
    # Pillow >= 10 removed ImageDraw.textsize; measure via textbbox instead
    left, top, right, bottom = draw.textbbox((0, 0), text, font=fnt)
    return (right - left, bottom - top)

def Main():
    fb = Framebuffer(0)

    print(fb)
    image = Image.new("RGBA", fb.size)

    draw = ImageDraw.Draw(image)
    fnt = ImageFont.truetype("Roboto-Regular.ttf", 40)

    width = fb.size[0]
    height = fb.size[1]
    w3 = width / 3

    framecount = 0
    drawtime = 1  # non-zero so the first fps calculation can't divide by zero
    x = y = 0
    dx = dy = 2

    while True:

        start = time.time()

        # background: three vertical color bars
        draw.rectangle(((0, 0), (w3, height)), fill="red")
        draw.rectangle(((w3, 0), (w3 * 2, height)), fill="green")
        draw.rectangle(((w3 * 2, 0), (w3 * 3, height)), fill="blue")

        text1 = "frame %d" % framecount
        draw.text(text=text1, xy=(x, y), font=fnt)
        size1 = text_size(draw, text1, fnt)

        text2 = "%.1f fps" % (1 / drawtime)
        draw.text(text=text2, xy=(x, y + size1[1]), font=fnt)
        size2 = text_size(draw, text2, fnt)

        fb.show(image)
        drawtime = (time.time() - start)
        framecount += 1

        # bounce the text block around the screen
        x += dx
        y += dy

        if x < 0 or x > width - max(size1[0], size2[0]):
            dx = -dx
        if y < 0 or y > height - size1[1] - size2[1]:
            dy = -dy

if __name__ == "__main__":
    Main()

The costly bit is the conversion of the pixel format from RGB to BGR. This piece might be sped up further, but I haven't succeeded in that yet. This would probably work on all screens that have a 32-bit framebuffer. For screens that have a 16-bit framebuffer (RGB565), a different pixel format conversion would be needed.
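A rough sketch of what that RGB565 conversion could look like, reusing the same little-endian RGBA word layout as in Framebuffer.show (untested on real 16-bit hardware, so treat it as a starting point):

import numpy

def rgba_to_rgb565(image_bytes: bytes) -> bytes:
    # same view as Framebuffer.show: one 0xAABBGGRR word per pixel
    flat = numpy.frombuffer(image_bytes, dtype=numpy.uint32)
    r = (flat & 0xff) >> 3           # keep the top 5 bits of red
    g = ((flat >> 8) & 0xff) >> 2    # keep the top 6 bits of green
    b = ((flat >> 16) & 0xff) >> 3   # keep the top 5 bits of blue
    # pack into RRRRRGGGGGGBBBBB, one uint16 per pixel
    return ((r << 11) | (g << 5) | b).astype(numpy.uint16).tobytes()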

I can also take a look at whether I can get this working on the Waveshare 1.3" hat; that would give more confidence in re-usability.

Touch is done over /dev/input/event0, which is a standard interface. There are plenty of modules for that. If needed I can write up a prototype; I've done this before in node.js, see https://github.com/mutatrum/sats_clock/blob/main/index.js#L52. That project is written for the Pimoroni HyperPixel4 and is also driven by the framebuffer.
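For example, a minimal sketch in Python using the python-evdev package (pip install evdev); the event device path and the exact codes this panel reports are assumptions:

from evdev import InputDevice, ecodes

# path is an assumption, check /proc/bus/input/devices for the real one
touch = InputDevice("/dev/input/event0")
print(touch.name)

x = y = 0
for event in touch.read_loop():
    if event.type == ecodes.EV_ABS:
        # absolute axis events carry the current touch coordinates
        if event.code == ecodes.ABS_X:
            x = event.value
        elif event.code == ecodes.ABS_Y:
            y = event.value
    elif event.type == ecodes.EV_KEY and event.code == ecodes.BTN_TOUCH and event.value == 1:
        print("touch down at (%d, %d)" % (x, y))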