dhrone / pydPiper

A general purpose program to display song metadata on LCD and OLED devices
MIT License

Abstract display interface #121

Closed · PeterWurmsdobler closed this issue 3 years ago

PeterWurmsdobler commented 3 years ago

hello,

How would I go about integrating a new display into the code base, a Noritake-Itron GU240x64D-K612A8? It is a graphical unit that can be driven over a parallel interface, RS232, SPI or I2C; as it happens, I would prefer SPI as it provides more bandwidth. My questions:

I have already written a C version of the low-level driver for LCDproc some 10 years ago. I would be happy to convert it to Python, as long as it works over SPI.

Cheers, peter.

dhrone commented 3 years ago

In simple terms, anything that can accept a PIL Image object and render it works.

FYI, the current version of pydPiper is not being actively maintained, as I am moving the code base to Python 3 and making some major changes to the architecture. The work is fairly well along, but given my general lack of time for coding these days, it will probably be spring before I get to a release. In the new version, all of the display logic has been moved to the Luma.OLED project. You might want to check that out to see what work would need to be done.
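As a rough sketch, the entire contract a display has to meet looks something like this (class and method names here are illustrative, not an actual pydPiper or Luma API):

class MyDisplay:
    # Hypothetical driver shape; anything with a method that accepts
    # a PIL Image and pushes its pixels to the hardware will work.
    def __init__(self, width=240, height=64):
        self.size = (width, height)

    def display(self, image):
        # Reduce to 1-bit and transfer the pixels over SPI/I2C/etc.
        bitmap = image.convert('1')
        self._write(bitmap.getdata())

    def _write(self, pixels):
        pass  # device-specific transfer goes here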

PeterWurmsdobler commented 3 years ago

Cheers, yes, I saw in another thread that you are in the process of porting to Python 3 and using Luma. That sounds quite promising.

That being said, if there were a Luma.VFD which supported Noritake VFDs, or others, that would be great. I would be happy to contribute there. Nevertheless, there would need to be a common abstract interface to talk to an abstract LCD/OLED/VFD, at the right level of granularity. Apart from opening and closing a device and setting up the comms (SPI, I2C, etc.), would pydPiper then only send images (or updates to images) to the underlying driver, or would it exploit advanced features of some devices which may not be available on all of them, e.g. built-in fonts or other primitives?

dhrone commented 3 years ago

I'm going to close this thread as I believe it has mostly moved over to the Luma.(core, lcd, oled) project. A couple of quick points though. pydPiper, and to a great extent Luma, are mainly about rendering bitmaps. Any fonts or graphical primitives like lines, rectangles and circles can be produced using the Pillow library and then sent to the display as a PIL Image. Unique features (especially fonts) supported by a display are not used, though. Luma does include reproductions of some fonts, but this is entirely external to the display's functionality. Even when using the character interface in luma.core, it is still just rendering the text content into a PIL Image and then sending that to the display.
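For instance, text and a simple primitive rendered into a 1-bit PIL Image with Pillow (sizes chosen arbitrarily):

from PIL import Image, ImageDraw, ImageFont

img = Image.new('1', (240, 64))              # blank 1-bit canvas
draw = ImageDraw.Draw(img)
draw.text((0, 0), 'Artist - Title', font=ImageFont.load_default(), fill=1)
draw.rectangle((0, 12, 239, 14), fill=1)     # a horizontal rule primitive
# img can now be handed to any display that accepts a PIL Image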

PeterWurmsdobler commented 3 years ago

Hello, indeed: with the device specifics being moved to Luma in the new version, support for a Noritake VFD will move there, too. Thanks for your time and consideration.

As for interfaces and the various levels of abstraction in the pydPiper stack, to recapitulate in my own words: I would imagine that pydPiper interfaces to the music player on one side, retrieving the tags referenced in pages; it then sequences these pages with their canvases and widgets. For every page, pydPiper creates a rendered image using the Pillow library; the image is then handed over to the Luma device driver to be communicated to the device using the appropriate comms protocol.

dhrone commented 3 years ago

Here's the basic application architecture: [architecture diagram]. Note: pydPiper does not request specific data from MPD; MPD sends what it's going to send. It is up to the pageFile designer to determine which variables will be available from the source (or sources) and to design the pages (i.e. screens) accordingly.
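For a flavour of what that binding looks like, a simplified pageFile fragment (the key names are illustrative, not the exact schema):

WIDGETS = {
    # Each widget names the variables it expects the source to send;
    # if MPD never sends 'artist', a page using this widget stays empty.
    'artist': {'type': 'text', 'format': '{0}', 'variables': ['artist']},
    'title':  {'type': 'text', 'format': '{0}', 'variables': ['title']},
}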

PeterWurmsdobler commented 3 years ago

Thanks again for the nice diagram. So I presume that:

dhrone commented 3 years ago

Yes, that's mainly correct. One feature I'm not sure was clear, though: pyAttention allows multiple sources to be monitored at the same time and provides a single interface to receive data from all of them. To differentiate the sources, the dictionary it hands back includes a key (in the diagram this key is 'mpd') informing the receiving system which source the data came from. It is possible for multiple subsystems to be reported in a single message.

I'm also considering adding a transformation capability to pyAttention, allowing you to include in your pageFile (or another config file) a set of rules for transforming received data. This could allow you to convert variable types, e.g. string to integer, or to trigger more complex processing such as querying a database. Let me know if you think this would be useful and, if so, what features it should include.
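To make that concrete, here is a message keyed by source, and a purely hypothetical shape for such a rule set (field values illustrative):

# A pyAttention message, keyed by the source it came from ('mpd' here)
msg = {'mpd': {'state': 'play', 'title': 'So What', 'volume': '50'}}

# Hypothetical transformation rules of the kind described above
RULES = [
    {'source': 'mpd', 'key': 'volume', 'transform': int},       # str -> int
    {'source': 'mpd', 'key': 'title', 'transform': str.strip},
]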

PeterWurmsdobler commented 3 years ago

Thanks for sharing your thoughts on the new architecture; the idea of merging information streams from various sources is indeed interesting, but certainly challenging. As you rightly say, a problem only arises if the same piece of information is available from more than one source.

I could not help but think of the initial statement in the repo's Readme.md: "A python program written for the Raspberry Pi that controls small LCD and OLED screens for music distributions". My assumption is therefore that pydPiper would be listening to only one music player at a time, plus perhaps additional sources of other kinds.

As for handling a situation where different sources produce the same kind of information, perhaps in different formats or units, I would have thought of triaging the data:

Thinking about how the stream of tags is managed prompts a lot of design decisions.

dhrone commented 3 years ago

Currently pydPiper attempts to make the tags player-agnostic, but this has proven difficult. I was leaning towards having the next version make the tags player-specific, in the sense that it would not attempt to normalize the data. This would mean that each pageFile would then need to be developed to match a particular source.

Perhaps, as you suggest, it would be better to normalize the data into a defined format like ID3. If I went in this direction, I believe that functionality would live inside the pydPiper code base, as it is the only project dedicated to music. I would want it configuration-driven to make it easier to adjust when the distributions invariably modify the information they send.
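That is, something configuration-driven along these lines (a sketch; the mappings are illustrative):

# Per-source mapping of native keys onto a normalized, ID3-like vocabulary
TAG_MAP = {
    'mpd':     {'Artist': 'artist', 'Title': 'title', 'Album': 'album'},
    'volumio': {'artist': 'artist', 'title': 'title', 'album': 'album'},
}

def normalize(source, data):
    mapping = TAG_MAP.get(source, {})
    return {mapping[k]: v for k, v in data.items() if k in mapping}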

Also, while it is not impossible to have more than one music source, this is not done in practice. Multiple sources are normally unrelated to each other. Typical sources include:

This basically means that there is normally no need to decide between two sources when resolving a particular piece of information.

PeterWurmsdobler commented 3 years ago

Hello, this all sounds quite reassuring, in particular with regard to the consolidation of information from different sources. In general, it would be preferable if the code could be tag-agnostic, with tags living only in configuration files such as the pageFile or others; everything data-driven.

I kept thinking of the different abstractions and a functional block diagram showing the data flow, e.g. involving:

Well, these were just my thoughts on the matter; perhaps they are useful to you.

PeterWurmsdobler commented 3 years ago

@dhrone while you are working on the new Python 3 version of pydPiper, I have been trying to build my own Python 3 based Volumio 2 client & VFD controller: https://github.com/PeterWurmsdobler/volumio-vfd . It is loosely based on some pydPiper code and does work, albeit not as general as pydPiper, and it has the principal concepts of:

I am looking forward to hearing your feedback. Cheers, peter.

dhrone commented 3 years ago

Nice clean code. Overall, the design looks good for what it appears you are trying to achieve. I do have one small suggestion for improvement.

You are re-querying the Volumio service every half second, which seems excessive. Volumio is very good at sending out a message every time it changes state, which makes the polling-style connection you have implemented unnecessary. If the reason for the polling frequency is to update the elapsed variable, I'd suggest capturing the current time each time you receive an update and then calculating elapsed in your update_status method.
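Something along these lines (a sketch; the method and attribute names, and my assumption that Volumio reports position in a millisecond 'seek' field, are illustrative):

import time

class VolumioClient:
    def on_push_state(self, status):
        # Volumio pushes a state message whenever something changes; capture
        # the reported position and the wall-clock time it arrived.
        self._elapsed = float(status.get('seek', 0)) / 1000.0  # assuming ms
        self._received_at = time.monotonic()

    def update_status(self):
        # Derive the current position locally instead of re-querying Volumio.
        return self._elapsed + (time.monotonic() - self._received_at)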

FYI, the rendering portion of my rewrite is basically done. That project is tinyDisplay. If you wanted to leverage the graphical capabilities of your display, it would be pretty simple to add that capability using it.

PeterWurmsdobler commented 3 years ago

Thanks for your comments; the polling was a bit naughty, a quick and expedient way to get the elapsed field updated, as you rightly noticed. I have changed it as you suggested: the elapsed time is now calculated from the last status's elapsed value plus the time passed since.

I'll have a look at tinyDisplay and try to understand what would be necessary to integrate the VFD, with or without Luma. I actually cross-posted the previous message on https://github.com/rm-hull/luma.core/issues/236#issuecomment-798452928 as I see the need to separate the device command interface from the device's Luma adapter.

Cheers, peter.

dhrone commented 3 years ago

If you can get Luma to fully support your display, integration with tinyDisplay should be straightforward. tinyDisplay outputs a PIL Image, which most Luma devices can directly consume.
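That is, once a Luma device object exists for your display, the hand-off is a single call (sh1106 shown here only because it already exists in luma.oled):

from luma.core.interface.serial import spi
from luma.oled.device import sh1106

serial = spi(port=0, device=0)   # Luma's SPI transport
device = sh1106(serial)
device.display(image)            # image being the PIL Image tinyDisplay produced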

PeterWurmsdobler commented 3 years ago

It would be nice to have VFD support integrated into Luma, which would mean adding an adapter layer with some code to convert the PIL image into a sequence of GU600 calls. It is, however, not very clear to me

Of course, I would like to make an optimisation to not always send the whole image but only the areas that have changed. Perhaps there is even some support in PIL to do a diff(image1, image2) -> list.
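It looks like PIL's ImageChops could provide exactly that (an untested sketch on my part):

from PIL import ImageChops

# Bounding box of the region that differs between frames (None if identical)
bbox = ImageChops.difference(previous_frame, current_frame).getbbox()
if bbox:
    region = current_frame.crop(bbox)   # only this region needs resending

Cheers, peter.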

dhrone commented 3 years ago

Developing the adapter layer looks pretty straightforward. Your display supports a Graphic Write command, which seems like what you'll need. You just need to transform the image into a sequence of pixel values (using getdata()) and then convert them into the values needed by the Graphic Write command. Take a look at the sh1106 class to get a sense of how to develop the driver.
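A sketch of the conversion (the bit order, orientation and packing direction are assumptions to be verified against the GU600 datasheet):

def to_graphic_write_bytes(image):
    # Pack a 1-bit PIL image into bytes, 8 vertical pixels per byte,
    # MSB at the top; adjust to whatever the Graphic Write command expects.
    bitmap = image.convert('1')
    width, height = bitmap.size
    pixels = list(bitmap.getdata())      # flat list, row-major, 0 or 255
    out = []
    for x in range(width):
        for band in range(0, height, 8):
            byte = 0
            for bit in range(8):
                y = band + bit
                if y < height and pixels[y * width + x]:
                    byte |= 0x80 >> bit
            out.append(byte)
    return out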

As far as where to place it within Luma's projects, I'd recommend Luma.OLED. While technically a VFD-style device is not an OLED, it is not that different, and I don't think it would be worth spinning up another sub-project just to handle this display type.

PeterWurmsdobler commented 3 years ago

Thanks for the hint; I had a look at the device, and it is perhaps not too complicated after all. There are only two worries I had, or have:

Initially I thought I would add an adapter layer to Luma and use my full driver from my own repo, but it would probably be too complicated to add a dependency on a Luma.VFD device. So I will just copy & paste the lines needed.

dhrone commented 3 years ago

I'm not clear on what you are asking with regard to the image format. Are you trying to understand how a PIL.Image represents its data, or what needs to be sent to the display?

For your second question, the method names are derived from how many displays work. Typically you either send a .command() to configure the display (move the cursor, clear the display, etc) or you send .data() to be rendered on the display. Both .command() and .data() take bytes (e.g. uint8 or unsigned char).

There is a small difference between the calls: .command() accepts the bytes as a list of arguments, while .data() takes them as an iterable of bytes (e.g. a list or tuple):

.command(cmd, cmd, cmd, ...)
.data([data, data, data])
PeterWurmsdobler commented 3 years ago

Many thanks for your response.

As for the first, it is more the former, i.e. how PIL.Image represents its data, or more precisely, how to iterate over the pixels and what the data type of each pixel is. Once I know that, I can work out what I need to send to the VFD.

As for the second, I have found by digging into the code that the expected data type is assumed to be an 8-bit quantity, even though in Python it is simply an int. It is, however, a limiting assumption by Luma that commands are single "bytes". Also, what does not help in the interface is the semantics of the method names, which sound more like properties, e.g. data().

With respect to SPI, I need to figure out which underlying SPI driver Luma users actually use; the interface stipulates an open and a writebytes method. The library I was using, spidev from https://github.com/doceme/py-spidev.git , actually consumes ints, of which one byte is bitten off in the C driver: (__u8)PyInt_AS_LONG(val);
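For reference, the way I have been driving the device with spidev (a sketch; the command bytes are arbitrary):

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                      # bus 0, chip select 0
spi.max_speed_hz = 1000000
spi.writebytes([0x02, 0x44, 0x47])  # plain Python ints; the C driver keeps one byte each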

Gradually I am thinking I should have written it all in C++, with clear types.