In simple terms, anything that can accept a PIL Image object and render it works.
FYI, the current version of pydPiper is not being actively maintained as I am moving the code-base to Python3 and making some major changes to the architecture. The work is fairly well along but given my general lack of time for coding these days it is probably going to be spring before I get to a release. In the new version, all of the display logic has been moved to the Luma.OLED project. You might want to check that out to see what work would need to be done.
Cheers, yes I saw in another thread that you are in the process of porting to python3 and using Luma. That sounds quite promising.
That being said, if there was a Luma.VFD which supported Noritake VFDs, or others, that would be great. I would be happy to contribute there. Nevertheless, there would need to be a common abstract interface to talk to an abstract LCD/OLED/VFD, at the right kind of granularity. Apart from opening and closing a device and setting up the comms (SPI, I2C, etc.), would pydpiper then only send images, or updates to images, to the underlying driver, or would it exploit advanced features of some devices which may not be available on all of them, e.g. built-in fonts or other primitives?
I'm going to close this thread as I believe it has mostly moved over to the Luma.(core, lcd, oled) project. A couple of quick points though. pydPiper, and to a great extent Luma, are mainly about rendering bitmaps. Any fonts or graphical primitives like lines, rectangles, circles, etc. can be produced using the Pillow library and then sent to the display as a PIL Image. Unique features (especially fonts) supported by a display are not used though. Luma does include reproductions of some fonts but this is entirely external to the display's functionality. Even when using the character interface in luma.core, it is still just rendering the text content into a PIL Image and then sending that to the display.
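To make that concrete, here is a minimal sketch of the pattern; the ssd1306 device and port numbers are only placeholders for whatever display you actually have:

```python
from PIL import Image, ImageDraw, ImageFont
from luma.core.interface.serial import i2c
from luma.oled.device import ssd1306  # placeholder device; any luma device works the same way

serial = i2c(port=1, address=0x3C)
device = ssd1306(serial)

# Render everything (text, lines, rectangles, ...) into a PIL Image first
image = Image.new("1", (device.width, device.height))
draw = ImageDraw.Draw(image)
draw.text((0, 0), "Artist - Title", font=ImageFont.load_default(), fill="white")
draw.rectangle((0, 20, device.width - 1, 30), outline="white")

# The display never sees fonts or primitives, only the finished bitmap
device.display(image)
```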
Hello, indeed, with the device specifics in the new version being moved to Luma the support for a Noritake VFD will move there, too. Thanks for your time and consideration.
As for interfaces and the various levels of abstraction in the pydpiper stack, and to recapitulate in my own words, I would imagine that pydpiper interfaces to the music player on one side, retrieving tags as defined in pages; it then sequences these pages, their canvas and widgets. For every page pydpiper creates a rendered image using the Pillow library; the image is then handed over to the Luma device driver to be communicated to the device using an appropriate comms protocol.
Here's the basic application architecture. Note: pydPiper does not request specific data that it needs from MPD. MPD sends what it's going to send. It is up to the pageFile designer to determine what variables will be available from the source (or sources) and design the pages (e.g. screens) accordingly.
Thanks again for the nice diagram. So I presume that:

- `pyAttention` is agnostic to the concrete metadata tags, but just presents them in a dictionary message, independently of the source (which could be some UDP message of some sort). That's the `pyAttention` output interface;
- `tinyDisplay` needs to marry up the dict message and the `pageFile`, i.e. needs to make sure that all tags referenced in the `pageFile` can be populated with the data in `pyAttention` as data trickles in. If the `pageFile` is made with a specific player in mind, that would be the case, one would hope.

Yes, that's mainly correct. One feature I'm not sure was clear though: pyAttention will allow multiple sources to be monitored at the same time. It provides a single interface to receive data from those multiple sources. To differentiate those sources, the dictionary it provides back will include a dictionary key (in the diagram this key is 'mpd') to inform the receiving system which source the data is coming from. It is possible that multiple subsystems could be reported in a single message.
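As a purely illustrative example (the keys here are made up, not a fixed schema), a single message might look something like:

```python
# Hypothetical shape of a pyAttention message; the source name is the top-level key
message = {
    "mpd": {
        "artist": "Miles Davis",
        "title": "So What",
        "state": "play",
    },
    "weather": {          # a second source reported in the same message
        "temperature": 21,
    },
}
```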
I'm also considering adding a transformation capability to pyAttention to allow you to include in your pageFile (or another config file) a set of rules for transforming received data. This could allow you to convert variable types like string to integer, or to trigger more complex processing rules like querying a database. Let me know if you think this would be useful and what features you think it should include if yes.
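Roughly, I imagine the rules could look something like this; just a sketch of the idea, nothing here is implemented yet:

```python
# Hypothetical transformation rules, keyed by source and variable name
transformations = {
    "mpd": {
        "elapsed": lambda value: int(float(value)),          # string -> integer seconds
        "volume":  lambda value: max(0, min(100, int(value))),
    },
}

def transform(source, data):
    """Apply any configured rule to each received variable; pass others through."""
    rules = transformations.get(source, {})
    return {key: rules.get(key, lambda v: v)(value) for key, value in data.items()}
```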
Thanks for sharing your thoughts on the new architecture; the idea of merging information streams from various sources is indeed interesting, but certainly challenging. As you rightly say, a problem only arises if the same piece of information is available from more than one source.
I could not help but think of the initial statement in the repo Readme.md: "A python program written for the Raspberry Pi that controls small LCD and OLED screens for music distributions". Therefore, my assumption is that pydpiper would be listening to only one music player at a time, but perhaps also to additional sources of different kinds.
As for how a situation would have to be handled where different sources produce the same kind of information, perhaps in different formats or units, I would have thought that triaging the data involves:

- the `pageFile`, which is a consumer. It manages the layout of metadata defined by tags on widgets and canvases suitable for a given display and its size. Would these tags be player-agnostic or player-specific? Or would tags be things like `source::tag`?
- `pyAttention`, which acts as an abstraction layer to being a client to different music players and their protocols. Also, like a funnel, it merges different input streams into one stream as a two-tiered dictionary and is perhaps agnostic to the tags; just passing on information.
- `tinyDisplay`: if it finds the same tag in different sources, would it have a rule to deal with it (from a config file)? If pydpiper is primarily a music player display driver, then perhaps all player tags have priority? Would it strip the source and create unified tags?

Thinking about how the stream of tags is managed prompts a lot of design decisions.
Currently pydPiper attempts to make the tags player agnostic but this has proven difficult. I was leaning towards having the next version make the tags player specific in the sense that it would not attempt to normalize the data. This would mean that each pageFile would then need to be developed to match a particular source.
Perhaps as you suggest it would be better to normalize the data into a defined format like ID3. If I went this direction, I believe that functionality would live inside the pydPiper code-base as it is the only project that is dedicated to music. I would want it configuration driven to make it easier to adjust when the distributions invariably modify the information they are sending.
Also, while not impossible to have more than one music source, this is not done in practice. The multiple sources are normally unrelated to each other. Typical sources include:
This basically means that there is normally no need to decide between two sources when resolving a particular piece of information.
Hello, this all sounds quite reassuring, in particular with regards to the consolidation of information from different sources. In general, it would be preferable if the code could be tag-agnostic and tags only lived in configuration files such as the `pageFile` or others; everything is data-driven.
I kept thinking of the different abstractions and a functional block diagram showing the data flow, e.g. involving:

- a `player_data_adapter` base class with an output interface, e.g. a time-stamped dictionary (a rough sketch follows below),
- an `XYZ_data_adapter` that listens to the player-specific topics on whatever interface, port or protocol the player produces information in,
- an `XYZ_data_adapter` for system information, weather or stock exchange, ...
- `pyAttention` as the funnel that instantiates the data adapters needed, as configured or perhaps derived by collecting all tags referenced by the `pageFile`, e.g. a tag could be `mpd::artist` in a device-specific file `pages_Noritake_GU600.py`. The output of all `data_adapters` is then compiled into a two-tier tag dictionary from these sources.
- the `pageFile` as before, but normalized tags are being used there, such as ID3v2.x tags for music metadata. Then the tag translation needs to happen in the `XYZ_data_adapter`, either in code or by using a configuration file defining the map, and only if the tag differs from a standard.

Well, these were just my thoughts on that and perhaps it is useful to you.
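To make the adapter idea a little more concrete, here is a very rough Python sketch; all names and interfaces are hypothetical, not existing pydpiper or pyAttention code:

```python
import time
from abc import ABC, abstractmethod


class PlayerDataAdapter(ABC):
    """Hypothetical base class: every adapter emits a time-stamped, source-keyed dict."""

    source = "unknown"

    @abstractmethod
    def poll(self) -> dict:
        """Return the latest player-specific variables as a flat dict."""

    def emit(self) -> dict:
        return {self.source: {"timestamp": time.time(), **self.poll()}}


class MpdDataAdapter(PlayerDataAdapter):
    source = "mpd"

    def poll(self) -> dict:
        # In reality this would query MPD; here just a placeholder
        return {"artist": "Unknown Artist", "title": "Unknown Title"}


def funnel(adapters) -> dict:
    """Merge the output of all adapters into one two-tier dictionary."""
    merged = {}
    for adapter in adapters:
        merged.update(adapter.emit())
    return merged
```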
@dhrone while you are working on the new Python 3 version of pydpiper, I have been trying to build my own Python3-based Volumio2 client & VFD controller: https://github.com/PeterWurmsdobler/volumio-vfd . It is loosely based on some pydpiper code and does work, albeit not as general as pydpiper, and it has the principal concepts of:

- a `VolumioClient` class as a thread producing `PlayerStatus` objects into a queue,
- a `main` loop taking items off the queue and feeding an abstract `CharacterDisplay`,
- a `CharacterDisplay` base class ingesting `PlayerStatus` objects,
- a `CharacterDisplayNoritake` adapter for the Noritake VFD.

I am looking forward to hearing your feedback. Cheers, peter.
Nice clean code. Overall, the design looks good for what it appears you are trying to achieve. I do have one small suggestion for improvement.
You are re-querying the Volumio service every 1/2 second which seems excessive. Volumio is very good at sending out a message every time it changes state, making the polling-style connection you have implemented unnecessary. If the reason for the frequency is to update the elapsed variable, I'd suggest capturing the current time each time you receive an update and then calculating elapsed during your `update_status` method.
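Something along these lines would work; just a sketch, and I'm assuming the pushed state carries the position in a `seek` field in milliseconds, so adjust to whatever your client actually receives:

```python
import time

class PlayerState:
    def __init__(self):
        self.last_elapsed = 0.0               # elapsed seconds reported in the last status push
        self.last_update = time.monotonic()
        self.playing = False

    def on_status(self, status):
        """Called whenever Volumio pushes a new state; no polling needed."""
        self.last_elapsed = status.get("seek", 0) / 1000.0   # assumed to be milliseconds
        self.playing = status.get("status") == "play"
        self.last_update = time.monotonic()

    def elapsed(self):
        """Derive the current elapsed time from the last push plus wall-clock time since."""
        if not self.playing:
            return self.last_elapsed
        return self.last_elapsed + (time.monotonic() - self.last_update)
```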
FYI, the rendering portion of my rewrite is basically done. That project is tinyDisplay. If you wanted to leverage the graphical capabilities of your display, it would be pretty simple to add that capability using it.
Thanks for your comments; the polling is a bit naughty and was a quick and expedient way to get the `elapsed` field updated, as you rightly noticed. I have changed that, as you suggested, to calculating the elapsed time from the last status' elapsed time plus the time passed since.
I'll have a look at tinyDisplay and try to understand what would be necessary to integrate the VFD, with or without Luma. I actually cross-posted the previous message on https://github.com/rm-hull/luma.core/issues/236#issuecomment-798452928 as I see the need to separate the device command interface and the device Luma adapter.
Cheers, peter.
If you can get luma to fully support your display, integration with tinyDisplay should be straight forward. tinyDisplay outputs a PIL Image which most luma devices can directly consume.
It would be nice to have VFD support integrated into Luma, which would mean adding an adapter layer with some code to convert the PIL image into a sequence of GU600 calls. It is, however, not very clear to me how best to go about this.
Of course, I would like to make an optimisation to not always send the whole image but only areas that have changed. Perhaps there is even some support in PIL to do a diff(image1, image2) -> list.
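Perhaps something like the following with Pillow's ImageChops module would already do; an untested sketch on my part:

```python
from PIL import Image, ImageChops

def changed_region(previous: Image.Image, current: Image.Image):
    """Return the bounding box (left, upper, right, lower) of the area that changed,
    or None if both images are identical."""
    return ImageChops.difference(previous, current).getbbox()

# Only the changed rectangle would then need to be re-sent to the display:
# box = changed_region(prev, curr)
# if box:
#     patch = curr.crop(box)
```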
Developing the adapter layer looks pretty straightforward. Your display supports a Graphic Write command which seems like what you'll need. You just need to transform the image into a sequence of pixel values (using `getdata()`) and then convert those into the values needed by the Graphic Write command. Take a look at the sh1106 class to get a sense for how to develop the driver.
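For instance, something roughly like this would pack a 1-bit PIL Image into a flat list of bytes, eight vertical pixels per byte; whether the GU600's Graphic Write expects columns or rows, and in which bit order, you would need to check against the datasheet:

```python
from PIL import Image

def image_to_column_bytes(image: Image.Image) -> list:
    """Pack a mode '1' image into bytes of 8 vertical pixels each (MSB at the top).
    Only an illustration; the GU600 may expect a different ordering."""
    monochrome = image.convert("1")
    width, height = monochrome.size
    pixels = list(monochrome.getdata())      # row-major, 0 or 255 per pixel
    data = []
    for page in range(0, height, 8):         # 8-pixel-high bands
        for x in range(width):
            byte = 0
            for bit in range(8):
                y = page + bit
                if y < height and pixels[y * width + x]:
                    byte |= 0x80 >> bit
            data.append(byte)
    return data
```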
As far as where to place it within Luma's projects, I'd recommend within Luma.oled. While technically the VFD style device is not an OLED it is not that different and I don't think it would be worth spinning up another sub-project just to handle this display type.
Thanks for the hint; I had a look at the device. It is perhaps not too complicated after all. There are only two worries I had or have:

- the image format, i.e. what type each datum is expected to be: `uint32`, `unsigned char`, etc.
- the method name `.data()` implies a getter, or does it mean `.send()`, possibly, looking at the code? IMHO methods should be verbs.

Initially I thought I would add an adapter layer to Luma and use my full driver from my own repo. But probably it would be too complicated to add a dependency on a Luma.VFD device. So I will just copy & paste the lines needed.
I'm not clear on what you are asking in regards to image format. Are you trying to understand how the PIL.Image represents its data, or what needs to be sent to the display?
For your second question, the method names are derived from how many displays work. Typically you either send a `.command()` to configure the display (move the cursor, clear the display, etc.) or you send `.data()` to be rendered on the display. Both `.command()` and `.data()` take bytes (e.g. `uint8` or `unsigned char`).

There is a small difference between the calls: `.command()` accepts the bytes as a list of arguments while `.data()` takes them as an iterable of bytes (e.g. list, tuple).

`.command(cmd, cmd, cmd, ...)`
`.data([data, data, data])`
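In practice, with one of luma's serial interfaces, that looks roughly like this; the specific byte values are only placeholders, not GU600 commands:

```python
from luma.core.interface.serial import spi

serial = spi(port=0, device=0, gpio_DC=24, gpio_RST=25)

# command(): each byte passed as a separate argument
serial.command(0xAE, 0xA4)

# data(): one iterable of bytes
serial.data([0x00, 0x7F, 0x81, 0xFF])
```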
Many thanks for your response.
As for the first one, it is more the former, i.e. how PIL.Image represents its data, or more precisely, how to iterate over pixels and the data type of each pixel. Once I know that, I can work out what I need to send to the VFD.
As for the second, I have found by digging more into the code that the expected data type is assumed to be an 8-bit quantity, even though in Python it is simply an `int`. It is, however, a limiting assumption by Luma that commands are single "bytes". Also, what does not help in the interface is the semantics of the method names, which sound more like properties, e.g. `data()`.
With respect to SPI, I need to figure out which underlying SPI driver Luma users actually use; the interface stipulates an `open` and a `writebytes` method. The library I was using, `spidev` from https://github.com/doceme/py-spidev.git , does actually consume `int`s of which one byte is bitten off in the C driver: `(__u8)PyInt_AS_LONG(val);`
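For reference, the py-spidev usage I had in mind is along these lines; bus and device numbers are just examples:

```python
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                  # bus 0, chip-select 0
spi.max_speed_hz = 1_000_000
spi.mode = 0

# writebytes() takes a list of Python ints; only the low byte of each is used
spi.writebytes([0x1F, 0x24, 0x00, 0xFF])
spi.close()
```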
Gradually I am coming to think: I should have written it all in C++ with clear types.
Hello,
How would I go about integrating a new display into the code base, a Noritake-Itron GU240x64D-K612A8, which is a graphical unit and can be communicated with over a parallel interface, RS232, SPI and I2C? As it happens, I would prefer SPI as it would provide more bandwidth. My questions:
I have already written a C-version of the low level driver for LCDproc some 10 years ago. I would be happy to convert that to python as long as it works over SPI.
Cheers, peter.