fread-ink / inkwave

Convert electronic paper display waveforms from .wbf to .wrf format
GNU General Public License v2.0

Decoding 5-bit waveforms #3

Open vroland opened 3 years ago

vroland commented 3 years ago

Hi, First of all: Congratulations on your work on this despite so little information being out there! That's also the reason I'm asking here. I'm currently working on an open hardware e-Ink controller (https://github.com/vroland/epdiy). Up until now, we have used a very simple waveform based on trial and error to drive the displays. For better quality and less temperature dependence, I'm now trying to look into decoding the vendor waveforms for use in the driver (this is completely software-driven, so there is no proprietary e-Ink controller to decode the waveform for us).

Unfortunately, all waveforms I could find are 5-bit-per-pixel waveforms. The only piece of information I could find on them is this: https://www.waveshare.net/w/upload/c/c4/E-paper-mode-declaration.pdf, which does not go into the file-level encoding. Since you got this far: Do you have any information or ideas on how the actual waveform data is encoded? Any resources I could look at?

Regards,

Valentin

Juul commented 3 years ago

On Sat, Jan 23, 2021 at 3:30 AM Valentin Roland notifications@github.com wrote:

Hi, First of all: Congratulations on your work on this despite so little information being out there!

Hi and thanks!

That's also the reason I'm asking here. I'm currently working on an open hardware e-Ink controller (https://github.com/vroland/epdiy). Up until now, we have used a very simple waveform based on trial and error to drive the displays. For better quality and less temperature dependence, I'm now trying to look into decoding the vendor waveforms for use in the driver (this is completely software-driven, so there is no proprietary e-Ink controller to decode the waveform for us).

Very cool!

Unfortunately, all waveforms I could find are 5-bit-per-pixel waveforms. The only piece of information I could find on them is this: https://www.waveshare.net/w/upload/c/c4/E-paper-mode-declaration.pdf, which does not go into the file-level encoding. Since you got this far: Do you have any information or ideas on how the actual waveform data is encoded? Any resources I could look at?

The waveforms are just lookup tables. They are really very simple.

The talk I gave at HOPE in 2018 includes some information about the epaper waveforms: http://fread.ink/hope_talk.webm

Let me know if anything needs clarification or you have any questions about it.

Beyond that explanation and the source code you already saw, I'm not sure I have much to offer. I understand how the 4-bit-per-pixel waveforms used by Amazon for the earlier Kindle devices are encoded, and I assume the 5-bit versions are very similar, but I haven't looked. Since most e-paper e-readers use i.MX chipsets, and thus the integrated EPDC, they likely all get at least some example kernel module source code from Freescale on which they base their own kernel modules, so their waveforms are likely to be very similar. That said, as I found with the 4th- and 5th-gen Kindle models, the format used for storage is not identical to the format used by the kernel module (though they are similar).
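To make the "lookup table" idea above concrete, here is a minimal sketch of how a per-transition waveform table might be applied. All names, dimensions, and drive values here are illustrative assumptions, not taken from any vendor format: a real table is additionally selected by update mode and temperature range.

```python
# Hypothetical sketch of applying an e-paper waveform lookup table.
# Drive values (an assumption): 0 = no-op, 1 = toward black, 2 = toward white.
NOOP, DARKEN, LIGHTEN = 0, 1, 2

GRAY_LEVELS = 16  # 4 bits per pixel -> 16 gray levels

def drive_pixel(lut, src, dst):
    """Yield the drive value for one src->dst transition, frame by frame.

    lut is a list of frames; each frame is a GRAY_LEVELS x GRAY_LEVELS
    matrix indexed by (source gray level, target gray level).
    """
    for frame in lut:
        yield frame[src][dst]

# Example: a trivial 2-frame table that darkens everything, then rests.
frames = [
    [[DARKEN] * GRAY_LEVELS for _ in range(GRAY_LEVELS)],
    [[NOOP] * GRAY_LEVELS for _ in range(GRAY_LEVELS)],
]
print(list(drive_pixel(frames, 0, 15)))  # -> [1, 0]
```

The controller would evaluate this lookup for every pixel on every refresh frame, which is why the tables are stored as dense per-frame matrices rather than as rules.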

I don't know if the technology used by Waveshare is the exact same technology used by the E Ink corporation. As far as I understand from my conversations with Waveshare employees, they have not licensed any technology from E Ink. The two are likely similar enough that the waveforms are identical, but I can't say for sure.

It's also likely that the waveforms are generated automatically. I have considered that generating a new set of waveforms could be possible by building a rig that directly drives the displays, pointing a camera at the display, and running some sort of iterative genetic algorithm (or similar) that tries different variations on the same waveform. Different waveforms would of course have to be created for different display temperatures. Unfortunately, some restrictions on the waveforms are necessary to avoid charge buildup, which I've heard can permanently ruin the display. I don't fully understand what those restrictions are, but they could likely be gleaned from examining existing waveforms.
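The charge-buildup restriction mentioned above is commonly described as DC balance: over a complete transition, the net drive applied to a pixel should sum to (approximately) zero. Whether that is the actual constraint vendors enforce is an assumption here, but a check for it would be a natural fitness constraint in the kind of search rig described. A minimal sketch:

```python
# Sketch of a DC-balance check for a candidate waveform transition.
# Assumption: the "charge buildup" restriction means net drive should be ~0.
# Signed drive values: +1 = toward black, -1 = toward white, 0 = no-op.

def net_charge(phases):
    """Sum of signed drive values applied over all frames of one transition."""
    return sum(phases)

def is_dc_balanced(phases, tolerance=0):
    return abs(net_charge(phases)) <= tolerance

# A transition that darkens for 4 frames and lightens for 4 is balanced:
print(is_dc_balanced([+1, +1, +1, +1, -1, -1, -1, -1]))  # True
# One that only darkens is not:
print(is_dc_balanced([+1, +1, +1, +1]))                  # False
```

A genetic-algorithm rig could reject or penalize any candidate transition that fails this check before ever driving it onto real hardware.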

vroland commented 3 years ago

Hi, thank you for the talk, that was quite informative. I've played around a bit, and it seems that the 5-bit waveforms are indeed just larger lookup tables with a similar encoding, but there is some additional information. In the phases, after the 0xff + random byte end marker, there is additional data. The LUTs look reasonable if I ignore it, so it might be the "separately-supplied image preprocessing algorithm" the Waveshare document speaks of. Since the odd positions are almost always zero / ignored, I hope just using the 16 even positions will be approximately correct.
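For readers following along, the per-phase encoding the comments above refer to works roughly as follows in inkwave's 4-bit decoder (this is my reading of the format and should be treated as a sketch; the 5-bit tables appear to extend the same scheme with a larger matrix): each data byte packs four 2-bit drive values, a 0xfc byte toggles a run-length mode in which every value byte is followed by a repeat count, and 0xff ends the phase.

```python
# Rough sketch of run-length decoding for one phase of a .wbf waveform,
# based on my reading of inkwave's 4-bit decoder; details (e.g. the exact
# repeat-count convention) may differ in real files.

def unpack(byte):
    """Split one byte into four 2-bit drive values, least-significant pair first."""
    return [(byte >> shift) & 0x3 for shift in (0, 2, 4, 6)]

def decode_phase(data):
    out = []
    rle = False
    i = 0
    while i < len(data):
        b = data[i]
        if b == 0xFF:                # end marker (followed by a trailing byte)
            break
        if b == 0xFC:                # toggle run-length mode
            rle = not rle
            i += 1
            continue
        if rle:
            count = data[i + 1] + 1  # value byte, then repeat count (off-by-one
            i += 2                   # conventions vary; assumed here)
        else:
            count = 1
            i += 1
        out.extend(unpack(b) * count)
    return out

print(decode_phase(bytes([0x1B, 0xFF, 0x00])))  # -> [3, 2, 1, 0]
```

The "additional data after the end marker" observed above would sit past the 0xff in this scheme, which is why ignoring it still yields sensible-looking LUTs.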

Regarding generating waveforms: By looking at the transitions, one could maybe derive some general rules, like "when going from gray to a darker shade, go to completely dark first". Maybe even a simplified model of how those particles move. Then, fine-tuning the waveforms using a rig would be much easier, as the search space would be greatly reduced. According to my experiments, destroying the displays by applying charge for too long is not actually that easy. But in some kind of automated rig in continuous operation, faults may build up. Sounds like a fun project overall; I'm just not sure if I want to sink the time into it ;)

Juul commented 3 years ago

On Mon, Jan 25, 2021 at 11:40 AM Valentin Roland notifications@github.com wrote:

Hi, thank you for the talk, that was quite informative. I've played around a bit, and it seems that the 5-bit waveforms are indeed just larger lookup tables with a similar encoding, but there is some additional information. In the phases, after the 0xff + random byte end marker, there is additional data. The LUTs look just fine if I ignore it, so it might be the "separately-supplied image preprocessing algorithm" the Waveshare document speaks of. Since the odd positions are almost always zero / ignored, I hope just using the 16 even positions will be approximately correct.

I'd be happy to take a look if you send me some of these 5-bit waveforms, assuming you're legally allowed to do so.

Regarding generating waveforms: By looking at the transitions, one could maybe derive some general rules, like "when going from gray to a darker shade, go to completely dark first". Maybe even a simplified model of how those particles move. Then, fine-tuning the waveforms using a rig would be much easier, as the search space would be greatly reduced.

True.

According to my experiments, destroying the displays by applying charge for too long is not actually that easy. But in some kind of automated rig in continuous operation, faults may build up.

That is good to know.

Sounds like a fun project overall, I'm just not sure if I want to sink the time into it ;)

Yeah I understand. If you have some of your PCBs made with the components pre-soldered I'd be up for buying a couple to play with.


vroland commented 3 years ago

I got the waveform files from this thread: https://community.nxp.com/t5/i-MX-Processors/How-to-convert-wbf-waveform-file-to-wf-file/m-p/467926/highlight/true But yes, the legal side of sharing them is a bit troublesome. It would be nice to have some files I can freely distribute with the library. Regarding the boards: You can find my e-mail address on my GitHub profile; would you mind just sending me a message there?

vroland commented 3 years ago

@Juul My current approach to using the waveforms for my project is to export a JSON file from my fork of inkwave and then use a Python script to generate a header from it. What do you think: would a JSON export to dump the raw data make sense to add to inkwave? If yes, I'd clean up my code and make a pull request to add it. Otherwise, I'd write my own tool for that based on inkwave. Which do you prefer?
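The JSON-to-header step described above can be sketched as follows. The JSON schema and header name here are made up for illustration; vroland's fork may use entirely different field names and layout.

```python
# Hypothetical sketch of turning a JSON waveform dump into a C header.
# The "frames" schema and "waveform_lut" name are assumptions, not the
# actual format used by the inkwave fork mentioned above.
import json

def json_to_header(json_text, name="waveform_lut"):
    wf = json.loads(json_text)
    # Flatten frames -> rows -> values into one C array initializer.
    flat = [v for frame in wf["frames"] for row in frame for v in row]
    values = ", ".join(str(v) for v in flat)
    return ("// auto-generated from waveform JSON\n"
            f"static const unsigned char {name}[] = {{ {values} }};\n")

example = '{"frames": [[[0, 1], [2, 0]]]}'
print(json_to_header(example))
```

Keeping the export as plain JSON has the nice property that the same dump can feed a C header generator, a visualizer, or analysis scripts without re-parsing the binary .wbf format each time.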