AkBKukU / HydroThunder-TimeTool

Tool for exporting and importing high score times from Hydro Thunder Arcade
MIT License
15 stars · 3 forks

Fully reverse engineered CMOS data checksum and parity scripts and parameters #6

Closed WizardTim closed 1 year ago

WizardTim commented 1 year ago

I had a bit too much fun, got carried away, and wrote this script; it might be useful, but I'm not sure anymore.

This script attempts to brute force the start and end offsets for suspected checksums in the Hydro Thunder CMOS data. Currently only very basic SUM8, SUM16, SUM24 and SUM32 are implemented and tested.
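As a rough illustration of the approach (not the repository's actual script), a SUM(x) brute force over contiguous start/end offsets can be sketched like this; `sum_checksum` and `find_sum_regions` are hypothetical names:

```python
def sum_checksum(data: bytes, width: int) -> int:
    """SUM(x): add little-endian words of `width` bytes, truncated to the word size."""
    mask = (1 << (8 * width)) - 1
    total = 0
    for i in range(0, len(data) - len(data) % width, width):
        total = (total + int.from_bytes(data[i:i + width], "little")) & mask
    return total

def find_sum_regions(data: bytes, stored: int, width: int):
    """Return all (start, end) spans whose SUM(x) equals the stored checksum.
    A running sum makes each start offset O(n) rather than recomputing every span."""
    mask = (1 << (8 * width)) - 1
    matches = []
    for start in range(len(data)):
        total = 0
        for end in range(start + width, len(data) + 1, width):
            total = (total + int.from_bytes(data[end - width:end], "little")) & mask
            if total == stored:
                matches.append((start, end))
    return matches
```

Even with the running-sum optimization this is quadratic in the area size, which is why exhaustive scans of the ~5,000 byte area take seconds rather than milliseconds.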

Current findings: SUM8 and SUM16 return many matches for valid checksums across all the images, but so far these appear to be just random chance of the bytes adding up right; none of the start and end locations I looked at made sense. SUM24 and SUM32 do not return many results, and most are again in nonsensical locations. In addition, checksum byte differences between the data sample images do not reflect the behavior I would expect from a SUM checksum.

The current most likely checksum, at 0x75BE66F to 0x75BE672 (data sample .img offset), I have not been able to map to anything or match to an algorithm, but it looks like it could be multiple checksums.

This code needs a fair bit more work to be an exhaustive search, but it's leaning me towards this not being a super easy SUM(x) across clearly defined regions of the CMOS data; otherwise this code would have found it instantly (unless I screwed up my SUM(x) implementation, again). CRC16 is probably next on my list of guesses.

But I’m wondering if the name of the function that does this, cmos_CopyAllGlobalStructs2Cmos(), is trying to tell us something about the format. Have they actually designed a memory map to properly store all this (if I had 3 GB I would have aligned and separated this a lot better to make my life easier, especially when it came to adding that extra 11 bytes in 1.01b), or have they just dumped variables from the game engine straight into here? If it's the latter, it would likely mean terminators, separators, and other in-between data are not included in the checksums; we would have to extract all the values and strip the garbage, making those brute-force searches infeasible without a very good understanding of every variable and its bytes.

Performance to expect: ~500 k checksums per second (at uint16 size); it takes about 15 to 18 seconds to exhaustively scan all contiguous variations of the ~5,000-byte CMOS area.

TODO

AkBKukU commented 1 year ago

I spent all day today toggling every single setting and imaging the CMOS data after each change. I'm going to work out exactly what everything maps to later today. But I can tell you now that they are clearly dumping raw data, and that it is sloppy. In the menu of the game, "Master Volume" and "Rumble Volume" are both displayed as adjustable percentages. In the data, "Master Volume" is stored as a single-byte u8, while "Rumble Volume" is stored as a 4-byte float, for literally no reason at all. The wheel and throttle calibration points are a min, mid, max set for each that are shown as 0–255 in the menu, but each value is also stored as a 4-byte float, so what should be 6 bytes is ballooned to 24. The "Wait for Operator" option added to the network menu in 1.01b is also stored very close to the end of the settings area rather than near the other network settings. So I think they are linearly writing variables from RAM in the order they were added to the code, or something similar.
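That mixed-width layout is easy to picture with Python's struct module. This is purely illustrative: the offsets and values below are invented for the example, only the field types (u8 volume, float32 volume, six float32 calibration points) come from the observations above.

```python
import struct

# Build a fake settings blob: u8 master volume, f32 rumble volume,
# then 6 x f32 calibration points (min/mid/max for wheel and throttle).
# "<" means little-endian with no padding, so the fields pack back to back.
blob = struct.pack("<Bf6f", 80, 0.75, 0.0, 127.0, 255.0, 0.0, 127.0, 255.0)

master_volume, = struct.unpack_from("<B", blob, 0)   # 1 byte
rumble_volume, = struct.unpack_from("<f", blob, 1)   # 4-byte float, as observed
calibration = struct.unpack_from("<6f", blob, 5)     # 24 bytes for what could be 6
```

The whole blob is 29 bytes; storing the volume and calibration values as u8s instead would have taken 8.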

Also, you should look at the Discord server. Someone got the EXE open in Ghidra and has some strong clues about how the checksum works. I haven't processed what they found yet, but it looks like the checksum method may be identifiable.

WizardTim commented 1 year ago

I think my scripts found the checksum solution, but I have been known to screw up checksums several times in the past, so don't get your hopes up too much.

The evidence of success is in HTchecksumAnalysis.py; the output is below. The important part is the "Checksum abs diff between midway" line being all zeros, excluding the ones that aren't: there's an extra DWORD somewhere, and I think it intentionally breaks the checksum to force using one of the two CMOS areas.

...

  39 [./DataSamples/CF-Data-Set-AllTracks.img]
Checksum for first cmos copy     : 72 53 07 14 | 71 53 07 14 00010100000001110101001101110010
Checksum for second cmos copy    : 72 53 07 14 | 71 53 07 14 00010100000001110101001101110010
Checksum abs diff between copies : 00 00 00 00 | 00 00 00 00 00000000000000000000000000000000 NO DIFF, CMOS AREAS IDENTICAL
Checksum abs diff between midway :               01 00 00 00

  40 [./DataSamples/CF-Data-Set-Force.img]
Checksum for first cmos copy     : 9e 53 07 14 | 9d 53 07 14 00010100000001110101001110011110
Checksum for second cmos copy    : 9e 53 07 14 | 9d 53 07 14 00010100000001110101001110011110
Checksum abs diff between copies : 00 00 00 00 | 00 00 00 00 00000000000000000000000000000000 NO DIFF, CMOS AREAS IDENTICAL
Checksum abs diff between midway :               01 00 00 00

  41 [./DataSamples/CF-Data-Set-P-On.img]
Checksum for first cmos copy     : 74 53 07 14 | 73 53 07 14 00010100000001110101001101110100
Checksum for second cmos copy    : 74 53 07 14 | 73 53 07 14 00010100000001110101001101110100
Checksum abs diff between copies : 00 00 00 00 | 00 00 00 00 00000000000000000000000000000000 NO DIFF, CMOS AREAS IDENTICAL
Checksum abs diff between midway :               01 00 00 00

  42 [./DataSamples/CF-Data-Set-Free1st.img]
Checksum for first cmos copy     : 6e 53 07 14 | 6d 53 07 14 00010100000001110101001101101110
Checksum for second cmos copy    : 6e 53 07 14 | 6d 53 07 14 00010100000001110101001101101110
Checksum abs diff between copies : 00 00 00 00 | 00 00 00 00 00000000000000000000000000000000 NO DIFF, CMOS AREAS IDENTICAL
Checksum abs diff between midway :               01 00 00 00

  43 [./DataSamples/CF-Data-Race-4.img]
Checksum for first cmos copy     : 00 9d a9 22 | 00 9d a9 22 00100010101010011001110100000000
Checksum for second cmos copy    : fe 9c a9 22 | fe 9c a9 22 00100010101010011001110011111110
Checksum abs diff between copies : 02 00 00 00 | 02 00 00 00 00000000000000000000000000000010 LOWER CHECKSUM   <--
Checksum abs diff between midway :               00 00 00 00 Valid Checksum!

  44 [./DataSamples/CF-Data-Race-3.img]
Checksum for first cmos copy     : c6 05 89 31 | c6 05 89 31 00110001100010010000010111000110
Checksum for second cmos copy    : c4 05 89 31 | c4 05 89 31 00110001100010010000010111000100
Checksum abs diff between copies : 02 00 00 00 | 02 00 00 00 00000000000000000000000000000010 LOWER CHECKSUM   <--
Checksum abs diff between midway :               00 00 00 00 Valid Checksum!

  45 [./DataSamples/CF-Data-Race-slow.img]
Checksum for first cmos copy     : 93 8d c8 aa | 92 8d c8 aa 10101010110010001000110110010011
Checksum for second cmos copy    : 91 8d c8 aa | 90 8d c8 aa 10101010110010001000110110010001
Checksum abs diff between copies : 02 00 00 00 | 02 00 00 00 00000000000000000000000000000010 LOWER CHECKSUM   <--
Checksum abs diff between midway :               01 00 00 00
                                 |  Midway's   | My checksum
                                 |  Checksum   | SUM32 little 4275878548

Checksum parameters

checksum_algorithm = 'SUM32'
checksum_endian = 'little'
checksum_seed = 0xFEDCBA94         # Result of -0x0123456b
checksum_start_offset = 0x075be677   # Result of 0x75BE660 + 0x17
checksum_stop_offset =  0x075bfb76   # This is just a guess; it's probably shorter or longer, but I think it's only white space after this point
checksum_span = 5376               # bytes; just the result of the above start/stop offsets
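With those parameters, the checksum itself reduces to a seeded 32-bit sum of little-endian dwords, something like the sketch below (the function name is mine, not from the repo):

```python
def sum32(data: bytes, seed: int = 0xFEDCBA94) -> int:
    """Seeded SUM32 over little-endian dwords, truncated to 32 bits."""
    total = seed
    for i in range(0, len(data), 4):
        total = (total + int.from_bytes(data[i:i + 4], "little")) & 0xFFFFFFFF
    return total
```

An empty span returns the seed itself, and each dword just adds in modulo 2^32.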
WizardTim commented 1 year ago

Those have been modified to have all 'WIZ' for high scores.

AkBKukU commented 1 year ago

https://youtube.com/shorts/b6midkL9Ya4

This has now been confirmed working!

A +1 to the LSB of the checksum, validated by HTchecksumManualCheck.py, seems to work reliably. Based on testing, both sets of data must also be replaced with the new data.

WizardTim commented 1 year ago

I made a few changes to this thing, mainly:

Better HTchecksumManualCheck.py

You can now also call it with a path argument: python3 HTchecksumManualCheck.py './DataSamples/Pay.img'. Plus it just generally looks better and does both CMOS blocks.

python3 HTchecksumManualCheck.py './DataSamples/Pay.img'

Checking supplied offsets point to start of CMOS areas in image:
  Image # 0 PASS : Found [01 00 00 00 98 ba dc fe] @ 0x75be663 [./DataSamples/Pay.img]
  Image # 0 PASS : Found [01 00 00 00 98 ba dc fe] @ 0x75ee663 [./DataSamples/Pay.img]

                                  |     Old      |    Newly     |
                                  |    Stored    |  Calculated  |
                                  |   Checksum   |   Checksum   |  Checksum Area Span
----------------------------------|--------------|--------------|-----------------------
CMOS area #0 Checksum @ 0x75be66f | e4 23 14 26  | e3 23 14 26  | 0x75be677 → 0x75bfb76
CMOS area #1 Checksum @ 0x75ee66f | e4 23 14 26  | e3 23 14 26  | 0x75ee677 → 0x75efb76

Reworked HTchecksumUtils.py to be useful in other scripts

It’s far from perfect, but you can do things programmatically like below: calculating a new checksum and writing it to the image, all from your own script, without having to deal with all the other rubbish behind the scenes. Not sure what your end goal is, but I imagine this might be useful in achieving it.

This example is for CMOS block [0]:

>>> from HTchecksumUtils import *
>>> verifyImageHeaders(['./DataSamples/SN33-Real.img'])
  Image # 0 PASS : Found [01 00 00 00 98 ba dc fe] @ 0x75be663 [./DataSamples/SN33-Real.img]
>>> readChecksum('./DataSamples/SN33-Real.img', cmos.base_offsets[0] + cmos.checksum.rel_offset).hex(' ')
'ea 75 09 e9'
>>> calculateChecksum('./DataSamples/SN33-Real.img', cmos.base_offsets[0] + cmos.area.rel_offset)[0].hex(' ')
'ea 75 09 e9'

...
 < modified .img >
...

>>> calculateChecksum('./DataSamples/SN33-Real.img', cmos.base_offsets[0] + cmos.area.rel_offset)[0].hex(' ')
'10 76 09 e9'
>>> new_checksum = calculateChecksum('./DataSamples/SN33-Real.img')[0]  # Default is block [0]
>>> writeChecksum('./DataSamples/SN33-Real.img', new_checksum, cmos.base_offsets[0] + cmos.checksum.rel_offset)
>>> readChecksum('./DataSamples/SN33-Real.img', cmos.base_offsets[0] + cmos.checksum.rel_offset).hex(' ')
'10 76 09 e9'

I think that's enough for this script to be useful so I'll leave it at this.

If I get the time I'll look into that LSB error more; it's probably an odd/even parity bit. But for now, just make two images and add 1 to the checksum; it's probably a 50/50 that the first one works.

AkBKukU commented 1 year ago

Is there a reason it isn't just that 01 at the start of the data section that is the +1, or are you already accounting for that? On the other hand, it seems like your checksums worked when the two sections had different data. I need to look at the samples again to confirm, but it seems like when the settings are changed both copies become identical and +1, but when a high score is saved they are different and +0. So I wonder if it could be some kind of extra bit needed to enforce that both data sets match?

Also, I'm not clear on how you got that checksum seed: was that from your brute force script just trying every possible seed value with the data until you found one that worked?

I'm going to start working on integrating this today. My short-term goal is to have a single Python script you run with a flag for one of three sections, --times, --splits, or --settings, and a --write parameter that takes a CSV filepath. Without --write it will just read the different sections into different CSV files to dump the current data (I may do the Linux thing and print it normally, adding a --file parameter to directly save it; I'm undecided for now, as that would eliminate pretty printing to the CLI, but I'm not sure that's needed). I'm going to focus on just the top data section for now, but I could also add a flag later to skip the second section. Then with --write it will go through each byte on the target drive/disk image as it builds the checksum, but only write the diffs to reduce drive wear. So I'll probably need to crack open your utils class a bit to add that ability to it. I'll also add one more --backup parameter to copy the actual raw binary data to an image, merging my bash script into the Python tool. I'll make the --write option work with that parameter to write a full image as well (again with only diffs, though).
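The write-only-the-diffs idea can be sketched like this (a hypothetical helper, not code from the repo):

```python
def write_diffs(fh, offset: int, old: bytes, new: bytes) -> int:
    """Write only the bytes of `new` that differ from `old`, starting at
    `offset` in the seekable file-like object `fh`.
    Returns the number of bytes actually written, to make wear visible."""
    written = 0
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b:
            fh.seek(offset + i)
            fh.write(bytes([b]))
            written += 1
    return written
```

Since only the high-score entries and the checksum usually change between writes, this touches a handful of bytes instead of the whole block.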

I do not plan to ever decode the audit data since the game has a feature to export that natively and it would be a lot of extra work for a redundant feature.

Additionally, your checksum testing has shown that the actual data portions are all that matters, so I'm probably going to shrink the sample drive images we have now to just the data sections we need and refactor the code with different offsets to work with those (we've had three extra bytes at the start that have been driving me nuts while mapping the settings to the raw data). Before, I wouldn't have been surprised if it was doing something like writing the data at slightly different positions as part of the validation, so I was trying to go overkill. But we've nearly got it locked down now, I think.

Future goals may include a GUI with PyQt or something to make this easier for non-technical users, but using this already requires enough skill to plug an IDE drive into a modern-ish computer, so I'm ranking that lower for now and will focus on making the CLI fully featured.

WizardTim commented 1 year ago

The +1 LSB error isn't consistent, but that 01 at the start of the header is static; scripts that run verifyImageHeaders() will exit with an error if they don't find 01 00 00 00 98 ba dc fe at the start of every CMOS block. There's something else that must change with the data. I checked odd/even parity (interpreting as uint32) tonight and it's not that; I couldn't find a correlation. It doesn't seem to be correlated with the CMOS blocks being different either. This is my understanding of the checksum differences at the moment:

|  | Data in CMOS blocks is IDENTICAL | Data in CMOS blocks is DIFFERENT |
|---|---|---|
| Unknown Case 1 (our checksums are accepted) | Both checksums identical and valid | Each block's checksum different, but both are valid (always a 2nd-LSB diff, so one is +1 and the other -1; none at the in-between value) |
| Unknown Case 2 (game resets or does weird stuff) | Both of Midway's checksums are identical but 1 LSB HIGHER than ours | Each block's checksum different, but both of Midway's are +1 LSB higher than ours (same +1/-1 LSB apart thing) |

If I have time tomorrow I might try to dig further through HYDRO.EXE and see if there’s any obvious bit mask for some sort of parity check. We’ve managed to narrow the checksum down from 4,294,967,296 possible values to 2; I think we can find this last bit somewhere.

HTchecksumBruteForceFind.py was mostly for when I thought it was either four SUM8 or two SUM16 checksums and I was trying to find where the different regions of the CMOS data were. But I ran it on SUM32 for completeness and noticed, by chance, that when it failed to find anything, the last checksum it computed for the full span of the CMOS area was off from the correct checksum by the same value across all 8 image files, with the LSB error on some. I wrote HTchecksumAnalysis.py after that, saw the massive correlation, subtracted the difference as a seed, and it started calculating correct checksums (less the LSB error). The header does contain a suspiciously similar byte sequence to the seed, but the first byte is different. I thought maybe the game was clearing the checksum and calculating over the whole CMOS block, header included, but I couldn't get that to work with 0x0 in place of the checksum. It does work if you put a different seed in place of the checksum, but our seed is equivalent to whatever they are doing before checksumming the data and is easier to implement.
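The seed-recovery trick described here boils down to: if an unseeded SUM32 is off from every stored checksum by the same 32-bit constant, that constant is the seed. A minimal sketch (the function name is mine):

```python
def recover_seed(samples):
    """samples: iterable of (unseeded_sum32, stored_checksum) pairs.
    Returns the common 32-bit difference if every pair agrees, else None."""
    diffs = {(stored - unseeded) & 0xFFFFFFFF for unseeded, stored in samples}
    return diffs.pop() if len(diffs) == 1 else None
```

With eight sample images all producing the same difference, random coincidence is effectively ruled out.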

I agree you can safely remove all the extra bytes at the start of the image. When I reworked HTchecksumUtils.py I made it use relative offsets from the CMOS header, so it can be at any absolute offset you want; I’m just not sure what offset you’d want to start the image at. You could start at the first byte of the first CMOS block?

I would agree about not pursuing a fancy GUI for this. I think the Venn diagram of “people who don’t know how to use a CLI script” and “technical people who have bought a 1999 PC arcade machine, want to change/backup the high scores, and have extracted the HDD and adapted it from IDE to a modern system” doesn’t overlap at all; likely no one would use it. Furthermore, this script is for a 1999 arcade machine: I would not be surprised if someone another 24 years in the future has to use this script to restore their settings or high scores. Trying to run an old Python 3.10 script from 2023 will be annoying enough; one with lots of dependencies for PyQt would probably be worthless to them. I tried to do everything in HTchecksumUtils.py using built-ins and no 3rd-party packages for this reason.

With that said, I think it should still have some nicely formatted CLI output. I always like that in CLI scripts, so you know what’s going on and whether it was successful, rather than scripts that just output a mess of data or nothing at all. It probably doesn’t need to nicely display all the data; people can just use a spreadsheet program to view the output CSV.

AkBKukU commented 1 year ago

My inclination right now is to overwrite both data blocks to be identical and use a +1 LSB. I haven't been able to fully test this (I will tomorrow), but that seems to be the most consistent method to pass so far. I'm working on adding an option to allow picking which data block to work with so the user can decide which to use. I don't think there will be a way to tell which is the newer one, though, as that would likely require decoding audit info, which I will not be doing.

My plan is to make the "official" data block size 8,192 bytes starting from the first 01. The audit data from the hard drives of my used machines is about 1,680 bytes, which made the total data size somewhere around 5,300 bytes, but I'm going to overshoot a bit just to be sure. I'm avoiding the term "CMOS" in everything I'm doing because I feel it confuses what and where the data is. I'm guessing they had a plan at some point to use either the BIOS ROM or maybe a chip on the Diego board to hold settings. But other than the references from reverse engineering the code, we wouldn't come up with that name, and it's never shown to the users. So I think it's best to avoid it so it isn't confused with anything going on with the PC side of the arcade hardware. I'm trying to use "raw" in all of those places instead to indicate that it is not decoded data.

There may be an easy solution here for pretty CLI output: I'll make all the parameters for the different sections take an optional filename as a value. Then it can pretty print by default and only output CSV data to the filenames provided. The GUI idea can just be axed; like you said, it really won't benefit the people who will likely use this tool.

WizardTim commented 1 year ago

The latest commit adds the parity check implementation for the previously unknown LSB error. It now calculates checksums for the supplied images perfectly 100% of the time, no messing around with +1 LSB!

...
  45 [./DataSamples/CF-Data-Race-slow.img]
Checksum for first cmos copy     : 93 8d c8 aa | 93 8d c8 aa 10101010110010001000110110010011
Checksum for second cmos copy    : 91 8d c8 aa | 91 8d c8 aa 10101010110010001000110110010001
Checksum abs diff between copies : 02 00 00 00 | 02 00 00 00 00000000000000000000000000000010 LOWER CHECKSUM
Checksum abs diff between midway :               00 00 00 00 00000000000000000000000000000000 Valid Checksums!
                                 |  Midway's   | My checksum
                                 |  Checksum   | SUM32 | little | 94 ba dc fe | 100% of Checksums Match (46)

The machine seems to know which block is newer by looking at the blocks of data after the two CMOS blocks that contain the Steven E. Ranck string. Looking at the decompiled functions that write this data, it appears to write a 64-bit timestamp from the RDTSC counter (an uptime, not a datetime). Whichever block has the largest uptime (written latest) and a valid checksum seems to be used. This appears to be solely for power-loss protection against an incomplete write, so we can just ignore it if we duplicate the CMOS data into both blocks and leave the timestamps in that extra block untouched.
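That selection rule, as I read it, can be sketched as follows; the names and data shapes are hypothetical, inferred from the decompiled behavior described above:

```python
def pick_active_block(blocks):
    """blocks: list of (rdtsc_uptime, checksum_valid) tuples, one per CMOS block.
    Returns the index of the valid block with the largest uptime, or None if
    neither block passes its checksum."""
    valid = [(uptime, i) for i, (uptime, ok) in enumerate(blocks) if ok]
    return max(valid)[1] if valid else None
```

If a write is interrupted, that block fails its checksum and the older, intact block wins, which matches the power-loss-protection reading.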

8,192 bytes is enough for all the data that thing could possibly write. But I will note it looks like in the code they’ve hardcoded each CMOS block as 0x7D00 (32,000) bytes long, spaced 0x30000 (196,608) bytes apart. They’re not using anywhere near that much of it, though, so doing 200 kB of 00 reads/writes would just be pointless and spend the CF card's endurance; reading and checksumming 8,192 bytes is good.
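Those hardcoded constants are consistent with the header offsets seen earlier in the thread (0x75be663 and 0x75ee663):

```python
BLOCK_LEN = 0x7D00       # 32,000 bytes reserved per CMOS block
BLOCK_SPACING = 0x30000  # 196,608 bytes between block starts

base0 = 0x75BE663              # header of CMOS block #0, from the earlier scans
base1 = base0 + BLOCK_SPACING  # lands exactly on block #1's header
assert base1 == 0x75EE663
```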

I agree CMOS is a terrible name for it; it really implies they were going to use CMOS BIOS/RTC user data space but gave up on it for performance, space, or complexity reasons and just reused the code and dumped it to the HDD. I really wonder if there’s a 1.00a or 0.9X build of Hydro Thunder (or others in the Thunder series) that actually does save data to real CMOS. Calling it ‘raw data’ is probably a good call, but still mention that it is internally called “cmos” in the technical documentation.

Also, there is an ATMEL EEPROM IC on the Diego PCB as U10 (a guess; I can’t find a good picture of it). It’s used in HYDRO.EXE for some settings (maybe cabinet style?), but it’s referred to in functions as _diego_io_write_eeprom rather than cmos, so I don't think it was intended for this data.

AkBKukU commented 1 year ago

Awesome! It being parity explains why it was intermittent; I'll get that added into the main code as well. Last night I got it to a functional state aside from that: it can import/export CSVs for times/splits and the whole raw data blocks from a complete drive image of arbitrary size (although I gave up on only writing diffs for now). Getting parity added should make it "beta" worthy after I clean up the code some more and rewrite the documentation for what the new parameters do.

Uptimes make perfect sense. I had a suspicion that was what those later bits of data were, but I still haven't been able to put the time into cracking open the EXE myself to look into it. I was trying to think of how it would know which of the two data blocks was the newest and couldn't see it being possible without some kind of outside data.

I don't like that I couldn't get a solid answer on the real data size, but 8,192 bytes does seem to work. And yeah, it would be wasteful to do the full size they allotted to it. I'm still pretty sure they are starting at one point in memory and just moving a pointer while writing to the drive, so it's not an intentional size choice.

I'm settling on a name like "Data Blocks" and have added an option to select which "block" you import/export from. It would be interesting to know more about the development history of the machine, for sure, to learn why it was called that. I will add a mention of it being called cmos somewhere in the TECHNICAL.md file I've been working on. I've been mostly writing that out to get my thoughts sorted on how to write the program and to gather notes for the video on this.

Ah, I wonder if U10 is the chip I've heard needs dumping on the Diego board. I'll have to take a look at dumping it some time. The manual mentions that it will use a DIP switch setting from that board as the default for the cabinet size, though, and that wouldn't need an EEPROM.

AkBKukU commented 1 year ago

After trying to make much broader changes to the stored track times, I've run into what may be another parity bit or a different check, because I can reliably get the game to reset even with the new parity bit. I have manually figured out what the correct value should be for this test case, though.

To reproduce: Use these files: https://github.com/AkBKukU/HydroThunder-TimeTool/blob/main/DataSamples/test-case.tar.gz And this commit: https://github.com/AkBKukU/HydroThunder-TimeTool/commit/310b4185be4be83304788087c1b3310c8a8764b4

Running the following command uses the current checksum/parity scheme as is, it produces a raw data block with an invalid checksum for the game: python3 ht-time.py -r free-0.img --times times-bad.csv --write_raw mod-times.img --lsb_offset 0

Your Manual Check tool will give it a pass with these results:

$ python3 HTchecksumManualCheck.py mod-times.img 

Checking supplied offsets point to start of CMOS areas in image:
  Image # 0 PASS : Found [01 00 00 00 98 ba dc fe] @ 0x0 [mod-times.img]
  Image # 0 PASS : Found [01 00 00 00 98 ba dc fe] @ 0x0 [mod-times.img]

                                  |     Old      |    Newly     |                       
                                  |    Stored    |  Calculated  |                       
                                  |   Checksum   |   Checksum   |  Checksum Area Span   
----------------------------------|--------------|--------------|-----------------------
CMOS area #0 Checksum @ 0xc | e4 23 14 16  | e4 23 14 16  | 0x14 → 0x1513 
CMOS area #1 Checksum @ 0xc | e4 23 14 16  | e4 23 14 16  | 0x14 → 0x1513 

If I adjust the LSB by +2 then the game will accept it: python3 ht-time.py -r free-0.img --times times-bad.csv --write_raw mod-times.img --lsb_offset 2

As a baseline you can run the following with the checksum/parity as is and it will be valid: python3 ht-time.py -r free-0.img --times times-good.csv --write_raw mod-times.img --lsb_offset 0

The only difference between times-good.csv and times-bad.csv is changing an R to a B in the initials of the top score for Thunder Park. times-good.csv was based on the stock times, adding current world record times one at a time until the error occurred. Those other changes must be present as well; in times-bad.csv, changing only the single R to B does not make the checksum invalid.

AkBKukU commented 1 year ago

Test-images.zip

Here are the data block images I made on stream.

AkBKukU commented 1 year ago

post-thunder-2lsb.zip Correct 2LSB offset post-thunder

AkBKukU commented 1 year ago

thunder-time.zip Time set with post-thunder loaded

WizardTim commented 1 year ago

thunder-time-no-audit.zip thunder-time-no-audit-no-splits.zip

Removed audit data

WizardTim commented 1 year ago

Before I sleep and forget this: the only thing I can think of is that it has to be a rounding thing. You put in a time that isn't rounded the same way, the game reads and rounds it, the time changes, and then it does the checksum, which ends up 2 LSB higher. The solution may be to round all user inputs with the formula I put in that other issue. Idk, I didn't think too hard about it; I sleep now.
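The rounding hypothesis is easy to demonstrate: a time that isn't exactly representable as a float32 changes value when round-tripped through 4-byte storage, so a checksum computed over the naive value won't match what the game writes back. A sketch, purely for illustration (not the repo's code or the rounding formula from the other issue):

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a value through a 4-byte float, as the game presumably stores times."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

entered = 83.33               # e.g. 1:23.33 typed into the CSV
stored = to_float32(entered)  # what would actually land in the data block
# stored != entered, so the bytes (and thus the SUM32) differ from the naive value
```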

AkBKukU commented 1 year ago

I was having a similar thought right after the stream, and what you are speculating could make sense. If it is running the checksum over the section of RAM in the game, and that has already processed the floats, I could see that happening.

The only thing is that, earlier, I got it down to a point where I could change just one letter in an initial. So I don't know if the floats are it. We might even be finding an edge-case bug, who knows.

I'll try messing with it some more another day to see what I can find. I think a good next step might be cutting the times to just tenths of seconds to see if it can load arbitrary data like that. It will be annoying if emulating the game becomes necessary to Monte Carlo test this with random data until we find a pattern. It may also be time to see if I can get in contact with the original developer to see if they have any insight.

WizardTim commented 1 year ago

Currently those algorithms predict checksums with 100% accuracy, and I cannot find anything to suggest the game is implementing some other undiscovered feature of the checksum. So I'm betting this is a manifestation of the Centiseconds Rounding Bug Part 2: Electric Boogaloo, and I will continue this discussion in the issue for it.

WizardTim commented 1 year ago

Issue fixed in pull #8.

This branch's code was experimental and is not useful for the final ht-time.py implementation, so it won't be merged.