ali1234 / vhs-teletext

Software to recover teletext data from VHS recordings.
GNU General Public License v3.0

Training creates hamming files that are the wrong length for PatternCUDA #3

Closed: grim-fandango closed this issue 8 years ago

grim-fandango commented 8 years ago

Hi Alistair,

I piped the output from training -g to raspi-teletext, recorded it on Betamax, then captured it using vbicat. I then ran 'training -t' on the vbi file, and on the output of that ran --parity, --hamming and --full (not sure whether 'full' corresponds to debruijn, but that's a different matter).

The hamming file isn't a multiple of 1024 so PatternCUDA won't accept it. Am I doing something wrong or is there an issue with the training script?

Thanks

Jason

ali1234 commented 8 years ago

You probably did not record long enough to get every possible pattern. You need a couple of hours.

grim-fandango commented 8 years ago

I gave it two hours' worth, but I got the same error. The file is also smaller, at 762,489 bytes as opposed to 819,214.

ali1234 commented 8 years ago

Do you get rejects when running training -t?

ali1234 commented 8 years ago

The final training data files have a 14 byte header and then 25 bytes per recognized pattern. The hamming data should have 32768 patterns and the parity data should have 4096. Both multiples of 1024.

14 + (25 * 32768) = 819214
14 + (25 * 30499) = 762489

So you don't have a full set of patterns for some reason. That can only happen if some patterns weren't found in the raw training samples.
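
A quick way to sanity-check a training file against that layout (a sketch, not part of the tool; the constants just restate the sizes above):

```python
# Sketch: infer the pattern count from a training file's size,
# given the 14-byte header and 25 bytes per pattern described above.
import os

HEADER = 14
RECORD = 25

def pattern_count(path):
    body = os.path.getsize(path) - HEADER
    assert body % RECORD == 0, "size does not fit header + 25-byte records"
    return body // RECORD

# 819214 bytes -> 32768 patterns (32 * 1024, accepted by PatternCUDA)
# 762489 bytes -> 30499 patterns (not a multiple of 1024, rejected)
```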

grim-fandango commented 8 years ago

It started out at 1% rejects, then went to 0%. When complete, it says: "1:30:11 : 5815928 lines, 1075/s total, 1073/s teletext, 0% rejected."

If there are 1075 lines/s in total and 1073 lines/s are teletext, then the rejection rate is actually about 0.19%, not zero.

That was for a 2.5 hour sample.

moonhouse commented 8 years ago

See https://github.com/ali1234/vhs-teletext/blob/5dc0236c598dfb01aa9f830c77972b0bd2f194f1/teletext/vbi/map.py#L51

The format string is '%.0f%%', so 0.19% will be displayed as 0%.
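
You can reproduce the rounding directly (illustrative, using the same format string):

```python
# The stats line above: 1075 lines/s total, 1073 lines/s teletext.
rate = 100.0 * (1075 - 1073) / 1075   # ~0.19% rejected
print('%.0f%%' % rate)                # prints '0%', same as map.py's format
```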

ali1234 commented 8 years ago

That is an acceptable error rate, so I am not sure what is happening. The checksumming should prevent a pattern from being read incorrectly (e.g. with an offset).

What do your intermediates look like?

grim-fandango commented 8 years ago

Well, I've just run it again and it's generated a file of the right length. Not sure what was different this time. But thanks for looking at it and sorry for the wild goose chase. :-/

ali1234 commented 8 years ago

That is extremely strange. To be honest I have forgotten how you even use the trainer bits. Any chance you could write some docs for it?

grim-fandango commented 8 years ago

I can only think that I ran it previously on Windows, although why that would matter I don't know.

My current problem is generating the parity training tables: the training script grinds the machine to a halt after a while and I have to hard reset. I guess it's swallowed up all available RAM.

Sure, I'll write some docs, but I don't know much! It seems odd to write docs for something I haven't got working yet, though.

ali1234 commented 8 years ago

That is quite possible. It bucket sorts the packets in memory and then takes the average for each pattern at the end, so it will basically load the entire training file into RAM, which would be 10GB if you have 5 million lines. I had 16GB of RAM when I wrote this but I've upgraded since then!
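
A minimal sketch of that approach (illustrative only, not the actual implementation; assuming ~2048 bytes of raw samples per line, which is what makes 5 million lines come to roughly 10GB):

```python
# Illustrative sketch of bucket-sort-then-average training.
# Every raw sample stays in RAM until the end, so memory use is
# roughly lines * bytes_per_line: 5,000,000 * 2048 bytes ~= 10GB.
from collections import defaultdict
import numpy as np

buckets = defaultdict(list)          # pattern id -> list of raw line samples

def add_line(pattern_id, samples):
    buckets[pattern_id].append(np.asarray(samples))

def averages():
    # One averaged waveform per recognised pattern, computed at the end.
    return {p: np.mean(s, axis=0) for p, s in buckets.items()}
```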

grim-fandango commented 8 years ago

Yeah, I've just had a look at the code; there doesn't seem to be an easy way to split it up, as it requires everything to be in RAM, as you say. My training tables file is 29GB from a 14,894,235,648 byte VBI file. If each frame is 64K and contains 32 lines, then there will be about 7.3M lines! There's 16GB of RAM in this machine.
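
A quick sanity check on those numbers:

```python
# 64 KiB per frame, 32 VBI lines per frame.
size = 14894235648        # bytes in the captured VBI file
frames = size // 65536    # 227268 frames
lines = frames * 32       # 7272576 lines, i.e. about 7.3M
```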

I'll have to try chopping down the original VBI file and running it again. I did about 2.5 hours of recording to get a file that size; maybe I over-egged it a bit!

ali1234 commented 8 years ago

I might be wrong about the specifics actually. But for sure it needs a lot of RAM and will take a long time.

ali1234 commented 8 years ago

Here is a braindump about how training works:

https://github.com/ali1234/vhs-teletext/blob/master/TRAINING.md

(I wrote it mostly from memory so it might be wrong.)