c0pperdragon / Amiga-Digital-Video

Add a digital video port to vintage Amiga machines

Atari 8 bit #5

Closed c0pperdragon closed 3 years ago

c0pperdragon commented 4 years ago

Created a new dedicated thread. The previous one was getting too long already.

My profile is now this: Atari_800_14Mhz.txt

As I can see from the comments in my palette generation program, I had unsuccessfully tried to create a palette from an algorithm. That just did not look good, so I switched over to a pre-made one. But that one had color issues as well.
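For reference, an algorithmic palette of this kind is usually built by placing the 15 hues on a circle in YUV space and converting to RGB. A rough sketch of that approach (the start angle, hue step, saturation and luma ramp below are just placeholder assumptions, not the values from my program - tweaking exactly these constants is what never looked right):

```python
# Rough sketch of an algorithmic GTIA palette (YUV hue circle -> RGB).
# All constants are placeholder assumptions.
import math

def make_palette(hue_step_deg=25.7, start_deg=-58.0, saturation=0.3):
    palette = []
    for hue in range(16):          # 0 = grey, 1..15 = colours around the circle
        for lum in range(16):      # 4-bit luminance
            y = lum / 15.0
            if hue == 0:
                u = v = 0.0
            else:
                angle = math.radians(start_deg + (hue - 1) * hue_step_deg)
                u = saturation * math.cos(angle)
                v = saturation * math.sin(angle)
            # Standard YUV -> RGB conversion, clamped to 0..255
            r = y + 1.140 * v
            g = y - 0.395 * u - 0.581 * v
            b = y + 2.032 * u
            palette.append(tuple(int(max(0.0, min(1.0, c)) * 255) for c in (r, g, b)))
    return palette   # 256 entries, index = hue * 16 + lum
```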

I have now downloaded a bunch of different variants and will see which one matches my TV best.

By the way: I just spotted a glitch in my firmware. So far it appears in only one demo that messes with sprites...

c0pperdragon commented 4 years ago

After doing some calculations, it turns out that 360 pixels take about 50.8us on a PAL machine, which is very close to what I am showing on the C64 (400 pixels). This value is also pretty close to the PAL video timing itself, which specifies 52us of visible screen area. The current firmware only shows 344 pixels - probably some earlier tweak. I will extend this to 360, but I need to make the horizontal shift more sensible. There is something weird going on with the way the first two columns of the BASIC screen are intentionally left blank (to avoid the overscan area, as I once heard). Maybe I should center the remaining 38 columns in the middle of the visible area and adjust the sync accordingly. I will need to experiment with this a bit.
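A quick back-of-the-envelope check (the dot-clock values are the commonly quoted ones, not taken from the firmware):

```python
# Rough sanity check of the visible-width timing (assumed clock values):
# PAL Atari hi-res dot clock ~7.0938 MHz (2x the 3.5469 MHz machine clock),
# PAL C64 dot clock ~7.8820 MHz.
ATARI_PAL_DOT_CLOCK = 7.0938e6   # Hz
C64_PAL_DOT_CLOCK   = 7.8820e6   # Hz

print(360 / ATARI_PAL_DOT_CLOCK * 1e6)  # ~50.75 us for 360 Atari pixels
print(400 / C64_PAL_DOT_CLOCK * 1e6)    # ~50.75 us for 400 C64 pixels
# Both sit comfortably inside the ~52 us active line of PAL video.
```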

Anyway, this will surely not eliminate the artefact in the Reharden demo. But you can always crop this on the RGBtoHDMI.

IanSB commented 4 years ago

Anyway, this will surely not eliminate the artefact in the Reharden demo. But you can always crop this on the RGBtoHDMI.

As long as it's on the composite output as well, it's faithfully reproducing the output of the Atari, so that's OK.

c0pperdragon commented 4 years ago

I finally realized that the ANTIC actually tells the GTIA chip exactly which areas of the screen are to be blanked, so I now just use this information as well. The visible screen size is quite wide now, and many bugs in demos become quite apparent. But this is faithful to the composite video output, so I keep it for the 288p mode. Upscalers (including the RGBtoHDMI) can easily crop away the border garbage. In the integrated 576p line-doubler mode, on the other hand, I am cropping the output, because this mode is intended to be used directly on TVs without the ability to do more post-processing. I have fine-tuned this to match the scroller of the "Reharden" demo. With a width of 340 pixels I can get rid of the garbage at the right as well as make the left edge perfectly straight. This setting is also perfectly sufficient for all other demos I have tested.
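Conceptually the 576p crop is nothing more than centring a fixed 340-pixel window inside the wider captured line. A sketch for illustration only (the capture width and the names here are assumptions, not the firmware's values):

```python
# Illustrative only: the 576p line-doubler output is a fixed 340-pixel window
# centred in the (wider) visible line delivered via the ANTIC blanking info.
OUTPUT_WIDTH = 340

def crop_window(capture_width):
    """Return (first, last+1) pixel indices of the centred 340-pixel window."""
    left = (capture_width - OUTPUT_WIDTH) // 2
    return left, left + OUTPUT_WIDTH

print(crop_window(376))   # assumed capture width -> (18, 358)
```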

c0pperdragon commented 4 years ago

The change also shifted the relative position of the sync signal, and this seems to confuse the RGBtoHDMI: the colors are all wrong now. I will fix this. Release 1_3 was a bit premature, it seems...

c0pperdragon commented 4 years ago

Sorry, I just realized that with the new way of handling sync and image generation, it would be pretty awful to change the behavior on my side. Maybe you can fix this in the RGBtoHDMI? The relative position of the sync and the pixel data has changed by one pixel.

c0pperdragon commented 4 years ago

No, I was mistaken. The sync shifted by two pixels, so everything should work as before. Will investigate...

c0pperdragon commented 4 years ago

LOL! I had just swapped the Pb and Pr cables, which is why the colors were off! So everything is perfect, actually.

IanSB commented 4 years ago

Maybe you can fix this in the RGBtoHDMI? The relative position of the sync and the pixel data has changed by one pixel

You can adjust the delay value in the sampling menu to fix that

IanSB commented 4 years ago

I've tried the new firmware with the increased visible area and it works OK. As you mentioned, more artifacts are visible, and while tweaking the geometry settings I found a bug in the Atari capture code that calculated the H-offset incorrectly. I have fixed this and made some other updates, so you can now use preference menu settings to crop the screen area rather than adjusting the geometry settings. Here is the updated version (full SD card file set): AtariTest_v0.04.zip

Here are some examples of the settings:

Normal full capture showing edge artifacts (at least I assume that's the case, I don't have a composite output wired up yet): capture52

This has "Crop Overscan" set to 40% in the preferences menu capture67

(In this case, overscan is the area between the minimum and maximum sizes in the geometry menu. Crop 0% = use the max area, Crop 100% = use the min area, with other values in between.)

The capture below has the "V Adjust 625<>525" setting turned on. This allows PAL sources to be displayed with NTSC aspect ratio and NTSC sources to be displayed with PAL ratio: capture58

c0pperdragon commented 4 years ago

Using these options I have set the minimum width to 320 and the maximum to 360. Cropping by 60% gives just the correct size to work with the mentioned demos. But somehow this cropping percentage gives weird results: the vertical cropping changes in a very unpredictable way. Maybe it would be better to have separate options for vertical and horizontal cropping? The requirements may be very different in different circumstances. Often it is just the horizontal overscan that contains broken stuff.

IanSB commented 4 years ago

The vertical cropping changes in a very unpredictable way.

It's trying to maintain the same aspect ratio while also rounding vertical sizes to the nearest 2 and horizontal to the nearest 8 pixels.

When using interpolated scaling, the maximum h/v size is used as the source area and that is then scaled to fill the 4:3 area of the screen so the max area should have the right number of pixels for a 4:3 image. When you use the overscan in this mode it crops off the border but then the cropped image is scaled to the full 4:3 screen so it is effectively a zoom option.

The aspect ratio of the min area usually differs from the aspect ratio of the max area so it has to scale the cropping to cope with that and maintain the aspect ratio of the max area.
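As a rough illustration of that behaviour (this is not the actual RGBtoHDMI code; only the rounding rules described above are taken from the real behaviour, everything else is an assumed simplification):

```python
# Illustrative sketch of the min/max overscan crop, not the actual RGBtoHDMI code.
# Horizontal sizes round to a multiple of 8, vertical sizes to a multiple of 2,
# and the vertical crop follows the horizontal one to keep the max-area aspect ratio.

def crop_size(min_h, max_h, min_v, max_v, crop_pct):
    """Return (h, v) for a 'Crop Overscan' percentage (0 = max area, 100 = min area)."""
    h = max_h - (max_h - min_h) * crop_pct / 100
    h = int(round(h / 8)) * 8                    # horizontal: nearest 8 pixels
    v = max_v * h / max_h                        # keep the max-area aspect ratio
    v = int(round(v / 2)) * 2                    # vertical: nearest 2 lines
    return h, max(min_v, min(max_v, v))

print(crop_size(320, 360, 200, 240, 60))  # e.g. -> (336, 224)
```

This also shows why the vertical result can look unpredictable: it is derived from the rounded horizontal size rather than set directly.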

IanSB commented 4 years ago

I put the original 3-pin 3.3V regulator (MCP1754ST-3302E/CB) I was using back on the board and that works fine, so it looks like the problem was entirely the FPGA sampling point.

c0pperdragon commented 4 years ago

That is good to know, even if it proves that we were chasing the wrong problem.
Anyway, I have already changed the design files to use the same regulator as on the C64 board, as that one is working so flawlessly.

IanSB commented 4 years ago

That is good to know, even if it proves that we were chasing the wrong problem. Anyway, I have already changed the design files to use the same regulator as on the C64 board, as that one is working so flawlessly.

When I tested with that regulator, it had less noise than the MCP1754ST-3302E/CB, so that's probably a good idea.

bwmott commented 3 years ago

@c0pperdragon

I've been reading through this thread on Atari 8-bit support for RGBtoHDMI. I was wondering whether, instead of using the YUV converter board, it might be possible to build an adapter board that takes the 4 bits of LUMA from the GTIA (pins LUM0, LUM1, LUM2, LUM3) along with a circuit that reconstructs the 4 bits of CHROMA by looking at the delay between the GTIA COLOR and OSC pins. Looking through some of the technical details on the GTIA makes it seem like this might be possible:

http://ftp.pigwa.net/stuff/collections/atari_forever/www/www.atari-history.com/archives/gtia.pdf

Screen Shot 2021-03-07 at 9 49 47 PM

Screen Shot 2021-03-07 at 9 50 32 PM

Given the 4 bits of LUMA and the reconstructed 4 bits of CHROMA, the pixel color could be determined using a palette lookup. I'm not sure how hard it would be to reconstruct the 16 CHROMA values by looking at the delay between the signals. In the table above, each increase in the CHROMA value results in 35ns of additional delay. Any thoughts on whether decoding the CHROMA from the delay would be possible with a "simple" circuit?
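Conceptually the decoding step itself is just a quantisation of the measured delay into 35ns buckets. A toy sketch of that idea (the 35ns step comes from the table above; everything else, including how the delay would actually be measured in hardware, is an assumption):

```python
# Toy model of the proposed chroma decoding: quantise the measured delay
# between the GTIA COLOR and OSC signals into 35 ns steps.  Actually measuring
# that delay in hardware is the hard part and is not modelled here.
CHROMA_STEP_NS = 35.0

def chroma_from_delay(delay_ns):
    """Map a COLOR-vs-OSC delay (in ns) to a 4-bit chroma value 0..15."""
    return max(0, min(15, round(delay_ns / CHROMA_STEP_NS)))

def atari_color(luma_bits, delay_ns, palette):
    """Combine 4-bit luma and reconstructed chroma into a palette lookup."""
    return palette[(chroma_from_delay(delay_ns) << 4) | (luma_bits & 0x0F)]
```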

c0pperdragon commented 3 years ago

This decoding of the chroma information was actually my first approach to the matter. After many, many attempts I still could not find a reliable way to do this. The signal produced by the GTIA is just too wonky and there are always some mis-detected pixels. This forced me to use the digital information exclusively and reconstruct the color from that.

But maybe there is a more lightweight approach to this in conjunction with the Pi: use a cheaper, 5V-tolerant CPLD to implement just enough logic to reconstruct the chroma signals, and take the luma signals directly from the GTIA. Maybe this could fit on an adapter board that goes between the GTIA and its socket and also provides a 40-pin female socket to plug the Pi into.

I have no experience with the CPLD in question and don't know if the required amount of logic will actually fit. Also the physical size of the CPLD may be a problem. I need to do more checks on the feasibility of that.