My interpretation of this is that it basically halves the horizontal luma (real pixel) resolution and halves the vertical chroma (colour) resolution.

mrbombermillzy wrote (Tue Dec 27, 2022 8:39 pm):

In more detail: Amiga had a frame buffer mode that displayed 736x483 pixels, but only 4-bits per pixel (16 colors). We used that mode, but re-interpreted the data. We combined alternate pixels so that we had 8-bits per pixel at half the rate. Even (I think) lines stored data as Y + (R-Y), Y - (R-Y), etc. [Luma +/- red color difference value, alternating] at a 7.159MHz sample rate. Odd lines stored data as Y + (B-Y), Y - (B-Y), etc. [Luma +/- blue color difference value, alternating], logically shifted by half a 7.159MHz sample. So every line would contain every other raw sample for an NTSC composite video signal. In order to reconstruct the missing data, it would generate the Y component by averaging the values left and right of the missing sample, and the chroma component by subtracting the two adjacent samples from the previous line. The final value was created by adding these values together.
...
Instead of a frame of 736x483 pixels of 16 colors, we got an effective resolution of about 368x483 pixels of full NTSC type color.
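To make that reconstruction step concrete, here's a minimal C sketch of how I read the quoted description. The function name, the divide-by-two scaling and the clamping are my guesses, not anything taken from the actual hardware:

```c
#include <stdint.h>

/* Reconstruct one missing composite sample at position x, given the
 * current line (which stores only every other raw sample) and the
 * fully reconstructed previous line. */
static uint8_t reconstruct_sample(const uint8_t *cur, const uint8_t *prev, int x)
{
    /* Luma: average the known samples left and right of the gap. */
    int y = (cur[x - 1] + cur[x + 1]) / 2;

    /* Chroma: the two adjacent samples on the previous line are
     * Y + C and Y - C, so their difference cancels the luma and
     * leaves twice the chroma component. */
    int c = (prev[x - 1] - prev[x + 1]) / 2;

    /* Final value: luma plus chroma, clamped to 8 bits. */
    int s = y + c;
    if (s < 0)   s = 0;
    if (s > 255) s = 255;
    return (uint8_t)s;
}
```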
This would actually work very well with old component or composite video (he mentions NTSC, but PAL would be the same, I think), as the colour resolution is inherently lower than the spatial (pixel) resolution. Think of how the old Spectrum colour attributes worked, but in slightly better fidelity.
I'm not convinced this would work so well with modern RGB displays, but you can fool the eye pretty well in the colour space, so it's possible.
Of course we don't have a 640-wide 16-colour mode, so we'd be dropping down to 160x200 pixels and, I think, an effective 160x100 colour grid with about 32k colour options on average.
This relies on the unit remembering the previous scan line in order to do the maths, and on the ST-side software calculating the appropriate values for either the even or odd lines.
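A rough sketch of what that per-pixel ST-side calculation might look like, assuming we already have Y, R-Y and B-Y values to hand (all names hypothetical, and the half-sample shift of the odd lines is ignored here):

```c
#include <stdint.h>

/* Clamp an intermediate value into the 8-bit range. */
static uint8_t clamp8(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

/* Encode pixel pair i of a line pair, following the quoted scheme:
 * even lines alternate Y + (R-Y), Y - (R-Y); odd lines alternate
 * Y + (B-Y), Y - (B-Y). */
static void encode_pair(uint8_t *even_line, uint8_t *odd_line, int i,
                        int y, int r_y, int b_y)
{
    even_line[2 * i]     = clamp8(y + r_y);
    even_line[2 * i + 1] = clamp8(y - r_y);
    odd_line[2 * i]      = clamp8(y + b_y);
    odd_line[2 * i + 1]  = clamp8(y - b_y);
}
```

Even in this simplified form it's several adds, subtracts and clamps per pixel, which is where my overhead concern comes from.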
The overhead of that ST-side calculation worries me. I'm not sure it would take any less computational effort than palette switching. It would be fine for static images, but then so is palette switching, albeit with only 512/4096 colour options then.
Remembering the previous scan line isn't really an issue with modern technology, but I still think the bottleneck would be ST-side.
For example, a scandoubler must remember a line at the very least; an upscaler likely a full frame. These are already doing half the work, so output manipulation is almost trivial, but where do you want to spend the ST time?
- You can do 8 bit (true colour) 160x200 by just combining adjacent pixels (not much overhead; see the sketch after this list);
- You can do 8 bit 320x200 by having two framebuffers toggled each VBL for half the frame rate (more software overhead in maintaining the triple buffers now);
- You can do something similar to what this is doing, which is a combination of the first approach with maths on adjacent scanlines (quite a bit of ST overhead; each 'pixel' has to be recalculated twice, I think);
- You could do a combination of the first and the second options and have 160x200 and up to 65k palette at half frame rate;
- Hell, you could do this for high-res mode and have 640x400 x 16 colours at 18Hz using an Amiga-style planar setup (four separate bitplanes).
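For the first option, the combining step itself is trivial once you have chunky pixel values. A minimal sketch (hypothetical names, and it assumes the planar ST framebuffer has already been converted to one byte per 4-bit pixel, which is the real cost):

```c
#include <stdint.h>

/* Combine adjacent 4-bit pixels into 8-bit values: high nibble from
 * the left pixel, low nibble from the right. Assumes chunky input,
 * i.e. the planar-to-chunky conversion has already been done. */
static void combine_pairs(const uint8_t *src4, uint8_t *dst8, int n_pairs)
{
    for (int i = 0; i < n_pairs; i++)
        dst8[i] = (uint8_t)((src4[2 * i] << 4) | (src4[2 * i + 1] & 0x0F));
}
```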
BW