First, to keep this "on topic": we can assume the original poster really is dealing with a 23.976 fps progressive stream, i.e. frame-based material, and it should therefore be encoded as progressive, with the zig-zag scan order for the 8x8 DCT coefficient matrix.
And since this thread has drifted to the interlaced DivX/XviD 4:2:0 YV12 subject, I fished some material out of the web and other forums.
If you want to see interlaced 4:2:0 YV12 XviD (DivX behaves the same) without postprocessing, you don't need to grab the clip at 100fps.com: my two pics earlier in this thread, where the image is upscaled 3x, give you a good comparison.
Gentlemen, the problem here is not DivX or XviD in general; the problem is interlaced 4:2:0, and therefore interlaced YV12 (4:2:0 YV12 is the MPEG-4 standard). And it is just as much an issue when capturing interlaced sources to MPEG-2! That means NTSC telecined captures, and also PAL captures where Hollywood movie broadcasts have been through a PAL speedup (23.976 to 25.000, plus the dubbed PAL-country audio) AND a phase shift (which shows up as an interlaced "look").
Watch this:
http://www.mir.com/DMG/chroma.html
In short:
YUY2 = half horizontal but full vertical color resolution
YV12 = half horizontal AND half vertical color resolution
(and since interlaced video needs full vertical chroma resolution to be field-based ... that is where the chroma bug with interlaced material comes from)
That link also shows WHY MPEG-1 can't be encoded as interlaced: in MPEG-1, the chroma samples are sited BETWEEN the luma samples, which makes field-based chroma impossible even at full height.
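A quick sketch of what that halving means in practice (a hypothetical example of mine, not from the linked page; frame size and function name are illustrative):

```python
# Sketch: chroma plane sizes under 4:2:2 vs 4:2:0 subsampling.
def chroma_plane(width, height, subsampling):
    """Return (w, h) of one chroma plane (U or V) for a luma plane of width x height."""
    if subsampling == "4:2:2":      # YUY2/UYVY: halve horizontally only
        return width // 2, height
    if subsampling == "4:2:0":      # YV12: halve horizontally AND vertically
        return width // 2, height // 2
    raise ValueError(subsampling)

w, h = 720, 480
print(chroma_plane(w, h, "4:2:2"))   # (360, 480): one chroma line per luma line
print(chroma_plane(w, h, "4:2:0"))   # (360, 240): chroma shared across line pairs

# Interlaced material is really two 720x240 fields from different moments in
# time.  With 4:2:2 every luma line keeps its own chroma line, so each field
# stays self-contained; with 4:2:0 one chroma sample is shared by two luma
# lines, and if those lines belong to different fields the colors smear --
# the interlaced chroma bug described above.
```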
And here is an explanation by LigH at doom9/Gleitz.de, translated from the German original:
Quote:
Originally Posted by LigH
Perhaps in a bit more detail (we discussed quite similar things e.g. in this post): All (all!) MPEG-compatible video formats (no matter whether MPEG-1-, MPEG-2- or MPEG-4-compatible) store, per the standard, brightness and color-difference information with the subsampling configuration 4:2:0, (nearly) the same one would use for uncompressed YV12 AVIs; other configurations are partly permitted too, but must be selected separately, and some of them are not DVD-compatible.
_ An uncompressed RGB24 AVI needs 8 bits per pixel for each of the three components (red, green, blue): 3 components * 8 bits / 1 pixel = 24 bits per pixel. So far, simple. Now a little more complicated: with 4:2:2 subsampling (as in YUY2 or UYVY) every single pixel has its own brightness value Y, but two horizontally adjacent pixels share one color-difference component U and one V. That means the smallest unit one can store in (groups of) complete 8-bit bytes is a pair of adjacent pixels. That pair needs 2 bytes for the individual Y values, plus 1 byte each for the shared U and V values; the smallest storable unit is thus 2+1+1 = 4 bytes. That makes 4 bytes * 8 bits / 2 pixels = 32 bits / 2 pixels = on average 16 bits per pixel. And finally, more complicated still: with 4:2:0 subsampling (as in YV12) every single pixel has its own brightness value Y, but four pixels in a 2x2 square share one U and one V component. The smallest storable unit is therefore a group of four pixels, needing 4 bytes for the individual Y values plus 1 byte each for the shared U and V values: 4+1+1 = 6 bytes. That makes 6 bytes * 8 bits / 4 pixels = 48 bits / 4 pixels = on average 12 bits per pixel.
__ And why does DivX announce "24 bits" while XviD announces "12 bits"? "No idea - ask the programmers!" Perhaps DivX is a little imprecise and announces RGB24 as its preferred decoding format, although its natural uncompressed format is actually closest to YV12 - simply because practically all video-processing programs can handle uncompressed RGB video without problems. XviD is perhaps more precise here; as noted in the post linked above, not every program is able to process planar formats. The XviD codec therefore also offers itself as a "codec" for YV12, in order to support such (in this respect 'incapable') programs and provide a conversion from YV12 into the desired format. (Recently DivX does this too...) Or it depends on what the DivX/XviD codec was originally fed (i.e. whether RGB24, YUY2 or YV12 was compressed to MPEG-4). And then it would actually depend on which program was used and how it was configured (e.g. VirtualDub or -Mod in Fast Recompress or Full Processing mode). In that case the question about quality differences would be answered like this: "the fewer conversions between formats, the better" - so the winner would be YV12 (YUY2 would require averaging and interpolation, RGB24 even a complete conversion between RGB and YUV, where a lot of accuracy is lost!).
__ Programs that want to decode a video compressed with a VfW codec (e.g. with DivX or XviD in the MPEG-4 format) can, by the way, pass the "Image Compression Manager" (the technology that has existed since Windows 3.x with VfW 1.x) a wish-list of formats into which they would like it decoded - in order of preference. Some programs ask the ICM only for RGB24; others would prefer YUY2 or UYVY, then RGB24; only a few (like VirtualDubMod) also allow YV12. That is probably related to the fact that YUY2 and UYVY are "packed" just like RGB24, while YV12 is "planar" and must therefore be handled completely differently. Details on that in the post mentioned above...
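LigH's byte arithmetic can be double-checked with a few lines (a minimal sketch; the helper name is mine, the numbers are his, assuming 8 bits per sample throughout):

```python
# Average bits per pixel for the smallest storable pixel group.
def avg_bpp(y_samples, u_samples, v_samples, pixels):
    """8-bit samples shared across `pixels` pixels -> average bits per pixel."""
    return (y_samples + u_samples + v_samples) * 8 / pixels

print(avg_bpp(1, 1, 1, 1))  # RGB24-style, 3 samples per pixel: 24.0
print(avg_bpp(2, 1, 1, 2))  # 4:2:2 (YUY2): 2 px share one U and one V: 16.0
print(avg_bpp(4, 1, 1, 4))  # 4:2:0 (YV12): 2x2 px share one U and one V: 12.0
```

So the "12 bits" that XviD reports is exactly the average storage cost of 4:2:0, while "24 bits" matches RGB24.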