
WarbirdVideos 11-28-2019 01:09 PM

Odd/even fields vs. TFF/BFF?
 
Whatever happened to odd/even fields?

With odd/even, the TV set (decoder) actually showed two separate fields (1, 3, 5... and 2, 4, 6...) to create a frame (30 fps, b&w). But do the codecs actually scan lines 1, 2, 3, 4, etc. and keep them separate, or did they just change the terminology? And why do some codecs begin with BFF, assuming "bottom" = scan lines 2, 4, 6...?

Thanks,
Steve

msgohan 11-28-2019 07:21 PM

The only codec that comes to mind which specifically requires BFF is DV. I don't know why they chose it.

Most allow you to specify it yourself.

I don't understand what you're asking with the other questions. :question:

WarbirdVideos 11-29-2019 07:02 PM

I suppose the real question is: Why would a scan begin with the 2nd scanline (BFF) when #1 is the first scan line (TFF)? Also, why is there an option to choose TFF or BFF when there is so little info on which one to choose? Most people say something like "I think it's TFF, but run a test and look for the smoothest motion," or something else equally ambiguous. Is there a definitive list somewhere which clearly states interlaced codecs' field orders?

lordsmurf 11-29-2019 07:07 PM

Everything but DV is TFF.

msgohan 11-29-2019 08:59 PM

Quote:

Originally Posted by lordsmurf (Post 65053)
Everything but DV is TFF.

jjdd posted samples showing that at least one driver version for his "ATI TV Wonder USB 2.0" spits out PAL BFF: http://www.digitalfaq.com/forum/vide...html#post64266

Quote:

Originally Posted by WarbirdVideos (Post 65052)
Why would a scan begin with the 2nd scanline (BFF) when #1 is the first scan line (TFF)?

https://lurkertech.com/lg/fields/#wh...anceselectable
https://lurkertech.com/lg/video-systems/#debacle
https://www.dvmp.co.uk/digital-video.htm (particularly the Field Order section which attempts to explain how spatial order is not necessarily temporal order)

Ambiguous terminology makes these things even more complicated than they already are on a technical level.
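
For what it's worth, here's a tiny Python/numpy sketch of the spatial-vs-temporal distinction those links go into. The frame and line numbering are made up purely for illustration:

Code:

import numpy as np

# Toy 6-line "frame": the value of each row is just its line number.
frame = np.arange(6).reshape(6, 1)

top_field = frame[0::2]      # spatial lines 0, 2, 4 (lines 1, 3, 5 if counting from 1)
bottom_field = frame[1::2]   # spatial lines 1, 3, 5 (lines 2, 4, 6 if counting from 1)

# The spatial order never changes: top-field lines always sit above their
# bottom-field neighbours. The TFF/BFF flag only says which field came
# earlier in time.
tff_playback = [top_field, bottom_field]   # TFF: top field sampled/shown first
bff_playback = [bottom_field, top_field]   # BFF: bottom field sampled/shown first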

Quote:

Is there a definitive list somewhere which clearly states interlaced codec's field orders?
The device that does the digitizing is what determines the field order of the capture, not the codec.

Quote:

Also, why is there an option to choose TFF or BFF when there is so little info on which one to choose? Most people say something like "I think it's TFF, but run a test and look for the smoothest motion," or something else equally ambiguous.
You're locked into what your device samples. Your only options are specifying it correctly, incorrectly, or modifying the capture afterward and losing 1 or more lines.

Wrong field order is a difficult thing to miss, if you view the video correctly.
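
A quick numpy sketch (again, just illustrative, not any capture tool's behavior) of why shifting or cropping by a single line forces you to change the field order you declare:

Code:

import numpy as np

frame = np.zeros((8, 1), dtype=int)
frame[0::2] = 1   # temporally-first field (TFF source: top field on lines 0, 2, 4, ...)
frame[1::2] = 2   # temporally-second field on lines 1, 3, 5, ...

cropped = frame[1:]            # drop a single line from the top
new_top_field = cropped[0::2]  # now made of "2" samples: the later field

# The top field of the cropped frame is the temporally-later one, so a frame
# that was TFF now has to be flagged (and treated) as BFF.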

lordsmurf 11-30-2019 10:40 AM

3 thoughts:

1. You made this post in the EDIT/ENCODE forum. When editing/etc. video, you must pay attention to TFF/BFF. Everything but AVI can be run through a codec checker, but also realize it can be wrong. You can have progressive in interlace, or interlace in progressive (BAD!), or flopped fields because the person didn't know what he was doing (BAD!). You need to preview those in a player like VLC and see how it reacts to 2x Yadif (see the sketch after these three points). Or better yet, watch on an interlaced CRT. As msgohan says, it'll be obvious if wrong.

2. In terms of capture, how/why does NOT matter. It just is. So do what is required. (Almost?) everything uses TFF, aside from DV cards/boxes/cameras that need BFF. No idea why, and I've never seen a satisfactory answer. But when you understand all the dumb stuff DV did/does, all the terms DV misuses/abuses (examples: "capture", "audio lock"), then you'll just roll your eyes at the nonsense. Your thought will be "figures". DV was retconned to mean "digital video" (sort of like how DVD was "versatile", not "video"; I always forget DV's original term), but it could be "dumb video" as far as I'm concerned.

3. Don't overthink easy aspects of video, unless you're wanting to be an engineer, or software/filter dev. Video has enough of a learning curve without making things harder on yourself for no reason.
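
For point 1, here's a rough sketch of that field-motion check done as a VapourSynth script instead of in VLC. It assumes VapourSynth with the ffms2 source plugin installed, and the file name is just a placeholder:

Code:

import vapoursynth as vs
core = vs.core

clip = core.ffms2.Source("capture_sample.avi")

# Guess TFF first. SeparateFields halves the height and doubles the rate;
# if the guess is right, motion steps forward smoothly field by field.
# If it's wrong, motion visibly jerks back and forth every other frame.
fields = core.std.SeparateFields(clip, tff=True)
fields.set_output()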

hodgey 11-30-2019 11:41 AM

Quote:

But do the codecs actually scan lines 1, 2, 3, 4, etc and keep them seperate (?), or did they just change the terminology?
The output data from analog video decoder chips is typically one field after another (otherwise they would have to buffer fields). Codecs may or may not bunch pairs of fields together, or in some cases store two fields as one progressive frame. If they are stored as fields, they are then typically weaved together when decoded from a video codec or, in the case of a capture device, from the driver output.
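
In code terms, the weave step is just interleaving two half-height buffers. A minimal numpy sketch, with sizes picked for a 480-line NTSC frame purely as an example:

Code:

import numpy as np

field_top = np.full((240, 720), 1, dtype=np.uint8)      # e.g. the top field
field_bottom = np.full((240, 720), 2, dtype=np.uint8)   # the bottom field

frame = np.empty((480, 720), dtype=np.uint8)
frame[0::2] = field_top      # top-field lines land on rows 0, 2, 4, ...
frame[1::2] = field_bottom   # bottom-field lines land on rows 1, 3, 5, ...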

What codecs do internally varies a bit:

I believe lossy formats that support interlaced video (e.g. MPEG-2, DV, etc.) will encode on a field basis internally (when in interlaced mode), as otherwise one may get weird artifacts, though they may have a concept of storing pairs of fields together in the data stream.

Now, the specification documents for DV, MPEG-2 and other industry formats are typically hidden behind paywalls costing a lot of $$$, so I haven't looked at how it's done in detail. One could also dig through source code for the exact details, but that's a bit more tedious.

As for lossless codecs: Huffyuv assumes video taller than 288 pixels is interlaced, and then stacks every other line side by side. Lagarith is derived from Huffyuv, so it may or may not do the same, while I believe UT Video and FFV1 just store whole frames and do not have any concept of interlaced video at all.
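
To illustrate why interlace-aware codecs predict within a field rather than across neighbouring lines, here's a toy numpy example (numbers invented): when the two fields were sampled 1/60 s apart and something moved, a within-field neighbour is a much better predictor than the line directly above.

Code:

import numpy as np

frame = np.zeros((480, 720), dtype=np.int16)
frame[0::2] = 100   # top field
frame[1::2] = 160   # bottom field, captured later, different content

# Predicting each line from the line directly above keeps crossing fields:
across_lines = np.abs(np.diff(frame[:, 0])).mean()      # 60.0 for this toy frame

# Predicting within one field stays inside a single moment in time:
within_field = np.abs(np.diff(frame[0::2, 0])).mean()   # 0.0 for this toy frame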

WarbirdVideos 11-30-2019 06:31 PM

Thanks for the info, guys - especially the links provided by msgohan for the deep dive into this! You are correct, Lordsmurf, I do not want to be an engineer or a coder, but I need to get it right because I have around 300 tapes on Hi8, DV, DVCAM and HD. As part of my Television Production degree back in '75, I had to complete a Broadcast Engineering program and be able to troubleshoot and successfully repair TV sets and other equipment. But analog was a mature technology, especially compared to the ever-evolving technology of today.

My ultimate goal here is to obtain the best "source" footage extracted from the various tape formats that I shot over the past 25 years. That source footage will be the archived master of the historically significant material of WWII vets. I figure someday computers will be able to take low-resolution Hi8 and DV footage and turn it into 4K video and beyond.

My goal is to get it online, and also send the raw source files to a place where future generations can utilize it. So for now, I'll be deinterlacing it with QTGMC and upscaling it for editing. I'll likely denoise it first, but I need to run tests, unless someone has done this and has written it up. I may sharpen it after the upsize, or wait until I do the final output from Vegas Video.
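
For reference, that QTGMC-then-upscale step could look roughly like the following VapourSynth script. It assumes VapourSynth with the ffms2 and havsfunc (QTGMC) plugins installed; the file name, preset, field order and target size are placeholders to test against, not recommendations:

Code:

import vapoursynth as vs
import havsfunc as haf
core = vs.core

clip = core.ffms2.Source("hi8_capture.avi")   # placeholder file name

# QTGMC has to be told the field order of the source; most analog captures
# are TFF, while DV is BFF.
deint = haf.QTGMC(clip, Preset="Slower", TFF=True)

# Upscale after deinterlacing; resizer choice and any denoise/sharpen steps
# are exactly the tests mentioned above.
up = core.resize.Spline36(deint, width=1440, height=1080)
up.set_output()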

Any other guidance will be greatly appreciated!
Steve

