12-22-2016, 09:28 PM
Site Staff | Video
Join Date: Dec 2002
Posts: 14,041
Thanked 2,552 Times in 2,170 Posts
Are we talking about the same thing here?
I'm referring simply to the image above.
That's what masking typically looks like. Sometimes more, sometimes less.
I don't usually approve of auto-cropping 12/12/16/16; I prefer cropping as needed. Sometimes that's more, sometimes less, sometimes not at all.
Note: I see deter mentioned Michael Jackson -- probably some nth generation reference to the many VH threads in recent years, all of which I find 100% pointless. I'm not referring to that junk. I'm ONLY talking about the example of a masked image.
Quote:
That's ridiculous and patently untrue for many viewers with new tv's.
|
No.
Almost all SD and HD TVs mask. Even HD streams have overscan, though it is more like 2% instead of the 5% average. HDTVs tend to crop 4x3 and 16x9 feeds differently. Some cheap/crap sets don't mask anything, and some allow 100% coverage by disabling overscan in the menu. A few models even ship with that option enabled by default, but I've rarely seen those.
FYI: I ghostwrote TV reviews at one point, and I'm not wrong. I hear assertions that "new TVs don't crop" about once per year, for at least a decade now. Those assertions are always false.
Every year, I see articles debunking the "no overscan on HDTV" myth.
A quick random article found via Google: https://www.cnet.com/news/overscan-youre-not-seeing-the-whole-picture-on-your-tv (and I'm no fan of CNET).
Many Blu-ray players, HTPCs, etc, also crop output, based on several factors.
Quote:
Originally Posted by sanlyn
I'm not talking about "seeing" the mask anyway. It's not a great idea ("artistic license" or not) to cut off chunks of images. Ask the creator or an archivist or the owner how they feel about it. This is just mediocre grunt work looking for praise and justification.
|
Quote:
It's obvious from looking at the video that big chunks are missing from the frames.
But not to my videos and not to my clients'. Any tyro can send a video through cheap tricks like this and re-encode ad infinitum. I wouldn't recommend it and don't subscribe to it.
|
Big chunks?
At very most, it's 7% of the edges, usually half that.
There's no other way to restore video along the edges of the frames.
^ Even that statement isn't 100% accurate, because the frame was never 100% image anyway. It was black/dead space (with ugly image borders), closed-caption noise (non-image data leaking into the image area), head-switching noise from analog tapes, tearing-related noise on analog tape, and a few less-seen others.
Once that noise is cropped, you may need to add a few pixels for balance.
Beyond that, it was well known that video overscan existed, so any director/producer/etc. who aligned important elements on the edge of the frame was a non-professional. (Early HD was similar, with important content confined to the 4x3 center for years, with only periphery extending to the edges. Remember, I worked for studios. We had these conversations. My favorite example of this was My Name Is Earl, which had comedy gags purposely viewable only in the HD versions.)
The idea that "archivists" must retain noise is an amusing notion, and I'm not sure how it started. Yes, for some things, that is true. For others, no. The content matters, not the edge of the frames. This isn't the Zapruder film. However, you should always keep pre- and post-restoration copies in archives. You want a good copy to view, and the not-pretty copy for later needs (mostly for new innovations that allow still-better restorations).
Quote:
The same author used the same gimmick to lacerate another already-damaged video posted by the same author elsewhere.
|
Masking isn't a gimmick. Masking is for (a) bitrate control and (b) cleaner viewing on computers. Noise eats bitrate, and that's a concern for distribution.
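In Avisynth terms, masking just paints the noisy edges solid black, which costs almost nothing at encode time. A minimal sketch, with hypothetical edge sizes (measure your own source):
Code:
# black out edge noise without changing the frame size
Letterbox(8, 12, 16, 16) # top, bottom, left, right (hypothetical values)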
You seem to be in a mood lately. Are you okay?
12-22-2016, 11:47 PM
Premium Member
Join Date: Dec 2009
Posts: 324
Thanked 28 Times in 26 Posts
Lord Smurf,
My image is just a .jpg file because your website would not accept the original file format. It is just a blue screen with cropping borders that fits as a general setup for most VHS tapes. Personally I do not mask my recordings, only for online viewing. The way I did this was to use filled borders in VirtualDub, then pull a frame and put blue in the borders. For online viewing you want to make the picture as even as possible: if you take 7 from the bottom, you take 7 from the top; 10 from the sides, and so forth.
Sanlyn,
If you crop the video on VHS to remove the scan lines, you are blowing up the size of the pixels. It works OK sometimes, because most YouTube VHS videos suck anyway. But if you don't want to inflate the real image, then you need to fill borders. The information is not lost or inflated; it is just blacked out.
I do frame restoration; you're a VirtualDub/script guy who is way more detailed than I will ever be. However, when doing video restoration (because I don't have $20,000 to blow on high-end software), to restore damaged video in a segment, let us say 10 fields, chroma key is the best way to do this. You mask out the damaged section and rebuild the frames. I draw them if needed, based on another image. It doesn't destroy the video; it actually fixes the video.
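A rough Avisynth analogue of that patch-and-rebuild idea (not the actual GUI tool chain described here; the frame numbers and mask file are hypothetical):
Code:
# patch a damaged 10-frame span using clean neighboring frames as donor material
src   = last
donor = src.Trim(90, 99)                        # 10 clean frames (hypothetical)
mask  = ImageSource("mask.png").ConvertToYV12() # hand-drawn mask: white = replace
fixed = Overlay(src.Trim(100, 109), donor, mask=mask)
src.Trim(0, 99) ++ fixed ++ src.Trim(110, 0)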
Last edited by deter; 12-22-2016 at 11:58 PM.
12-28-2016, 06:22 AM
Premium Member
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,319 Times in 989 Posts
Quote:
Originally Posted by deter
Sanlyn,
If you crop the video on VHS to remove the scan lines, you are blowing up the size of the pixels. It works OK sometimes, because most YouTube VHS videos suck anyway. But if you don't want to inflate the real image, then you need to fill borders. The information is not lost or inflated; it is just blacked out.
I do frame restoration; you're a VirtualDub/script guy who is way more detailed than I will ever be. However, when doing video restoration (because I don't have $20,000 to blow on high-end software), to restore damaged video in a segment, let us say 10 fields, chroma key is the best way to do this. You mask out the damaged section and rebuild the frames. I draw them if needed, based on another image. It doesn't destroy the video; it actually fixes the video.
|
I understand what you're saying and doing, but you don't understand what I'm saying and doing. I repair edge borders and head-switching noise in Avisynth, not with VirtualDub masking. Cropping in Avisynth does not change pixel size the way you imply. A simple procedure for clearing head-switching noise in Avisynth would be this example, which removes 8 pixels of bottom noise, re-centers the image vertically, and leaves the rest of the original image (including existing borders) intact:
Code:
Crop(0,0,0,-8).AddBorders(0,4,0,4) # crop 8 lines of bottom noise, then pad 4 top + 4 bottom to re-center
This is done so many times, and is posted in so many incarnations in forum posts here and in other forums, that I'm amazed you haven't noticed. This method uses lossless media that does not require resizing and shouldn't involve re-encoding, unless you're trying to work with a lossy original, which is a quality hit to begin with. You can use any method you wish, but I prefer to leave the original core image untouched as much as possible.
I don't have a super PC or $20,000 worth of software. I'm using Windows XP on a home-built PC with a small Gigabyte mATX mobo and an Intel i5 processor purchased on a Super Clearance Day at a local MicroCenter store. My capture PC is a cheap Biostar job with an old 2-core 2GHz AMD CPU, a 12-year-old ATI card, and a setup built from spare parts and components scavenged from discarded PCs. Like many users I have disabled overscan on my HDTVs, and I will not buy a TV if overscan can't be disabled: a TV lacking such basic features tends to have poor picture quality to begin with, and a menu control system that doesn't allow precise grayscale calibration. There's already enough low-quality and sloppy work out there without me being forced to accept or even pay for more of it.
12-28-2016, 06:40 AM
Site Staff | Video
Join Date: Dec 2002
Posts: 14,041
Thanked 2,552 Times in 2,170 Posts
VirtualDub's "resize" filter doesn't resize if you set the new size to the same as the source.
vdub-resize.jpg
Avisynth is just another way to do the same thing.
Depending on my workflow for the project, I'll use either. Same difference.
IMPORTANT: Always remember to mask/crop in multiples of 2 if interlaced!
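A sketch with hypothetical values (all multiples of 2; for interlaced YV12, vertical values in multiples of 4 are safer still, to keep field-paired chroma aligned):
Code:
Crop(8, 4, -8, -4)     # left, top, right, bottom: every value a multiple of 2
AddBorders(8, 4, 8, 4) # pad back to the original frame size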
08-16-2017, 04:51 PM
Free Member
Join Date: Nov 2015
Posts: 54
Thanked 10 Times in 8 Posts
Here is an update on how I am doing today. I do not use Handbrake anymore; I did not find it so bad, but yes, Avisynth with the QTGMC script is better. Here is what I did: I installed Avisynth and the QTGMC script. I use MeGUI to run an Avisynth script. I use the "One-Click" mode (I find it easier). I insert the Avisynth script in the "Avisynth Profile" (see below).
Some of my MeGUI settings:
Choose One Click, Config, One Click configuration Dialog
Video tab:
Encoder x264: *scratchpad*
"Don´t encode video" disabled
"Force key frames for chapter marks" enabled (whatever that means)
Output Resolution (Max. Width) = 720 (PAL)
"Autocrop" disabled
"Anamorph output" enabled (this is important if the video is not resized in the Avisynth script)
"Automatic Deinterlacing" disabled
Avisynth profile (choose Config to make a profile), example:
<input>
AssumeTFF() # For MiniDV use BFF
ConvertToYV12(interlaced=true)
QTGMC( Preset="Faster")
SelectEven() # Add this line to keep original frame rate, leave it out for smoother doubled frame rate
Crop(12,4,-12,-12) # left, top, right, bottom
AddBorders(12,8,12,8) # left, top, right, bottom
# Levels(16, 1, 235, 0, 255, coring=false)
# LimitedSharpenFaster()
# Spline16Resize(960,720)
The last three lines are disabled commands (by the "#" mark at the beginning of the line) and normally not used.
Crop values can be adjusted if the video needs it, e.g. 8 pixels cropped left and 16 pixels cropped right, as in the sketch below.
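For example (hypothetical values; keep the total crop equal to the total borders so the frame size is unchanged):
Code:
Crop(8,4,-16,-12)     # 8 left + 16 right cropped (24 px total)
AddBorders(12,8,12,8) # 12 + 12 added back: frame stays 720x576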
Audio tab: Nero AAC: *scratchpad*
Output tab: MP4 enabled
Avisynth QTGMC sample video
https://www.youtube.com/watch?v=jsVYJ1pRlXY
This is an example of a bad-quality VHS tape I have captured. After 0:50 the video is processed with the Avisynth script above.
I still capture to MPG (MPEG-2) in PAL with the Hauppauge USB-Live2 capture device. (MiniDV is captured to AVI over FireWire.) I have tried capturing to AVI files (much bigger) but could not see any difference. After advice in this thread I tried VirtualDub and HuffYUV capture (on Windows 10 and Windows XP), but I could not make it work. I had problems like jitter, lost frames, inserted frames, and audio out of sync.
Comments appreciated
What about the Megui settings and script?
What about the bitrate of MPG capture? I think it is 6000 kbps ("DVD quality") now. Is it a good idea to use a higher bitrate?
Would AVI capture theoretically be better or faster processed by the QTGMC script?
Last edited by jnielsen; 08-16-2017 at 05:11 PM.
08-20-2017, 10:39 PM
Premium Member
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,319 Times in 989 Posts
Thanks for the update.
Since I don't use Handbrake, MeGUI, or similarly restricted software, I can't comment extensively on them, except for two of your entries:
Quote:
"Force key frames for chapter marks" enabled (whatever that means)
|
Does your video have chapters?
Quote:
"Anamorph output" enabled (this is important if the video is not resized in the Avisynth script)
|
You can't post anamorphic video to YouTube. Sites like YouTube are square-pixel formats only. I assume that by this time you know what square-pixel means. Don't you see what YouTube did to your anamorphic video?
Some notes on your sample script:
Quote:
Originally Posted by jnielsen
Code:
ConvertToYV12(interlaced=true)
|
This line isn't necessary. MPEG2 is already YV12.
Quote:
Originally Posted by jnielsen
Code:
QTGMC( Preset="Faster")
SelectEven() # Add this line to keep original frame rate, leave it out for smoother doubled frame rate
|
SelectEven() does keep the original frame rate, but it does so by throwing away half your frames, causing judder during motion and discarding 50% of the video's resolution. Unfortunately it's necessary for some websites, but it's just another of the many ways that online posting lowers video quality standards for the visually naive.
Quote:
Originally Posted by jnielsen
Code:
# Levels(16, 1, 235, 0, 255, coring=false)
|
I realize that this line is commented-out, but I'd suggest that you keep it that way and avoid using code like this. This code will crush dark detail and clip/distort brights.
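If in doubt about your levels, inspect them before touching anything; Avisynth's built-in histogram will show whether the source already spans 16-235 (a one-line sketch):
Code:
Histogram("levels") # overlay a levels graph before deciding to expand anything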
Quote:
Originally Posted by jnielsen
|
Allow me to qualify that statement. The sample is not a sample of your capture, but a sample of how you processed and re-encoded it, and of how YouTube resized and re-encoded your re-encode. It's not a sample of an original capture, nor is it MPEG2, so not much can be said about questions you might have on the original. Was your original capture a 4:3 image pillarboxed in a 16x9 frame? The aspect ratio of the image in YouTube's version seems slightly off.
Quote:
Originally Posted by jnielsen
I have tried to capture to AVI-files (much bigger) but could not see any difference.
|
Then you should not waste your time with lossless capture. The major difference is that a lossless capture of analog noise has no added digital compression noise or artifacts, and no loss of original data. Removing those artifacts is difficult and reduces detail. Lossy compression is exactly what its name says it is: what comes out of a lossy encode is less than what went in, and you'll never see the original 100% again. Each time you modify and re-encode a lossy capture you lose more data. Lossy captures are a final delivery format, not designed for further modification without degradation. That includes the lossy compressed audio as well, which is likely why your YouTube audio sounded shrill and weird. Whatever the original capture looked like, the YouTube video shows typical signs of data loss through excessive re-encoding.
Quote:
Originally Posted by jnielsen
After advice in this thread I have tried Virtualdub and HuffYUV capture (on Windows 10 and Windows XP) but I could not make it work. I had problems like jitter, lost frames, inserted frames, audio out of sync.
|
Did you use VDub's default capture settings? Did you have a frame-sync TBC in the circuit? You said nothing about your system configuration at capture time, or your VDub settings for that particular capture device (VDub's defaults should not be used for that Hauppauge device). Did you also try lossless capture with AmarecTV, which many others use?
Quote:
Originally Posted by jnielsen
Would AVI capture theoretically be better or faster processed by the QTGMC script?
|
The final results would certainly be cleaner and sharper, and it's not "theoretical". But I thought you said you tried lossless and couldn't see a difference. There are thousands of examples of lossless captures in these forums, and of how they were improved for final output. Unfortunately we don't have even your lossy original as a sample, so not much can be said pro or con about improving it.
08-21-2017, 07:03 PM
Free Member
Join Date: Nov 2015
Posts: 54
Thanked 10 Times in 8 Posts
Thank you for the extensive answer.
Quote:
Originally Posted by sanlyn
Does your video have chapters?
|
No.
Quote:
Originally Posted by sanlyn
You can't post anamorphic video to YouTube. Sites like YouTube are square-pixel formats only. I assume that by this time you know what square-pixel means. Don't you see what YouTube did to your anamorphic video?
|
I usually do not make the videos for posting to YouTube. I make them for my clients, who often have some old tapes they want "on a USB stick". They want "MP4", and they get a 720x576, 25 frames/sec (PAL) non-interlaced MP4 file made with the Avisynth script in MeGUI as described above.
I know what square pixels mean, and if I do not enable "anamorphic output" it outputs square pixels. The 720x576 video is then only 5/4 (720/576 = 1.25, meaning too narrow). If I enable "anamorphic output" it correctly outputs 4/3 format (1.33). The funny thing is that the output from MeGUI is actually a little wider than 1.33: it is 1.37. It does not bother me that it is a little wider, but I wonder why.
I have experimented with disabling "anamorphic output" and doing a resize instead, to get a 4/3 square-pixel video with this command in the script:
Spline16Resize(960,720)
960/720 = 1.33, i.e. a 4/3 ratio with square pixels.
It looks OK, but not really better than just the 720x576 with anamorphic output, and it takes longer.
Sometimes I combine it with
LimitedSharpenFaster()
for movies in good condition, like AVI files from MiniDV tapes. It then looks slightly better, but not much.
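For comparison, a common square-pixel PAL alternative avoids upscaling past SD entirely (a sketch, not from the MeGUI profile above):
Code:
Spline36Resize(768, 576) # 768/576 = 1.33: 4/3 with square pixels, no HD upscale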
The sample video on YouTube is atypical. I think YouTube did nothing to it; I did it before uploading, because it is made from a mix of MPG and MP4 files in Serif MoviePlus and exported with "Export Movie, YouTube, PAL HD 720p 25". It seems to have the slightly-off 1.37 aspect ratio created by the anamorphic output in MeGUI, and then some black borders added on the edges to fit the chosen widescreen format. The reason for choosing 720p is that there is no 576-line option in the export for YouTube, only 640x480 (but maybe I should choose that one anyway, to avoid the strange black borders).
The video is meant as a demonstration of how TBC and Avisynth improves quality in three steps. https://www.youtube.com/watch?v=jsVYJ1pRlXY
Quote:
Originally Posted by sanlyn
This line isn't necessary. MPEG2 is already YV12.
|
OK, thanks, but can I keep it anyway? What about other formats like AVI?
Quote:
Originally Posted by sanlyn
SelectEven() does keep the original frame rate, but it does so by throwing away half your frames, causes judder during motion, and discards 50% of the video's resolution. Unfortunately it's necessary for some websites, but it's just another of many ways that online posting lowers video quality standards for the visually naive.
|
OK, I have just seen it used like this in some script examples. I figure QTGMC uses both an even and an odd field to create two new non-interlaced frames. Then the odd one is discarded, but the remaining even one has still been made with information from both fields. So I do not think it discards 50% of the original video's resolution. The video still has 25 "pictures"/second.
I understand the quality would be better and "smoother" if keeping the 50 frames/sec. But my idea was to keep the original 25 frames pr. second.
I do not use the videos for posting on websites, but mainly for giving to clients on a USB stick. My clients must be able to play them on many devices (mobile phone, TV set, computer) and also upload them to cloud services like OneDrive, Dropbox or iCloud; some do editing also. Therefore my idea was that sticking to the original 25 fps was best, but will 50 fps also be compatible?
# Levels(16, 1, 235, 0, 255, coring=false)
Quote:
Originally Posted by sanlyn
I realize that this line is commented-out, but I'd suggest that you keep it that way and avoid using code like this. This code will crush dark detail and clip/distort brights.
|
That is also my experience. I have seen the Levels command used in some sample scripts; I think the idea is that it somehow translates the grey levels to fit the computer screen more than a TV. At first sight it also looks better (more contrast) on the computer screen, but yes, details in the dark disappear, so I do not use it. I also tried ColorYUV(levels="TV->PC"), but it seems to have the same issue. I do not quite understand these commands, but this is another discussion.
Quote:
Originally Posted by sanlyn
Allow me to qualify that statement. The sample is not a sample of your capture, but a sample of how you processed and re-encoded it, and how Youtube resized and re-encoded your re-encode. It's not a sample of an original capture nor is it MPEG2, so not much can be said on questions you might have about the original. Was your original capture a 4:3 image pillarboxed in a 16x9 frame? The aspect ratio of the image in YouTube's version seems slightly off.
|
The original capture was MPG2 4:3. The pillarbox and 16:9 frame are because of the chosen 720p YouTube upload size. The aspect ratio is slightly off, about 1.37 (instead of 1.33), probably because MeGUI makes this slightly-off aspect ratio when exporting MP4 "anamorphic output" from my 720x576 MPG2 input. See the link to the original files at the bottom of the post.
I wrote: I have tried to capture to AVI files (much bigger) but could not see any difference.
Quote:
Originally Posted by sanlyn
Then you should not waste your time with lossless capture.
|
I did not capture lossless AVI; I think it was "DV" AVI. The files were about 10 GB/hour instead of MPEG2's 4 GB/hour.
Quote:
Originally Posted by sanlyn
That includes the lossy compressed audio as well, which is likely why your Youtube audio sounded shrill and weird. Whatever the original capture looked like, the YouTube video shows typical signs of data loss through excessive re-encoding.
|
The audio on YouTube is worse than in the original capture. Usually I find the sound good; I value good sound and sometimes even use another VCR if the audio does not play well on the first one. Should I use something other than Audio tab: Nero AAC: *scratchpad* in MeGUI, or other settings in capture?
Quote:
Originally Posted by sanlyn
Did you use VDub's default capture settings? Did you have a frame-sync tbc in circuit? You said nothing about your system configuration at capture time or your VDub settings for that particular capture device (VDub's defaults should not be used for that Hauppauge device). Did you also try lossless capture with AmarecTV, which many others use?
|
This is how I did it:
Virtualdub install:
http://www.digitalfaq.com/forum/vide...lters-pre.html
Install HuffYUV codec 32 bit (not MT or 64bit):
http://www.digitalfaq.com/forum/vide...l-huffyuv.html
AVI capture with Virtualdub:
http://www.digitalfaq.com/guides/vid...virtualdub.htm
1. File, Capture AVI
2. Audio, compression = pcm 48k, disable audio playback (or frames will be dropped)
3. Video, format, Set the compression mode to YUY2 if available (does not work)
4. Video, compression, HuffYUV or M-JPEG
5. Optionally choose Video, Cropping, Noise reduction or filters
6. File, Set Capture file
7. Capture, capture video
Frames were lost; I tried disabling audio playback, which gave fewer dropped frames, but many inserted frames, resulting in jitter in the video and audio massively out of sync. I used a Panasonic NV-HS960 VCR (built-in TBC) and the Hauppauge USB-Live2 capture device. I then gave up. Thank you for the tip about AmarecTV; maybe I will try it someday, when I feel fit to take up lossless recording again.
I do not know if I have a "frame sync" TBC. I have the built-in TBC in the Panasonic NV-HS960 (it can be disabled by a button on the front). I also have a Panasonic ES-10 that I use as a TBC, especially for more worn tapes; it is much "stronger" (used in the sample video). And also a Panasonic ES-15 (not tested). I have not tried the Panasonics with VirtualDub.
Quote:
Originally Posted by sanlyn
The final results would certainly be cleaner and sharper, and it's not "theoretical". But I thought you said you tried lossless and couldn't see a difference. There are thousands of examples of lossless captures in this forums and how they were improved for final output. Unfortunately we don't have even your lossy original as a sample, so not much can be said pro or con about improving it.
|
As mentioned it was not lossless AVI.
This is the original MPG2 file (with and without TBC)
https://1drv.ms/v/s!Au87Yx6urKlahu9igRc_8JEpfiQXTQ
This is the MP4 file made with the Avisynth script in MeGui
https://1drv.ms/v/s!Au87Yx6urKlahu9nTkDgFurwcvFNiw
Choose "download" to download the original
0-10 sec. without TBC (VHS tape PAL)
10-19 sec with TBC
19-30 sec second run without TBC
30-41 sec. second run with TBC
If there are better ways to improve it (other TBC, other script), I am interested.
Last edited by jnielsen; 08-21-2017 at 07:23 PM.
08-21-2017, 10:26 PM
Premium Member
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,319 Times in 989 Posts
I can offer detailed suggestions for improving the original MPEG you provided, but not until the end of the week when I return to my home PC, so I apologize for the delay. But I will say that the frequent and very visible horizontal dropouts can and should be cleaned with Avisynth filters, and color can look richer and cleaner with more standard resizing. With reference to the audio, no one has used mp2 audio for years. It's low-quality compression.
I'm sorry to learn that your clients have equipment so bad that they can't play MPEG content from a USB device. They must be using some very poor equipment indeed, since MPEG is one of the few remaining universally playable codecs in the world, is far more widely accepted by playback devices than MP4 or h.264, is the standard for HDTV broadcasting, and is one of only 3 mainstay codecs used to encode commercial BluRay discs, which have both progressive and interlaced formats.
Thanks for your detailed reply, but I'm afraid your answers reveal misinformation about some aspects of video. In particular.....
25fps PAL interlaced video is designed to play at 50 fields per second. When it is deinterlaced, interlaced 25fps PAL results in 50 images per second, not 25. When a deinterlacer like QTGMC or yadif deinterlaces 25fps interlaced files, the resulting frame rate is 50 frames per second, not 25.
You can maintain 25 fps from deinterlaced video only by discarding 50% of the frames. So you have been robbing your clients by throwing away 50% of their videos. I have no idea where you gathered your explanation of non-double-frame-rate deinterlacing that you posted, but it is patently incorrect.
There are other misconceptions I can't address now that I'm on the road with a slowpoke netbook for internet work. All that aside, perhaps I can offer some enlightenment concerning capture, whether with VirtualDub or something like AmarecTV, both of which are used for lossless capture. I don't know what you meant when you wrote that you're using "filters" with VirtualDub capture (that's a no-no which will cost dearly in lost frames and bad audio sync), or whether you used the ES10 with VirtualDub (and if not, why not?), but you're starting to make shivers of apprehension run up my spine. So I'll offer this updated 21st-century version of the VirtualDub capture guide in 5 sections that begins here: Capturing with VirtualDub [Settings Guide], and hope you will take special note of the video sync options in section 5, which begins at 5: Capture (top menu).
Meanwhile, thank you for the additional samples. The first thing I'd say is that no one, under any circumstances, has gained anything or made visible improvements by upscaling VHS to HD frame sizes. HD is based on high-resolution sources, not on low-resolution sources blown up into big frames. But the latter does seem to be some sort of misguided fad these days.
08-21-2017, 10:54 PM
Site Staff | Video
Join Date: Dec 2002
Posts: 14,041
Thanked 2,552 Times in 2,170 Posts
Deinterlacing gives you 25fps. You can specify 50fps if wanted.
Standard script:
Code:
AssumeTFF()          # assume field dominance; optional, BFF for DV source
QTGMC(Preset="Slow") # deinterlace; best deinterlacer - balances speed + quality
SelectEven()         # select retained field
#SelectOdd()         # alternative
50fps is valid for almost nothing, so he's doing the right thing.
Nothing is being lost if you use QTGMC. It creates 50 frames where only 25 had existed (using the 50 woven fields).
Loss only happens when you do a simple drop-field (odd or even).
08-22-2017, 06:26 AM
Premium Member
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,319 Times in 989 Posts
Thanks to lordsmurf for the details. Deinterlacing always has a cost, whether original fields are retained or not. It's sometimes a necessary evil and, these days, is one of many butchering techniques required by various media distribution concerns (strictly for their own benefit, not for the viewer). Dropping fields is a loss of temporal resolution. Period. There is nothing to be gained by it other than reducing bandwidth. It's part of the general lowering of quality that appears in all internet streaming schemes, whether video or audio. That lowering of standards characterizes two generations of listeners and viewers who consider it not only acceptable and harmless but preferable. It's now becoming SOP in tech forums. I guess quality reduction has become a specific requirement for video restoration, despite the recent decision by the likes of YouTube to accept and post 50 and 60fps video. I'll have to refrain from the debate, since my personal family viewing habits don't indulge low-quality sources. Extending lowered standards of video restoration to clients' videos and calling it some form of quality bonus is rather bogus IMO. So I'll leave that discussion to those who support it.
08-23-2017, 02:23 AM
Premium Member
Join Date: Feb 2016
Location: Perth, Australia
Posts: 470
Thanked 3 Times in 2 Posts
Quote:
I figure the QTGMC uses both an even and an odd frame to create two new non-interlaced frames. Then the odd one is discarded, but the even one left has still been made with information from both frames. So I do not think it discards 50% of the original videos resolution. The video stil has 25 "Pictures"/second".
|
No, QTGMC creates each frame from one individual field (half a frame), which is why what comes out of QTGMC has twice as many frames as what goes in. So when you discard the odd frames, you're discarding all the information that was in the odd field of the original.
08-23-2017, 02:53 AM
Site Staff | Video
Join Date: Dec 2002
Posts: 14,041
Thanked 2,552 Times in 2,170 Posts
Quote:
Originally Posted by koberulz
No, QTGMC creates each frame from one individual field
|
Not correct.
If it was, then the resulting frames would be (assuming 720x480 source) only 720x240. Because that's the resolution of a single field. QTGMC does indeed interpret data from both fields (plus others) to merge into a single new frame.
50fps isn't likely to do anything other than give you a doubling of 25fps; likewise 59.94 for 29.97.
50fps is unlikely to be any smoother with motion, for the same reason.
QTGMC is based on much older deinterlacers, including NNEDI and even plain ol' BOB. QTGMC is a mix of interpolation, motion analysis, anti-aliasing, and denoising. Simple separation of fields with anti-aliasing (which is what you'd get from simple field separation, 25>50fps) was never the intention. 25>25 or 29.97>29.97 was always the goal.
You're not "discarding" anything. It's all being analyzed to create a frame that did not actually exist. The idea that you must retain all fields is incorrect.
QTGMC is not a simple drop-field (odd or even) method. That method throws data away. This does not.
All of the advanced deinterlacers are based on edge-directed interpolation. This includes:
- QTGMC, which is simply the update of TempGaussMC (TGMC) that used Gaussian blur in addition to EDI; that's why QTGMC has noise reduction built in.
- Yadif and Yadifmod; earlier works, based on the same deinterlacing theory. Yadif is "yet another de-interlacing filter". The mod version allows external EDIs.
- NNEDI2, NNEDI3, EEDI3, etc
An EDI is (for interlaced video) a bob that looks forward and back to create the new frame.
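A sketch of that idea, using the nnedi3 plugin (field=-2 selects double-rate output with parity taken from the clip):
Code:
nnedi3(field=-2) # EDI bob: each field interpolated to full frame height, 2x rate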
So again, the idea that 25fps is "throwing away" data is simply false. At most, it's not making extra frames.
FYI: Most of this deinterlacing theory doesn't even go back 10 years. What existed before it was lousy, mostly "adaptive" methods. And sadly, those are still in use. For example, the current All-New Popeye episodes on Amazon were adaptive-deinterlaced (a weave/blend pseudo-temporal method), and look like crap. Lots of aliasing noise, due to simple non-EDI/temporal methods. I'd never have encoded that badly when I did studio work!
I can see how 25>50fps could be ideal for SD>HD work, but SD>HD is a usually bad idea for other reasons. It all depends on the project.
08-28-2017, 12:54 PM
Premium Member
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,319 Times in 989 Posts
Well... at the risk of putting myself in bad with the powers that be, I don't know that I can agree with some of this. Perhaps some of it should be stated in another way.
Quote:
Originally Posted by lordsmurf
Quote:
Originally Posted by koberulz
No, QTGMC creates each frame from one individual field
|
Not correct.
If it was, then the resulting frames would be (assuming 720x480 source) only 720x240. Because that's the resolution of a single field. QTGMC does indeed interpret data from both fields (plus others) to merge into a single new frame.
|
Hmm. By default QTGMC uses NNEDI3 and double-rate deinterlacing. Of course, QTGMC offers a way to do single-rate work (by throwing away alternate frames). I was under the impression that QTGMC creates two progressive frames from each interlaced frame. I'm now wondering why, when I use QTGMC, the frames are 720x480, not 720x240. So I'm getting two full-sized frames for every one original. The frame rate is doubled, and I have twice the number of frames. When I look at the original interlaced fields in a frame, I see two different images. When I look at the deinterlaced frames, I see two distinct images from each original frame, and the two frames look just like the two original fields except for the height resize. Maybe I'm doing it wrong. Or perhaps I'm reading you incorrectly.
Quote:
Originally Posted by lordsmurf
You're not "discarding" anything. It's all being analyzed to create a frame that did not actually exist. The idea that you must retain all fields is incorrect.
|
It looks to me as if each original frame contains two distinct images. Maybe I'm viewing it incorrectly, but it looks like two images to me, and the resulting deinterlaced frames contain the same image content resized into two full frames. I had no idea that interlaced video contained only one image. Am I reading you wrong again? Maybe I'm confusing this with same-rate deinterlacing in NNEDI3? QTGMC specifies double-rate deinterlacing and full field retention for its NNEDIx/EEDIx interpolators, even when working with progressive source (InputType=1, 2, etc.).
If interlaced video doesn't contain two fields with two distinct images created at different instants in time, then deinterlacing isn't necessary. So why do it? Maybe this concept needs to be stated differently.
Quote:
Originally Posted by lordsmurf
So again, the idea that 25fps is "throwing away" data is simply false. At most, it's not making extra frames.
|
LS, I don't know what that means. According to the documentation for Avisynth and SelectEven():
Quote:
SelectEven(clip clip)
SelectOdd(clip clip)
SelectEven makes an output video stream using only the even-numbered frames from the input. SelectOdd is its odd counterpart.
Since frames are numbered starting from zero, by human counting conventions SelectEven actually selects the first, third, fifth, etc, frames.
|
So what happens to the frames that SelectEven() and SelectOdd() don't use? It seems to me they're discarded.
If you start with interlaced video at 59.94 fields per second and you end up with 29.97 images per second, you get one-half the temporal resolution of the original. An object or action that appears in only one of the dropped original fields will be discarded, unless the multiple-frame interpolation recreates it. Is this what happens when my TV plays DVDs? Does the TV discard half the fields? I don't think that's really what you meant.
I guess everyone knows by now that QTGMC will discard alternate frames for you without the external Select() functions. Just specify "QTGMC(FPSDivisor=2)" and QTGMC will do the following internally, selecting only alternate frames and discarding the others:
Code:
decimated = (FPSDivisor != 1) ? sblurred.SelectEvery( FPSDivisor, 0 ) : sblurred
Or you can specify a number like FPSDivisor=3, and QTGMC's code becomes literally "SelectEvery(3,0)", which will discard 2 of every 3 frames, keeping only frame 0. In that case the original interlaced frame rate of 29.97 will drop to 19.98 fps. And you will definitely see the sputtery effects of reduced temporal resolution during motion, if you didn't see them at one-half resolution.
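In script form, the internal option replaces the external SelectEven() (a minimal sketch):
Code:
QTGMC(Preset="Slow", FPSDivisor=2) # single-rate output: 29.97i in, 29.97p out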
You can also undo most or almost all of QTGMC's interpolations, motion smoothing, and denoising by using a "Draft" preset, which uses a simple Bob -- yet you still, somehow, end up with full-sized frames, two for every original interlaced frame.
Quote:
Originally Posted by lordsmurf
I can see how 25>50fps could be ideal for SD>HD work, but SD>HD is a usually bad idea for other reasons. It all depends on the project.
|
Yes.
08-29-2017, 02:14 AM
Site Staff | Video
Join Date: Dec 2002
Posts: 14,041
Thanked 2,552 Times in 2,170 Posts
Nah, no risk at all.
Let me try this another way.
Interlaced video fields are exactly 720x240 (NTSC); each field is 50% of the vertical resolution of the on-screen image.
Remember that interlace alternates in time: one odd field shows between every two even fields, and one even field between every two odd.
You have 4 basic types of deinterlace:
1. Drop-field (throw away 50% of the image; the only deinterlace method that "throws away" anything). With every other video line now missing, straight lines are aliased/stair-stepped/jaggy.
2. Blend. Creates ghosts, but at least lacks aliasing/jaggies.
3. Bob. Separates fields, doubles 29.97fps to 59.94fps, and stretches each field to 720x480 (see the one-line sketch after this list). Aliasing/jaggies are still present, though harder to see.
4. Complex methods based on a bob: NNEDI, QTGMC, Yadif/mod/x2, etc.
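Here's the one-line sketch promised above for type 3, using the built-in filter:
Code:
Bob() # separate fields and stretch each to full height: 29.97i -> 59.94p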
The complex methods usually separate fields and then process them, based on properties of neighboring frames, to create new video frames that never actually existed.
... and I think that's where the disconnect is for you: "never actually existed".
So a deinterlacer like QTGMC will take a video, split the fields to create double frames, and then alter each frame to create something new. The processing is mostly anti-aliasing, but that's not the only consideration. QTGMC gives us that lovely alias-free progressive image we all want.
This all gets more confusing because the advanced deinterlacers have options to select how fields/frames are processed, and each has a different default setting. Notably, the (old?) default of NNEDI3 is to drop one field and then anti-alias the leftover.
So, technically, when using default QTGMC, you are right that you "throw away" 50% of the frames after processing is finished, in order to restore the initial frame rate. However, you're throwing away frames that never existed anyway. Most often, those 50% extra frames are primarily duplicate data. Remember, it was shot at 29.97, not 59.94. One could argue that only 29.97 complete moments in time actually existed.
The main reason you keep the 2x frames during processing, and discard them later, is to aid the processing. Let the deinterlacer decide what is salient information for creating the new frames from the old fields. It will create 59.94 ideal frames, and you'll retain 29.97 of them. That's much better than giving it only half the data for processing.
Deinterlacing is one of the concepts that's never confused me.
Does that make more sense now?
Also...
Note that humans see only frames, not fields -- viewed on an interlaced device, for this video discussion. This is what allows us to watch video on interlaced CRTs to begin with. So you cannot argue "59.94 fields per second" in a visual-temporal sense, in terms of "throwing away" data, because it's data we cannot see. We only see the 29.97 frames per second. You have to throw away whole frames to notice a temporal image reduction. We can only tell that fields were tossed by the lowered resolution, the anti-aliasing, the reduced motion, etc.
Humans see motion, not frames/fields, which is why video gamers are ridiculous ("I can see all my 100+fps on my video game card!!!"). The scientific community generally accepts 40-60 max, with some exceptions due solely to motion (i.e., subliminals).
This is also why 240Hz/480Hz/etc for TVs is starting to get ridiculous now. Much like megapixels for cameras, Hz is faux information. With camera sensors, it's now about optics/glass and dynamic range. With TVs, the actual limiter is interpolation, deinterlacers, anti-judder, etc. As is always the case, the source determines quality far more than the TV does anyway. Yet many judge a TV based on specific DVDs used for demo.
08-29-2017, 01:18 PM
Premium Member
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,319 Times in 989 Posts
Thank you again, but I don't see any sign of normal deinterlacing operations dropping fields. The version I get from some others is that progressive video has frames, but interlaced video does not. Interlaced video consists of a stream of half-height fields, not frames. Each two fields are interleaved together, and this pairing of two fields can be called a "pair", a "box", or if you will an interlaced "frame". In a full deinterlace operation, each two interleaved fields are separated and resized via various means into two distinct full-height images called frames. Then the original pairings, boxes, or interlaced container "frames", or whatever you want to call them, are discarded.
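To see those two images for yourself in Avisynth (a sketch for inspection only):
Code:
SeparateFields() # view the 59.94 half-height fields as individual frames
#Weave()         # re-interleave field pairs back into 29.97 interlaced frames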
08-29-2017, 11:07 PM
Site Staff | Video
Join Date: Dec 2002
Posts: 14,041
Thanked 2,552 Times in 2,170 Posts
Quote:
Originally Posted by sanlyn
The version I get from some others is that progressive video has frames, but interlaced video does not. Interlaced video consists of a stream of half-height fields, not frames. Each two fields are interleaved together, and this pairing of two fields can be called a "pair", a "box", or if you will an interlaced "frame". In a full deinterlace operation, each two interleaved fields are separated and resized via various means into two distinct full-height images called frames. Then the original pairings, boxes, or interlaced container "frames", or whatever you want to call them, are discarded.
|
That's not really any different than what I'm saying.
You get it. We're not using the same words ... but you seem to understand.
- Yes, each frame is two fields (each temporally displaced from one another).
- And yes, some deinterlace methods separate the fields into new frames, by stretching the field vertically by 200% (sometimes using EDI).
And to counteract the artifacts in the new frames, a quality deinterlacer (like QTGMC) has to process them further, using data from neighboring frames. To fully understand QTGMC, you have to understand the earlier TempGaussMC, which uses temporal processing. The keyword here is "temporal". The new frames are based on the old fields, and are not the old fields themselves. And what often happens during processing is that you end up with ~60 frames that are 90%+ identical to one another (differing only during high motion).
Wikipedia says this succinctly:
Quote:
Deinterlacing requires the display to buffer one or more fields and recombine them into full frames.
|
Do like a Bible reader: dissect and digest that one sentence slowly.
- "Buffer" insinuates processing.
- "Recombine into full frames" means you take 2 fields at 59.94 fields per second, and transform them into a progressive frame at 29.97 frames per second. You don't create 59.94 frames per second. (That can be done, but is atypical. In earlier years, the processing time alone was a nightmare. Remember most of these deinterlacing methods, even QTGMC's base TempGaussMC, are 10-15 years old.)
But you're also not wrong about "throwing away" (discarding, losing) data. I didn't make that entirely clear earlier.
Wikipedia is again useful here:
Quote:
The European Broadcasting Union has argued against the use of interlaced video in production and broadcasting [sic] The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames.
|
Indeed, deinterlacing always loses/tosses some data. But, as I highlighted, it's only "some" information. Not 50%.
The only deinterlace that loses 50% of the data is the raw drop-field method, where you completely toss the odd or even field into the trash. It's quick and dirty. And for many, it's the only method that exists/existed, even in professional software like Adobe Premiere. You only had/have access to advanced deinterlacers if you're willing to delve into complex software like Avisynth. Adding Yadif to VirtualDub was pretty major at the time. NLEs didn't even have Yadif, and I think that's still the case.
I think you just need to refine your understanding a bit. Otherwise, you understand perfectly.
As I stated elsewhere:
- best SD deinterlace for SD is 1:1, 720x540 or 640x480 for streaming (as 4x3/16x9 disc deinterlacing is pointless)
- best SD deinterlace for HD is 1:1, 1280x720 (720p), either 59.94 or 29.97 (depending on specs of project, dictating data size; usually disc = 59.94, streaming = 29.97).
So I'm not at all anti-59.94. It will further reduce loss (10% at most, probably much less), but at a cost in data size (about 200%).
I've been really busy lately, but I just had to take a break for this conversation. I've enjoyed it!
08-30-2017, 07:19 PM
Premium Member
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,319 Times in 989 Posts
Quote:
Originally Posted by jnielsen
I usually do not make the videos for posting to Youtube. I make them for my clients who often have some old tapes they want "on a USB-stick". They want "MP4" and they do get a 720x576, 25 frames/sec (PAL) non-interlaced MP4 file made with the Avisynth script in Megui as described above.
|
Just curious: why does it have to be deinterlaced? Why mp4? Why lower the quality by resizing to square pixels? Don't your clients have equipment that can play standard DVD or BluRay?
Quote:
Originally Posted by jnielsen
|
The downloads look like re-encoded cuts, but I'm guessing. I don't usually work with lossy captures; there's too much unnecessary work and heavier filtering involved. But the links are workable. It looks as if your ES10 had its DNR enabled. In any case, there are better filters in Avisynth that don't soften video as much, and the ES10's DNR can cause ghosting. But there's no motion here, so it's still workable. The 720p link with TBC on is rather noisy, with obvious horizontal dropouts and rips.
I've attached two demos. The first is a step-by-step similar to the one you posted on YouTube, showing each step: 1: No TBC, 2: TBC On, 3: Borders & Levels fix, then 4: Denoised (horizontal noise and dropouts fixed). The original VHS has slightly illegal luma levels, but that was easy enough to fix with Avisynth's SmoothAdjust filter, along with some mild edge halos (fixed with DeHalo_Alpha). The dropouts, spots, and comets required lordsmurf's mod of a median-averaging filter I call FixRipsP2; there are several versions posted all over doom9, and a couple were posted on digitalfaq recently. For all this work I used SeparateFields, without deinterlacing. The attached VHS_Tbc Off_Tbc On_BordersLevels_Denoise_All.mp4 is anamorphic 480i playing at 4:3.
The second attachment, VHS_Denoised_720p.mp4, is the 1280x720p version of the fixup at 50fps. It was deinterlaced with QTGMC after being denoised, and resized using Avisynth's 16-bit dither plugin. The h.264 encoder I used was TMPGEnc Video Mastering Works.
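For the curious, a skeleton of that chain (a sketch only: the exact parameter values aren't posted here, and FixRipsP2 is the modded script mentioned above, so the call shapes are approximate):
Code:
SeparateFields()                    # work on fields; no deinterlacing
SmoothLevels(20, 1.0, 255, 16, 235) # hypothetical luma-levels fix (SmoothAdjust)
DeHalo_Alpha()                      # mild edge-halo cleanup
FixRipsP2()                         # median-averaging dropout/rip repair
Weave()                             # back to interlaced frames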
08-31-2017, 07:15 PM
Premium Member
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,319 Times in 989 Posts
Time for a self-correction (again):
Quote:
Originally Posted by sanlyn
The attached VHS_Tbc Off_Tbc On_BordersLevels_Denoise_All.mp4 is anamorphic 480i playing at 4:3.
|
It's 576i, not 480i. It's PAL.