VirtualDub and Avisynth filter help? - digitalFAQ Forum

#1
08-09-2014, 09:02 PM
 pinheadlarry Invalid Email / Banned / Spammer Join Date: Jul 2014 Posts: 76 Thanked 0 Times in 0 Posts
Hello again,

This is originally what I wanted to ask in my first thread, but that led to various other topics that helped me understand half of what I was trying to do. But now I'm trying to figure out filters.

I've found a couple of tutorials, but they left me confused a quarter of the way through. There isn't much information for beginners other than personal threads, so I figured I would start a thread for myself.

I'm capturing VHS to ProRes 422 and opening with qtsource in Avisynth/VirtualDub. I was going through a tutorial that had me open the Avisynth script in VirtualDub, but I got an error with qtsource. I tried going through some of the filters in both programs, but to be honest there are so many options and so much information that I feel like it would take me years to even come close to something one might consider a 'good script'.

So I'm hoping I can get some better knowledge here.

Example..
https://www.dropbox.com/s/hifa7eqd7kj5cvz/example.mov
#2
08-09-2014, 10:25 PM
 premiumcapture Free Member Join Date: Dec 2013 Location: Boston, MA Posts: 584 Thanked 68 Times in 62 Posts
http://sourceforge.net/projects/fcch...time%20Plugin/

Try this. Worked on Windows 7 for me but there's a few alternatives if it doesn't load.
#3
08-09-2014, 10:56 PM
 pinheadlarry Invalid Email / Banned / Spammer Join Date: Jul 2014 Posts: 76 Thanked 0 Times in 0 Posts
I'm pretty sure I already installed that plugin, because I can open the file fine in VirtualDub. But for some reason qtsource won't transfer from as to vd?
#4
08-09-2014, 11:05 PM
 premiumcapture Free Member Join Date: Dec 2013 Location: Boston, MA Posts: 584 Thanked 68 Times in 62 Posts
LordSmurf is actually working on a script that has the best of what most tapes need. I am not sure when he'll be finished, but when he is, it should simplify a lot.

I like to use AvsP, which can feel a little easier, but depending on the filter I sometimes use XVID4PSP instead, since it makes applying just a single filter or two a lot simpler.
#5
08-10-2014, 10:29 AM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
Quote:
 Originally Posted by pinheadlarry i'm pretty sure i already installed that plugin because i can open the file fine in virtualdub. but for some reason qtsource won't transfer from as to vd?
That link is a VirtualDub plugin. It has nothing to do with Avisynth. Avisynth has no idea what happens in VirtualDub or which VDub plugins are used. And vice versa: VirtualDub has no idea what Avisynth is doing other than sending out decompressed video frames to be viewed. All Avisynth does is open and decode the named file, run any Avisynth filters specified, then make its output available to whatever app is looking at it. Its output is decoded uncompressed video...and audio, if audio is present and can be decoded.

The VirtualDub QT plugin .vdf file opens various .mov files (if it can), converts the video to RGB (which you might not want, especially with the video sample submitted), and makes certain assumptions about the video's structure that you might not need or want.

I don't know what "transfer from as to vd" means. I guess you mean "avs to VirtualDub"? If you see VirtualDub or Avisynth error messages, you have to give more detail about what the message says.

Avisynth's qtSource plugin consists of qtSource.dll and an html documentation file. The only file that belongs in the Avisynth plugins folder is the .dll. DO NOT COPY html files to Avisynth's plugins folder. The only files that belong in that folder are those that installed with Avisynth, along with plugins that you add as .dll, .avs, and .avsi files. Don't keep your own user-created .avs scripts in the plugins folder; your own scripts are temporary anyway, and you'll soon have a plugins folder the size of the Library of Congress if you keep all your scripts there.

How plugins are detected for Avisynth and VirtualDub:

VirtualDub recognizes .vdf files as plugins. When VirtualDub opens, it scans the plugins folder and internally makes a list of all its plugins, so they will appear in VDub's filter dialog window.

Avisynth recognizes .dll, .avs, and .avsi files as plugins. A .dll or an .avsi is automatically detected when an Avisynth script runs. An .avs plugin is not auto-loaded; it must be explicitly imported using Avisynth's Import() function if the script needs it. There are a handful of other .dll's that require a special loading function because of the way they are compiled, but the instructions for those plugins always tell you what to do. Two such plugins are yadif.dll and ffms2.dll.
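For example, a script using an .avs plugin and a special-load C plugin might begin like this. The paths are hypothetical -- substitute wherever your files actually live:
Code:
Import("C:\Avisynth\scripts\MyHelpers.avs")      # .avs plugins must be imported explicitly
LoadCPlugin("C:\Avisynth\plugins\yadif.dll")     # C-compiled plugin; won't autoload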

If you look at the html document that came with the qtSource plugin, you'll probably notice that "qtSource" is not shown as a function. The name of the plugin isn't always the name of its main function. For example, the QTGMC deinterlacer downloads as an autoloader script over 200 lines long named "QTGMC-3.32.avsi". If you type that name in a script as QTGMC-3.32.avsi or just QTGMC-3.32, you'll get an error. The name of the main function is simply "QTGMC". Many functions and plugins have a long list of parameters that can be set for different values, but most filters -- that's most, but not 100% of them -- can be run with their default settings. Here is how some familiar plugins would be typed using their default settings:
QTGMC()
LSFmod()
MCTemporalDenoise()
SangNom()

But there are just as many functions and plugins that require at least one parameter to be specified. For example, you don't run these built-in Avisynth functions without setting one or more specific parameters:
ColorYUV()
Tweak()
Trim()
AviSource()
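For instance, with made-up values just to show the shape of the calls:
Code:
AviSource("Drive:\path\to\video.avi")   # needs a file name
Trim(100,250)                           # needs the start and end frame numbers
Tweak(sat=1.2)                          # needs at least one adjustment to be useful
ColorYUV(off_y=-10)                     # needs at least one channel setting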

How do you know what to specify? You look over the documentation and you look at the way others use them. True, much documentation is over the heads of newcomers, but the basic stuff is, well, pretty basic.

If you have the qtSource.dll plugin in your Avisynth plugins folder, you can write this script and save it as something named "first run.avs" or whatever name you want, then open it in Virtualdub.

Code:
qtInput("Drive:\path\to\video\example.mov",audio=1)
info()
Note that "Drive:\path\to\video\example.mov" is not valid. I typed the example that way to show how to place the path and name of the input video. On my computer that script reads as follows:

Code:
qtInput("E:\forum\pinheadlarry\Aug09\example.mov", audio=1)
info()
The "Info()" function will display some file data on the output screen. If you don't want to see that info, just delete that line of text. Or you can keep it there, but just put a "#" comment-marker at the start of the line, and Avisynth will ignore it:

Code:
qtInput("Drive:\path\to\video\example.mov",audio=1)
# info()  <-- the starting # makes this line a comment, which will be ignored.

Code:
info()  # <-- info() will run, but it is followed by comment text which is ignored.
The filters to use depend on what you want to do. I'd suggest that you first set some decent, valid luma and chroma levels, as many shots in this clip are unviewable.
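Before picking filters, a quick way to see the levels problem for yourself is to put a histogram right in the script (path is a placeholder):
Code:
qtInput("Drive:\path\to\video\example.mov", audio=1)
Histogram(mode="Classic")   # displays a YUV waveform alongside the video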

Like most editors, VirtualDub converts input to RGB32, but for viewing only. What happens to this file if you view it in VirtualDub and then just close it without doing anything? Nothing. If you use "Save as avi..." to output another copy of it, by default VirtualDub outputs uncompressed RGB24; otherwise you have to specify a colorspace and compressor for output. Saved in other colorspaces or with other compressors, your original 244kb example.mov would come out as follows:
(uncompressed RGB24): 1,233 kb
(uncompressed YUY2): 825 kb
(lossless huffyuv YUY2): 255 kb
(lossless Lagarith YUY2): 235 kb
By default audio is saved as uncompressed PCM unless you specify otherwise.
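One way to avoid an accidental RGB save is to set the colorspace in the script itself and then use VirtualDub's fast recompress mode, which passes the script's output to the compressor untouched. A sketch, with a placeholder path:
Code:
qtInput("Drive:\path\to\video\example.mov", audio=1)
ConvertToYUY2(interlaced=true)   # hand VirtualDub YUY2; a no-op if already YUY2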
#6
08-10-2014, 12:35 PM
 pinheadlarry Invalid Email / Banned / Spammer Join Date: Jul 2014 Posts: 76 Thanked 0 Times in 0 Posts
Thanks sanlyn for the very informative post. My ego hopes I didn't come off as so incompetent as to not know how to do simple tasks like installing or calling a plugin. But I still appreciate the answer, don't get me wrong.

I double checked and yes, I did have that QuickTime plugin installed for VirtualDub.

After removing some filters I blindly added to the avs script, I was able to open the script in VirtualDub. So obviously that was my fault for just assuming the script wouldn't transfer.

Here is the tutorial I was following. I know it's using a cartoon, but I figured I could adjust the settings to my liking. But halfway through I got confused and just stopped. Is the article worth a revisit?

http://www.animemusicvideos.org/guid...spostqual.html

I'll have to reread your post and do some googling before I ask more technical questions. But when you said 'filters depend on what I want to do', I didn't realize there were different spectrums to cleaning up a video. I'm sure knowledge in this field probably leads to some very technical options, but what about just a standard cleanup?

I'm not sure I have the right vocabulary to explain what I'd like to do. But similar to the tutorial above, I'd like to just make a better picture. I'm just not sure what that means as far as filters or time spent. I'm not even sure what the standout problems are in a video.
#7
08-10-2014, 02:23 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
The AMV site is mostly about toons, but its principles apply to any video. The link you posted is version 2 of the AMV guide. Version 3 is at http://www.animemusicvideos.org/guides/avtech31/. You'll find many of the procedures are about the same. The newer Avisynth sampler is at http://www.animemusicvideos.org/guid...post-qual.html.

Then there's another old guide (old = 2009) from Scintilla at http://www.aquilinestudios.org/avsfi...dex.html#intro. There's an index at the top of the intro page.

Both of the links above are decent sources for samples of what various problems look like. About 90% of the filters can be used on "real" video. Degrainers, anti-alias, smoothers, dot crawl and chroma cleaners work on almost anything, but something like a line darkener....well, that would apply mostly to line art, but you never can tell when some offbeat technique just might be handy.

Many current "official" Avisynth plugins are at http://avisynth.nl/index.php/External_filters, though it hardly covers all. One handy feature is that it lists plugins by category. It also has links to some geeky discussion threads.

Don't discount VirtualDub, either. It's far more extensive than you'd expect. But it's best to work with Avisynth first.

As for a "standard" cleanup script, no one has ever concocted one. No such thing as a standard video problem. There are "common" problems. But no one script could even cover the different problems seen in your sample from scene to scene. Looking over your new sample now. Will try to post some specifics a little later.
#8
08-10-2014, 03:57 PM
 pinheadlarry Invalid Email / Banned / Spammer Join Date: Jul 2014 Posts: 76 Thanked 0 Times in 0 Posts
Thanks sanlyn, I'm going to spend a chunk of my night going through those tutorials.

I'm not sure if my first example was the best, so I uploaded 2 more short clips that may be better to work with.

https://www.dropbox.com/s/7m1x45xvb1od7qa/example2.mov
https://www.dropbox.com/s/hax4ztlo7bgliok/example3.mov
#9
08-10-2014, 07:04 PM
 lordsmurf Site Staff | Video Join Date: Dec 2002 Posts: 9,463 Thanked 1,573 Times in 1,373 Posts
You need to be careful with the AMV site. Some of the "help" there is terrible.

#10
08-10-2014, 08:35 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
Yeah, some of their other procedures strike me as mysterious, often too simple. Fairly OK page on the plugins, though. They left out a lot of heavy hitters and some details, but I guess one has to start somewhere.

@pinheadlarry, somewhere the AMV site advertises a big download file full of filters. Avoid that one. It's behind the times and overwrites stuff you need.
#11
08-12-2014, 01:57 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
So here are a few simple scripts to show how this stuff works. And to get into analyzing some problems.

Have you had a look at the first short shot in your sample "example.mov" (frames 0 to 130)? Note that in the code below I've used the path where that .mov clip is stored on my PC. You'll have to modify that path to point to the file on your system.

In Avisynth it's easy to make a clip using only frames 0 to 130 and its audio:
Code:
qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1)
Trim(0,130)
Save the small file if you want as YUY2 using the compressor of your choice; I used Lagarith. In VirtualDub, play that file one frame at a time. You will see what appears to be 1 interlaced frame every 2 frames, until the fade to black. If you opened that clip with the Info() function displaying file info onscreen, you'll note that VirtualDub thinks this clip is bottom field first (BFF). Avisynth usually assumes BFF. The clip is actually top field first (TFF). For that reason, as you'll see later, you usually have to specify TFF or BFF in an Avisynth script to keep this matter straight.
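To check and override the field order yourself, a minimal script (with a placeholder path) would be:
Code:
qtInput("Drive:\path\to\video\example.mov", audio=1)
Trim(0,130)
AssumeTFF()   # override Avisynth's default BFF assumption
Info()        # displays the parity Avisynth now assumes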

This shot was encoded as interlaced. If it's interlaced, we should be able to deinterlace it. Deinterlacing will take the 2 fields in each frame, separate them, and expand each of those fields to full-frame size. It will double the number of frames and double the frame rate. Because each field in an interlaced video represents 2 instants of time in a single frame, deinterlacing should reveal two "frames" for each original field, and each new frame should be a different image when the original object moves.

The simplest and least talented of deinterlacers is the Bob() function. But it's fast and OK for analyzing stuff. So, the code below deinterlaces this clip using Bob():

Code:
qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1)
Trim(0,130)
AssumeTFF().Bob()
If you play the bob() results one frame at a time, each new frame should look different. But that's not what happens here. You'll see that every group of 5 frames shows 2 consecutive interlaced double-images. Why? Well, among the several most damaging ways to deinterlace video, two of the really bad ways are near the top of the list: (1) deinterlace using field blending. (2) Deinterlace film source that is progressive 23.976 fps and uses pulldown (telecine) instead of being interlaced. The space cadets who processed this video made both mistakes, not just one. In a few instances, one might fix it (unblend). Most of the time, however, it can't be fixed.

In this case it's not fixable because the images are actually field-blended progressive video, encoded as interlaced. If you try to use inverse telecine and deblenders (it won't unblend anyway), you'll usually get a blended result that is somewhere around home-movie speed of 18 or 20 fps. You can use other methods that will get 23.976 fps film speed, but still with blended frames. You can try a whole slew of over 20 de-blend filters found on Doom9, but none of them will restore this clip to its original state. Field blending is the worst. Now you know. Maybe someone else can come up with a fix.
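For reference only -- it won't rescue this clip, for the reasons above -- a standard inverse telecine with the TIVTC plugin is just two lines after loading the source:
Code:
AssumeTFF()
TFM()        # rebuild progressive frames by matching fields
TDecimate()  # drop the duplicate frame: 29.97fps -> 23.976fps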

Last edited by sanlyn; 08-12-2014 at 02:43 PM.
#12
08-12-2014, 02:17 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
More scripts, and some plugins you need to learn...

Even if you decide to use that first shot in "example.mov", there's little you can do with it. Too much luma and chroma data is destroyed. I'm guessing, but this shot looks like a special effect applied on purpose? Any histogram or vectorscope will describe the problem. Below is a capture of frame 37 from the original .mov:

Whether it's a special effect or not, this image has hardly any pixel data. It's washed out and looks very like a reddish-sepia print. This is evident from the YUV histogram, shown below. You get the YUV histogram in the form of a waveform monitor with this code:

Code:
qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1)
Trim(0,130)
Histogram(mode="Classic")

The waveform has several sections. The yellow-orange side borders represent the undesirable luma and chroma ranges that are darker than RGB 16 (left side) or brighter than RGB 235 (right side). The desirable area lies in the dark area between those two borders. The green line down the middle represents the middle of the spectrum, or RGB 128. Pixel values would normally populate most of that black area. But we see that 90% of the pixels have been squeezed ("crushed") into a thick vertical line around RGB 200 on the right. There's a small scatter of "dust" around the middle of the black area representing stray pixels that have survived, such as those in the darker shadow areas and hair. There is no other data to work with. You couldn't widen or expand that thick bright line of crushed pixels. Crushed = destroyed.

The two 'scopes below are the way this image displays in RGB. These are an RGB "parade" histogram on the right, and an RGB vectorscope on the left. This VirtualDub histogram filter doesn't work in Win7 or 8.

RGB histogram (left-hand chart): This histogram shows average luminance (the white section at the top) and has one section each for red, green, and blue. Dark values are at the left, bright values on the right. The histogram mirrors the YUV info -- all of the data in this image has been squished into 4 small "spikes" at the right-hand side. There's no other data to work with.

RGB Vectorscope (right-hand chart): Luminance and color are joined in this 'scope. The spread of pixels has the dark values in the center, while brighter pixels radiate outward. The small circle of boxes indicates the limits of the RGB 16-235 range. You can see that the only data is a small blotch of flesh-tone pixels near the center that radiates toward the upper left. Other colors aren't present.

If you want, you can try to add a little pizzazz to frame 37 by using the Avisynth ColorYUV() function and the SmoothLevels function (part of the SmoothAdjust plugin). The code below attempts to do this:

Code:
qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1)
Trim(0,130)
ColorYUV(off_y=-40,cont_y=70,off_u=7,off_v=-4)
ConvertToYV12(interlaced=true)
SmoothLevels(16,0.95,255,16,245,chroma=200,limiter=0,tvrange=true,dither=100,protect=6)
The script does several things. (The Crop() and AddBorders() lines aren't shown in the snippet above, but they belong in the full version.) Crop() cleans up the top and bottom borders and centers the image. If you play this video you'll see head-switching noise along the bottom and a "twittering" or hopping border across the top. These are removed with Crop(), and then AddBorders() makes new black borders to center the image and restore the 640x480 frame.

ColorYUV is used to shift that thick white line of pixel data toward the left (darker) part of the spectrum, while luma contrast (cont_y) increases the darks and brights to try to widen the available values. Off_u shifts blue 7 points to the brighter right side, and Off_v shifts red a little to the darker left side. ConvertToYV12() converts the colorspace properly for use by the next plugin. Then SmoothLevels() is used to smooth luma and chroma to prevent hot spots and to make the colors look less banded. The resulting frame 37 is below:
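The Crop() and AddBorders() steps described above would go right after the Trim() line, something like this. The pixel counts here are hypothetical -- measure your own borders:
Code:
Crop(0, 2, 0, -10)       # remove the twittering top line and head-switching noise
AddBorders(0, 6, 0, 6)   # new clean black borders, back to 640x480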

No, doesn't look so great. Most flesh colors have values in the middle of the spectrum, but there aren't many real midtones around. Almost everything is in that thick vertical line above the midtones. But it does have some dimension to it and looks sharper. Unfortunately, all these fixes look like garbage when you get to the fade to black (frame 54, below):

The proposed luma and chroma fix results in some bizarre posterization and oversaturation effects during the fade. It looks progressively more gruesome as the fade continues toward black. You can see that dark detail under the wood fixture has turned completely black, although some details were clearly visible in the original frame. So this image "fix" is really impractical and, at the end, it's an ugly fadeout with a wild flurry of huge clumps of simmering black blobs by the time it's over.

Finally, there's the fade to black in frame 130. Again, the YUV waveform has a single thick white line of data at the far left, around RGB 11 to 16, indicating that a black screen has hardly more data than the other images.

Probably better to leave that shot as-is except for some denoising to calm down that fade a little.

Attached Images
#13
08-13-2014, 07:19 PM
 pinheadlarry Invalid Email / Banned / Spammer Join Date: Jul 2014 Posts: 76 Thanked 0 Times in 0 Posts
I guess I found what I'm doing for the rest of the night lol. Will report back after I go through all this. But thank you in advance.
#14
08-13-2014, 07:44 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
from my post #12:

Quote:
 Originally Posted by sanlyn ........ The two 'scopes below are the way this image displays in RGB. These are an RGB "parade" histogram on the right, and an RGB vectorscope on the left. This VirtualDub histogram filter doesn't work in Win7 or 8.
Ooooops! Sorry, folks, my bad. The RGB "parade" histogram is on the left. The vectorscope is on the right. I should know left-right by now. Sometimes at 2:00 AM, though, I forget.

Here's a tip I saw posted some time ago, even though the content might be so obvious it seems silly:

The code in an Avisynth script is executed line by line in the order that the statements appear. The output from line 1 becomes the input for line 2. Output from line 2 becomes the input for line 3. And so on.

What this means is that you can insert comment markers (the # symbol) to cause a line to be ignored. So you could comment-out the lines and then start un-commenting them one by one to see what accumulated lines do. For example, take this fictional script where all 4 lines will be executed in order:
Code:
line 1
line 2
line 3
line 4
Then comment-out the last 3 lines to run only line #1:
Code:
line 1
#line 2
#line 3
#line 4
You can uncomment the lines one by one, but keep them running in the same sequence. Sometimes if the output from a previous line doesn't run, the next line won't run properly or might not run at all. So don't take away the comment markers at random.
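Applied to a real-looking script (the filter choices here are just placeholders), the same trick would be:
Code:
qtInput("Drive:\path\to\video\example.mov", audio=1)
AssumeTFF().QTGMC(preset="medium")
#LSFmod()              # commented out for now
#MCTemporalDenoise()   # uncomment these one at a time, in order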

You once remarked that you'd like some sort of "standard script" to use for everything. That might be possible, especially if you have a video with shots that all have the same problems. Most video doesn't work that way -- however, VHS has some fairly common problems that require pretty much the same cleanup. It's possible to have a standard filter set and a standard sequence, but quite often some of the defaults or specific parameter settings might have to change to suit the content.

The filter samples shown in the earlier links to the AMV filter page and Scintilla's discussions are old standbys that people use frequently. As I said, line darkeners are really for use with cartoon line art, but anti-alias filters, denoisers and sharpeners are useful everywhere.

Meanwhile I played with some of the shots in your samples and can try to come up with some sample scripts later. Sometimes the documentation can make things look more complex than necessary. Seeing how it's done in practice and in scripts from other threads will make it look easier, I'm sure.

Last edited by sanlyn; 08-13-2014 at 08:35 PM.
#15
08-14-2014, 12:10 PM
 pinheadlarry Invalid Email / Banned / Spammer Join Date: Jul 2014 Posts: 76 Thanked 0 Times in 0 Posts
Quote:
 Originally Posted by sanlyn Sometimes the documentation can make things look more complex than necessary. Seeing how it's done in practice and in scripts from other threads will make it look easier, I'm sure.
This. I'm overwhelmed by how dense these programs are.
#16
08-14-2014, 03:34 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
No problem. Most scripts you'll see are fairly short. You only have to learn it once. Most of the time you'll use the same filters for similar videos, just change the settings when needed.

Got very busy around here the last couple of days, but I'm preparing a couple of samples for later. Sorry for the delay.

Yeah, the text of some of the heavy-hitter plugins like QTGMC is really big. Good thing the designer worked all that out for us -- you can run that monster with only one line of code in your own script.
#17
08-16-2014, 01:10 PM
 pinheadlarry Invalid Email / Banned / Spammer Join Date: Jul 2014 Posts: 76 Thanked 0 Times in 0 Posts
Looking forward to the script, sanlyn.

I've been playing around with different filters you guys have recommended on this thread and the last, specifically QTGMC. But I just can't get it right. I can always find one frame out of so many that is still interlaced. Going to continue more tonight. I'll try to post some screens of what I'm talking about later on.
#18
08-16-2014, 02:25 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
QTGMC deinterlaces completely. If you refer to camera shots such as the one discussed in post #12, that shot was encoded as interlaced but was field-blend deinterlaced before it got to you. It's rare to be able to restore that kind of fake deinterlacing, as deinterlacers can't do it and most unblend plugins wouldn't be able to help very much. You might be encountering a few clips like that one. You might also be looking at telecined shots that should be inverse-telecined, not deinterlaced. The shot in post #12 appears to have been a PAL to NTSC conversion that was telecined to get 25fps up to 29.97fps, then incorrectly field-blend deinterlaced. Blended means there aren't two separate top-and-bottom fields in the frame that contain two different images: the original two fields were blended into one. Both fields contain the same image, with a blended "ghost" instead of two separate images.

If you run the statement "QTGMC()" as-is, it's the same as running QTGMC with its slow default settings. The slowest settings run slowest because they make more repairs and do more denoising. The faster presets don't clean up as well, but they're usually adequate for most purposes. I've been using these variations:
Code:
AssumeTFF().QTGMC(preset="medium")
AssumeTFF().QTGMC(preset="fast")
AssumeTFF().QTGMC(preset="very fast")
If you want to add extra denoising and cleanup to any of those statements, do it this way:
Code:
AssumeTFF().QTGMC(preset="medium",EZdenoise=2)
AssumeTFF().QTGMC(preset="fast",EZdenoise=2)
AssumeTFF().QTGMC(preset="very fast",EZdenoise=2)
If you want even more denoising and motion smoothing, try this:
Code:
AssumeTFF().QTGMC(preset="medium",EZdenoise=3,denoiser="dfttest")
There are three sources of QTGMC documentation:
1. The html that comes with the plugin
2. The .avsi script itself. It opens best with Windows Notepad; don't use "wrap text" when viewing it. The first several dozen lines of text describe all the defaults for each of the presets.
3. The doom9 thread on QTGMC: http://forum.doom9.org/showthread.php?t=156028. Don't get into too big a rush trying to get thru that thread. It's over 50 web pages!

Note that for final output, DVD is usually interlaced, and standard-definition BluRay/AVCHD is interlaced for disc output. Interlaced usually displays fast motion and camera pans more smoothly. Deinterlacing or inverse telecine is usually done for cleanup purposes that require it, and the result is usually reinterlaced or retelecined at the end.
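The usual pattern, sketched below, is deinterlace, filter, then reinterlace for disc. The last line is the standard Avisynth reinterlace idiom:
Code:
AssumeTFF().QTGMC(preset="medium")
# ...cleanup filters run here on the progressive 59.94fps frames...
AssumeTFF()
SeparateFields().SelectEvery(4,0,3).Weave()   # back to 29.97fps interlaced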

Last edited by sanlyn; 08-16-2014 at 02:47 PM.
#19
08-17-2014, 05:17 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
Quote:
 Originally Posted by pinheadlarry when you said 'filters depend on what I want to do', I didn't realize there were different spectrums to cleaning up a video. I'm sure knowledge in this field probably leads to some very technical options, but what about just a standard cleanup? I'm not sure I have the right vocabulary to explain what I'd like to do. But similar to the tutorial above, I'd like to just make a better picture. I'm just not sure what that means as far as filters or time spent. I'm not even sure what the standout problems are in a video.
Recognizing a few things can help build a vocabulary. Keep in mind that the sources you're working with are great examples of how not to process video. As you'll see, some glitches are impossible to correct and some can be fixed but the fix could look worse. VHS is bad enough without adding bad processing or dubbing to the mix. It makes it really difficult for newcomers. I've been there. Many of us are still there!

I took 3 more camera shots from your example.mov clip. Earlier I posted notes about that clip's first shot. I'm calling that scene "A". The next three shots I'll call B, C, and D. Some sample scripts might help you define what a "better picture" means in terms of cleaning up problems, even if "better" has different meanings for different people. I'll try to focus on problems that are common and obvious (at least, they should be obvious).

To start, for the moment I'll skip scene "B" (the night-time shot) and move to scene C. This is the guy leaping into the scene in early dawn light, or maybe it's late afternoon. If you count the first frame in that shot as number 0, the image below is frame 59 from the original clip (it's still interlaced):

Minor points: there's the usual head switching noise at the bottom border. The top border is a broken black and white line. A "standard" filter and procedure would be to crop off the noisy border stuff with Crop() and replace it with clean blacks using AddBorders(). The black borders will blend in with any TV screen background, but noisy borders won't.

The sky has some magenta blotches. Not much grainy noise, but clearing the blotches will remove pixel data and cause banding effects where the sky colors gradually change. So a debanding filter (gradfun2DBMod) and a little fine film-like grain (AddGrainC) were used.

Along the right side of the guy's head and on some of the fence posts you'll see a bright edge line called a halo. You'll have to sharpen this scene, but most sharpeners will worsen halo effects. So a de-halo filter will be needed (DeHalo_Alpha) after sharpening with LimitedSharpenFaster. And if you look closer at his arms you'll see a small amount of reddish smear against the sky. Increasing saturation will increase that discoloration, so some chroma cleaners would be needed to control it (FFT3Dfilter in chroma-only mode and CNR2).

Interlace combing is always seen on a computer. On these videos it seems excessive, even with deinterlacing media players. It might look a little worse because (I guess) some of these shots appear to have been sharpened while still interlaced (a no-no in anyone's book). Rough sawtooth edges aren't interlace combing -- look at the guy's head, arms, and slacks. Those can be smoothed a bit without totally obscuring the soft detail in the figure. But my guess is that this shot has already been denoised and, again, while interlaced. Anti-alias filters are nearly as destructive as dot crawl filters. Better to take it easy with those and live with some imperfection in the edges rather than destroy everything that's left after the original processing. I actually used three anti-alias and edge smoothers in filtering this shot, all of them mild. Otherwise, the guy's face would be totally smeared.

Which brings up a major problem here: really poor lighting, made worse by the camera's autogain and autocolor features. You can't just use a primitive "Bright" control to reveal darker detail. Well, you could, but you'd soon see that all the "detail" you'll get from the darks is what you already see. Part of the trouble is that the brightest part of the guy's face sits at about RGB 30 to 45 -- an extremely narrow range -- while most of the background, trees, and other objects are in the same tonal range. A brightness filter would simply gray out everything "down there", making it look like an unreal blur.

Worse than that, the camera's autogain changes levels three times between the start frame and the end frames. If you brighten the darks in this part of the shot, the start and end of the shot will be blown away. What the camera crew needed was light in the shadows to begin with. The contrast and level changes in this shot are far beyond the capacity of video to manage. Overall lighting did change during the shot, but shadow lighting remained the same, and brightening part of the scene will brighten all of it. Below are three YUV histograms showing how black levels change between the first frame, the middle frames, and the end. The histograms also show that black levels are already too high to begin with, so this scene looks undersaturated and washed out all the way through.
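
If you did want to treat those autogain jumps separately, the usual Avisynth approach is to Trim() the shot into segments at the level changes, correct each one, and splice them back together. A hypothetical sketch -- the frame numbers and ColorYUV values below are placeholders, not measurements from this clip:

Code:
## Hypothetical: frame ranges and offsets are placeholders.
source = last
a = source.Trim(0, 45).ColorYUV(off_y=-8)     ## opening levels
b = source.Trim(46, 99).ColorYUV(off_y=-16)   ## darker middle section
c = source.Trim(100, 140).ColorYUV(off_y=-6)  ## closing levels
a ++ b ++ c                                   ## splice the corrected segments

The ++ operator is AlignedSplice, which keeps audio in sync across the joins.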

Above, the white section of each YUV histogram shows luma values: darks to the left, brights to the right. You can see that blacks at the start and end are rather high, at about RGB 40 to 50, and that midtones are depressed in all the frames. With so little midtone data, you won't get clear skin tones. At the far right of the right-hand histogram you'll see a sharp white spike that indicates bright clipping.
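
Those histograms are easy to generate yourself: Avisynth's built-in Histogram() filter in "levels" mode draws the YUV distributions alongside the frame. A preview-only sketch:

Code:
## Temporary preview lines -- remove before final encoding.
## Histogram's "levels" mode requires a planar (YV12) clip.
ConvertToYV12(interlaced=true)
Histogram(mode="levels")

Open the script in VirtualDub and step through the shot to watch the black level shift between frames.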

ColorYUV() and the SmoothAdjust plugin were used to level things out, along with a scripted plugin called ContrastMask.avs, which mimics some Photoshop masking techniques. After blacks were lowered to real-world values, this mask raised the darkest areas just enough to make something visible.

Deinterlacing was required for some of the plugins. QTGMC was used for that and to get some motion noise reduction. Then the clip was re-interlaced at the end for smooth motion in this fast scene.

The script below looks twice as long as it should because I added comment lines. Notice the first line: it uses the Import() function to load the ContrastMask.avs scripted function. An .avs scripted function sitting in your plugins folder doesn't autoload; you have to Import() it. Change the path statement to match the location of your plugins folder. The ContrastMask.avs filter is attached at the bottom of this post; copy it into your Avisynth plugins folder. ContrastMask() also requires the VariableBlur plugin, also attached.

Code:
Import("D:\AVisynth 2.5\plugins\ContrastMask.avs") ## <<-- Change path to your plugins folder !!
Trim(246,386)
ColorYUV(cont_y=8,off_y=-12,off_v=1,cont_u=-30,off_u=-1)
ConvertToYV12(interlaced=true)

# --- 2 edge cleaning filters ---
TComb()
maa()

# --- deinterlace + decomb ---
AssumeTFF().QTGMC(preset="very fast",border=true)
vInverse()

# --- clean pink blotches ---
Cnr2("xxx",4,5,255)
MergeChroma(FFT3DFilter(sigma=5,bt=3,plane=3))

# --- anti-banding ---

# --- sharpen and edge clean ---
MergeChroma(awarpsharp2(depth=30))
LimitedSharpenFaster(strength=75)
DeHalo_Alpha()

# --- more edge cleaning ---
Santiag(2,2)

# --- add mild fake "detail" and "texture" ---

# --- clarify shadows, set levels, dither for cleaner color ---
SmoothLevels(12, 1.1, 255, 16, 250,chroma=200,limiter=1,tvrange=true,dither=100,protect=6)

# --- add color and depth ---
SmoothTweak(saturation=1.4)

# --- clean the borders ---

# --- restore interlacing ---
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
Is this much work usually required? Usually, no. But these videos are in bad shape. Wait until we get to scene "B", which I bypassed a while back. An MPEG of scene "C" produced by this script is attached.

The other question is, did this work make a vast difference? Not that much. The shot was incorrectly photographed and over-filtered to begin with. The usual playback DNR didn't help much, either -- look at the foreground as the camera pans and you'll see its details disappear entirely for several frames. At least a TV screen won't blink when the overwrought chroma levels hit it, and it no longer looks like almost-black-and-white.

Scene "D" is next.

Attached Images
Attached Files

Last edited by sanlyn; 08-17-2014 at 05:58 PM.
#20
08-17-2014, 05:30 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,176 Times in 953 Posts
Scene D: Daytime jump, public building.

You don't need a histogram to see what's happening with the hot red saturation and blown-out highlights. Contrast has been pumped to make this look (I guess) like bright sunlight. In fact it was shot in overcast light -- the end frames have more detail that shows what's missing, and there are no shadows on the ground. Apparently it had rained earlier, as there appear to be remnants of two rain puddles. The green leaves are almost turning olive with all that red. You can't do anything about the color because autocolor changed it at the end of the shot, so adding blue will turn the sidewalks purple at the end.

This one's easy because there's little you can do. Levels are calmed a bit, then the clip is deinterlaced and vInverse() does some decombing to clean up the bad twitter when the camera pans across the brick wall. The shot was sharpened while interlaced, or during dubbing or playback, as can be seen from the jaggy edges and dark combing on the figures. Undoing it would just inflict more wreckage.

Notice that the script below uses many of the filters and plugins used earlier. A few specific values change to suit the circumstances:

Code:
qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1) ##<<- Adjust the path to match your system
Trim(368,470)
ColorYUV(gamma_y=-15,cont_v=-120,gain_u=5,cont_u=20)
ConvertToYV12(interlaced=true)
SmoothTweak(saturation=1.2)
SmoothLevels(24, 1.0, 255, 16, 245,chroma=200,limiter=2,LMode=3,tvrange=true,dither=100,protect=6)
AssumeTFF().QTGMC(preset="fast")
vInverse()
Cnr2("xxx",4,5,255)
Santiag(2,2)
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
If you don't find the motion and edge noise problematic, you could end the above script just before the line that runs QTGMC, by deleting those lines or commenting them out.

Fixing this shot poses a problem. When using Trim() to detach this shot from the main video, I left an extra 30 frames at the end to handle the dissolve into the next scene. It's a 1-second 30-frame dissolve that begins at frame 73 and ends at frame 102 of this scene. Do these filters affect the next scene? They sure do, which means that the guys who made this video forced you into one of three choices: (1) Accept this shot as-is and allow it to dissolve into the next shot, which already has depressed levels and a lot of low-light noise, or (2) Correct this scene but don't correct the next one, which will be too dark and noisy, or (3) correct both shots and learn to rework your own dissolve between two corrected scenes, which can be done in Avisynth. That would be another long demo, which I didn't get into at this point.
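
For choice (3), the relevant Avisynth built-in is Dissolve(). A hypothetical sketch, assuming you've already trimmed and corrected the two scenes into clips named d and e (the clip names are mine, and the overlap matches the 30-frame dissolve described above):

Code:
## Hypothetical: d = corrected scene "D" ending where the old dissolve began,
## e = corrected next scene starting where the old dissolve ended.
## Dissolve overlaps the last 30 frames of d with the first 30 of e,
## rebuilding the 1-second cross-fade between the two corrected clips.
Dissolve(d, e, 30)

The rebuilt dissolve replaces the original's baked-in fade, so both scenes carry their own corrections through the transition.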

The RGB histogram below is taken from frame 55 of the "D" scene. The sharp peaks at the right show serious bright clipping, caused by raising midtones so far to the bright end that brighter detail was destroyed. You can darken the image, but it won't restore anything.

Attached Images
Attached Files