What are first steps to restoring captured AVI? (with samples) - digitalFAQ Forum

#1
07-05-2018, 11:03 AM
 JohnGalt Free Member Join Date: May 2018 Posts: 7 Thanked 0 Times in 0 Posts
Hi everyone. I'm just starting down this path and need some help learning. I've captured some camcorder-shot VHS tapes and now am trying to find out what to do with them next. I've read a lot of the posts and guides here, but have only gleaned a slight sense of where to start.

If you could take a look at these two samples, second sample coming in next post, and give me your feedback on what they need, I would appreciate it. Working through this a bit at a time will help me learn.

Oh, and also please let me know if I could have done anything better in the capture process. I've still got about 35 tapes to capture and process.

Here are the details. I used:
JVC GR-SXM520U SVHS camcorder
JVC HR-S7900U SVHS player
TBC-1000 (thanks LS)
ATI 600 USB (thanks again LS)

into:
Windows 10
Virtualdub 32 bit
Huffyuv 32 bit

I've installed Avisynth in anticipation, but haven't touched it yet.

And, if it is useful for any of this, I have TMPGenc's Video Mastering Works 6, purchased a while back for a different project.

Thanks folks -

Attached Files
 test sample 2a1.avi (93.70 MB, 74 downloads) test sample 1a2.avi (97.51 MB, 43 downloads)
#2
07-10-2018, 05:15 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,264 Times in 971 Posts
Thank you for the samples.
Apologies for delays in replying due to vacations. I took a look yesterday and will try to post detailed notes tonight or tomorrow a.m.

-- merged --

Sorry for the delay. A lot of unexpected activity going on today.

And thanks again for the samples.

Quote:
 Originally Posted by JohnGalt I used: JVC GR-SXM520U SVHS camcorder JVC HR-S7900U SVHS player TBC-1000 (thanks LS) ATI 600 USB (thanks again LS) into: Windows 10 Virtualdub 32 bit Huffyuv 32 bit
A firm OK on the components and software, but sorry to see that you're stuck with Win10. Which player did you use for your posted samples, the camcorder or the VCR?

Quote:
 Originally Posted by JohnGalt give me your feedback on what they need, I would appreciate it. Working through this a bit at a time will help me learn.
Most members here, including those actually working as pros, learned this stuff a step at a time. So don't feel alone.

Quote:
 Originally Posted by JohnGalt Oh and also please let me know if I could have done anything better in the capture process. I've still got about 35 tapes to capture to capture and process.
35 tapes isn't so bad. I started with 185 tapes and over 350 hours of recorded programs. That doesn't count some of my sister's nightmare home videos and a couple of out of print retails that I acquired since.

Overall your captures look better than most. And you prepared your samples correctly (most new readers don't), so it looks as if you've learned more than you think. There are problems -- after all, this is analog tape, so no surprise. Some problems are rather common and easily corrected.

One thing to do during capture is to set your signal level within an acceptable range for digital video. This is measured in the Y (luminance) channel using VirtualDub's YUV capture histogram. After capture, one of the first things most people do is check the capture's signal levels to see if any corrections are needed, since signal levels change during some scenes, especially with home movies. Why use a YUV colorspace in the first place? Because YUV is the way analog and digital data are stored and broadcast. RGB is used for display.

The YUV system stores 3 data channels, Y for luminance, U for blue, V for red. So where does green come in? Green is derived by subtracting U and V from Y. YUV comes in many flavors, based mostly on the contrast range and number of bits used to store Y, U, and V. The "flavor" that is stored on VHS tape is analog YPbPr -- the nearest digital standard equivalent available for capture as we know it is YUY2. For that reason, YUY2 is usually recommended for the type of capture we're talking about. For every 4 chunks of Y or luminance data, YUY2 stores 2 chunks of U and 2 chunks of V. Numerically this type of YUV storage is also described as 4:2:2.
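The 4:2:2 interleaving can be sketched in a few lines. This is an illustration only (Python, with a hypothetical helper name), not any real capture API -- it just shows how 4 luma samples travel with 2 U and 2 V samples:

```python
# Sketch of YUY2 (4:2:2) packing -- illustration only, not a real capture API.
# Each pair of pixels shares one U and one V sample, interleaved as
# Y0 U0 Y1 V0 | Y2 U1 Y3 V1.
def pack_yuy2(ys, us, vs):
    """ys: 4 luma samples; us, vs: 2 chroma samples each (one per pixel pair)."""
    out = []
    for i in range(0, 4, 2):
        out += [ys[i], us[i // 2], ys[i + 1], vs[i // 2]]
    return out

packed = pack_yuy2([100, 110, 120, 130], [90, 92], [140, 142])
print(packed)  # [100, 90, 110, 140, 120, 92, 130, 142]
# 8 bytes for 4 pixels = 16 bits/pixel, versus 24 bits/pixel for full RGB.
```

The point of the sketch: chroma is stored at half the horizontal resolution of luma, which is why 4:2:2 is a good match for the limited chroma bandwidth of VHS.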

YUV data can be read and translated using the Y channel alone, without affecting the U and V channels. One reason the video industry uses YUV is that a monochrome display can ignore the chroma channels but still display a complete black and white image.
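You can see why a monochrome display gets a complete image from Y alone with the common full-range BT.601-style conversion. This is a sketch, and an assumption: I'm using the full-range (JFIF-style) coefficients here; studio-range video also rescales 16-235 first.

```python
def yuv_to_rgb(y, u, v):
    # Full-range BT.601 / JFIF-style conversion (an assumption for this
    # sketch; studio-range video also expands 16-235 to 0-255 first).
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return (r, g, b)

# Neutral chroma (U = V = 128) gives a pure gray pixel -- so ignoring the
# chroma channels and substituting 128 yields a correct black-and-white image:
print(yuv_to_rgb(180, 128, 128))  # (180.0, 180.0, 180.0)
```

Note also that green never gets its own channel: it falls out of the Y, U, and V terms in the G line above, which is the "green is derived from Y minus U and V" idea stated less loosely.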

In the case of your samples, they contain an illegal luminance range that exceeds y=235. The accepted range of values for most popular encoded formats is y = 16 to 235. When played and displayed in RGB, YUV 16-235 values are expanded to RGB 0-255. But if the YUV bright Y values are already higher than y=235, and if the display device can't work beyond RGB 255, what happens to those values when expanded? They're clipped -- anything beyond RGB 255 is forced down to the same 255 value or is simply ignored. Either way, clipped bright data is destroyed. So bright objects such as light bulbs, facial specular highlights, and white clouds become "hot spot" whitish blobs with no detail. Often they change color and become bluish or cyan blobs. On an RGB histogram, luminance or colors that exceed RGB 255 will literally "climb the walls" on the right-hand side of the histogram, indicating that the video contains bright values that can't go beyond RGB 255. Hence, they are clipped.

The same fate awaits renegade U and V chroma values as well. In many cases, working in YUV, Avisynth has filters that can recover some of the lost brights by very gradually compressing overly bright values so that they lie within y=235. In this way, some of the bright detail that was originally threatened in YUV can be recovered. But values that were pushed all the way to y=255 were simply clipped in the camera or during playback and capture. And, yes, YUV can carry values darker than y=16 and brighter than y=235, outside the legal 16-235 video range; expanded RGB is limited to 0-255.
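The "gradual compression" idea can be sketched as a simple soft-limit curve. This is not any particular Avisynth filter -- the knee and ceiling values below are arbitrary, chosen only to illustrate how out-of-range brights can be squeezed under y=235 instead of cut off:

```python
def soft_limit_brights(y, knee=200, ceiling=235, src_max=255):
    # Hypothetical soft-limit curve (illustration, not a real filter):
    # values below the knee pass through untouched; values between the
    # knee and src_max are linearly compressed so the brightest output
    # is the legal 235 ceiling rather than a hard clip.
    if y <= knee:
        return float(y)
    span_in = src_max - knee     # 55
    span_out = ceiling - knee    # 35
    return knee + (y - knee) * span_out / span_in

print(soft_limit_brights(180))  # 180.0 -- untouched
print(soft_limit_brights(255))  # 235.0 -- compressed, not clipped
```

Because distinct inputs above the knee still map to distinct outputs, some bright detail survives -- unlike the hard clamp, where everything above the limit merges into one value.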

The same thing happens at the dark end. Objects that are dark at y=16 are expanded in RGB to RGB=0. Zero means no color or detail transmission at all, which is what we call pure black. When you see black borders in a video, they're almost always darker than y=16. They are either really dark grays darker than y=16, or they are really zero-black at y=0...or even darker ("super-blacks"). That's OK for black borders because they're supposed to be black anyway. But for useful details, it's a killer. In your samples you won't see any luminance value darker than y=16 because the ATI 600 automatically clips blacks at y=16. So any dark details in the original signal that were darker than y=16 can't be recovered. Since the ATI clips blacks at y=16 before the signal enters the capture software, the only way to recover lost blacks is with an external proc amp between the incoming signal and the capture device. But that's an expensive proposition if you want to avoid cheap "video enhancers" that make everything else look pathetic. On the other hand, with most players the loss is relatively tolerable because today's LCDs aren't nearly as good with dark shadow detail as CRTs were. But an experienced eye can detect moderate dark clipping when it occurs.

You want to fix signal levels in YUV before video gets converted and expanded into RGB for any further processing. For display and for viewing in editors, video goes to RGB for display only. Display itself doesn't change the original source. But once you apply RGB filtering or rendering, clipped values are lost. Once RGB clips details, they can't be recovered when you go back to YUV for other processing or encoding.

For most users and pros who go into restoration, the first step is to check signal levels in the original YUV colorspace before any other processing. Avisynth can do this. You can check YUV levels by running a short Avisynth script in VirtualDub. The easiest and most intuitive way is with the old Avisynth standby YUV "Levels" histogram and the ColorYUV analyzer. I use the levels-checking script below so often that I saved a permanent copy of it and change it as required. This script will also serve as a quick lesson in writing avs scripts:

Code:
# change path statement below to match your system
AviSource("E:\forum\faq\JohnGalt\test sample 1a2.avi")
Crop(4,0,-8,-8)
#Histogram("Levels")
ColorYUV(Analyze=true)
This script is typed as plain text in Notepad. Save it with whatever file name you want, but the file extension shouldn't be .txt, it should be .avs. You can save it as "Levels_check.avs" in Notepad by setting the Notepad "File type" to "all types" and changing the ending from .txt to .avs. To run and view this script, open it in VirtualDub using "File..." -> "Open video file...", locate the .avs script and click "OK" or "Open", whichever applies. Give Avisynth a few seconds to load and send the decoded video to VirtualDub. You can run the script using VirtualDub's "play" icon or you can scroll it one frame at a time. You won't be saving this test script as a new AVI file, so you won't need the "Save" command at this point.

In the text above, notice that the first line of code is preceded by a "#" character. Anything that's preceded by "#" becomes a comment. Comments are ignored by Avisynth. That line of text won't be executed.

The AviSource() builtin function opens AVI files that use different codecs. These include huffyuv, Lagarith, UTVideo, DV codecs, and others such as DivX/XVid. If the codec is installed in your system, AviSource can use it to open and decode videos. Codecs h.264, Apple ProRes and many others require a different utility for decoding. The path portion of the code that locates the video is a string component, so it's placed inside double quotes.

Crop(4,0,-8,-8)
Removes black borders and bottom-border head-switching noise to prevent them from affecting the histogram. All the histograms should see is the core image content. The sequence of Crop numbers in parentheses removes 4 pixels from the left border, zero pixels from the top, 8 pixels from the right border, and 8 pixels from the bottom, in that order.
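As a quick arithmetic check of what Crop(4,0,-8,-8) leaves behind (assuming a standard 720x480 NTSC capture frame; the helper name is mine, not an Avisynth function):

```python
def cropped_size(width, height, left, top, right, bottom):
    # Mimics AviSynth Crop(left, top, right, bottom) arithmetic when the
    # last two arguments are negative offsets from the right/bottom edges.
    return (width - left + right, height - top + bottom)

# Crop(4,0,-8,-8) on a 720x480 frame:
print(cropped_size(720, 480, 4, 0, -8, -8))  # (708, 472)
```

So the histograms analyze a 708x472 core image with the borders and head-switching noise excluded.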

Note that the next line of code, #Histogram("Levels"), would ordinarily show the YUV "Levels" histogram, but in this case it's commented-out with "#" and won't execute. What we want first is the color analysis by the ColorYUV() function, which must be used by itself.

ColorYUV(Analyze=true)
This overlays the frame with a table of rows and columns containing various data values for comparison and analysis.

The image below is a VirtualDub direct frame capture of the original interlaced frame 202 in the sample. The ColorYUV analysis numbers are overlaid onto the image (You can copy a frame directly to the Windows clipboard using "Video..." -> "Copy source frame to clipboard". Then paste from the clipboard into any image program. The default format is an uncompressed BMP bitmap). Note that the frame's borders are removed, so the numbers apply only to the image content.

Original frame 202 Analyze

In my image processing app I pasted an orange arrow over the image. The orange arrow near the upper left corner points to the "Maximum" row, which shows maximum Y, U, and V values. Under the "Y" column is the value "255". This tells us that bright clipping will occur in RGB. Also, in the "Minimum" row just above the Maximum numbers, there is no minimum value lower than y=16.

The YUV and RGB histograms will give us more detail. To get that histogram I have to change the script. In the script below, the "ColorYUV" line is disabled with a comment mark, and the "Histogram" line is enabled. The changed file is then saved under the same file name as before. You can execute this script in VirtualDub by using "File..." -> "Reopen video file". Avisynth will decode and send the new version to VirtualDub starting at the same frame where the video was stopped.

Code:
# change path statement below to match your system
AviSource("E:\forum\faq\JohnGalt\test sample 1a2.avi")
Crop(4,0,-8,-8)
Histogram("Levels")
# ColorYUV(Analyze=true)
The VirtualDub frame below is the result, with the Avisynth YUV histogram appended to the right of the image.

Original frame 202 with YUV Histogram

I opened that frame capture in an image app (an ancient Photoshop for Win95) and saved only the YUV histogram:

Original frame 202 YUV panel

In the "Levels" histogram, the top band (white) shows Y luminance values, the middle band shows "U" (blue-yellow), and the bottom band shows "V" (red-green). The left side of each band holds dark values, the middle line indicates exact midrange values, and the far right side displays bright values. A shaded band on each side of the graph indicates values below y=16 on the left and values beyond y=235 on the right. The shaded portions are called "unsafe zones". The area between the shaded borders is called the "safe zone", which is where you want most Y and UV values to be placed.

In the YUV histogram are two pink arrows. The diagonal pink arrow in the upper right corner points to Y values that exceed y=235. There is also a sharp upward white "spike" at the right end that indicates bright clipping in-camera. Considering the content of the original image, this bright clipping would likely happen in the brightly lighted background, where you can see that there is far less detail. There's not much you can do with values clipped in the camera, but there is also output from the player that extends farther into the unsafe zone. It is possible to retrieve a few tiny details in that area using various filters, although it's likely not much detail that you could actually use. Still, some retrieval is possible with the right filter, which I'll demonstrate later...

On the opposite left-hand side of the white band you'll see a sudden sharp cutoff spike at the y=16 shaded area. The spike climbs up the wall of the border, with no data at all below it. This indicates black clipping at y=16. Nothing below that point can be retrieved.

I then revised the script to eliminate all histograms and to show only the core image, so that I could get an RGB-only histogram in VirtualDub but without borders for RGB display. So I commented-out the lines for "ColorYUV" and for "Histogram". Again, I saved the changes in the same file and opened it with "Reopen video file".

Code:
# change path statement below to match your system
AviSource("E:\forum\faq\JohnGalt\test sample 1a2.avi")
Crop(4,0,-8,-8)
# Histogram("Levels")
# ColorYUV(Analyze=true)
The VirtualDub RGB histogram is generated by a 32-bit VirtualDub filter called ColorTools. The ColorTools.vdf filter update for Win7 and Win10 can be downloaded at http://www.digitalfaq.com/forum/atta...1&d=1487006540. I'm using the older 2006 version for XP, but the output graphics are exactly the same. Here's the RGB histogram for the original frame 202 image:

Original frame 202 VirtualDub RGB panel

The RGB histogram shows how values are expanded from y=16-235 to RGB=0-255. You can also see that all data is maxed out at 255 along the right border. In some areas of the histogram's borders, some colors are climbing up the wall on the right (indicating that red is a tad oversaturated, but not much), and on the left histogram border dark green is a little clipped. This isn't too bad, but we'd like to fix clipping in the source, not in RGB.


Here's the code I used to fix YUV levels. The filters are all Avisynth built-ins, so there's nothing extra to install:

Code:
# change path statement below to match your system
AviSource("E:\forum\faq\JohnGalt\test sample 1a2.avi")
Crop(4,0,-8,-8)
ColorYUV(gain_y=10)
Tweak(cont=1.05,dither=true,coring=false)
Levels(16, 1.1, 255, 16, 235, dither=true, coring=false)

#ColorYUV(Analyze=true)
#Histogram("Levels")
Notice that I left the analyzer and Histogram lines in the script, but enabled and disabled them as needed to check my own numbers. After I had the settings I wanted, I disabled both lines for final output. I then loaded the VirtualDub RGB ColorTools filter to measure Avisynth's output.

Crop(4,0,-8,-8) removes unwanted border areas. The Crop() function has rules for its use in certain colorspaces and with interlaced/non-interlaced frames, so pay attention to the table of rules in the middle of the page at http://avisynth.nl/index.php/Crop. This documentation is also available in the local help installed by Avisynth (see the next paragraph, below).

ColorYUV(gain_y=10) brightens the somewhat dim image by applying a multiplier to all pixel values, visually "shoving" all values toward the right side or bright end. In other words, it brightens from the bottom-up so to speak, and makes midrange tones like skin tones stand out a little more. It also shoves the clipped brights farther out past 235, but I'll compensate for that in the lines below.
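Per my reading of the ColorYUV documentation, gain acts as a multiplier of (gain + 256)/256, so gain_y=10 scales luma by roughly 1.039. A quick sketch of that arithmetic (the helper name is mine):

```python
def coloryuv_gain(y, gain_y=10):
    # ColorYUV gain as documented: a (gain + 256)/256 multiplier,
    # so gain_y=10 scales luma by 266/256, about 1.039.
    return y * (gain_y + 256) / 256.0

print(coloryuv_gain(128))  # 133.0 -- midtones lifted a little
print(coloryuv_gain(230))  # ~239  -- already-bright values pushed past 235
```

This shows both effects described above: midtones get a visible lift, while already-bright values are shoved past y=235, which is why the Levels() compensation comes afterward.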

You can do many things with the ColorYUV filter, all of which are documented online with graphs and many pictures at http://avisynth.nl/index.php/ColorYUV. A more simplified version of ColorYUV functions and all other Avisynth functions is in the Help that comes with Avisynth when it's installed. To display the in-system help, go to your All Programs listing, find the Avisynth program group, open that program group and click "Avisynth Documentation". The documentation is physically located in the Avisynth program folder under the "Docs" subfolder and then in the "English" subfolder under that. The file that actually executes the help display is "\Docs\English\index.htm", if you want to make a desktop shortcut for it.

Tweak(cont=1.05,dither=true,coring=false)
This line executes a tweak() function set up to mildly increase contrast. There are many different contrast filters; this one extends brights only from the midrange farther out toward the bright end. Again, that calls for some compensation at the bright end in the statement below it. Tweak functions are documented online at http://avisynth.nl/index.php/Tweak and in the program Help installed by Avisynth.

"dither" is turned on to use dithered or gradually varied values between color changes and to prevent hard edges and gaps in the spectrum when pixel values are spread out or compressed. Turning off Coring prevents sharp cutoffs at the dark and bright ends.

Levels(16, 1.1, 255, 16, 235, dither=true, coring=false)
The Levels() function is another one that can pull various tricks. See http://avisynth.nl/index.php/Levels or program Help. The terms that appear in parentheses stand for the following signal components, in this order: incoming low values (16), incoming gamma (midtones)(1.1), incoming high values (255), desired low value output (16), desired high value output (235). Dither is turned on and coring is turned off.

The setup in this line increases gamma by a small amount to 1.1, making midtones and some upper darks a little brighter and more clear. It also brings up noise in darker areas, but that will be handled later as well. The desired low-value output stays the same at 16-input and 16-output, since there's nothing you can retrieve that's lower than y=16. But the high end is calmed down from all the previous brightening. Setting 255-input down to 235-output gradually compresses the brightest brights into a range that lies within y=235, so anything brighter than 235 is gently remapped to lower, dithered and interpolated values instead of being cut off.
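The Levels() transfer curve itself is easy to sketch. This follows the formula in the Avisynth Levels documentation (normalize to the input range, apply gamma, rescale to the output range), ignoring the dithering step:

```python
def levels(y, in_lo=16, gamma=1.1, in_hi=255, out_lo=16, out_hi=235):
    # AviSynth Levels() transfer curve, without dithering:
    # normalize to the input range, apply gamma, rescale to output range.
    t = (y - in_lo) / float(in_hi - in_lo)
    t = max(0.0, min(1.0, t)) ** (1.0 / gamma)
    return out_lo + t * (out_hi - out_lo)

print(levels(16))   # 16.0  -- black point unchanged
print(levels(255))  # 235.0 -- peak brights land exactly at the legal ceiling
```

Two things to notice: the endpoints map 16 -> 16 and 255 -> 235 exactly as the script intends, and with gamma 1.1 a midtone like y=128 comes out brighter than it would under a plain linear 255-to-235 rescale.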

For more about how to read RGB histograms and histograms in general, there is an excellent free website tutorial. The RGB histograms used in the tutorials are for still cameras, but they work identically for video -- after all, video is just a stream of still images.
Understanding histograms Part 1 and Part 2
http://www.cambridgeincolour.com/tut...istograms1.htm
http://www.cambridgeincolour.com/tut...istograms2.htm

There are other problems in the samples that I'll describe in the next post. I wrote new, complete scripts to filter and clean up your two samples with Avisynth and VirtualDub. I encoded the results into mp4 containers. They are attached as "test sample 1a2.mp4" and "test sample 2a1.mp4".

Meanwhile there are old and basic illustrated tutorials at doom9.org for working in Avisynth and VirtualDub. They're a bit dated, but the only changes are that some older filters have been replaced with new ones. The displays and procedures haven't changed.
7.1 Postprocessing video using VirtualDub
7.2 Postprocessing video using AviSynth

You can encode the lossless captures and work files for DVD, standard definition BluRay, web posting, or anything you want. My scripts and filter setups for the two attached mp4 samples are a mess, so I'll clean them up first. In the next post I'll detail the scripts and processing.

Attached Images
 frame 202 Analyze.jpg (188.4 KB, 407 downloads) frame 202 YUV Histogram.jpg (57.3 KB, 577 downloads) frame 202 YUV panel.png (11.0 KB, 403 downloads) frame 202 RGB panel.png (17.5 KB, 402 downloads)
Attached Files
 test sample 1a2.mp4 (4.91 MB, 52 downloads) test sample 2a1.mp4 (5.27 MB, 24 downloads)
 The following users thank sanlyn for this useful post: JohnGalt (07-10-2018), lordsmurf (07-12-2018)
#3
07-11-2018, 04:04 PM
 JohnGalt Free Member Join Date: May 2018 Posts: 7 Thanked 0 Times in 0 Posts
Wow. Thanks for explaining all of this, now to try and understand it.

And, for what it is worth, I am capturing through the VCR.

Granted it will take me a while to digest this smorgasbord of information, but my first take is that if I adjust the Brightness and Contrast levels in my capture, I can eliminate the nasty Luminance clipping. And I am guessing I could do the same with Saturation (and tint??) to address the smaller issue with the green and the red.

Assuming I am keeping up here, I would think my first step in capture with Virtualdub would be to fire up the Histogram and do a run through of the video first to adjust the levels so nothing is clipped. Close, but not clipped.
#4
07-11-2018, 06:38 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,264 Times in 971 Posts
Quote:
 Originally Posted by JohnGalt Granted it will take me a while to digest this smorgasbord of information, but my first take is that if I adjust the Brightness and Contrast levels in my capture, I can eliminate the nasty Luminance clipping. And I am guessing I could do the same with Saturation (and tint??) to address the smaller issue with the green and the red.
In a later post tonight I'll point out the way I set up levels for the mp4 samples I posted earlier, along with all the other wonderful maddening detail stuff. Meanwhile the setup for capture settings, temporary cropping, etc., are discussed in the updated VirtualDub settings guide, post #3 and #4 (Capturing with VirtualDub [Settings Guide]).

Adjusting saturation and other color elements of analog tape during capture is an exercise in clinical masochism and never works. Analog, especially home movies, changes saturation and colors so often, minute to minute, it can drive you bananas. Best to leave that for post-processing. Anyway, Avisynth and VirtualDub color filters are much cleaner and more sophisticated.

Quote:
 Originally Posted by JohnGalt Assuming I am keeping up here, I would think my first step in capture with Virtualdub would be to fire up the Histogram and do a run through of the video first to adjust the levels so nothing is clipped. Close, but not clipped.
Yes, to set levels you should review a short section of incoming signal to get an idea of how the levels are coming in. It's a little tricky until you remember that "Brightness" controls black levels and "Contrast" controls the brights. The controls interact with each other to some degree, but once you actually do it, it becomes second nature. What you want is a setup that can handle a worst-case scenario. You're allowed a little overflow now and then, but that can be corrected later.

I'm cleaning up my really messy scripts and will post them tonight. Since you're new to Avisynth and to some VirtualDub add-on filters I'm preparing links for what you'll need to get you started. Stay tuned later.

And, yes, it does take patience, especially at first. It gets quicker and easier as you go along. At least your samples so far aren't the nightmares my sister sends me!
#5
07-11-2018, 08:02 PM
 JohnGalt Free Member Join Date: May 2018 Posts: 7 Thanked 0 Times in 0 Posts
The good news is that I followed the Settings guide in my initial setup, but will double-check them. Settings have magically changed on me before.

No rush on the script work if not convenient for you. You're helping me out.

For now, as reinforcement to lessons learned, I will recapture the few tapes I've already done with adjusted Luminance levels in mind.
#6
07-12-2018, 07:11 AM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,264 Times in 971 Posts
Another delay, sorry. Bad weather brought a power outage that kept me off my PC and the internet. It's just as well, since getting familiar with restoration is going to take some time anyway.

Quote:
 Originally Posted by JohnGalt No rush on the script work if not convenient for you. You're helping me out. For now, as reinforcement to lessons learned, I will recapture the few tapes I've already done with adjusted Luminance levels in mind.
My typing is atrocious, so it will take some time to ready a post anyway.

Recapturing is always a headache. I hate to admit it, but I've done it many times.

Your samples are so well done I'm surprised someone else didn't jump at the chance to work on captures that aren't total disasters. The worst thing about this post is that much of it is likely material you've never seen before. Almost everyone here has had the same experience.

Checking and setting levels and basic contrast adjustments are usually the first steps in restoration. What happens next depends on what you find while reviewing the capture. Denoising would almost always come next.

A more serious shortcoming than levels in your samples is stair-stepping/aliasing and distortion on diagonals during motion, due to either the camera or the player, exacerbated by jittery camera motion. Likely it was the camera, as most consumer cameras had sloppy interlace and other motion rendering difficulties due to the types of shutters used. To some degree a CRT-TV or projector with auto-flicker lighting or scanning, or other devices that simulate movie projection flicker, can make bad interlacing almost indiscernible. But LCD's and progressive digital displays aren't so talented.

The images below are a 2x blowup of samples of various distortion effects from two frames in your "test sample 1a2.avi". Lines that should be straight or smooth are rippled and/or misshapen.

edge noise.jpg

The effects change with each frame and create a lot of annoying visual noise during play. They would also eat up lots of bitrate when finally encoded to your output choices. In the larger picture across the top of the sample images, the underlining under the word "California" is badly rippled. Red letters in the word "Speed" have strong stair-stepping and square pixels with a fluttery, crawling effect (often called "line twitter"). In the lower left corner, the shape of the blue chair's edge is notched and rippled. In the middle lower image, the black fixture on the yellow object has white-speckled DCT ringing and mosquito noise. In the lower left image the blue edge of the chair has soft, slurred stair-stepping wavelets.

Sometimes a good deinterlacer can fix this debris. The best deinterlacer around is Avisynth's QTGMC plugin. Yadif is used sometimes, but it accentuates aliasing. Here, deinterlacing alone didn't help at all. I had to deinterlace and go to something more dramatic -- the FixRipsP2 median filter. FixRipsP2 averages the motion in 5 frames and tries to guess at corrections. It's a dangerous filter (and very slow) but in this case it seems to work OK. It also removes some grainy tape noise.

Because FixRipsP2 is so slow, I had to use filtering in two steps on "test sample 1a2.avi". Script #1 fixes levels and deinterlaces. Script #2 uses the output of Script #1 and runs FixRipsP2, a few other plugins, and some VirtualDub filters.

Unfortunately, except for color adjustment filters and the camcorder color denoise filter, VirtualDub has no equivalents for the Avisynth filters I used. In any case, you likely don't have those plugins so I've included links for hopefully everything I used. If I omitted something or if you have questions, just holler.

Here is script #1 for the "test sample 1a2.avi" cleanup. In all Avisynth scripts, code is executed in the order that the statements appear.

Code:
# ########################################
#
#      input = "test sample 1a2.avi"
#             STEP 1 SCRIPT
#
# ########################################

### --- Adjust the path statement below to match your system. ---###
AviSource("E:\forum\faq\JohnGalt\test sample 1a2.avi")
AssumeTFF()
ColorYUV(gain_y=10)
Tweak(cont=1.05,dither=true,coring=false)
Levels(16,1.10,255,16,235,dither=true,coring=false)

ConvertToYV12(interlaced=true)
QTGMC(preset="very fast",border=true,GrainRestore=0.3,FPSDivisor=2)

### ------------------------------------------------------------
### --- Save output in VirtualDub as "STEP 1.avi" using lossless
### --- Lagarith YV12, and use as input for STEP 2 script.
Copy this script and save it as "Step 1.avs". This will make it convenient to save the output in VirtualDub using the name "STEP 1.avi".

The output of this script should be saved as Lossless Lagarith YV12. Huffyuv can't compress YV12. You can get the free Lagarith lossless compressor v.1.3.27 installer here: https://lags.leetcode.net/LagarithSetup_1327.exe. The installer sets up 32-bit and 64-bit versions in the right places. If you want a look at the Lagarith home page, it's at https://lags.leetcode.net/codec.html.

Lagarith is popular for intermediate working files because it makes slightly smaller lossless files than huffyuv. It's also said to be based on huff's core code base. You can save the output of the Avisynth script in Virtualdub by doing the following:
1. Click "Video..." then click "color depth..." and select YV12 in the right-hand menu panel.
2. Then click "Video..." then "compression..." and select Lagarith, then set its configuration menu for YV12.
3. Click "Video..." then click "fast recompress" in the drop-down menu.
4. Then click "File..." -> "Save AVI...". Give the file a name (or take the default "step 1.avi" title), pick a location for the file, and click OK.

Using "fast recompress" won't run Virtualdub filters and will avoid a YV12->RGB->YV12 double conversion.

Now for the details:

### --- Adjust the path statement below to match your system. ---###
AviSource("E:\forum\faq\JohnGalt\test sample 1a2.avi")

The AviSource function opens and decodes the target video. Be sure to correct the path statement for the location of the file in your system. The entire path and file name go inside double quotes.

AssumeTFF()
This built-in statement informs Avisynth that the field structure in the target video is Top Field First (TFF). This overrides the Avisynth default that assumes field order is BFF (Bottom Field First). Because we'll be deinterlacing, correct field order is important.

ColorYUV(gain_y=10)
This sets luminance gain ahead 10 points, meaning that it "shoves" pixel values toward the bright end, brightening the video and extending upper midtones and highlights to brighter values. This can exceed y=235, but it's compensated later with the Levels() function.

Tweak(cont=1.05,dither=true,coring=false)
This is a mild contrast increase, again extending the brights to keep the video from looking as dim as the original. Again, the Levels() function will be used later to keep things within y=16-235. To prevent gaps in the spectrum due to stretched pixel values, dither is turned on to interpolate fill-in pixel values. Coring is turned off to prevent sharp cutoffs at the dark and bright ends.

Levels(16,1.10,255,16,235,dither=true,coring=false)
As explained in the earlier post, Levels() is used here to meet the y=16-235 requirement. The positions of the numbers correspond to levels values in the following order: expected incoming dark value (16), the gamma value (set here to 1.1 for a mild increase in midtones), expected incoming bright value (255), desired dark output (set to 16, same as the incoming dark value), and finally desired bright output (set down to 235 to very gradually contract overly bright values). Dither is turned on to fill in compressed gaps in the new response curve, and coring is turned off.
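Ignoring Avisynth's dithering and integer rounding, that Levels() mapping can be sketched in a few lines of Python (the function name and structure here just illustrate the standard levels formula, not Avisynth's actual source):

```python
# Sketch of the Levels(16, 1.10, 255, 16, 235) transfer curve.
def levels(y, in_lo=16, gamma=1.10, in_hi=255, out_lo=16, out_hi=235):
    t = (y - in_lo) / (in_hi - in_lo)        # normalize input range to 0..1
    t = max(0.0, min(1.0, t))                # clamp out-of-range input
    t = t ** (1.0 / gamma)                   # gamma > 1 lifts the midtones
    return out_lo + t * (out_hi - out_lo)    # rescale into 16..235

print(round(levels(16)))    # 16  - darks stay put
print(round(levels(255)))   # 235 - brights gently compressed, not clipped
print(round(levels(128)))   # midtone, lifted slightly by gamma=1.1
```

Every input between 16 and 255 lands somewhere inside 16-235, which is why this gradual squeeze avoids the hard clipping a simple limiter would cause.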

ConvertToYV12(interlaced=true)
The original YUY2 capture colorspace must now be converted to YV12 for the processing that follows. This conversion is handled properly and with great precision by Avisynth, as long as you tell it about interlacing. In this case the video is interlaced, so "interlaced=true" must be stated, as required for interlaced and telecined video. Most NLEs, including the pricey "pro" jobs, don't handle YV12 conversions as cleanly as Avisynth.
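For a rough sense of what the 4:2:2 to 4:2:0 conversion changes, here's the raw storage arithmetic for one 720x480 frame (byte counts only; this ignores how the chroma is actually resampled):

```python
# YUY2 is 4:2:2: full-res luma, chroma halved horizontally = 2 bytes/px avg.
# YV12 is 4:2:0: chroma halved both ways = 1.5 bytes/px avg.
w, h = 720, 480
yuy2_bytes = w * h * 2
yv12_bytes = w * h * 3 // 2
print(yuy2_bytes, yv12_bytes)   # 691200 518400
```

The interlaced flag matters because in 4:2:0 each chroma sample is shared between lines that belong to different fields; resampling "progressively" on interlaced material smears chroma between fields.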

QTGMC(preset="very fast",border=true,GrainRestore=0.3,FPSDivisor=2)
QTGMC is the prime, uber deinterlacer, often better than your TV or set-top player. It's an industrial-strength product, but surprisingly it's a text file rather than a compiled .dll: it comes as either an .avs or .avsi script, both plain text, with a few dozen adjustable parameters and 963 lines of code. We could easily spend all week on the details of this filter, but fortunately it has presets that automate many of those dozens of parameters. "very fast" is one of them: it runs fairly fast and skips a lot of the denoising steps found in the slower presets. For this particular input file, denoising will come in STEP 2.

"border=true" tells QTGMC to use a special resizing algorithm when creating full-sized deinterlaced frames, so that frame borders don't split or flutter. QTGMC deinterlaces by separating the two interlaced fields in each frame. Then, using special algorithms and motion-compensated interpolation, it creates two full frames from the half-height fields. Thus the frame count doubles and the frame rate doubles to 59.94 fps. "GrainRestore=0.3" restores some of the original grain to the output so that the results don't look over-filtered or plastic. Finally, "FPSDivisor=2" restores the original 29.97 fps frame rate by discarding alternate frames. To some extent this decreases temporal resolution, but QTGMC has made partial allowances for it by interpolating motion values between original fields when it created the new frames.
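The field/frame bookkeeping in that paragraph, as a quick sketch:

```python
# Each interlaced frame carries two fields; QTGMC builds one full frame
# per field, and FPSDivisor=2 then discards every other new frame.
src_fps = 30000 / 1001        # 29.97 fps NTSC interlaced
field_rate = src_fps * 2      # 59.94 fields per second
deint_fps = field_rate        # QTGMC: one progressive frame per field
final_fps = deint_fps / 2     # FPSDivisor=2 -> back to 29.97 fps
print(round(final_fps, 2))    # 29.97
```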

Using this method to deinterlace and get a 29.97 fps progressive video was the only way to defeat the noisy distortions described earlier. We could have reinterlaced, but because of field phasing and shutter behavior in the camera, not to mention jittery camera motion, the noise returned when the video was reinterlaced. Motion might be a little smoother when played as interlaced, but those bad distortions really spoil interlaced playback. You can always encode it using fake interlace flags in your encoder, if required.

You probably need links to the filters mentioned that are not built-in to Avisynth.

How to download Avisynth and VirtualDub filter packages: these usually come as .zip files and often include extras such as instructions and other documents. To make things very much easier now and in the future, create a folder or two for the sole purpose of holding filter downloads. For each filter make a subfolder named after the filter's name. Download the .zip or the filter into its own subfolder, then load a copy of the filter itself into the appropriate plugin folder. That way, nothing gets lost or confused and you always know where to go for info. For Avisynth, filters are in the form of .dll, .avs, or .avsi files. VirtualDub filters are .vdf files.

A complete set of updated QTGMC and its support files as of November 2017 was previously uploaded to digitalfaq as "QTGMC_New.zip" at this link: http://www.digitalfaq.com/forum/atta...g-qtgmc_newzip. When unzip'd, it creates a folder named "PluginsPackageNov2017". Inside that folder are several subfolders. There are also two text documents: an open-source license agreement (no, you don't have to buy or sign anything, it's just info) and a very important text file named "READ_ME_FIRST.TXT". Do yourself a favor and read it. It tells you what to do with this smorgasbord of subfolders, which is really very easy but of course very difficult if you don't know what to do next. READ_ME_FIRST.TXT handles that.

Once you unpack and load up the main .avsi file and all its support files, you'll have quite a startup collection of plugins. Most are stand-alone filters in their own right and are used by other big filters as well. Dfttest.dll, one of QTGMC's included support files, is a very good denoiser, and nnedi3.dll is used by other plugins.

There is also a subfolder of links to three Microsoft VisualC++ runtime files, required by some of the plugins. You will need the 32-bit ("x86") and 64-bit versions. If you're short of any VC runtime files, you'll get error messages. You should have all you need in this collection, and if you've been doing Win10 updates you likely already have them all. But for runtimes that you don't already have, look at https://support.microsoft.com/en-us/...al-c-downloads

Something else you'll need for Win10 and future filters are the old pre-Win7 32-bit runtimes. A special thread has been created to explain those two files and to download out-of-print copies of them. Have a look at Fix for problems running Avisynth's RemoveDirtMC.

And now, folks, for the STEP 2 script. The input for this script is the YV12 "STEP 1.avi" file saved from the previous STEP 1 script.

Code:
# ########################################
#
#         input= "STEP 1.avi"
#            STEP 2 SCRIPT
#
# ########################################

### --- Adjust the path statement below to match your system.    ------###
### --- In 64-bit W10 32-bit programs are in "Program Files (x86)". ---###
Import("D:\Avisynth 2.5\plugins\FixRipsP2.avs")

### --- Input is deinterlaced YV12 from "STEP 1.avi" file.    ---###
### --- Adjust the path statement below to match your system. ---###
AviSource("E:\forum\faq\JohnGalt\Step 1.avi")
FixRipsP2()
GradFun2DBmod(thr=1.8)
AddGrainC(1.5,1.5)

###--- Crop dirty borders, add new borders, center the image ---###
Crop(4,0,-8,-8).AddBorders(6,4,6,4)

###--- To RGB32 for VirtualDub filters ---###
ConvertToRGB32(interlaced=false,matrix="Rec601")
# ########################################
# On output in VirtualDub, load VirtualDub
# filters by loading a .vcf file. Filters
# required are ccd.vdf, ColorMill.vdf,
# and Curves (gradation.vdf).
# .vcf file is test1a2_Vdub Settings.vcf
# ########################################
Now for the details:

### --- Adjust the path statement below to match your system. ------###
### --- In 64-bit W10 32-bit programs are in "Program Files (x86)". ---###
Import("D:\Avisynth 2.5\plugins\FixRipsP2.avs")

The Import() function is used to load the text of an .avs plugin file into your code at runtime. You won't see the text while the script is running because it will exist only in RAM. FixRipsP2 is furnished as an .avs plain text file because it contains code and subfunctions that are duplicated in other plugins. This avoids confusion by not loading other plugins that contain the same code.

Avisynth plugins that are .dll or .avsi files will load automatically when they are needed by your Avisynth script. But .avs files don't load automatically; you have to load them explicitly with the Import() function. http://avisynth.nl/index.php/Internal_functions#Import is a link that leads to a long web page where you can also scroll around and look at a few hundred other Avisynth internal functions, if you'd like.

AviSource("E:\forum\faq\JohnGalt\Step 1.avi")
As usual, adjust the path for the location of the STEP 1.avi file in your system.

FixRipsP2()
This simple statement loads the FixRipsP2 .avs code and executes the filter. As explained, it's a median filter that "figures out" the median values of frames by comparing 5 other frames. We can't get into the math here (and I wouldn't understand it any more than you would), but this can be a dangerous filter. In making educated guesses about differences and averages between motion and objects across frames, sometimes motion is distorted and objects disappear for a frame or two. For example, if someone moved their hand rapidly the hand might be blurred or distorted. In a basketball game, a thrown ball might disappear for a frame or two. So this is one filter that is used sparingly, if at all. In this case it seems to work well.

Don't be surprised if this filter gives an SSE2 error message from Virtualdub. You can ignore the message. FixRipsP2 simply falls back to non-SSE2 processing on systems where it can't use it, which slows it down. It's a slow filter no matter how your system runs it.

FixRipsP2.avs can be downloaded here: http://www.digitalfaq.com/forum/atta...d-fixripsp2avs. It requires the following support plugins:
# - RemoveGrain or RgTools.dll. If you have QTGMC, you already have this plugin.
# - MvTools. If you have QTGMC, you already have this plugin.
# - MaskTools. If you have QTGMC, you already have this plugin.
# - DePan Tools (with DepanEstimate & DePanInterleave). You will have to download this one. It's attached to this post as DePan_Tools_1_13_1.zip

GradFun2DBmod(thr=1.8)
This filter softens the edges of gradients where smooth color changes are afflicted with hard edges rather than smooth transitions, such as in background wall shadow areas and skin tones from bright highlight to soft shadows. It does this by detecting hard edge blocks and blending pixels at those hard edges. GradFun2DBmod is also used by a couple of other Avisynth filters. The "thr" parameter adjusts the strength of the edge softening. Here, thr=1.8 is a middle value. The plugin is classed as a debanding filter.

Get GradFun2DBmod.zip here: http://www.digitalfaq.com/forum/atta...adfun2dbmodzip. The filter requires some support plugins:
# - GradFun2db. Included with the GradFun2Dbmod.zip download.
# - AddGrainC. If you have QTGMC, you already have this plugin.
# - MaskTools2. If you have QTGMC, you already have this plugin.
# - RgTools package. If you have QTGMC, you already have this plugin.

AddGrainC(1.5,1.5)
This adds a fine film-like grain to the results, keeping them from looking over filtered and preventing the clay-face effect in skin tones. Clay face often results from the combination of a VCR's built-in denoiser and further filtering or degraining in post processing. A small amount of dithered grain acts as a mask that hides artificially hard facial lines. If you have QTGMC, you already have this plugin. Usually it is one of the last filters to be used in processing lineups.

Crop(4,0,-8,-8).AddBorders(6,4,6,4)
This crops unwanted dirty borders and head switching noise by removing pixels in this order: crop off 4 pixels from the left border, ignore border pixels across the top (zero pixels), crop off 8 pixels from the right border, then crop off 8 pixels from the bottom border. Next, AddBorders will create new borders and center the image as well as it can horizontally and vertically. It does so by adding black pixels in this sequence: add 6 black pixels to the left border, add 4 black pixels to the top border, add 6 black pixels to the right border, then add 4 black pixels to the bottom border.
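The border arithmetic above is easy to verify on paper. A quick sketch (pure geometry, no pixels involved):

```python
# Crop(4, 0, -8, -8): remove 4 px left, 0 top, 8 right, 8 bottom.
# AddBorders(6, 4, 6, 4): add 6 px left, 4 top, 6 right, 4 bottom.
w, h = 720, 480
w = w - 4 - 8 + 6 + 6   # 708 after crop, back to 720 after borders
h = h - 0 - 8 + 4 + 4   # 472 after crop, back to 480 after borders
print(w, h)             # 720 480
# Net effect on picture position: content shifts 2 px right (6 - 4)
# and 4 px down (4 - 0), recentering the cropped image in the frame.
```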

Black border pixels blend in perfectly with black pillars added to 4:3 videos when played on 16:9 displays. The Crop() function is described at http://avisynth.nl/index.php/Crop, the AddBorders function is described at http://avisynth.nl/index.php/AddBorders. They're also described in the local Help files installed by Avisynth.

Together, Crop() and AddBorders() preserve the original 720x480 frame size. Because lossless AVI files don't contain data for display aspect ratio, the aspect ratio of this frame when played in media players is 3:2. At the end of post-processing, the 720x480 video will be encoded for a 4:3 display aspect ratio (DAR). If you want to view your files in VirtualDub as 4:3 instead of 3:2, right-click on the VDub panel you're viewing and select a 4:3 frame for display purposes only.

###--- To RGB32 for VirtualDub filters ---###
ConvertToRGB32(interlaced=false,matrix="Rec601")

This closing statement converts YV12 to RGB32 for VirtualDub filters, which work in RGB. At this point the video is progressive and non-interlaced, so "interlaced=false" must be stated in order for Avisynth to make that conversion with the correct precision. The specific color matrix used for conversion is "Rec601", which is the normal color matrix for standard definition video.

I added Virtualdub filters to VDub's filter list while viewing this Avisynth script. VDub will apply the filters to the script's output as the video is scrolled or saved to AVI. To load the filters with the same settings I used, you need a .vcf file. A .vcf is a plain text file that saves VirtualDub process settings. The three VirtualDub filters I used were Camcorder Color Denoise (aka "ccd"), ColorMill, and gradation curves. The last two filters mimic the operation of filters in more expensive "pro" editors. The three filters must be present in your VirtualDub plugins folder. If you don't have these filters, there's another old digitalfaq upload that contains ccd, HueSatInt, ColorMill, Exorcist, and Gradation Curves for VirtualDub: http://www.digitalfaq.com/forum/atta...dub_filterszip.

Download and save the attached "test1a2_Vdub Settings.vcf" file. Don't save it in your Virtualdub plugins folder. Instead, create a subfolder somewhere for it or save it to the same area where your capture is located. To use the .vcf, click "File..." -> "Load processing settings..." and navigate to the .vcf file's location to select it for opening. Any filters you've already loaded in VirtualDub's filter box will be overwritten.

Because you will be applying VDub filters to Avisynth's output, you will have to set up full processing mode, which is VirtualDub's default. To make sure it's set that way, click "Video..." and in the drop-down menu make sure "Full processing mode" is selected. You can still set the output color depth to YV12 and the output compression to Lagarith, and VDub will make the conversion when saving the file.

When you finally join all your processed capture segments together and encode them to your final output choice, everything will be converted to standard YV12 anyway.

This completes the detail of the two scripts used for creating the posted "test sample 1a2.mp4", which was encoded using h.264 and a 4:3 display aspect ratio. The encoder was TMPGenc Video Mastering Works.

Fortunately only one script was required for the second sample,"test sample 2a1.avi". Now that power is restored in my neighborhood, I can post the details later today. I used a single short script but it has differences. You'll need time to absorb this information overload anyway.

Attached Images
 edge noise.jpg (87.4 KB, 572 downloads)
Attached Files
 DePan_Tools_1_13_1.zip (791.5 KB, 56 downloads) test1a2_Vdub Settings.vcf (3.7 KB, 17 downloads)
 The following users thank sanlyn for this useful post: KenInCa (07-12-2018), lordsmurf (07-12-2018)
#7
07-12-2018, 07:28 AM
 lordsmurf Site Staff | Video Join Date: Dec 2002 Posts: 11,704 Thanked 2,139 Times in 1,840 Posts
sanlyn always gives some good in-depth critiques for how to correct color and attack lots of various noise. I've skimmed his posts here, seems like some fine advice.

My only word of warning is this:

1. Realize everything can change from clip to clip, or even scene to scene in the same clip, for something shot on home video. The cameras were terrible, and lighting was whatever the sky/sun or lightbulb did. Corrections can quickly become wrong, so you can't take his script and cram the whole video through it.

2. Don't get carried away. Restoring is about making it better, not making it perfect. And that has tradeoffs, like funds for hardware and CPU cycles for software processing time. Sometimes you must make judgment calls on what you can afford, or how long you're willing to wait for the encode to end. Avisynth can be a beast on CPU, and 30fps (realtime) to 10fps is a quick typical drop (and still acceptable), but you can also quickly go down to 1fps if not careful.

Sometimes I make those mistakes. Same for having to re-capture, usually due to rushing.

A big part of video restoration is learning to identify your mistakes, not simply writing scripts and buying hardware. It's also being able to recognize an issue, and knowing how to fix it.

Carry on.

- Did my advice help you? Then become a Premium Member and support this site.
- For sale in the marketplace: TBCs, workflows, capture cards, VCRs
 The following users thank lordsmurf for this useful post: sanlyn (07-12-2018)
#8
07-12-2018, 07:24 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,264 Times in 971 Posts
"test sample 2a1.avi" didn't have the noisy distortions seen in "test sample 1a2.avi", but it does have aliasing and buzzing edges. A deinterlacing routine similar to the one in the earlier script, plus an anti-alias plugin, solved most of that problem in Sample 2a1, so only one script is needed. The 2a1 sample also illustrates that the same filters and values can't work for every segment of analog source, especially with the sweeping variations in home video scenes. Retail tapes have problems of their own, but they don't vary as much as home shots.

The first step is checking levels and overall image quality. Sharpness is good and not overdone. Like the earlier 1a2 sample, the background noise is low, with a hint of the usual JVC softening of very fine detail, but otherwise there's visible and persistent interlace edge twitter and aliasing from the camera. The scene looks dim and slightly underexposed, as is obvious by looking at it and verified in the histograms. But the histograms change markedly by the end of the short clip.

The image below has borders removed and is overlaid with the ColorYUV Analyze grid. This is the first frame in the clip.

The readout shows no minimum dark detail below y=16, and some specular highlights are peaking at y=255. There is an obvious yellow-red color imbalance -- it shows up in the readout as notably higher average and max values for the V (red) channel than for U (blue). Also minimum values for V (red) are much higher than minimums for U (blue).

Below are YUV (left) and RGB (right) panels for the same frame as above:

The dim nature of the image is shown in both histograms as an early falloff of bright values just above the upper midtones. There is not as much black clipping as in the earlier sample, but shadow detail still looks a bit "thick", with dark objects lacking as much detail as in other areas. Still, it's tolerable and won't be noticed by most viewers.

In the RGB histogram note that max brightness (the white band) is at a relatively low RGB 214 rather than RGB 255, whereas the Analyze panel picked up specular highlights that hit y=255. The difference is that the YUV histogram measures luma (Y) directly, while RGB stores brightness in the three color channels themselves, so its histogram reflects the combined brightness of all the colors together. The RGB histogram shows that green is dimmer than red and that there is a deficit of blue (these are also evident in the YUV histogram).
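The reason the two readouts can disagree, sketched in Python: in YUV, Y is a single stored value, while the RGB notion of brightness is a weighted mix of the three channels. The Rec.601 weights below are the SD-video standard; the helper name is just for illustration.

```python
# Rec.601 luma weighting: how much each RGB channel contributes to
# perceived brightness in standard-definition video.
def luma601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

# Neutral gray: luma equals the channel value.
print(round(luma601(214, 214, 214)))   # 214

# Pure blue at the same channel value is far "dimmer" in luma terms,
# which is why a blue-deficient image can still show bright Y peaks.
print(round(luma601(0, 0, 214)))       # 24
```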

By the end of the clip, however, lighting and camera angle change and give different histograms. The Analyzer grid below is from the last frame in the clip:

There is still a max luma at y=255, and some objects have brightened. U and V values are still widely separated, so the red-yellow imbalance hasn't changed. Again, in the luma column there is no value darker than y=16. The darkest shadow details still look the same as earlier, despite a brightening light in the background wall.

Below, YUV (left) and RGB (right) panels for the same frame:

Luminance (white bands) in both histograms has changed markedly. Earlier high peaks in the midrange of the first frame are now almost levelled off, indicating that the camera's exposure feature (the work of the devil IMO) has made a gamma change for the change in lighting. After a scene is recorded, there is no way to compensate for what camera "features" are doing to an image. There are filters that make the attempt, and sometimes they actually work to a certain extent. But more often you just have to adjust as best as you can for the duration of a scene. The central figure doesn't seem to have changed much in exposure, but the background wall is brighter in this frame and has a little more visible tape noise.

We now know that bright-end contrast increases by the end of the clip while the central figure remains pretty much the same throughout the shot. In any case perfection isn't possible, so we just do the best we can without going crazy.

In the next post I'll go through the single script I used for "test sample 2a1.mp4" and post some more plugins.

Attached Images
 2a1 Analyze 1.jpg (96.4 KB, 380 downloads) 2a1 YUV and RGB Panel 1.png (27.8 KB, 379 downloads) 2a1 Analyze 2.jpg (98.0 KB, 560 downloads) 2a1 YUV and RGB Panel 2.png (27.5 KB, 556 downloads)
#9
07-12-2018, 07:50 PM
 sanlyn Premium Member Join Date: Aug 2009 Location: N. Carolina and NY, USA Posts: 3,648 Thanked 1,264 Times in 971 Posts
The script for the Sample 2a1 mp4 posted earlier uses a stabilizing filter called Stab (how clever!) which calms the jittery camera motion just a little by about 4 to 6 pixels, mostly horizontally. It's an optional filter and you can skip it if you wish. But getting a slightly more stable image makes filtering and encoding more efficient.

Code:
# ########################################
#
#      input = "test sample 2a1.avi"
#              AVS SCRIPT
#
# ########################################

### --- Adjust the path statements below to match your system for Avisynth's ---###
### --- plugins folder. In W10 Avisynth is usually in C:\Program Files (x86) ---###
Import("path\to\Avisynth\plugins\Santiag_v16.avs")
Import("path\to\Avisynth\plugins\RemoveDirtMC.avs")

### --- Adjust the path statement below to match your system. ---###
AviSource("E:\forum\faq\JohnGalt\test sample 2a1.avi")
AssumeTFF()
ColorYUV(cont_v=-20,off_v=-2,off_u=10)
Tweak(cont=1.2,sat=1.2,dither=true,coring=false)
Levels(16,1.1,255,16,235,dither=true,coring=false)

ConvertToYV12(interlaced=true)
QTGMC(preset="very fast",EZDenoise=4,denoiser="dfttest",ChromaMotion=true,\
ChromaNoise=true,DenoiseMC=true,border=true,FPSDivisor=2,GrainRestore=0.3)
Stab()
Stab()
RemoveDirtMC(20,false)
Santiag(4,4)
Crop(8,4,-12,-8).AddBorders(10,6,10,6)
ConvertToRGB32(interlaced=false,matrix="Rec601")
return last

# ########################################
# On output in VirtualDub, load VirtualDub
# filters by loading a .vcf file. Filters
# required are ccd.vdf, ColorMill.vdf,
# and Curves (gradation.vdf). The
# .vcf file is test2a1_Vdub Settings.vcf
# ########################################
As with the earlier sample, I saved the VirtualDub output to lossless Lagarith YV12. Your encoder is going to make it YV12 anyway so you may as well let it be done by an app that does a decent job of it. Meanwhile I used "full processing mode" for running the VDub RGB filters on the script's output.

Now for the details:

### --- Adjust the path statements below to match your system for Avisynth's ---###
### --- plugins folder. In W10 Avisynth is usually in C:\Program Files (x86) ---###
Import("path\to\Avisynth\plugins\Santiag_v16.avs")
Import("path\to\Avisynth\plugins\RemoveDirtMC.avs")

These statements import the entire text of two Avisynth plugins into your script after the script is started in VirtualDub (you won't see the script texts, but they will exist in memory). Why are they published as .avs files in the first place? I'll explain again: Many Avisynth plugins are published as .avs text files rather than compiled .dll's. The usual reason is that the plugins come in several versions with similar coding that can be confusing when Avisynth starts loading plugins at run time. Compiled .dll's and .avsi files are loaded automatically at run time. But .avs files have to be explicitly loaded with the Import() function.

### --- Adjust the path statement below to match your system. ---###
AviSource("E:\forum\faq\JohnGalt\test sample 2a1.avi")

Again, good ol' AviSource() is used to open and decode a standard AVI file.

AssumeTFF()
We will be deinterlacing and rearranging fields, so specifying the field order is important. TFF and BFF are field parity functions. If you want to see some tricks you can play with Avisynth parity functions, use your installed Helpfiles or try http://avisynth.nl/index.php/Parity.

ColorYUV(cont_v=-20,off_v=-2,off_u=10)
"cont_v=-20" lowers V-channel red-green contrast with a negative value, reducing brights a bit but also lightening the darks a little. This form of contrast tends to "shrink" the channel's range from both ends toward the middle. It differs from other "contrast" commands which do more for the brights than any other part of the spectrum. Using contrast in this manner is really a way to reduce red-green saturation somewhat. Reducing it more would have turned the image yellow.

"off_v=-2" applies a negative offset that reduces the value of all red pixels by 2. If you viewed this in a histogram it would literally look like "shoving" the red-green band toward the left (darker). "off_u=10" does the opposite, applying a positive offset that shoves the blue channel toward the right side (brighter). It's another correction for the red-yellow imbalance. It does shift the color of "real" blacks and black borders in an image a bit, but that will be corrected later in RGB.
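To visualize what those two adjustments do to a single chroma value, here's a sketch. Note the exact scaling formula is my reading of ColorYUV's behavior (deviation from neutral 128 scaled by (cont+256)/256) and should be treated as an assumption, not gospel:

```python
# Assumed ColorYUV chroma math: cont_* scales the deviation from the
# neutral chroma value 128; off_* adds a flat offset afterward.
def coloryuv_chroma(v, cont=0, off=0):
    v = (v - 128) * (cont + 256) / 256 + 128   # shrink/stretch around 128
    return v + off                              # flat shift

# cont_v=-20, off_v=-2: a strongly red pixel is pulled toward neutral.
hot_red = coloryuv_chroma(200, cont=-20, off=-2)
print(round(hot_red))                 # lower than the original 200

# off_u=10: the whole U (blue) channel is shoved brighter by 10.
print(coloryuv_chroma(120, off=10))   # 130.0
```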

Tweak(cont=1.2,sat=1.2,dither=true,coring=false)
The kind of "contrast" in Tweak's "cont=1.2" differs from the "contrast" function in ColorYUV. Rather than shrink values from both ends toward the middle as ColorYUV does, Tweak's contrast pushes brights ahead or pulls brights back, but with much less effect on the darks. This form of contrast behavior is more like the way a contrast control works in a proc amp, VDub capture controls, or TV.

"sat=1.2" is a mild increase in overall chroma saturation. Brightening luminance all by itself doesn't change the colors at all in YUV, yet the impression you get when YUV luma is brightened is that the colors have somehow changed. They haven't, but your brain insists that the colors look too tame. Increasing saturation cures that impression (but also causes some mild chroma overshoot past 235, which will be fixed with the Levels function and in RGB). As usual dither is turned on to avoid response gaps, and coring is turned off to avoid sharp dark or bright cutoffs.

Levels(16,1.1,255,16,235,dither=true,coring=false)
Here is where we correct some of our corrections, so to speak. Some original brights and some corrected brights extend past y=235, so this statement limits bright output to 235 by gradually lowering the values of over-bright pixels so that they "fit" inside the 16-235 corridor without being sharply clipped at the high end. Blacks remain the same at 16 input and 16 output. The midrange still looks a little dim, so gamma gets a mild increase of 1.1. Dither is turned on to avoid response gaps, and coring is turned off to avoid sharp dark or bright cutoffs.

ConvertToYV12(interlaced=true)
The filters that follow will work in YV12 rather than YUY2, and Avisynth knows how to make that conversion correctly when interlaced is specified. At this point, interlaced=true. Is the conversion different for interlaced and non-interlaced? Yes. Do most NLE editors take note of the differences? Nope. Do they degrade the image when they don't make the conversion properly? Yes.

QTGMC(preset="very fast",EZDenoise=4,denoiser="dfttest",ChromaMotion=true,\
ChromaNoise=true,DenoiseMC=true,border=true,FPSDivisor=2,GrainRestore=0.3)

Here, QTGMC is used to deinterlace as well as to denoise. In fact many people use QTGMC as a denoiser. The EZDenoise function is set to nothing by default at "very fast" presets, but we override that by setting EZDenoise to 4 and specifying the denoising filter as dfttest. The values also tell QTGMC that when denoising, take account of ChromaMotion and ChromaNoise (otherwise only Y luma is filtered). Set DenoiseMC to true to turn on motion compensation analysis when looking at dirt and bad edges. "border=true" resizes borders properly when QTGMC creates full-sized frames from half-height fields. "GrainRestore=0.3" restores a small amount of the original grain to avoid a plastic, over-filtered look.

As in the previous script, FPSDivisor=2 drops alternate frames to maintain a 29.97 fps frame rate. This does cut down on temporal resolution during motion, but it's better than the nasty split lines and buzzing edges that came from the camera. Meanwhile QTGMC has applied some interpolation between the two fields to compensate somewhat for the temporal effects.

From this point the video is progressive, not interlaced. You can encode it with interlace flags at final output if you want (standard definition BluRay will demand it) without visible damage, although some edges will still twitter a bit because of the camera's crazy field phasing.

Getting QTGMC and all its support files was described in the previous post. The link for the QTGMC package is http://www.digitalfaq.com/forum/atta...g-qtgmc_newzip.

Stab()
Stab()

Okay, we are "stab"-ing this clip twice. Camera jitter is not just distracting. It makes denoising more difficult and will devour encoding bitrate, which will have to be a pretty high bitrate to render so much camera motion. Stab() is a mild stabilizer that will at least calm things a little. You do lose a small bit of image real estate, because when Stab() moves the image 2 or 4 pixels this way or that, the borders also move and will "twitch" visibly when changing position in the final output. The way to fix this is to let the script process until finished, then review the results in VirtualDub and take note of how much twitching you see in the result.

It helps to have a VDub filter that lets you overlay a temporary black border to see how much new border pixels you need to keep all 4 sides quiet. The VDub filter for that is the old BorderControl v2.35 .vdf. The only problem with it is that it's kinda old and you can't save its settings in a .vcf file (the .vcf will crash). The original download site remains well hidden behind a slew of questionable archive sites, so I have attached it here as BorderControl235.zip.

Then go back to the script, adjust the Crop and AddBorders values, run the script again, and overwrite the AVI with a new one. The other way is to save the AVI without changing the script, open the finished AVI in VirtualDub, apply BorderControl, and save another new AVI. Just remember that you can't save BorderControl's settings.

Stab() ships as an auto-loading .avsi file and is attached as stab.zip.
It requires RgTools.dll. If you have QTGMC, you already have RgTools.
It requires DePan tools. These were previously posted and attached as DePan_Tools_1_13_1.zip

RemoveDirtMC(20,false)
This is another old standby that does a decent job of removing noisy tape grunge and excessive grain. It also does a little edge smoothing, and at values stronger than 20 it removes spots and some dropouts. Some people use a value of 40 or 50, or up to 100; the stronger the setting, the more it removes, including stuff you'd rather keep. The "false" in the parameters tells the filter that your video is not pure grayscale. This is an .avs script because there are dozens of versions of the main internal function (RemoveDirt), and the version you want is the one in this .avs file.

RemoveDirtMC.avs can be downloaded here: http://www.digitalfaq.com/forum/atta...emovedirtmcavs.
It requires RemoveDirt v0.9 support files: http://www.digitalfaq.com/forum/atta...ovedirt_v09zip. The removedirt_v09.zip file contains an instructional text file, "How to install these files". Please don't ignore it.
The plugin also requires mvtools and masktools dll's. If you installed QTGMC and its support files, you already have both plugins.
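Because RemoveDirtMC ships as an .avs rather than an auto-loading .avsi, you load it explicitly near the top of your script with Import(). The path below is just an example; point it at wherever you saved the file:

```avisynth
Import("C:\Avisynth\RemoveDirtMC.avs")   # hypothetical path to the downloaded .avs
RemoveDirtMC(20, false)                  # raise 20 toward 40-50 for heavier spot/dropout removal
```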

If you haven't already done so, check this digitalFAQ thread about older 32-bit Visual C++ runtimes that Microsoft doesn't install on Win7 through Win10 but that plugins like RemoveDirtMC need: Fix for problems running Avisynth's RemoveDirtMC

Santiag(4,4)
Santiag is a line smoother and anti-alias filter. It has a bunch of parameters, but the only ones you need now are the two shown, which set horizontal and vertical strength. 4 for each is a decent starting value that doesn't inflict much damage (i.e., detail softening). Values up to 8 for each will work, but watch for excessive softening. Because there are so many versions of Santiag's internal code, Santiag_v16 comes to us as an .avs file.

Santiag_v16.zip is attached. There are no instructions, but there are links in the text of the .avs to a totally geeky discussion thread about it at doom9.org.
It requires nnedi3.dll. If you have QTGMC installed, you already have that .dll.
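Like RemoveDirtMC, Santiag_v16 is an .avs, so it gets the same Import() treatment (path again hypothetical), and you can bracket the strengths while testing:

```avisynth
Import("C:\Avisynth\santiag_v16.avs")   # hypothetical path to the downloaded .avs
Santiag(4, 4)    # horizontal, vertical strength; try up to (8,8) but watch for softening
```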

Crop(8,4,-12,-8).AddBorders(10,6,10,6)
The Crop() values were determined using the BorderControl method explained above for Stab(). This removes the dirty borders and replaces them with clean, more evenly centered ones for a frame size of 720x480.
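As a quick sanity check, the crop and border numbers should add back up to the original 720x480 frame:

```avisynth
Crop(8,4,-12,-8)        # 720 - 8 - 12 = 700 wide, 480 - 4 - 8 = 468 high
AddBorders(10,6,10,6)   # 700 + 10 + 10 = 720, 468 + 6 + 6 = 480: back to 720x480
```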

ConvertToRGB32(interlaced=false,matrix="Rec601")
This is a standard conversion for applying VirtualDub or other RGB filters to the script's output.
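One caution worth noting: if you ever do RGB filtering inside Avisynth itself instead of handing off to VirtualDub, convert back to YV12 with the same matrix before encoding. A minimal sketch:

```avisynth
ConvertToRGB32(interlaced=false, matrix="Rec601")
# ... RGB-only filters would go here ...
ConvertToYV12(interlaced=false, matrix="Rec601")   # only needed if encoding straight from Avisynth
```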

The three VDub filters I used were Color Camcorder Denoise (aka "ccd"), ColorMill, and gradation curves. If you don't have them yet you can use the same link from the earlier post to get VirtualDub Filters.zip (http://www.digitalfaq.com/forum/atta...dub_filterszip).

The previous post also described how to use a .vcf settings file to load the filters and settings. The .vcf file for the video discussed here is attached as "test2a1_Vdub Settings.vcf"

Here is a link to an older thread with discussion and pics on how to use all three of these VDub filters: Encoding from Huffyuv?. There are several other posts in that thread showing samples of various problems being solved, along with a comparison video and pics on how to improve a nightmare home video worse than yours (Encoding from Huffyuv?). You can learn much by browsing other restoration threads.

return last
I often use this statement out of habit. All it does is return (or transmit) the very last thing that happened preceding it. It can be used as a tester or debugger, because you can place it anywhere in the script to stop processing wherever you want and check the results.
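For example, dropping "return last" partway down the script lets you preview one stage at a time, since nothing after it runs (filter values here are just the ones from the script discussed above, with a placeholder filename):

```avisynth
AviSource("test sample 2a1.avi")
ConvertToYV12(interlaced=true)
QTGMC(Preset="Very Fast", FPSDivisor=2)
return last     # preview stops here; the lines below are never executed
Stab()
Stab()
RemoveDirtMC(20, false)
```

Move the "return last" line down one filter at a time to see what each step contributes.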

If you can get this far, you'll have a pretty firm grounding in Avisynth -- especially for someone who has never used it -- and a nice collection of plugins for Avisynth and VirtualDub. I sort of felt the same intimidation when I started, but far worse. Lordsmurf and a few other forum members brought me to Avisynth kicking and screaming all the way. I don't know how I worked for so long without it.

Attached Files
 BorderControl235.zip (26.8 KB, 19 downloads) stab.zip (474 Bytes, 25 downloads) santiag_v16.zip (1.9 KB, 28 downloads) test2a1_VDub Settings.vcf (3.8 KB, 14 downloads)
 The following users thank sanlyn for this useful post: homefire (11-27-2019)
#10
07-13-2018, 03:28 PM
 JohnGalt Free Member Join Date: May 2018 Posts: 7 Thanked 0 Times in 0 Posts
Wow again! And Thanks again!

I'm heading out of town for a few days and will tackle this when I get back.
