Considering the horrible playback problems, you should have very limited expectations for the final output. In most circumstances these captures would be considered unworkable. Most people would simply say that you should get better captures and save yourself some grief. But you seem determined to use them anyway, so I'm suggesting a few ideas that will make modest improvements. The results will be far from ideal.
The images below are before-and-after comparisons of the input image and the filtered image.
In the picture above, the left-hand image is from the original unfiltered frame. Note the blue chroma bleed and overrun on the blue shirt sleeve. There is also color overrun on the boy's shoulder and arm. In the skin tones, note the mottling, grain, and clumpy mosquito noise on the arms and on the boy's shirt. The right-hand image shows the results of filtering with Avisynth and VirtualDub. The Avisynth plugins used were QTGMC with dfttest denoising, RemoveDirtMC, ChromaShift, and aWarpSharp2. The VirtualDub plugins used were Color Camcorder Denoise ("CCD"), Smart Smoother HiQuality by Klaus Post, and MSharpen by Donald Graft. Later, in the discussion below, I'll go into detail about these filters and the workflow.
In the picture above, the left-hand image is from the original unfiltered frame. The cross-hatching and chroma dropouts are typical for these captures. These defects are not normal; they shouldn't exist. The cross-hatching looks worse due to over-sharpening in the player. The right-hand image shows the results of filtering with the same filters described above. The green chroma pumping couldn't be fixed entirely, but most viewers probably won't notice.
Scripts and filters applied to mislav1.avi
Trim(last,0,300) + Trim(last,506,663) + Trim(last,946,0)
The Avisynth Import() function is used to load the code for two plugins, one of which is RemoveDirtMC.avs. These plugins are in script form rather than compiled as .dll's. Many popular Avisynth filters are published as .avs scripts. Often this is done because there are multiple versions of some plugins, and parts of the internal code are the same as code used by other filters. Loading multiple copies of the same code causes errors, but importing only the desired code from an .avs script avoids that problem.
Avisynth plugins appear in three forms: .dll, .avsi, and .avs. Compiled .dll's and .avsi files are automatically loaded when a script calls for them. But .avs filters must be explicitly imported with the Import() function, unless you're willing to copy the entire text of the .avs filter into your own script. It wouldn't be a good idea: some .avs filters are hundreds of lines of code.
Note that the path statement for the location of the .avs plugins is "D:\Avisynth 2.5\plugins\". You must modify that path to match the location of the Avisynth plugins on your system.
Import() function: http://avisynth.nl/index.php/Internal_functions#Import
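Assuming the plugin path quoted above, the Import statement would look something like this (a sketch; adjust the path and filename to your own setup):

```avisynth
# Adjust the path to your own Avisynth plugins folder
Import("D:\Avisynth 2.5\plugins\RemoveDirtMC.avs")
```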
The AviSource() function is used to open and decode the video. Again, you should modify the path to match locations on your system. The Trim() functions are chained together with "+" symbols and joined into a single stream of video segments. These Trim statements select only the smooth-flowing frames of the sample, discarding the "stopped" segments of frozen frames. The term "last" refers to the most recent video stream produced by a previous statement; here it refers to the video that was opened and decoded in the AviSource statement.
You can also do this editing in VirtualDub or another editor that can handle lossless files.
The Avisynth default field order priority is Bottom Field First (BFF). Unfortunately that will yield incorrect results if you deinterlace a video whose field order is Top Field First (TFF), and the output will have back-and-forth motion stutter. AssumeTFF()
overrides the default and assumes that everything that follows will be TFF.
ConvertToYV12() converts the YUY2 video to the YV12 colorspace. It's a good idea to let Avisynth do this because Avisynth does it correctly (many NLE editors aren't so careful, including Adobe's). Note that with these conversions you must indicate whether or not the video stream is interlaced. Yes, it matters: otherwise just about anything will be assumed and your chroma channels will be a mess. YV12 will be needed for the filters that follow. Some of the filters will also work in YUY2, but it's a poor idea to jockey back and forth between different colorspaces; better to convert just once.
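Since the capture is still interlaced at this point (deinterlacing happens later, in QTGMC), the conversion would be:

```avisynth
# interlaced=true: the stream has not been deinterlaced yet
ConvertToYV12(interlaced=true)
```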
In the QTGMC statement, the "\" symbol is used to continue a statement across multiple lines; it's known as the line-continuation symbol. QTGMC, the premier deinterlacer, is used here to deinterlace and output 50fps progressive video. The "very fast" preset is specified to set up many of QTGMC's default operations. QTGMC is also used for some denoising and for shimmer and motion correction. The dfttest plugin, which is supplied with the QTGMC package and is also a stand-alone filter in its own right, is specified for basic denoising with an EZDenoise strength of 6.
The other parameters in the QTGMC statement override some QTGMC defaults and activate others: chroma is included in the motion-compensated cleaning (ChromaMotion), motion-compensated denoising is enabled (DenoiseMC), a small amount of GrainRestore prevents over-filtering effects, Sharpness is held to 75% to prevent over-sharpening effects, and border=true resizes borders in such a way as to prevent split or fluttery borders.
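A QTGMC call of the kind described might look like this. The EZDenoise value and preset come from the text; the GrainRestore and Sharpness numbers are illustrative guesses, not the author's exact values:

```avisynth
# Sketch of the QTGMC call described above; GrainRestore=0.3 and
# Sharpness=0.75 are illustrative assumptions
QTGMC(preset="very fast", EZDenoise=6.0, denoiser="dfttest", \
      ChromaMotion=true, DenoiseMC=true, GrainRestore=0.3, \
      Sharpness=0.75, border=true)
```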
vInverse2 has a special algorithm and tweaker to help calm excessive interlace combing effects. In this case it's optional, but it does make an interlaced final version look cleaner.
vInverse2 is available here: http://avisynth.nl/index.php/Vinverse. The version you want is the "x86" (32-bit) version. This plugin and many others also require the 2012 VisualC++ runtime; a link for the x86 version of the runtime is on the download page. The same runtime link is also available in the QTGMC package, which is at http://www.digitalfaq.com/forum/atta...g-qtgmc_newzip. Not only does the .zip contain all the support files you'll need, it also has stand-alone filters that you will use elsewhere, especially in this project.
RemoveDirtMC is an all-around favorite cleaner that works on floating tape noise and mild mosquito noise and grain, and has even been used to clean spots and dropouts. Here its strength is set at a midstream 30; when used over 40 it can make some moving objects disappear. The "false" parameter tells RemoveDirtMC that this video isn't grayscale. ChromaShift is used to lift displaced or bleeding chroma upward by 2 pixels. aWarpSharp2 is used to constrict noise around edges, especially color stains that tend to bleed into other areas. MergeChroma is used to limit aWarpSharp2 to working on chroma only.
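That stage of the chain might be sketched as follows. The RemoveDirtMC arguments come from the text; the ChromaShift argument letter and sign, and the aWarpSharp2 depth value, are assumptions to be checked against each plugin's documentation:

```avisynth
RemoveDirtMC(30,false)              # strength 30; false = video is not grayscale
ChromaShift(L=-2)                   # shift chroma up 2 lines; letter/sign per plugin docs
MergeChroma(aWarpSharp2(depth=8))   # warp-sharpen applied to chroma only; depth illustrative
```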
Get RemoveDirtMC.avs at this link: http://www.digitalfaq.com/forum/atta...emovedirtmcavs
. You'll also need mvTools2, RemoveDirt.dll, and Removegrain (or RgTools), which are all included with QTGMC. You also want to look at this post: Fix for problems running Avisynth's RemoveDirtMC
The ChromaShift plugin is at http://www.digitalfaq.com/forum/atta...romashift27zip
The aWarpSharp2 download page is at http://avisynth.nl/index.php/AWarpSharp2
. The filter requires the Microsoft VisualC++ 2015 x86 runtime, which has a link on the aWarpSharp2 page -- but the same link is already included in the QTGMC package.
MergeChroma and other Merge() functions
LimitedSharpenFaster has several versions, but the long-time favorite is at http://www.digitalfaq.com/forum/atta...arpenfasterzip. It also requires MaskTools2 and RemoveGrain, which are included with QTGMC (see earlier links for the QTGMC package). The "edgemode=2" parameter tells the filter not to sharpen edges, in order to avoid edge halos.
The grain filter adds a mild amount of very fine film-like grain to mask hard edges in smooth areas such as skin tones and to avoid an over-filtered "plastic" look. This plugin is part of the QTGMC package.
The Crop() statement removes border pixels in this order: 6 pixels from the left border, zero pixels from the top, 14 pixels from the right border, and 10 pixels from the bottom. The side borders bound the usual SMPTE image in the central 704 pixels of the frame's width. Although the original right border is really 9 pixels wide at the bottom of the frame, it warps to 14 pixels wide at the top. This is the only crop that actually cuts into the image, by 4 pixels, but the warp at the top of each frame is really ugly and should be removed. AddBorders(10,4,10,6) restores the original frame size of 720x576 and centers the core image in the frame by adding black borders as follows: 10 pixels at the left border, 4 pixels along the top border, 10 pixels at the right border, and 6 pixels across the bottom. On any TV, black borders blend in with the display's black background, and since most modern TVs still use overscan by default, the borders won't be seen anyway.
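Based on the pixel counts described above, the crop and border restoration would be:

```avisynth
Crop(6,0,-14,-10)       # remove 6 left, 0 top, 14 right, 10 bottom
AddBorders(10,4,10,6)   # pad back to 720x576 with centered black borders
```

The arithmetic checks out: 720 - 6 - 14 + 10 + 10 = 720 wide, and 576 - 0 - 10 + 4 + 6 = 576 high.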
Rules for the Crop() function (please read this!)
ConvertToRGB32() converts the colorspace to RGB for the VirtualDub filters that will be used. Yes, you must specify whether the video is interlaced or not; at this point, after QTGMC, it is not interlaced. The color-system matrix for standard-definition video is stated as "Rec601". The last statement in the script is "return last", which simply means "output the last thing you just produced", which here is the conversion to RGB32.
Now, what about those VirtualDub filters I mentioned?
When you are running an Avisynth script in VirtualDub, you are allowed to load VDub filters into its filter chain. You must use "full processing mode" or the filters won't take effect. Running VDub filters involves an automatic RGB conversion if you haven't already done one, but you can still specify a different color depth and a compressor for the output if you don't want RGB after the filtering is done.
In this case I loaded three VirtualDub filters and specified an output video color depth of YV12 and an output compressor of lossless Lagarith. Why? If the next step is encoding for your final output, the encode is going to be YV12 anyway. You might as well let VirtualDub make that output conversion after its filters are applied.
By the way, you will need the Lagarith lossless compressor. It's free, and everyone's media player can decode it. The reasons for its use are that it makes slightly smaller files than huffyuv (though huffyuv is still more efficient for capture), and that huffyuv can compress only YUY2 and RGB. Huffyuv can't compress YV12, but Lagarith can handle RGB, YUY2, and YV12. Lagarith's installer is simple: just double-click it and it's there in a few seconds. Get Lagarith here: https://lags.leetcode.net/codec.html
The three VirtualDub filters I used were Color Camcorder Denoise (ccd.vdf), Smart Smoother HiQuality (SmoothHiQ.vdf), and MSharpen (MSharpen.vdf).
Color Camcorder Denoise ("CCD") is attached as ccd_v1.7.zip.
SmartSmoother HiQ and MSharpen are each attached to this post.
The attached SmartSmoothHiQ.zip contains the .vdf filter plus a "SmartSmootherHQ.htm" file and a subfolder named "SmartSmootherHQ_files". You should copy everything
-- the .vdf, the .htm, and the subfolder -- into your VirtualDub plugins folder. What are the htm and the subfolder for? They are the help files that will display when you click the "help" button in the filter's setup dialog window.
The attached MSharpen.zip contains the MSharpen.vdf filter and MSharpen.html. The html is the online Help file. Copy the .vdf file and the .html into your VirtualDub plugins folder. When you load the filter in VirtualDub and click "Help" on the filter's setup dialog, the html will display instructions.
In order for you to use exactly the same VDub filter chain and settings I used to create the output files, I have attached "VDub_settings.vcf
". A .vcf is a file of saved VirtualDub process settings. Download the .vcf file and save it somewhere (do not save it in your plugins folder). When you open VirtualDub, click "File.." -> "Load processing settings", and locate the saved .vcf file. When the .vcf is selected and opened, VirtualDub will load the three filters exactly as I used them.
All three .vdf plugins described above must be in your VDub plugins folder or the .vcf will simply display errors.
The output of this processing was saved as Lagarith YV12 with the file name "mislav1_filtered.avi".
Scripts and filters applied to mislav2.avi
Filters for processing mislav2.avi ran very slowly when chained in a single script, so I elected to filter the video using two scripts. Script #1 was named "mislav2_Step1". It produced an avi named "mislav2_Step1.avi", saved as YV12 using Lagarith compression.
When running this script in VirtualDub, there are no VDub filters loaded. Follow these steps to set up Virtualdub output for this script:
* Set "Video..." -> "color depth" to "YV12".
* Set "Video..." -> "compression..." to Lagarith lossless compression.
* Configure Lagarith for YV12 output.
* Set "Video..." -> processing mode to "fast recompress". Save the file as "mislav2_Step1.avi".
You've seen the same or similar code in the previous script, but there are two new statements that might seem unfamiliar:
The Trim() function is used to discard the starting and ending frames, which are the bad frames and "stopping" points that can't be used. Tweak() is used to help calm some of the red and red-yellow bleed and oversaturation. The startHue and endHue values designate the red and red-yellow segment of the color spectrum in YUV. You can see the numeric values of color ranges in the tables and graphs of the Wiki page that documents the Tweak command at http://avisynth.nl/index.php/Tweak. The "sat" parameter in Tweak() manages saturation levels, which here are lowered to 85% of normal, just enough to stop a little of the red chroma bleeding.
The YV12 Lagarith output from Script #1 is then used as input for Script #2, which I named "mislav2_Step2.avs".
Again, you've seen most of this same code or similar statements earlier. I'll cover details on some differences:
fft3dfilter(sigma=2, sigma2=4, sigma3=5, sigma4=45, plane=3)
These two filters, TTempSmooth (described below) and fft3dfilter, are an attempt to calm some of the red chroma "flashing" and blinking and the color dropouts that occur in mislav2.avi. They don't solve the problem completely. Consider them optional, but they do help calm things slightly.
TTempSmooth is a temporal smoother often used to help smooth the periodic luma or chroma "pumping" produced by misbehaving AGC camera exposure circuits. It isn't a cure-all, but it is a good temporal smoothing filter. Here it's used at its maximum "maxr" (radius) value, meaning that it averages differences in chroma pumping over a range of 15 frames. TTempSmooth is applied to chroma only by using MergeChroma(); if this high a value were applied to luma, the video would be badly blurred. You can get the TTempSmooth plugin at its home page, http://avisynth.nl/index.php/TTempSmooth
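The chroma-only smoothing described above can be sketched as follows (maxr=7 is TTempSmooth's maximum radius, which matches the 15-frame range mentioned, since 2*7+1 = 15):

```avisynth
# MergeChroma limits the heavy temporal smoothing to the chroma planes;
# luma passes through untouched
MergeChroma(TTempSmooth(maxr=7))
```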
fft3dfilter is a temporal denoiser, used here at very high sigma values to help curb some of the hysterical red blinking. Of course it's not extremely effective, because the red isn't really blinking: the red in the toy tractor is actually trying to turn green. However, fft3dfilter also helps clean up a lot of chroma noise and blotching. The "plane=3" value tells the filter to work on chroma only, not on luma. Fft3dfilter is supplied with the QTGMC filter package as "FFT3D".
Santiag is an anti-aliasing plugin designed to smooth sawtooth edges. It also tends to slightly soften cross-hatching but doesn't eliminate it. Cross-hatching is so severe in this video that it often gives sawtooth edges to curved lines. Santiag doesn't solve the problem completely, but it helps with the interlaced version of the output. The strength value of (2,2) could be stronger, but that would greatly soften other elements of the image. Santiag_v16.zip can be downloaded at http://www.digitalfaq.com/forum/atta...1&d=1531442483 and its home page and documentation are at http://avisynth.nl/index.php/Santiag. The plugin also requires MaskTools2, NNEDI3, EEDI2, and EEDI3, and as you might guess, all of these support plugins are available with the QTGMC package. An optional requirement is SangNom2, but you can ignore that; it's highly unlikely that you would ever use it.
This additional Tweak() statement near the end of the script is used to repair some of the lowered saturation caused by TTempSmooth and fft3dfilter. The value of "sat=1.1" is a very mild saturation increase that affects all colors.
Three VirtualDub filters were loaded and applied to the output of Script #2. They are the same VirtualDub filters discussed earlier and can be loaded with the same attached "VDub_settings.vcf" file. I saved the output of this script as Lagarith YV12 because it went straight to my mp4 encoder after filtering.
The results of these scripts and filters are attached as "mislav_1and2_50p.mp4". It has mislav1 and mislav2 joined together in one mp4 and is 50fps progressive for PC, external-drive, and TV playback. Also attached is an .mpg DVD-encoded version at 25fps interlaced, "mislav_1and2_25i_DVD.mpg".
You might ask: how do you create an interlaced version of the 50fps progressive output? I did that by creating a lossless interlaced version of the 50fps .avi files. I used a script to join "mislav1_filtered.avi" and "mislav2_Step2.avi", re-interlace them, and save the output as Lagarith YV12 for the encoder. Remember, the two files involved were earlier saved as Lagarith YV12 output.
The script uses AviSource to open each of the two input files. Each file is given a variable name that I invented to identify it. You can invent your own names for things, as long as the names don't conflict with real function or filter names. I named the two files "vid1" and "vid2". I then invented a new name, "vid12", and used it to hold the contents of vid1 and vid2 joined together with the "+" symbol (technically known as an "unaligned splice"!). Then, on a separate line, I typed the name of the new video, vid12; this line shifts the focus of operations onto the new vid12 video, and the statements that follow it are applied to vid12.
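The joining and re-interlacing script described above might look like this (file paths are abbreviated to bare filenames; adjust them to your own locations):

```avisynth
vid1 = AviSource("mislav1_filtered.avi")
vid2 = AviSource("mislav2_Step2.avi")
vid12 = vid1 + vid2   # unaligned splice of the two progressive files
vid12                 # make vid12 the focus of the statements below
AssumeTFF()
SeparateFields()
SelectEvery(4,0,3)    # keep fields 0 and 3 of every group of 4
Weave()
```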
AssumeTFF() overrides the BFF default and defines the field-order priority for the operations that follow. SeparateFields() separates each progressive frame into two half-height fields; because the original frames are progressive to begin with, the two half-height fields of each frame are duplicates of each other. Then SelectEvery(4,0,3) specifies that from each group of 4 half-height fields, the first and fourth are selected (field numbering starts with 0, so the four fields are numbered 0, 1, 2, and 3). Finally, the Weave() function weaves each pair of half-height fields into a single interlaced frame. At this point the two 50fps progressive files have been joined into a single 25fps interlaced video suitable for DVD or SD Blu-ray. The result is attached as "mislav_1and2_25i_DVD.mpg".
Now, if anyone can suggest a better way to get those videos transferred and played properly....
With better transfer and playback you could get much better results, and with far less work and effort.