So here are more answers to your questions that you likely anticipated. But why fool around? There's some learning to do here, so we may as well get down to it. You have a typical damaged piece of video garbage that requires more than a passing filter. Attached is an mp4 version of the rework I'm suggesting ("output_25i_playback_4x3DAR.mp4") and an MPEG/DVD version ("output_25i_playback_4x3DAR.mpg"). Below are details of how they were created.
Originally Posted by Bios
The problem here is that the mpeg2 is flagged as 25p but it contains a 50i signal.
BTW, there is no such thing as "50i" unless you're talking about a video that plays at 100 fields per second, which this doesn't. Marketing con artists recognized the mentality of their target market and started talking "50i" to trick suckers into thinking they were getting something for nothing. What they got was good old 25i, which stands for 25 interlaced frames per second playing at 50 fields per second. But, yes, your mpg sample was lossy re-encoded in an apparent effort at showing off someone's talent for royally borking the chroma structure and making the interlacing look like crap. On top of that, the villains didn't even give you legal signal levels: luma exceeds the y=16-235 range, and the black level is so high that the borders aren't even black, they're dark gray.
And on top of that: your mpg sample is a dud in some systems. Its total bitrate including audio exceeds the MPEG/DVD maximum by several thousand kilobits per second. The more I fooled around with this critter, the more amazed I was at discovering new foul-ups in every category. A couple of my utilities wouldn't accept it. After Effects played the audio as pure hiss and said the mpeg didn't have a DVD-compatible bitrate structure.
Does 50fps interlaced video that plays 100 fields per second exist? Yes, it does. Would you call it 50i or 100i?
Originally Posted by Bios
PS: It gives me a 502 Gateway error when I upload the sample, so here is the link.
I would have tried DGIndex to demux the audio and video (you will get audio errors, by the way, which is why I made a WAV first in VirtualDub). The encoding of this sample is a demo of criminal malfeasance. I took the liberty of demuxing your audio & video and posting them here as .m2v and WAV files. When offsite samples disappear, the entire thread becomes virtually useless. The script needs DGIndex anyway to create a .d2v project file for opening the mpg video in Avisynth, which is the way it's supposed to be done with mpg files. If your sample were an AVI capture, we'd use a different method.
Originally Posted by Bios
Also, I don't know if there is a Chroma Upsampling Error (it's in 4:2:0)
No, it isn't. It's known as Really Stupid Processing Error. As MPEG it's supposed to be 4:2:0 (aka YV12), just like DVD, BluRay, and broadcast TV. What do you think MPEG is, anyway?
Originally Posted by Bios
And there is still blockiness artifact from mpeg2 (At that bitrate I wasn't expecting so much of that)
The high-compression block noise existed prior to the re-encoding, in whatever screwed-up low bitrate original segment they were working with.
Originally Posted by Bios
Can someone help me restoring this files, preferably for semi-archive intensions, and possibly progressive? (Yes, I know it's lossy )
This video disaster fell below any known archive standard a long time ago. At this stage the best "archive" tactic is to save it somewhere as-is, in the hope that future developments in technology make more improvements possible. Another concept you should understand about archive standards is that interlaced sources are never archived as deinterlaced because deinterlacing is itself a destructive process.
Anyway, I don't think this sample will ever play smoothly in interlaced form. It's too damaged in that regard and will always play with sawtoothed buzzy edges and motion grunge (vague shimmer in smooth surfaces caused by data discarded in multiple stages of lossy encoding). I found that to be true even at 50p. The second shot will fare a lot worse than the first. In both scenes of the video I was forced to achieve clean edge playback and quell bad interlace effects by discarding alternate fields (QTGMC's "FPSDivisor=2") and processing the video as true 25p. In that case you can encode the results as interlaced for DVD or standard-def BluRay, or as progressive for other mounting. For internet mounting you have to resize to square pixels, such as 640x480. Anamorphic 720x576/720x480 won't play on the internet.
Of course you can keep both fields by re-interlacing as the last step in the script (sample code would be: "SeparateFields().SelectEvery(4,0,3).Weave()"). Or you can just keep both fields as 50p. But I guarantee you won't like the 50p results. The data that defined smooth edges for interlaced play has been decimated and distorted in transcoding. Likely the original had some aliasing anyway.
I cut the video into two separate shots, since each required different processing and even had different levels problems. I joined the pieces in my encoder, but you can rejoin them any way you want. Your posted MPG is a real maverick; the audio refused to decode properly (is it really a 7-channel sound track?). I managed to load the mpg into TMPGEnc Smart Renderer and saved the audio-only as a lossless WAV file. The audio is severely overmodulated, so I de-amplified it by about 6 dB in the Avisynth scripts when opening the WAV file. Remember that the original AC3 audio is lossy, so lossy transcoding of lossy audio also damaged audio quality.
Script #1 is Part 1, or the first camera shot. It's damaged but is at least fixable. Parts of the top border appear to have a split line in some frames (or was that in scene #2 ?), fixed by cropping and adding new black borders as one of the last steps. There is some slight chroma bleed (Note: chroma bleed also occurs with s-video and HDMI input from VHS, so don't blame composite).
Script #2 is Part 2, or the 2nd camera shot. This shot is an effective display of really dumb processing choices. It would be interesting to have a history of the low-bitrate original. It looks like something encoded on PooTube. It's soaked with macroblocks and further hampered by bad tracking. The macroblocks can be smoothed but the mistracking can't be fixed. I'm surprised that this segment was retained, but I figure it had value that warranted keeping. Another problem was residual magenta chroma noise along the background wall. This appears to have been very well embedded in the original, and all the bad processing done without removing it earlier made it extremely difficult to clean all of it. FFT3D in the QTGMC filtering and BiFrost anti-rainbow is about as far as I would go -- additional chroma cleaners and Camcorder Color Denoise at full blast had zero effect and slowed processing down to less than 1 frame per second. It's unlikely that most viewers will notice the remnants.
In script #2 I used a general denoiser (MDG2), which is really a wrapper around the MVTools MDegrain2 function. It calmed a lot of the freaky edge noise in shot #2 that resulted from bad tracking and helped fix a little of the split-line and blister effects along the top portion. I then re-applied a little dithered film-like grain to mask lost detail and avoid a plastic look. I'd advise that adding more smoothers will just turn the whole scene into more of a blur. There is only so much that one can fix here before reaching the point of diminishing returns.
Originally Posted by Bios
Which program/filters/plugin should I use?
We'll get to that below. The program to use is 32-bit Avisynth v2.6 and 32-bit VirtualDub (preferably v1.9.11). V1.10 of VDub is kinda buggy and won't make nice with some filters.
If you've never used Avisynth you're definitely in for surprises. But I think you'll find that by browsing some usage examples it's not as mysterious as it first appears. The hardest part when getting started is downloading a bunch of starter plugins. Fortunately the same plugins are shared among many filter packages and are used again and again. One thing's for certain: if you want to fix beat-up videos like the one you sampled, there's no editor or NLE that comes anywhere near Avisynth's power, precision, and flexibility.
You'll need a highly popular lossless codec for intermediate AVI work files: the Lagarith lossless codec. Why? Because it works with YV12, which you'll use a lot. Lagarith's handy foolproof installer v1.3.27 is on this page: https://lags.leetcode.net/codec.html
You'll be collecting Avisynth and VirtualDub plugins as you learn to work with videos. Don't download filters directly to plugin folders. Instead, create a main folder, then inside that folder create subfolders named for each filter or filter package you download. Unzip filters into their respective subfolders -- most filter downloads include several files and documentation, and you don't want all of that in your plugins folder. Copy only the required filters into your main plugin folders.
The two Avisynth scripts I used are below. After each script I've posted line by line details:
### --- change the path below to match your system --- ###
### --- change the path below to match your system --- ###
The first step in running these scripts is to create a .d2v project file and to save an error-free .WAV audio file. To create the WAV file, open your MPEG in VirtualDub. If you don't have the MPEG/Dolby audio/video filters in VirtualDub, open the mpg in the same editor you used to create your sample mpeg. In that editor, save the audio-only as an uncompressed PCM (WAV) audio file. If you still don't know how to save audio-only in your editor, I've posted the same .WAV file I used for your short .mpg sample and called it "output_demuxed.wav". The video-only file is "output_demuxed.m2v". These attachments are your original posted mpg but demuxed and in different containers (no re-encoding):
The safest, cleanest, most frame-accurate MPEG decoder for filtering and processing is DGIndex. To create the .d2v project file from any mpg or m2v video, use the DGIndex function in the free DGMPGDec MPEG2 decoder utility (https://www.videohelp.com/software/DGMPGDec).
Create a new folder on your PC, download DGMPGDec158.zip into that folder, and unzip it. In that folder you'll see an Avisynth filter named DGDecode.dll. Copy DGDecode.dll into your Avisynth plugins folder. Then in the unzipped files find the main decoder app, which is DGIndex.exe. Double-click DGIndex.exe to create your .d2v workfile from any mpg-encoded video, as described in the next paragraph. You will not need a fully demuxed video copy (.m2v); you will only need a much smaller .d2v index file. Don't worry about any audio files that might be created (your sample will output audio errors anyway, so ignore them). Use the saved WAV file instead.
It takes more time and space to explain this than to do it:
When you double-click DGIndex.exe it will display a small black editor window (empty) with a top menu bar. In the top menu bar, click "File", then click "Open". You will see a typical Windows "Open File" dialog window. Navigate to the location of your mpg video -- in this case it will be your mpg sample so that you can follow the scripts and use the saved WAV file, but it can be any mpeg you want. After you locate and select your mpg sample, click "Open" in the lower right-hand corner of the dialog.
You will now see a simple window that lists your selected mpg file. There are command buttons down the right-hand side, but all you need at this point is the "OK" button in the lower right-hand corner to close this dialog. You will be returned to the main DGIndex edit window and you will see the first frame of your selected video.
In the top menu, click "File". In the drop-down menu that results, click "Save Project". You will not need to enter any workfile names or locations. Just accept the defaults. DGIndex will automatically start indexing and producing statistics on your selected video, all of which will be placed in the same folder as your selected video file. The processing will display in a vertical column window. Wait for the numbers to stop updating, until you see everything stop and the word "Finished" or "Complete" appear. Then close all of the dialog boxes and the main DGIndex window to end the program. In the folder with your selected video you will find a ".d2v" project file. That's the file you need for your processing scripts. Keep your source mpg and .d2v files together in that folder.
The Import() function opens a specified text-formatted Avisynth plugin and imports every line of the text into your own script at runtime. There are several reasons why some filters are published as text-formatted .avs files, usually because they contain code that is duplicated in several other filters or versions of the same filter. Importing only one version at a time avoids conflicts. The particular plugin mentioned here, RemoveDirtMC, is discussed below.
Other Avisynth plugins are published as compiled .dll's or as text .avsi's. DLL and AVSI files load automatically when they are mentioned in your scripts. The .avs filters must be imported.
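As a quick sketch of the difference (the path is hypothetical):

```avisynth
# .dll and .avsi plugins sitting in the plugins folder load by themselves;
# a text .avs filter has to be pulled in explicitly before you call it:
Import("C:\filters\RemoveDirtMC\RemoveDirtMC.avs")
```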
" is a name I invented to describe the audio track to Avisynth. "vid
" is another name I invented to describe the video track. You can invent your own names for entities in an Avisynth script as long as the name isn't the same as a genuine filter or Avisynth function. The "aud
" audio file is opened with the builtin WAVSource
function, and the "vid
" video file is opened with the MPEG2Souurce
is a function in the DGDecode.dll plugin that you copied earlier into your Avisynth plugins folder).
You now have to join the video "vid" file to the audio "aud" file using the builtin AudioDub() function. Use AudioDub exactly as shown in the script, with the phrase (vid, aud) in parentheses in the order shown, and with the comma. In coding terms, the words shown in parentheses are often called the syntax or the parameters format. With the AudioDub function, the intended video content is always named before the intended audio content.
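In script form, the opening steps described above look roughly like this (file names are placeholders for your own):

```avisynth
vid = MPEG2Source("output.d2v")         # video via DGDecode's .d2v index
aud = WAVSource("output_demuxed.wav")   # the error-free WAV saved earlier
AudioDub(vid, aud)                      # video first, then audio
```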
At the end of the WAVSource statement you'll see a period followed by AmplifyDB(-6.0,-6.0). The period connects the WAVSource operation with a volume adjustment function (AmplifyDB), which lowers both audio channel volumes by 6 decibels each.
In Avisynth you can often use a period after a command to join one operation to one or more following commands. Sometimes this simplifies a compound operation, sometimes it just looks more complicated. In this case it greatly simplifies the layout of the two operations; the first operation (WAVSource) opens and decodes a WAV file, then the period "feeds" that result to the AmplifyDB operation.
Almost every Avisynth filter and function can be examined in mind-boggling detail using Google to access the online Avisynth wiki. In a Google search bar enter the word "Avisynth" followed by the filter or function name. For example, the first operation in this script used the function WAVSource() to open a .wav audio file. So you would go to Google and enter "Avisynth WAVSource", which would give you a listing of several sources. The source at the top of the list would take you to this wiki page: http://avisynth.nl/index.php/AviSource
You don't have to know everything on these wiki pages. The general idea will suffice. Mainly you just need to know, basically, how to type the statement and/or where to get more info. Avisynth also installs its own builtin Help files into your Avisynth program folder. Access them by opening your program group listings: locate the Avisynth program group, expand it, and click on "Avisynth documentation".
This statement uses only the frames numbered 0 to 216, which make up shot #1 in your sample. Note that frame numbers start with 0.
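Assuming the statement in question is a Trim() (the usual way to select a frame range in Avisynth), it would read:

```avisynth
Trim(0, 216)   # keep frames 0 through 216 inclusive (shot #1)
```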
In the above statements note that #Histogram("Levels") is preceded by a "#" character. That character indicates that everything following it is a comment. Comments are not executed. I left that comment in the script to indicate that at one time I used an Avisynth histogram to check input levels and general luma and chroma response properties while making adjustments with ColorYUV(), Tweak(), and Levels(). Each of those three functions performs a variety of image adjustments. The histogram was used to check that the adjustments fell within the allowed YUV values of y=16 to 235. Markings on various histograms and graphs indicate out-of-range values when they're detected. Values below y=16 will be crushed darks, where shadows and dark colors are just black blobs with no recoverable details. Values above y=235 will be clipped data, meaning overly bright hot spots, white or discolored blobs with no recoverable details in RGB.
ColorYUV() applies a negative offset of minus 4 (off_y=-4), which is a fairly mild reduction in all pixel values from the darkest to the lightest. It corrects a slight overflow of bright values that start out just past y=235, but it also corrects overly bright or grayish-looking dark details and makes things look a little more 3-dimensional. Tweak() adds a small but visible amount of badly needed chroma saturation. The command avoids unwanted coring (sharp cutoffs of dark details around y=16) and adds dithering (ordered dithering to "fill in" gaps where luma and chroma details are missing in large, solid-color areas such as sky and walls). Dithering helps to prevent block noise and hard gradient edges in areas that should appear smoothly graduated or solid.
Levels() is a final check on the preceding adjustments. This makes sure that pixel values from darks, midtones, and brights stay within the y=16-235 range. The numbers in the Levels() syntax occur in a specific order and have meanings as follows: the value 20 is the target dark input range, which targets input values in a narrow range around y=20. 1.1 is a gamma or midtone value, in this case slightly brightening middle values, which is where most skin tones, natural foliage, and middle grays occur. 255 targets the bright input values that would occur at the 255 brightest limit. 16 is the target dark output value. Remember that input values around 20 were targeted earlier, so here we are lowering some subtle shadow input values from around 20 to a darker 16 to avoid dark grays that might look a little too washed-out. 240 is a target for bright output values, and in fact it does darken brighter input values and brings them down to about y=235, or just under 240, since there are no y=255 values in this particular shot -- original bright values in this shot are at around y=245 before adjustment.
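Putting those pieces together, the levels chain reads something like the sketch below. The Tweak() saturation value is my illustrative guess; the other numbers are the ones discussed above:

```avisynth
ColorYUV(off_y=-4)                           # mild overall level reduction
Tweak(sat=1.2, coring=false, dither=true)    # sat amount is illustrative only
Levels(20, 1.1, 255, 16, 240, coring=false, dither=true)
```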
There are ways to find out what the brightest and darkest YUV pixel values are. The easiest is ColorYUV's "Analyze=true" parameter. See the ColorYUV page at http://avisynth.nl/index.php/ColorYUV and the long list of available options.
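For a quick check, temporarily add one of these lines near the end of your script:

```avisynth
ColorYUV(analyze=true)   # overlays min/max/average YUV stats on each frame
# Histogram("Levels")    # or watch a waveform-style levels display instead
```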
The numbers tell you when you exceed a desired range. They don't tell you what "looks right." That's a subjective judgment that requires some learning and a properly calibrated monitor. If you don't have a calibrated monitor, trying to adjust "correct looking" levels and chroma is a very effective means of punishing and frustrating yourself.
In Avisynth the default field order for all video is Bottom Field First (BFF). By placing AssumeTFF() out there in front of everybody, you tell Avisynth to assume that the proper field order at this point and beyond is Top Field First (TFF). But note: in themselves, neither TFF nor BFF tells Avisynth whether your video is interlaced, telecined, or whatever. Avisynth has no way of knowing one way or the other.
QTGMC is the prime deinterlacer around these parts. The only deinterlacer that's equal or better would be your TV or playback system, and it has to be pretty good to match QTGMC -- 90% of the time, anyway. QTGMC at certain settings (usually the slower ones) has the side effect of being a decent denoiser and can even do a little chroma cleanup. Mainly it tries to clean up fuzz-infected edges and some level of motion shimmer and defects while generating new interpolated frames. At the same time, it takes two interlaced half-height fields and outputs two full-sized progressive frames. This doubles the frame count and doubles the frame rate. But in this case we're using FPSDivisor=2 to discard that second frame and maintain the original frame rate, for reasons described earlier.
Meanwhile QTGMC is being used with a preset of "Faster". This isn't its strongest denoising setup, so a few parameter defaults are strengthened a bit, such as setting ChromaNoise=true for some extra chroma cleaning. Presets determine most default values, but QTGMC allows you to override many of them.
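A minimal sketch of the preset-plus-override idea:

```avisynth
# The preset sets dozens of defaults; naming a parameter overrides just that one
QTGMC(Preset="Faster", ChromaNoise=true)
```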
QTGMC and all of its support plugins, documentation, Windows FFTW3 system libraries, required VisualC++ runtimes, and easy-as-heck short instructions are in an updated zip package at QTGMCPluginsPackageNov2017.zip. The support files include a battery of Avisynth plugins that are also used as standalone filters in their own right. To get the .zip package, create a folder called QTGMC, download the zip into that folder, and unzip to open subfolders that contain everything you need in easy-to-handle form for the original QTGMC v3.32, which runs just about everywhere in 32-bit Avisynth v2.6.n. The QTGMC.zip package is a valuable asset that contains many filters you'll be using.
RemoveDirtMC is run at a power of 30, which is fairly mild but visibly effective at removing what's known as "floating tape grunge" during motion, and does a little chroma and white-spot cleaning. RemoveDirtMC is an old standby and runs fairly fast. It's available as RemoveDirtMC.avs. It's actually an .avs text file rather than a compiled .dll, so it doesn't load automatically when called from your script but must be imported as discussed earlier.
RemoveDirtMC requires the following support files:
- If you're using Windows 7 or later, you'll likely need two older VisualC++ 32-bit runtimes that Microsoft in their infinite wisdom forgot to furnish. See the thread "Fix for problems running Avisynth's RemoveDirtMC", which has download attachments for Msvcp71.dll and Msvcr71.dll; you simply copy them into your Windows system folder. Instructions are in the posted thread.
- RemoveDirtMC also requires the RemoveDirt_v09.zip package. Create a folder called "RemoveDirt v09", download the .zip into that folder, and unzip it. Copy the required .dll filters into your Avisynth plugins folder.
- RemoveDirt also requires either RemoveGrain_v1_0_files.zip or the later RGTools.dll. Not to worry: both of those packages are installed with the QTGMC.zip package.
aWarpSharp2 is used here to filter chroma-only and tighten bleeding colors closer to edges. aWarpSharp2 can also be used as a general sharpener, but it does tend to thin narrow lines, which is why it works as used here to reduce chroma bleed. MergeChroma() is a built-in function that demonstrates that in YUV you can process luma and chroma separately without affecting the other. In this case MergeChroma works on chroma only.
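A sketch of that luma/chroma split (the depth value here is illustrative, not necessarily what I used):

```avisynth
# luma comes from the untouched clip, chroma from the warp-sharpened copy
MergeChroma(last, last.aWarpSharp2(depth=10))
```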
aWarpSharp2 is posted at digitalfaq as aWarpSharp2_2015.zip. It requires the Microsoft 2015 VisualC++ runtimes, which are included as links with the QTGMC.zip package.
LSFmod is a sophisticated sharpener that attempts to do its job without creating more sharpening noise or edge artifacts/halos. It beats ordinary sharpeners, especially those in NLEs and VCRs. It's a favorite that has been around for a long time and is still used as a support filter by some bigger complex plugins.
LSFmod is available in the wiki pages at http://avisynth.nl/index.php/LSFmod. The current version is LSFmod v1.9.avsi, which autoloads when called from your script. LSFmod requires the following:
- MaskTools2 and its support files, which are included with the QTGMC .zip package.
- RGTools.dll, which is included with the QTGMC .zip package.
- aWarpSharp2.dll, the current version discussed above as aWarpSharp2.
- the VariableBlur plugin, posted as VariableBlur_070.zip. VariableBlur requires the Windows FFTW3 system library, which is supplied with the QTGMC.zip package. It also requires the free Microsoft VisualC++ 2010 runtime, which is used by several other plugins and Windows apps and is available at http://www.microsoft.com/en-us/downl...s.aspx?id=8328
Here's another case of joining two successive commands with a period on one line. Crop(12,2,-10,-8) removes unwanted or dirty pixels in a specified order around the frame, as shown here: it removes 12 pixels from the left border, 2 pixels from the top border, 10 pixels from the right border, and 8 pixels from the bottom. As you can see, the circuit starts at the left and moves clockwise around the frame. AddBorders(12,4,10,6) is configured to work along the same route, but it adds pixels in a way that gets the image more centered vertically. AddBorders adds 12 black border pixels to the left border, 4 to the top, 10 to the right, and 6 to the bottom. Adding black pixels at or near the end of the script ensures that black borders in 4:3 images will blend seamlessly into the black backgrounds of modern displays.
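On one line, the step looks like this; note the frame size ends up unchanged, since the 22 pixels of width and 10 of height removed by the crop are added back as black borders:

```avisynth
Crop(12,2,-10,-8).AddBorders(12,4,10,6)
```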
The return last statement returns the last thing that occurred in the script, which was the border routine. We need a "return last" here because you'll recall that early in the script we created two new clips, one an audio clip called "aud" and the other a video clip called "vid". At the end of the script, Avisynth wants to know which clip to return as completed. Of course, the "last" thing that was done is what we want returned.
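A skeleton showing why the explicit return matters once clips have names (file names hypothetical):

```avisynth
vid = MPEG2Source("output.d2v")         # a named clip
aud = WAVSource("output_demuxed.wav")   # another named clip
AudioDub(vid, aud)                      # unnamed result becomes "last"
# ...filtering steps keep building on "last"...
return last                             # return the finished clip, not vid or aud
```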
Below, script #2 has many of the same coded lines as script #1, but it has a few that differ and has a special routine for that horrible block noise and bad tracking. The blue edge of that water tank is never going to be perfectly smooth no matter what you do, but we can blur what appears to be shimmering water reflections. Unless someone comes up with new magic words from another planet, I think that's about as smooth and quiet as that blue tank edge will get. You'll need two special plugins for that ugly macroblocking: Dither Tools v1271 and DeBlock, both covered below.
There are some old filters here from script #1 and some new ones. The first major change is that you'll need to download a handy set of heavy-duty denoisers that you'll learn to use in the future for desperate situations like this ugly video.
You can keep the specialized Dither filters in the Dither Toolset anywhere you want, but the most convenient location is a new subfolder called "Dither" inside your Avisynth program folder. In the future you might find additional plugins for that Dither folder.
For the time being, download the attached "Dither_Toolset_v1271.zip" file into that Dither folder. Unzip it and you'll find that it unzips into a subfolder named "Dither_Toolset_v1271". There are other Dither versions, so save that subfolder. You can see that I used that subfolder name in the script above. There are several filters in this subfolder, some of which are actually copies of dll's that you get with QTGMC. But QTGMC goes thru many changes, so keep these Dither filters together and import and load them as a "family", as shown in the script. CAUTION: Read the "libfftw3f-3 - WHERE TO PUT ME.txt" file carefully. It refers to a Windows syslib file named "libfftw3f-3.dll", which is in the subfolder. NOTE: libfftw3f-3.dll is NOT an Avisynth filter. Never place it in your Avisynth plugins folder. If you have installed the files for the QTGMC.zip plugin package, you already have this .dll in your Windows system. To make sure, check the instructions in the "libfftw3f-3 - WHERE TO PUT ME.txt" file.
Note: the Toolset folder includes extensive documentation in the "documentation" subfolder. It's rather heavy-handed html at first sight, but after you have used similar tools for a while it makes more sense. Fortunately you will never have to use all of the functions in that toolset.
The ContrastMask plugin is posted as ContrastMask.avs. Copy ContrastMask.avs into your Avisynth plugins folder. It requires the same VariableBlur plugin discussed and linked earlier in script #1.
Earlier I mentioned MDG2, which is really a wrapper around the MVTools MDegrain2 function. It calmed a lot of the buzzy, broken-edge noise in this shot. MDG2 is a customized .avs plugin posted as MDG2.avs. It requires MvTools2.dll and its support files, which are included with the QTGMC.zip package. It also requires aWarpSharp2, discussed and linked above for Script #1. See? I told you you'd be using these same filters all over the place.
These same lines occur at the start of Script #1. The difference here is that the frames for Part 2 start at frame 217 and extend to the end of your sample video.
ContrastMask, described above, is used to bring up shadow detail that gets dark and lost when a camera's AGC darkens the video because of bright objects entering the frame. Bright reflections off the water apparently set off AGC in this shot, causing the background figures to look rather dim. In ContrastMask, the "enhance" value is the brightening strength, whose max value is 10. The one disadvantage with ContrastMask is that it does tend to creep a little too high into the low midtones.
ColorYUV() is used to bring the unsafe brights down a bit (off_y=-3). It lowers a surplus of blue (off_u=-3) and balances it with an increase in middle reds and yellows (off_v=2) to keep the background figures from looking too ghostly gray and colorless. Tweak() increases saturation and Levels() acts pretty much the way it did in Script #1.
The deinterlacing statements appear as before, this time with QTGMC using the stronger and pokey "Slow" preset. This slows things down quite a bit, to just about 3 fps processing. Not unusual for botched video. The tried and true deBlock.dll plugin is used to hammer down and smooth macroblocks, which are really quite thick. I'm surprised that DeBlock left so much detail remaining. The "quant" parameter is the main strength setting, used here near its max. MDG2 has been described and linked earlier.
DeBlock is posted as DeBlock_13.zip. This filter is also used by another heavy plugin called MCTemporalDenoise (aka "MCTD"), which you'll encounter if you keep getting nightmare videos like this one.
Now we come to GradFun3, an industrial-strength smoother used here at a very high value of "thr=1.0" (the default is 0.3). This is part of the DitherTools package, which upgrades 8-bit color/detail to 16-bit, does its job, then deflates everything back to normal 8-bit. GradFun3 is a strong smoother for hard edge banding and block noise. Part of its action is to partially mask hard edges with ultra-fine film-type grain. If you want to read more (lots more) about why removing all the grain from video usually makes things worse, try the original doom9 thread where Dither tools was introduced: Color banding and noise removal
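As used here, the call is simply (the thr value from the discussion above):

```avisynth
GradFun3(thr=1.0)   # default thr is 0.3; 1.0 is a heavy smoothing setting
```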
BiFrost is an anti-rainbow/anti-chroma-noise filter. It helps to clean color stains and blotches. Not 100% effective because of all the previous transcoding garbage and VCR noise, but every little bit helps. Posted as Bifrost_v2.zip.
The last lines perform similarly to the last lines of script #1. What's added is a touch of AddGrainC(1.5,1.5), an old standby used to help mask defects, here at mild settings. AddGrainC.dll is part of the QTGMC.zip package and is well documented in the package subfolders.
The output from these scripts was saved as YV12 workfiles losslessly compressed with Lagarith. Then Parts 1 and 2 were joined in my encoder and encoded to a 4:3 display ratio with interlace flags (even though they are physically progressive). The h.264 encode is attached as "output_25i_playback_4x3DAR.mp4". The MPEG2/DVD version is attached as "output_25i_playback_4x3DAR.mpg".
The only reason for spending time with videos made by people who don't know WTH they're doing is to teach yourself how not to process video. If you ever have to get into strong color correction, you can have a great time with VirtualDub filters and add-ons. But that's for later.