
digitalFAQ.com Forum (https://www.digitalfaq.com/forum/)
-   Restore, Filter, Improve Quality (https://www.digitalfaq.com/forum/video-restore/)
-   -   Suggestions on restoration process for Hi8 tapes? (https://www.digitalfaq.com/forum/video-restore/10595-suggestions-restoration-process.html)

cicaesar 05-03-2020 04:25 AM

Suggestions on restoration process for Hi8 tapes?
 
18 Attachment(s)
Hi everyone. I'm back after an initial tape capture which this forum helped me achieve with this thread. This post is really a continuation of that same work: after looking at the great results you guys obtain I now want to take a step up and start restoring my tapes, building on the avisynth script that Sanlyn drew up for me at the time.

First of all I want to thank the people in this forum, especially Sanlyn and Lordsmurf, not only for your help on my old thread but for all the useful information that you put in this forum.
Since I asked here about how to capture my Video8 cassettes, more than a year has passed. I did complete my capture in due time, thanks to you. Now I have some time to spare and I'd like to proceed with restoring, which is the focus of this post; after I'm done restoring I'll think about the conversion to DVD and x264 (maybe this time I'll manage not to wait another year for the next step...).

Before posting here I prepared myself the best I could: I've put in almost a month reading dozens of forum posts, installing the needed software, experimenting by myself and trying to find a process for this restoration. I went through avisynth's wiki, color theory, video formats, etc., and with every guide I came out with 2 answers and 5 new questions... some of them still need answers, hence this post.
Please bear with me for the incoming wall of text: I try to be thorough so that I understand things better, and so that while asking for help I can maybe contribute what little progress I've made to others.

Context
I have many Video8\Hi8 25 fps PAL cassettes that I captured in Huffyuv. The cassettes were originally taped with a Sony Handycam Video8, and were digitized via VirtualDub with a Sony Handycam Digital8 DCR-TRV230E with integrated TBC, and an EZGrabber2 USB converter. I'm working on a 10 year old 4-core i5 CPU with 12 GB of RAM, so not the fastest machine on Earth; nevertheless, I don't mind waiting. It is not connected to a monitor but to an LG TV (model 57LM620S-ZE). Progressing from my first post, I managed to install both Huffyuv and Lagarith in their 32 bit versions, so for the editing I will be using the recommended virtualdub 1.9.11 (32 bit) and avisynth 2.6.0 (32 bit).

What I want to do
I want to restore these videos with avisynth and virtualdub, and in particular:
  • Make them "cleaner" (at least reasonably); for instance, reducing noise or eliminating jagged lines
  • Crop the headswitching noise
  • Minimize the visual impact of the right green border that pesters every one of my videos
  • Stabilize the image and, wherever the timestamp isn't overlaid, deshake it
  • Adjust color levels and remove color casts
I know there isn't a one-size-fits-all solution, but since all my cassettes were taped with the same camera I guess I can separate the activity into 2 main steps: one applying filters that only depend on the camera and can be replicated for every video (eg: cropping), and one applying filters that depend on the scene and thus must be tailored every time (eg: color levels).
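To make the split concrete, here is a rough skeleton of how I picture the two scripts (the file names and "..." placeholders are hypothetical, just to show the structure; the crop values are the ones from my preprocessing script below):

Code:

# Script 1 - camera-dependent preprocessing, identical for every tape
AviSource("capture.avi")                        # hypothetical input name
Crop(10,2,-16,-10).AddBorders(12,6,14,6)        # same crop/borders for every clip
# ...shared denoising, antialiasing, border cleanup here...
return last

# Script 2 - per-scene correction, tuned clip by clip:
# AviSource("preprocessed.avi")
# Levels(...) and Tweak(...) values chosen per scene from the histograms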

Regarding colors, I just want to make slight adjustments. I'm not a professional unfortunately and I do not have a professionally calibrated monitor. In many a post people strongly advised not to eyeball colors, so I'm going to rely on histograms to align the luma\chroma spectrum and I will make only slight adjustments to saturation. I've tried dabbling with virtualdub's gradation curves like Sanlyn explained, but I'm always concerned that I'm overdoing things... I'd prefer not to use them if possible.
I did however adjust my TV colors with simple visual calibration (twice actually, the first time I messed it up): I set Windows' and my video card's color adjustments to neutral, removed all kinds of dynamic color regulation from my TV, and followed the Lagom calibration guide to manually set the TV color parameters. I can't pass the sharpness and gradient tests though (the latter only by a small margin).

So, I will describe the process I've come up with to have your input. I will provide attachments. I will list my questions in the text.

The sample
Attached is a Huffyuv video I've cut from a cassette clip in YUY2 format: 01 - Restoration sample - Cut Huffyuv.avi.

I've excluded the sound. It's shot indoors at night. Aside from the headswitching noise and the godforsaken right green vertical band, it looks to me like it has a red cast.
At frame 141, Csample gives me a value of R:230 G:160 B:166 from the front of the man's sleeve, which should be white.
At frame 309, Csample gives me a value of R:255 G:162 B:144 inside of the lower half of the "eight" candle for a color that should be whitish.

Setting input levels
This is the first roadblock I am encountering. I couldn't find any post online that settled the problem of regulating the input levels of the video. What I did find out is that it is very important to set levels before using avisynth plugins, and that it is better to set levels first and regulate contrast \ brightness after.
So my first step is to open the video with virtualdub, load a crop filter to remove the right green edge and the black borders, load the levels filter and try to eyeball the correct input levels that I will use later in avisynth. I noticed that the midpoint (gamma) value doesn't change when I move the left and right sliders, so I think I should leave it at 1.0. I reeeeeeally don't know if this is correct to be honest.

Using sliders, I chose a (0,1.0,215) input:
Attachment 11709

I attach the virtualdub filter chain: 02 - Restoration sample - Levels initial reading.vcf.
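For reference, this is how I'd express that same (0, 1.0, 215) reading in avisynth (a sketch only; the middle parameter is the gamma, which I'm leaving at 1.0 as discussed above):

Code:

AviSource("01 - Restoration sample - Cut Huffyuv.avi")
# Input black/white points eyeballed in virtualdub, mapped to the 16-235 output range
Levels(0, 1.0, 215, 16, 235, dither=true, coring=false)
return last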

Preprocessing with avisynth
I have separated avisynth processing into 2 scripts: the first is a preprocess that should be camera-dependent and thus applies filters that should be useful for all my videos without needing changes on a per-video basis. It's the very same avisynth script that Sanlyn suggested (I'm definitely not able to identify image defects and choose which plugins to use myself).
Separating image quality processing from color correction also has the benefit of making adjustments faster in the color correction phase, because this processing is extremely slow on my machine.
I compress the video in Lagarith YV12. In the video color depth options I specify Autoselect for the input and 4:2:0 planar YCbCr (YV12) for the output, in case it isn't set automatically just by choosing Lagarith YV12 in the compression options. It seems to me that this setting gets reset every time.

This is my Lagarith configuration:
Attachment 11711

Many questions here:
  • Is my Lagarith configuration correct?
  • The process runs at 1.5 fps; it's very, very slow. As I said, I don't mind waiting if the quality gets better, but I do wonder if I'm doing something wrong to be honest.
  • In the script I tried adding an antialiasing filter (Santiag()) because I have very bad jagged lines, especially with subjects in motion, and I thought the antialiasing could help. I still see jagged lines though, I don't know how to remove them, and they really (REALLY) bother me. What can I use?
  • Maybe it's not very apparent here, but in other longer clips the processed video looks "clunkier" to me than the original one, as if motion were not as smooth. Am I doing something wrong? Is it just an impression?
  • Is it normal that the Lagarith file occupies way less space than the Huffyuv one (75 MB vs 100 MB)?
  • Is it normal that the Lagarith video has way less bitrate than the Huffyuv one? GSpot gives me 50 Mbps for Lagarith and 65 Mbps for Huffyuv. I mean these are still very high bitrate values but I wonder if I'm doing something wrong and if this can have an impact on the smoothness of the video in motion.
  • I've seen many times here the "align chroma" suggestion (ChromaShift(C=2, L=-8)): should I add this too?
  • I've also seen a lot this line: MergeChroma(awarpsharp2(depth=20)). I understand it sharpens the chroma channel. Should I use it?
  • Is there a way to clean the right border a little more, with a more aggressive "de-greener"? It's still too green after the chubbyrain routine; for now I tweak a little of its brightness down in the color adjustment step (more on this later), but maybe there's a better way?
  • I've seen that Lordsmurf made a version of Stab() for VHS and tapes (name is Stabmod()): should I use that with Video8 cassettes?
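For completeness, this is how I understand those two chroma suggestions would slot into the script, if applicable (the values are just the ones quoted above, not tuned for my clip, so treat this as a guess):

Code:

ChromaShift(C=2, L=-8)                    # shift chroma to realign it with luma
MergeChroma(awarpsharp2(depth=20))        # sharpen only the chroma plane, keep luma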
Here is the script, which I also attach (03 - Restoration sample - AviSynth preprocessing.avs):

Code:

Import("C:\Program Files (x86)\AviSynth\plugins\ChubbyRain2.avsi")
Import("C:\Program Files (x86)\AviSynth\plugins\MDG2.avs")

/* Variables */
green_border_offset=690                                                                        # The green border offset
dark_input_level=0                                                                        # The dark input level
bright_input_level=215                                                                        # The bright input level

AviSource("01 - Restoration sample - Cut Huffyuv.avi")                                        # Source file


Levels(dark_input_level, 1.0, bright_input_level, 16, 235, dither=true, coring=false)        # Map input levels to output levels [how to set input parameters though?]
Tweak(cont=1.3, sat=1.2, dither=true, coring=false)                                        # Initial bump in contrast and saturation for filters, to be corrected after

AssumeTFF()                                                                                # Top field first
ConvertToYV12(interlaced=true)                                                                # YV12 conversion to work on filters

# -- chubbyrain2 for interlaced right border cleanup -- #
Separatefields()                                                                        # Split frames into fields
a=last                                                                                        # Set variable "a" as the full frame
a                                                                                        # Use stream a for creating the clean right border
Cnr2(mode="ooo", scdthr=255.0, ln=255, lm=222, un=255, um=255, vn=255, vm=255)                # Chroma noise reduction (should luma be 222 or 255?)
chubbyrain2()                                                                                # De-rainbow
Smoothuv(radius=7)                                                                        # Smoother
BiFrost(interlaced=false)                                                                # De-rainbow
Tweak(Hue=4, dither=true, coring=false)                                                        # Smooth green color
Crop(green_border_offset,0,-0,-0,true)                                                        # Crop out the image, selecting only the green band
b=last                                                                                        # Set variable "b" as the clean right border
Overlay(a,b,x=green_border_offset)                                                        # Overlap the clean right border "b" over the full frame "a"
Weave()                                                                                        # Reinterlace
# -- end of chubbyrain2 routine -- #

QTGMC(preset="very fast",border=true,ChromaNoise=true)                                        # Deinterlace and clean
vInverse2()                                                                                # Smoothing
#Stab(range=4)                                                                                # Mild stabilizer
MDG2()                                                                                        # De-grain
Santiag(strh=2,strv=2)                                                                        # Anti aliasing
SeparateFields().SelectEvery(4,0,3).Weave()                                                # Reinterlace
Crop(10,2,-16,-10).AddBorders(12,6,14,6)                                                # Crop out bottom head switching noise

return last

Attached is the resulting video: 04 - Restoration sample - Preprocessed Lagarith.avi

Analysis for color adjustment
For color correction I use an avisynth script to apply changes and analyze results, so that I can work on colors faster. As I mentioned, I don't really want to do full color correction (for which virtualdub with colormill and curves would maybe be more appropriate); I just want to "balance" the histograms. I don't feel confident eyeballing it with my limited skills and uncalibrated monitor. It seems, though, that I do need to apply saturation if I don't want everything to look washed out.

This is the script I use (05 - Restoration sample - AviSynth analysis.avs):
Code:

# Source file
AviSource("04 - Restoration sample - Preprocessed Lagarith.avi")       

green_border_width=690                                                # x offset where the green border starts

crop(720-green_border_width,10,green_border_width-720,-10)        # Crop to exclude borders from histogram analysis

original=last                                                        # Set "original" as the original video

Tweak(bright=1.0, cont=0.85, dither=true, coring=false)                # Luma spectrum alignment with brightness and contrast
Tweak(sat=1.8, hue=1.0)                                                # Saturation and hue control
ColorYUV(off_y=-10, gain_y=10)                                        # Luma spectrum shift and amplitude gain
ColorYUV(gain_v=-40, gain_u=15)                                        # Color alignment

Histogram("levels")                                                # Histogram YUV analysis
StackHorizontal (original, last)                                # Horizontal comparison

ConvertToRGB32(interlaced=true,matrix="Rec601")                        # RGB conversion for final VirtualDub processing
RGBAdjust(rb=-20, gb=10, bb=10)                                        # RGB spectrum alignment

HistogramRGBLevels(factor=1.5)                                        # Histogram RGB analysis

return last

I'll explain my reasoning for you to critique (some more details are in the script's comments):
  • I start with the luma histogram: usually the Levels() function in preprocessing keeps it inside the allowed range, but I try to "center" it a bit more by regulating contrast\brightness and altering offset and amplitude with ColorYUV.
  • I try to center the u and v channels a little more to eliminate color casts (red in this case).
  • In RGB, I try to shift every channel so that it stays inside the allowed values.
  • Finally, I change the saturation level. This is the only change I apply by eyeball; I would have avoided it, but without it everything looks washed out.
This is frame 190 before color adjustment:
Attachment 11713

This is frame 190 after color adjustment:
Attachment 11714

Questions:
  • Should I have the luma histogram expand a little more to the sides? How would I do it? Would that make colors clip?
  • Even after centering the RGB channels, their spectrums still fall outside the allowed range, so I guess I'm losing detail there. Is there a way to "shrink" those spectrums so they all stay inside the range?
  • If you happen to have a calibrated monitor, do you think the resulting colors are decent? Are they oversaturated maybe?

Applying color adjustment and virtualdub filters
After the analysis I copy the results in another script.
This script doesn't include histograms and the StackHorizontal function.
This script includes a section to try and minimize the impact of the right green border: it excludes the border from the saturation bump, removes some of its brightness, and centers its v channel a little more.
After this script I apply virtualdub filters and I save the video as a Lagarith YV12.

This is the script (also attached: 06 - Restoration sample - AviSynth color correction.avs)

Code:

# Source file
AviSource("04 - Restoration sample - Preprocessed Lagarith.avi")       

green_border_width=690                                                # x offset where the green border starts

original=last                                                        # Set "original" as the original video

#-- Green edge processing --#
crop(green_border_width,0,-0,-0,true)                                # Keep only the green edge, to process it separately
Tweak(bright=0.1)                                                # Reduce the brightness of the edge
ColorYUV(gain_v=3)                                                # Move the green-red spectrum towards the center
green_edge=last                                                        # Set "green_edge" as the green edge
#-- Green edge processing end --#


original                                                        # Work on the original video

/* Edit here: */
Tweak(bright=1.0, cont=0.85, dither=true, coring=false)                # Luma spectrum alignment with brightness and contrast
Tweak(sat=1.8, hue=1.0)                                                # Saturation and hue control
ColorYUV(off_y=-10, gain_y=10)                                        # Luma spectrum shift and amplitude gain
ColorYUV(gain_v=-40, gain_u=15)                                        # Color alignment

overlay(last,green_edge,x=green_border_width)                        # Map the unmodified green edge on the color corrected video

ConvertToRGB32(interlaced=true,matrix="Rec601")                        # RGB conversion for final VirtualDub processing

/* Edit here: */
RGBAdjust(rb=-20, gb=10, bb=10)                                        # RGB spectrum alignment

return last

I open this script with avisynth and I apply the filters which Sanlyn suggested, minus the color correction bits. This is the filter chain (also attached: 07 - Restoration sample - Virtualdub filters.vcf):
Attachment 11717

Attached is the resulting video (08 - Restoration sample - Postprocessed Lagarith.avi)

Questions:
  • Are the virtualdub filters too strong? For instance, at frame 238 if you look at the right part of the cake, isn't it too blurred?
  • Is it correct to save the video in YV12 at this point, given that I just converted to RGB32 in avisynth? Shouldn't I just save it in RGB32?
  • The right border bothers me SO MUCH; it's still too visible and I really hate it. Moreover, I'm not sure whether the alterations I made to it produce a better or an even worse result. Is there something different I can do about it? Should I crop it entirely?
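If cropping it entirely turns out to be the answer, I imagine something like this would do it (a sketch only; the border widths are my guess, chosen even for YV12 compatibility, and 690 is the band offset from my scripts, so the band is the rightmost 30 pixels):

Code:

Crop(0, 0, -30, 0)             # drop the 30-pixel green band (x 690 to 720)
AddBorders(16, 0, 14, 0)       # pad back to 720 wide with black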

Deshaker
I loved what Sanlyn showed me about the Deshaker plugin in virtualdub, so I've decided to apply it whenever a video doesn't have a timestamp (not many of them, unfortunately). The fluttering borders distract me a lot, so I'm going with the fixed zoom option, even if on many occasions it feels like it applies a bit too much zoom. I've gone through Deshaker's documentation and tried out the other options, but I'm not able to reduce the zoom without having (albeit slightly) moving borders. I have configured the Deshaker plugin using Lordsmurf's guide, even if some options are different in my plugin version (v3.1).

Questions:
  • There is a huge problem here, which I have to admit is the first time I've seen happening: as soon as the scene changes (from the one with the presents to the one with the cake, frame 145), the borders start to shake A LOT. The first scene is perfect but the second sucks. I've tried splitting the 2 scenes and deshaking them separately, but nothing changed. I fiddled with options to no avail. Maybe I should apply some sort of zoom filter manually to the second scene? How could I do that while maintaining the same resolution?
  • The file size of the final Lagarith video is 50 MB, half of the original Huffyuv video (100 MB). Is this correct or should I worry?
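One split arrangement I tried looks like this (a sketch; the deshaken file names are hypothetical): cut at the scene change with Trim, run Deshaker on each part separately, then rejoin.

Code:

v = AviSource("08 - Restoration sample - Postprocessed Lagarith.avi")
return v.Trim(0, 144)          # presents scene; swap to v.Trim(145, 0) for the cake
# After running Deshaker on each saved part in virtualdub, rejoin with:
# AviSource("partA_deshaken.avi") + AviSource("partB_deshaken.avi")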
Here is my configuration.
Pass 1:
Attachment 11720

Pass 2:
Attachment 11722

I attach the vcf filter chains:
09 - Restoration sample - Deshaker pass 1.vcf
09 - Restoration sample - Deshaker pass 2.vcf


Final version
Here is the final version of the video: 10 - Restoration sample - Final Lagarith.avi

If you had the patience to read until here: thank you.
Any help \ comment \ critique will mean a lot to me.

sanlyn 05-03-2020 04:26 PM

What a lotta questions, LOL! Working on it. Will report later.
Members have plenty to chime in on, here.

sanlyn 05-05-2020 09:49 AM

10 Attachment(s)
I'll use a following post to reply to your earlier questions and comments. Sorry for taking so long. I was starting to feel as if I had a huge sign on the PC room door that says, "Please Interrupt Me At Any Time"!

You certainly had your work cut out for you with the two scenes in this short sample clip. The filter choices I made would likely be modified based on longer scenes, different lighting, etc. But it didn't take long for me to discover that whatever you were using for a camera was your worst enemy. These are some of the toughest color correction problems I've seen in a while, with noisy reds that are uniquely warped.

Step 1:

I divided the clip into two parts, A and B, each of which required different color correction and denoising. I started by correcting levels and color in part A. The first 144 frames of the sample clip are about hands working with ribbons, one darker-skinned hand with part of a finger missing that apparently identifies a particular person.

I have no template script for videos, but I keep two very long .txt files loaded with hundreds of samples of boilerplate text for many filters, such as 10 different command strings for QTGMC and copies of quick routines from the DitherTools package, etc. Here, I wasted over an hour until it finally became clear that a tough problem in Part A was fluctuating levels. Brightness affects one's perception of color, saturation, and contrast. It might not look like it at first, but there's also a low-contrast problem. The camera's color response and auto-gain circuit (the work of Satan) twisted up the image histogram in ways I couldn't believe.

I attacked Part A with an autogain plugin, AutoAdjust.dll (https://forum.doom9.org/showthread.php?t=167573). It's an adjustable filter. The adjustments in its read-me doc are self-explanatory. It won't work in YUY2 but it works in YV16, the planar equivalent of YUY2. I used it to level the luminance pumping between light and dark. This type of filter often doesn't work so well, doing exactly what you don't want it to do (which is what your camera's AGC was doing), but it did decent work here.

Next came red. After almost an hour it was apparent that YUV was not the tool for correcting this maverick red, wherever it came from. I made a basic correction to tame mostly the low end and high end somewhat, but the rest was left up to normal RGB controls, where some rather simple but time consuming steps straightened things out in Step 2.

Below are an image and histograms from a darker section of luma pumping in part A: The top left image is frame 102 with borders removed and ColorYUV/Analyze column numbers overlaying it. At top right is a YUV histogram of that original frame. Bottom image: at lower left, an RGB ColorTools histogram of the same frame; at lower right, an RGB saturation vectorscope.


http://www.digitalfaq.com/forum/atta...1&d=1588689270http://www.digitalfaq.com/forum/atta...1&d=1588689284

The ColorYUV(Analyze=true) numbers reveal a high black point at y=28 or so, and specular highlights beyond y=235 -- but most of the data in the numbers and in the YUV histogram are left of middle. The frame is a picture of mostly midtones but the data shown is darker than that. As many users know, the YUV bands and numbers for the U and V channel are not that accurate; RGB is more informative, and there we find a hard red peak in the midrange. But the bulk of color information is left of the middle. The vectorscope shows that most saturation is in the Red quadrant, although flesh tones should be lying along the slanted line in the upper left quadrant.

The Avisynth code that shows the Analyze numbers and the YUV histogram is pretty standard stuff that I usually run -- when necessary. The code below also shows the addition of the AutoAdjust plugin:

Code:

AviSource("I:\forum5\faq\cicaesar\01 - Restoration sample - Cut Huffyuv.avi")
Trim(0,144)
ConvertToYV16(interlaced=true)
AutoAdjust(high_quality=true,auto_gain=true,gain_mode=0,chroma_process=200,\
    auto_balance=false)
ColorYUV(off_y=-8,off_v=-8,off_U=3)
Levels(16,1.0,255,16,235,dither=true,coring=false)
ConvertToYV12(interlaced=true)
Crop(10,2,-26,-10)
#ColorYUV(Analyze=true)
#Histogram("Levels")
return last

ColorYUV and Histogram are disabled in the above text, because they must be run separately so that one doesn't affect the other. What I actually did in practice was to run two copies of this script (one copy with Analyze=true activated, and one copy with the Histogram commands activated) in two instances of VirtualDub at the same time, while I made adjustments in ColorYUV and Levels. With the above script running, I loaded RGB filters to tweak the YUV output. This was not the final script for Part A; it has no denoising. I saved the output to a temporary Lagarith file for color only and kept running it and making adjustments until I was satisfied with the color and levels. I then saved the script, saved the .vcf file for RGB settings to be used in Step 2, and deleted the color work file.
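In other words, the two copies differ only in which analysis line is active. Copy 1 looks like this; copy 2 just swaps the comment marks on the last two lines:

Code:

AviSource("I:\forum5\faq\cicaesar\01 - Restoration sample - Cut Huffyuv.avi")
Trim(0,144)
ConvertToYV16(interlaced=true)
AutoAdjust(high_quality=true,auto_gain=true,gain_mode=0,chroma_process=200,\
    auto_balance=false)
ColorYUV(off_y=-8,off_v=-8,off_U=3)
Levels(16,1.0,255,16,235,dither=true,coring=false)
ConvertToYV12(interlaced=true)
ColorYUV(Analyze=true)       # active in this copy
#Histogram("Levels")         # active in the other copy instead
return last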

Keeping the YUV settings but saving the RGB filters for later, the script below ran a stabilizer and made new borders. The stabilizer step was run by itself as the only filter in Step 1 because running all of the denoise filters at the same time would be far too slow (less than 1.5 fps running speed). The results were saved in YV12 for the next step.

Code:

AviSource("I:\forum5\faq\cicaesar\01 - Restoration sample - Cut Huffyuv.avi")
Trim(0,144)
ConvertToYV16(interlaced=true)
AutoAdjust(high_quality=true,auto_gain=true,gain_mode=0,chroma_process=200,\
    auto_balance=false)

ColorYUV(off_y=-8,off_v=-8,off_U=3)
Levels(16,1.0,255,16,235,dither=true,coring=false)
ConvertToYV12(interlaced=true)
stab()
Crop(12,4,-26,-12).AddBorders(18,8,20,8)
return last

Step 2:

The results of step 1 are input to Step 2, the cleanup and RGB step. This step loaded the RGB color settings determined and saved from Step 1. The reason for saving RGB color until this step was because cleanup filters often require a tweak of color and levels.
Code:

AviSource("I:\forum5\faq\cicaesar\avs\samplePartA_02_stb.avi")
AssumeTFF()
QTGMC(preset="super fast",border=true,TR2=2,GrainRestore=0.3)
vInverse2()
MDG2()
Dfttest()
TemporalSoften(4,4,8,15,2)
MergeChroma(aWarpSharp2(depth=20).aWarpSharp2(depth=10))
LSFmod()
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last

Note that the output is 59.94fps progressive. This file can be re-interlaced for DVD, or remain progressive and be resized for web mounting, or whatever, as shown later.

The image below shows frame 102 after RGB color work, with its RGB histogram. This is brighter than the original. The histogram shows that the black level has been moved to the left a bit, the huge red peak is tamed, and other colors have spread rightward into the midrange. The brighter frames that precede and follow this one were brought down earlier by AutoAdjust to a more reasonable level so that the "pumping" effect is minimized.
http://www.digitalfaq.com/forum/atta...1&d=1588689491

The VDub RGB filters used were ColorCamcorderDenoise, gradation curves, ColorMill, and ColorTools v1.5 (https://sourceforge.net/projects/vdf...1.5%20update1/). I've included a PartA_VirtualDub_Settings.vcf so you can see how the filters were configured. The image below is the gradation curves RGB Red panel that controlled RGB Red. At the top of the slanted line, the line curves to the right to gently lower the bright Reds to stay within RGB=255. At the lower left, there's a short "notch" filter that keeps dark reds below RGB-8 at RGB-zero (to keep red out of the black borders).
http://www.digitalfaq.com/forum/atta...1&d=1588689644

Step 3:

This step is for Part B, from frame 145 of the original sample to the end (the cake cutting scene). This scene, too, put me through a few hours of trial and error, step by step, until levels and color adjustments gave this scene some contrast snap and dynamic range. The original is dominated by dull red and a constricted luma. It was touchy going to get good contrast without burning out bright detail in the cake. The positive Contrast setting in "ColorYUV(cont_y=40)" works by extending values from the middle outward in both directions -- darks get darker, brights get brighter. A negative contrast setting works in reverse: values contract inward from both ends toward the middle. However, contrast in Tweak() works on more conventional lines -- positive contrast extends only the bright end, negative contrast contracts it. If you want to extend or constrict black levels in Tweak(), use Tweak's brightness setting.
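To put those two contrast behaviors side by side (settings arbitrary, just to show the direction of movement):

Code:

ColorYUV(cont_y=40)       # positive: expands from the middle outward, both ends move
ColorYUV(cont_y=-40)      # negative: contracts both ends inward toward the middle
Tweak(cont=1.15)          # positive: extends only the bright end
Tweak(bright=-5)          # in Tweak, use brightness to move the black level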

I began by working with YUV and slowly adding RGB adjustments, jockeying back and forth between YUV and RGB, saving the color work file and running it countless times between short breaks. I didn't apply contrast to red, which already had too much. When I tried lowering red contrast, everything turned green. To balance red I also added more Green and Blue, which raised brightness a bit (in RGB, adding color raises brightness, removing color darkens. That's because RGB, unlike YUV, stores brightness and color data in the same pixel). When I was ready to move ahead, I did the same thing as in Step 1 -- I kept the YUV settings, saved RGB settings in a .vcf for later use in Step 4, deleted the color work file, ran the stabilizing script below and saved the output as Lagarith YV12:

Code:

AviSource("I:\forum5\faq\cicaesar\01 - Restoration sample - Cut Huffyuv.avi")
Trim(145,0)
ColorYUV(cont_y=40,off_u=3)
Tweak(cont=1.15,dither=true,coring=false)
Levels(16,1.0,255,16,235,dither=true,coring=false)
ConvertToYV12(interlaced=true)
stab()
Crop(12,4,-26,-12).AddBorders(18,8,20,8)
return last

Step 4:

Output from Step 3 was used as input into the denoising and RGB correction step for Part B:
Code:

AviSource("I:\forum5\faq\cicaesar\avs\samplePartB_02_stb.avi")
AssumeTFF()
QTGMC(preset="super fast",border=true,TR2=2,GrainRestore=0.3)
vInverse2()
MergeChroma(aWarpSharp2(depth=20).aWarpSharp2(depth=10))
MDG2()
TemporalSoften(4,4,8,15,2)
LSFmod()
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last

Note that denoising for Part B was slightly different than denoising for Part A. In the QTGMC line, "TR2=2" adds a touch of extra shimmer repair to help with the bad motion noise on the cake letters. Again, output is 59.94fps progressive, saved as Lagarith YV12. The VirtualDub RGB filters used were ColorCamcorderDenoise, gradation curves, ColorMill, and ColorTools v1.5. RGB settings for Part B were different from Part A. The settings are attached as PartB_VirtualDub_Settings.vcf.


Below: images from Part B, Before (left) vs After (right):
http://www.digitalfaq.com/forum/atta...1&d=1588689788


Last steps:

First, I made a combo file of Parts A and B joined in a 720x576 progressive 59.94 format, resized to 640x480 for web or streaming. The final encode is attached as sample_4x3_5994p.mp4.
Code:

vidpath="I:\forum5\faq\cicaesar\avs\"
vA=AviSource(vidpath+"samplePartA_02_stb_Q_5994p.avi")
vB=AviSource(vidpath+"samplePartB_02_stb_Q_5994p.avi")
vidAB=vA+vB                            # splice Part A and Part B
vidAB=vidAB.Spline36Resize(640,480)    # resize to square-pixel 4:3 for web/streaming
return vidAB

Then I made an mpg that I re-interlaced, formatted and encoded for DVD. The final encode is attached as sample_2972i_for_DVD.mpg.
Code:

vidpath="I:\forum5\faq\cicaesar\avs\"
vA=AviSource(vidpath+"samplePartA_02_stb_Q_5994p.avi")
vB=AviSource(vidpath+"samplePartB_02_stb_Q_5994p.avi")
vidAB=vA+vB
vidAB
AssumeTFF()                                   # top field first
SeparateFields().SelectEvery(4,0,3).Weave()   # 59.94p -> 29.97i re-interlace
return last
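If it helps to see what that SeparateFields/SelectEvery/Weave chain is doing, here's a throwaway Python model of the field bookkeeping (illustration only -- the field names are made up):

```python
# Model each 59.94p frame as a (top, bottom) field pair from the same instant.
frames = [(f"T{n}", f"B{n}") for n in range(4)]

fields = [f for pair in frames for f in pair]                   # SeparateFields (TFF)
kept   = [f for i, f in enumerate(fields) if i % 4 in (0, 3)]   # SelectEvery(4,0,3)
woven  = list(zip(kept[0::2], kept[1::2]))                      # Weave back into frames

# The top field of frame N is paired with the bottom field of frame N+1,
# giving half the frame rate with true interlaced motion: 29.97i.
print(woven)  # [('T0', 'B1'), ('T2', 'B3')]
```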

Finally, I made a demo encode to show how to make 59.94 progressive play as 29.97 progressive by discarding alternate frames. This is a last-resort way to make poorly interlaced 29.97i play without excessive consumer-camera aliasing -- it also shows how fluttery and jumpy the final video plays compared to properly interlaced or double-rate progressive. The final encode is attached as sample_2972p.mpg.
Code:

vidpath="I:\forum5\faq\cicaesar\avs\"
vA=AviSource(vidpath+"samplePartA_02_stb_Q_5994p.avi")
vB=AviSource(vidpath+"samplePartB_02_stb_Q_5994p.avi")
vidAB=vA+vB
vidAB
SelectEven()                           # keep every other frame: 59.94p -> 29.97p
return last
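SelectEven() itself is trivial; a one-line Python model of the decimation (illustration only):

```python
# SelectEven keeps frames 0, 2, 4, ... -- halving 59.94p to 29.97p and
# discarding half the temporal resolution (hence the choppier motion).
frames = list(range(10))   # stand-ins for 10 progressive frames
kept = frames[::2]         # what SelectEven() returns

print(kept)  # [0, 2, 4, 6, 8]
```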

I don't know about you, but levels and color on this critter were tough. I'll comment in a following post on your earlier remarks and questions.

sanlyn 05-06-2020 01:53 PM

Quote:

Originally Posted by cicaesar (Post 68433)
I couldn't find any post online that settled the problem of regulating the input level of the video.

That's odd, because it's mentioned and illustrated in the capture guide. In the VirtualDub capture guide, post #3 illustrates how to set up and view the YUV capture histogram for safe input levels. Post #4 shows what a typical proc amp dialog looks like and explains and illustrates how to set up cropping for measuring with the histogram. In another thread, post #6 has many images and a long discussion of correcting levels and color for a picnic video captured with improper levels.

Quote:

Originally Posted by cicaesar (Post 68433)
What I did find out is that is very important to set levels before using avisynth plugins, and that is better to set levels first and regulate contrast \ brightness after.

No. Preferably you set levels as best as you can during capture, not after. I'm also trying to figure out how you can set levels without using brightness and contrast settings! Brightness controls black levels, contrast controls highlight levels.

Quote:

Originally Posted by cicaesar (Post 68433)
So my first step is to open the video with virtualdub, load a crop filter to remove the right green edge and the black borders, load the levels filter and try to eyeball the correct input levels that I will use later in avisynth. I noticed that the midpoint value doesn't change when I move the left and right sliders, so I think I should leave it at 1.0. I reeeeeeally don't know if this is correct to be honest.

You can't use an RGB levels control to test or adjust YUV levels. The main reason is that YUV luma levels of 16-235 change to RGB 0-255 on display. If you want to test or adjust YUV levels, do it in YUV. The other reason is that YUV and RGB are two different color systems; they store luma and chroma data differently, and in many respects they don't behave the same way.
The middle levels midpoint value doesn't change but the middle slider does move when you move the other sliders, and you can move it manually yourself.
The RGB levels filter has no effect on your YUV source. It affects RGB only. It does nothing to help correct clipping in either YUV or RGB.
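The 16-235-to-0-255 expansion is a plain linear stretch, which is why out-of-range YUV pixels clip when converted to RGB. A quick Python check of the arithmetic (my own sketch, not any filter's code):

```python
def expand_luma(y):
    """Limited-range luma (16-235) stretched to full-range 0-255 for display."""
    return (y - 16) * 255.0 / 219.0

assert expand_luma(16) == 0.0
assert abs(expand_luma(235) - 255.0) < 1e-9
# Pixels outside 16-235 land outside 0-255 and get clipped in RGB:
assert expand_luma(10) < 0.0
assert expand_luma(245) > 255.0
```

Detail below Y=16 or above Y=235 can still be pulled back into range while you're in YUV; once it's clipped in RGB, it's gone.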

Quote:

Originally Posted by cicaesar (Post 68433)
I have separated avisynth processing in 2 scripts: the first is a preprocess that should be camera-dependent and thus applies filters that should be useful for all my videos without the need of changing it on a per-video basis.

On a per-video basis you'll almost always have to change it. You don't filter for the camera; you filter for the scene the camera is delivering, which always changes.
Some of the initial operations I disagree with -- for instance cropping off borders permanently and adding a new border so early in the process. For one thing, it's followed by color correction in YUV, which more often than not will change the color of your border. It will be most obvious on TV. If the border color doesn't change in YUV, there's a good chance it can change later in RGB. There are various ways to adjust for that RGB change, but you can more easily use the BorderControl v2.40 plugin (https://sourceforge.net/projects/bor...atest/download).

Quote:

Originally Posted by cicaesar (Post 68433)
Is my Lagarith configuration correct?

You have specified multithreading. Are you using an older single-core Pentium multithreading CPU? Multithreading and multi-core are two different things. If your PC is multi-core you don't need this option.

Quote:

Originally Posted by cicaesar (Post 68433)
Regarding colors, I just want to make slight adjustments. I'm not a professional

Neither am I.

Quote:

Originally Posted by cicaesar (Post 68433)
unfortunately and I do not have a professionally calibrated monitor.

That's a serious problem, even with histograms. What you did for monitor adjustment, however, is far more than most people do.

Quote:

Originally Posted by cicaesar (Post 68433)
So my first step is to open the video with virtualdub, load a crop filter to remove the right green edge and the black borders, load the levels filter and try to eyeball the correct input levels that I will use later in avisynth. I noticed that the midpoint value doesn't change when I move the left and right sliders, so I think I should leave it at 1.0. I reeeeeeally don't know if this is correct to be honest.

The middle value doesn't change, but the middle slider does move when the other slider(s) move. You can move the middle slider (gamma) manually if you want, if things just don't look right.

Quote:

Originally Posted by cicaesar (Post 68433)
I specify Autoselect for the input and 4:2:0 planar YbCbCr (YV12) for the output, in case it's not automatically setted just by choosing Lagarith YV12 in the compressing options. Seems to me that this settings gets resetted every time

Every time you close VirtualDub and reopen it, the output settings are reset. However, every time you set an output option in Lagarith, it will be remembered the next time you use Lagarith.

Quote:

Originally Posted by cicaesar (Post 68433)
The process runs at 1.5 fps, it's very very slow. As I said, I don't mind waiting if the quality gets better, but I do wonder if I'm doing something wrong to be honest.

I wish I had the definitive answer for that. The PC you describe doesn't seem to be particularly puny. My primary processing PC is an XP/SP3 Intel i5 3500, not especially powerful. It does get pokey because of Avisynth memory swapping hangups if I run too many heavy filters at one time, in which case I just split the proceedings into two successive scripts.

Quote:

Originally Posted by cicaesar (Post 68433)
In the script I tried adding an antialiasing filter (Santiag()) because I have very bad jagged lines, especially with subjects in motion, and I thought the antialiasing could help. I still see jagged lines though, I don't know how to remove them, and they really (REALLY) bother me. What can I use?

They bother me, too. Realize that most analog consumer cameras had what looks like aliasing because of the way their interlacing circuitry and shutters behave. They were designed for the CRT era, where the image flickers 50 to 60 times per second. What interlacing looks like on non-flickering LCDs isn't aliasing but what appears to be "unmatched edges" between interlaced fields. You'll note that when you deinterlace the video the aliasing almost always goes away (if it doesn't, then you have true aliasing in the source and Santiag will likely help). It often helps to deinterlace with QTGMC and then re-interlace after cleanup work; QTGMC rebuilds many fields when it deinterlaces, so by the time you re-interlace many problems have been cleaned up. When that doesn't work, there's the last resort of encoding as double-rate progressive or discarding alternate frames and leaving the video progressive. The latter method makes motion look choppy on fast action, jittery videos, or long camera pans, because 50% of the temporal resolution has been discarded.

Quote:

Originally Posted by cicaesar (Post 68433)
Maybe it's not very apparent here, but in other longer clips the processed video looks "clunkier" to me than the original one, as if motion were not as smooth. Am I doing something wrong? Is it just an impression?

I didn't have that impression. Maybe it was the improper black levels and discoloration that made me miss other problems.

Quote:

Originally Posted by cicaesar (Post 68433)
Is it normal that the Lagarith file occupies way less space than the Huffyuv one (75 MB vs 100 MB)?

Generally, Lagarith uses slightly less room for the same colorspace. As for YUY2 vs. YV12, YV12 always makes a smaller file because it has only 50% of the chroma information of YUY2 and 25% of the chroma data of RGB.
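The raw frame sizes make those ratios concrete. A quick Python tally for a 720x576 frame (illustration only):

```python
# Uncompressed bytes per 720x576 frame in each colorspace:
w, h = 720, 576
rgb32 = w * h * 4            # 8 bits each for R, G, B plus alpha
yuy2  = w * h * 2            # 4:2:2 -- chroma sampled at half width
yv12  = w * h * 3 // 2       # 4:2:0 -- chroma at half width and half height

# Chroma samples only (U+V):
chroma_yuy2 = w * h          # one U or V sample for every two pixels
chroma_yv12 = w * h // 2     # one U and one V per 2x2 block

assert chroma_yv12 * 2 == chroma_yuy2   # YV12 has 50% of YUY2's chroma
print(rgb32, yuy2, yv12)  # 1658880 829440 622080
```

Lossless compression then shrinks those raw sizes further, so the on-disk ratio between a Huffyuv YUY2 file and a Lagarith YV12 file won't be exact, but the direction is the same.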

Quote:

Originally Posted by cicaesar (Post 68433)
Is it normal that the Lagarith video has way less bitrate than the Huffyuv one? GSpot gives me 50 Mbps for Lagarith and 65 Mbps for Huffyuv. I mean these are still very high bitrate values but I wonder if I'm doing something wrong and if this can have an impact on the smoothness of the video in motion.

Are you comparing the same colorspaces? Huffyuv can't compress YV12, by the way.
If one lossless compressor can pack the data into fewer bits than another, which is common, then the bitrate tells you something about the amount of compression, not the quality. Lossless compressors operate along different lines, and Lagarith compresses more tightly than Huffyuv, though both are still lossless. Decompressing (playing back) the same video with different compressors is another matter: on a slow or bottlenecked PC, motion rendering on playback can be affected.

Quote:

Originally Posted by cicaesar (Post 68433)
I've seen many times here the "align chroma" suggestion (ChromaShift(C=2, L=-8)): should I add this too?

Do you need it? What that ChromaShift code does is shift chroma pixels 2 pixels to the right and 8 pixels upward in the frame. I don't see that much chroma shift in your sample. If you don't need it, don't use it. If you need different values, change them.

Quote:

Originally Posted by cicaesar (Post 68433)
I've also seen a lot this line: MergeChroma(awarpsharp2(depth=20)). I understand it sharpens the chroma channel. Should I use it?

Basically, there's a family of Avisynth sharpeners that thins lines and the areas around them. It helps fix chroma that is smeared outside of edges, and it tightens color closer to the objects it belongs to. In your sample you have some colors, especially red, that bleed or smear outside the edges of objects. The aWarpSharp2 routine helps clean up those messy edges. Do you want that cleanup effect?
https://www.animemusicvideos.org/gui...tml#sharpening (click on the filter names under the image and watch the image change; there are many other filters on the AMV website).

Quote:

Originally Posted by cicaesar (Post 68433)
Is there a way to clean the right border a little more, with a more aggressive "de-reener"?

Sorry, no. It isn't really a stain. In this case it's an area that lacks color in one of the UV bands. You can't increase a color value if the color doesn't exist. You can somewhat modulate the hue or brightness, but that's about it. There are other tricks. One is to "borrow" (i.e., copy) color from somewhere near the edge and overlay that copy onto the stain, but that can often create some bizarre effects. Another trick is to use a different VCR, but often that doesn't help. Many people just live with it or crop it off.

Quote:

Originally Posted by cicaesar (Post 68433)
I've seen that Lordsmurf made a version of Stab() for VHS and tapes (name is Stabmod ()): should I use that with Video8 cassettes?

Hmm. There's something you might not understand about Avisynth plugins: they can be applied to any video that Avisynth opens and decodes. The filter is designed to help stabilize jittery or jumpy frames. Those frames can occur in any source -- most certainly in hand-held consumer camera videos, but also in Video8 players, DVD, BluRay, etc. I haven't noticed any difference with lordsmurf's version.

Quote:

Originally Posted by cicaesar (Post 68433)
maybe virtualdub with colormill and curves would be more appropriate, I just want to "balance" the histograms.

There are things you can do in RGB that you can't do in YUV, and vice versa. In RGB you can target a specific range in the spectrum without affecting other ranges or colors. In YUV you can often recover detail that looks lost because it lies in the crushing or clipping zone, but you can't recover it after it gets clipped in RGB. On the other hand, can you add more green in YUV? How could you, when there is no green UV channel? The color green isn't stored in YUV: you get YUV green by subtracting red and blue, which means you've changed the red and/or blue channel. In RGB you just add or subtract green, period, which doesn't change the other colors. It takes a while to get used to, but YUV and RGB behave differently in many important respects.
I don't understand what you mean by "balancing" the histogram. You don't make the histogram look balanced, centered, or take on particular shapes. You correct for the image. The histogram just tells you what's currently happening. Just because the shapes in the histogram are symmetrical doesn't mean the color balance is correct for the image.
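The "no green channel in YUV" point can be checked numerically. A small Python sketch of the Rec.601 analog-form conversion (my own illustration):

```python
# Rec.601: Y carries brightness; U and V are scaled B-Y and R-Y differences.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

y1, u1, v1 = rgb_to_yuv(100, 100, 100)   # neutral gray: U = V = 0
y2, u2, v2 = rgb_to_yuv(100, 120, 100)   # add 20 green in RGB

# The green change landed in all three YUV components at once:
assert y2 > y1 and u2 < u1 and v2 < v1
```

So a "green" adjustment in YUV is really a coordinated change to Y, U, and V, which is why it drags red and blue along with it, while in RGB green is its own channel.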

Quote:

Originally Posted by cicaesar (Post 68433)
I don't feel confident eyeballing it with my limited skills and uncalibrated monitor.

An uncalibrated monitor is a serious disadvantage. You have to depend on various histogram functions.

Quote:

Originally Posted by cicaesar (Post 68433)
It seems though that I do need to apply saturation if I don't want everything to look washed out.

Your current video project looks washed out because black levels are too high. You have no really definitive darks. They are mostly just medium-dark grays and a narrow contrast range.

Quote:

Originally Posted by cicaesar (Post 68433)
Should I have the luma histogram expand a little more to the sides? How would I do it? Would that make colors clip?

The left side indicates dark pixels, the right side contains the brights. Move pixels to the dark left by using Tweak() to lower brightness or by setting off_y (y offset) to several negative points, as in "off_y=-10". You increase pixels to the right by increasing Tweak's contrast setting or by increasing y gain ("gain_y=15"). You can use ColorYUV's contrast to expand from the middle toward both ends at the same time. You can also perform those operations in reverse. What you should do is get an image up with a YUV histogram, read up on YUV's ColorYUV and Tweak controls, and experiment with settings while you observe the histogram.
Pixels that overflow into the shaded side panels on the histograms will clip in RGB.
VirtualDub's controls have zero effect on your YUV source. They affect the RGB image only.
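A toy numeric sketch of what offset and gain do to the luma histogram (my own illustration, not ColorYUV's exact internal formula):

```python
def adjust(y, gain=1.0, offset=0.0):
    """Generic luma adjustment: scale, then shift."""
    return y * gain + offset

# A negative offset moves dark pixels further left (toward black):
darks = [30, 40, 50]
assert [adjust(y, offset=-10) for y in darks] == [20, 30, 40]

# Gain pushes bright pixels to the right; values past 235 will clip in RGB:
brights = [180, 200, 220]
stretched = [adjust(y, gain=1.1) for y in brights]
assert all(s > y for s, y in zip(stretched, brights))
assert stretched[-1] > 235
```

That last assertion is the thing to watch on the histogram: a gain that looks fine in the midtones can still push the top of the luma range into the clipping zone.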

Quote:

Originally Posted by cicaesar (Post 68433)
Even after centering the RGB channel, their spectrums are still outside of the allowed range, so I guess I am losing details there. Is there a way to "shrink" those spectrums in order to make them all stay inside the range?

You can apply contrast, gain and offset controls to YUV bands. A negative contrast setting contracts data toward the middle from both ends. When I tried that with your sample avi's oversupply of reds, everything turned green. This told me that I needed to go into RGB for greater control of the extremes, and that your camera had rendered more YUV reds than RGB could handle. The preferred data range for YUV digital video is 16-235, but a YUV colorspace can contain a far wider analog and digital range than that.
I don't understand what you mean by "centering". The center of a YUV histogram corresponds to YUV 128 in the middle of the range, the same as RGB 128 in the middle of the 0-255 RGB range. If you wanted to create a neutral, colorless middle gray, all pixels would be on that center line.

Quote:

Originally Posted by cicaesar (Post 68433)
If you happen to have a calibrated monitor, do you think the resulting colors are decent? Are they oversaturated maybe?

They're oversaturated, especially the reds, but they look washed out because of low contrast and high black levels.

Quote:

Originally Posted by cicaesar (Post 68433)
Are the virtualdub filters too strong? For instance, at frame 238 if you look at the right part of the cake, isn't it too blurred?

Everything in the yellow-orange of the cake looks blurred because the colors in the RGB range 170-225 are too bright. You do seem to have a fuzzy-edges problem on the right side of the frame (look at the Fanta soda bottle and the paper cup edges in front of it). I don't see the same thing in the first camera shot.

Quote:

Originally Posted by cicaesar (Post 68433)
Is it correct to save the video in YV12 at this point, being that I just converted in RGB32 in avisynth? Shouldn't I just save it in RGB32?

It's neither correct nor incorrect; it depends on how you want to save it. If you save it as RGB32, the file will be roughly 2.7 times the size of the same file as YV12 (32 bits vs. 12 bits per pixel, before compression). In any case, no matter how you save it, your encoder will encode it as YV12 if you go to standard MPEG-2 or h.264.

Quote:

Originally Posted by cicaesar (Post 68433)
The right border bothers me SO MUCH, it's still too visible and I really hate it. Moreover I'm not sure that the alterations I made to it are producing a better or an even worse result. Is there something different that I can do about it? Should I crop it entirely?

You're allowed. I didn't see any important info over there. I cropped it off.

Quote:

Originally Posted by cicaesar (Post 68433)
Deshaker...

I don't use Deshaker much, and the strongest I've used is about half power. I don't use the zoom function. What good is setting it if it doesn't entirely eliminate or correct the insanely comical way amateurs use zoom lenses? I disable the zoom corrections.

Quote:

Originally Posted by cicaesar (Post 68433)
There is a huge problem here, which I have to admit is the first time I see happening: as soon as the scene change (from the one with the presents to the one with the cake, frame 145), the borders start to shake A LOT...

I don't have that section of video and I didn't use Deshaker (I probably wouldn't use it on this video anyway--or maybe I would, can't say), so I can't comment.

Quote:

Originally Posted by cicaesar (Post 68433)
The file size of the final Lagarith video is 50 MB, half of the original Huffyuv video (100 MB). Is this correct or should I worry?

Lagarith files tend to be slightly but measurably smaller than Huffyuv files. Also, your original sample is YUY2, but the final avi sample is YV12. You do know the differences between colorspaces, I hope: http://avisynth.nl/index.php/Convert. Another main difference is that while YUY2 contains twice the color information of YV12, color in YUY2 is interleaved -- that is, the U and V samples are stored in-line with luma in a single plane. In YV12, with 50% less color info, the U and V data are stored in separate planes.
Also remember that your original sample contains some noise. If you filter out the noise, you have less data than you started with. The output will use fewer data bits, which will reduce the file size.



Site design, images and content © 2002-2024 The Digital FAQ, www.digitalFAQ.com
Forum Software by vBulletin · Copyright © 2024 Jelsoft Enterprises Ltd.