Suggestions on restoration process for Hi8 tapes?
18 Attachment(s)
Hi everyone. I'm back after an initial tape capture which this forum helped me achieve with this thread. This post is really a continuation of that same work: after looking at the great results you guys obtain I now want to take a step up and start restoring my tapes, building on the avisynth script that Sanlyn drew up for me at the time.
First of all I want to thank the people in this forum, especially Sanlyn and Lordsmurf, not only for your help on my old thread but for all the useful information that you put in this forum. More than a year has passed since I asked here about how to capture my Video8 cassettes. I did complete my capture in due time, thanks to you. Now I have some time to spare and I'd like to proceed with the restoration, which is the focus of this post; after I'm done restoring I'll think about the conversion to DVD and x264 (maybe this time I'll manage not to wait another year for the next step...).

Before posting here I prepared myself the best I could: I've put in almost a month reading dozens of forum posts, installing the needed software, experimenting by myself and trying to find a process for this restoration. I went through avisynth's wiki, color theory, video formats, etc., and with every guide I came out with 2 answers and 5 new questions... some of them still need answers, hence this post. Please bear with me for the incoming wall of text: I try to be thorough to understand better, and so that while I'm asking for help maybe I can contribute what little progress I have made to others.

Context

I have many Video8/Hi8 25 fps PAL cassettes that I captured in Huffyuv. The cassettes were originally taped with a Sony Handycam Video8, and were digitized via VirtualDub with a Sony Handycam Digital8 DCR-TRV230E with integrated TBC, and an EZGrabber2 USB converter. I'm working on a 10 year old 4-core i5 CPU with 12 GB of RAM, so not the fastest machine on Earth; nevertheless, I don't mind waiting. It is not connected to a monitor but to an LG TV (model 57LM620S-ZE). Progressing from my first post, I managed to install both Huffyuv and Lagarith in their 32 bit versions, so for the editing I will be using the recommended VirtualDub 1.9.11 (32 bit) and AviSynth 2.6.0 (32 bit).

What I want to do

I want to restore these videos with avisynth and virtualdub, and in particular:
Regarding colors, I just want to make slight adjustments. I'm not a professional unfortunately and I do not have a professionally calibrated monitor. In many posts people strongly advised not to eyeball colors, so I'm going to rely on histograms to align the luma/chroma spectrum, and I will make only slight adjustments to saturation. I've tried dabbling with virtualdub's gradation curves like Sanlyn explained, but I'm always concerned that I'm overdoing things... I'd prefer not to use it if possible.

I did however adjust my TV colors with simple visual calibration (twice actually, the first time I messed it up): I set Windows' and my video card's color adjustments to neutral, I removed all kinds of dynamic color regulation from my TV, and I followed the Lagom calibration guide to manually set the TV color parameters. I can't pass the sharpness and gradient tests though (albeit the latter by a small margin).

So, I will describe the process I've come up with to get your input. I will provide attachments. I will list my questions in the text.

The sample

Attached is a Huffyuv video I've cut from a cassette clip in YUY2 format: 01 - Restoration sample - Cut Huffyuv.avi. I've excluded the sound. It's shot indoors at night. Aside from the headswitching noise and the godforsaken right green vertical band, it looks to me to have a red cast. At frame 141, Csample gives me a value of R:230 G:160 B:166 from the front of the man's sleeve, which should be white. At frame 309, Csample gives me a value of R:255 G:162 B:144 inside the lower half of the "eight" candle, for a color that should be whitish.

Setting input levels

This is the first roadblock I am encountering. I couldn't find any post online that settled the problem of regulating the input levels of the video. What I did find out is that it is very important to set levels before using avisynth plugins, and that it is better to set levels first and regulate contrast/brightness after.
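For reference, levels filters generally implement the same arithmetic: normalize against the input range, apply the midpoint as a gamma exponent, then rescale to the output range. Below is a minimal Python sketch of that standard formula; the function name is made up, and the assumption that VirtualDub's levels filter follows exactly this math (with clamping) is mine:

```python
def apply_levels(v, in_lo, gamma, in_hi, out_lo=0, out_hi=255):
    """Map one 0-255 pixel value through a levels adjustment.
    Standard formula: normalize to [0,1], apply gamma, rescale."""
    x = (v - in_lo) / (in_hi - in_lo)
    x = min(max(x, 0.0), 1.0)      # values outside the input range clamp
    x = x ** (1.0 / gamma)         # gamma = 1.0 leaves midtones untouched
    return round(out_lo + x * (out_hi - out_lo))

# The (0, 1.0, 215) input chosen above stretches 0-215 to full range:
apply_levels(215, 0, 1.0, 215)   # brightest input maps to 255
apply_levels(107, 0, 1.0, 215)   # a midtone moves up proportionally to 127
```

With gamma left at 1.0, the mapping is a pure linear stretch, which is why the midpoint slider not moving on its own is consistent with leaving it alone.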
So my first step is to open the video with virtualdub, load a crop filter to remove the right green edge and the black borders, load the levels filter and try to eyeball the correct input levels that I will use later in avisynth. I noticed that the midpoint value doesn't change when I move the left and right sliders, so I think I should leave it at 1.0. I reeeeeeally don't know if this is correct, to be honest. Using the sliders, I chose a (0,1.0,215) input: Attachment 11709

I attach the virtualdub filter chain: 02 - Restoration sample - Levels initial reading.vcf.

Preprocessing with avisynth

I have separated the avisynth processing into 2 scripts: the first is a preprocess that should be camera-dependent and thus applies filters that should be useful for all my videos without the need to change it on a per-video basis. It's the very same avisynth script that Sanlyn suggested (I definitely am not able to identify image defects and choose which plugins to use). Separating image quality processing from color correction also has the benefit of making adjustments faster in the color correction phase, because this processing is extremely slow on my machine. I compress the video in Lagarith YV12. In the video color depth options I specify Autoselect for the input and 4:2:0 planar YCbCr (YV12) for the output, in case it's not automatically set just by choosing Lagarith YV12 in the compression options. It seems to me that this setting gets reset every time. This is my Lagarith configuration: Attachment 11711

Many questions here:
Code:
Import("C:\Program Files (x86)\AviSynth\plugins\ChubbyRain2.avsi")

Analysis for color adjustment

For color correction I use an avisynth script to apply changes and analyze results, so that I can work on colors faster. As I mentioned, I don't really want to do full color correction, for which maybe virtualdub with ColorMill and curves would be more appropriate; I just want to "balance" the histograms. I don't feel confident eyeballing it with my limited skills and uncalibrated monitor. It seems though that I do need to apply saturation if I don't want everything to look washed out. This is the script I use (05 - Restoration sample - AviSynth analysis.avs):

Code:
# Source file
Attachment 11713

This is frame 190 after color adjustment: Attachment 11714

Questions:
Applying color adjustment and virtualdub filters

After the analysis I copy the results into another script. This script doesn't include the histograms or the StackHorizontal function. It includes a section that tries to minimize the impact of the right green border by excluding it from the saturation bump, removing some of its brightness, and centering the V channel a little more. After this script I apply the virtualdub filters and save the video as Lagarith YV12. This is the script (also attached: 06 - Restoration sample - AviSynth color correction.avs)

Code:
# Source file

Attachment 11717

Attached is the resulting video (08 - Restoration sample - Postprocessed Lagarith.avi)

Questions:
Deshaker

I loved what Sanlyn showed me about the Deshaker plugin in virtualdub, so I've decided to apply it whenever the videos don't have a timestamp (not many of them, unfortunately). The fluttering borders distract me a lot, so I'm going with the fixed zoom option, even if on many occasions it feels like it applies a bit too much zoom. I've gone through Deshaker's documentation and tried out the other options, but I'm not able to reduce the zoom without having (albeit slightly) moving borders. I configured the Deshaker plugin using Lordsmurf's guide, even if some options are different in my plugin version (v3.1).

Questions:
Pass 1: Attachment 11720
Pass 2: Attachment 11722

I attach the vcf filter chains:
09 - Restoration sample - Deshaker pass 1.vcf
09 - Restoration sample - Deshaker pass 2.vcf

Final version

Here is the final version of the video: 10 - Restoration sample - Final Lagarith.avi

If you had the patience to read until here: thank you. Any help/comment/critique will mean a lot to me. |
What a lotta questions, LOL! Working on it. Will report later.
Members have plenty to chime in on, here. |
10 Attachment(s)
I'll use a following post to reply to your earlier questions and comments. Sorry for taking so long. I was starting to feel as if I had a huge sign on the PC room door that says, "Please Interrupt Me At Any Time"!
You certainly had your work cut out for you with the two scenes in this short sample clip. The filter choices I made would likely be modified based on longer scenes, different lighting, etc. But it didn't take long for me to discover that whatever you were using for a camera was your worst enemy. These are some of the toughest color correction problems I've seen in a while, with noisy reds that are uniquely warped.

Step 1: I divided the clip into two parts, A and B, each of which required different color correction and denoising. I started by correcting levels and color in part A. The first 144 frames of the sample clip are about hands working with ribbons, one darker skinned hand with part of a finger missing that apparently identifies a particular person. I have no template script for videos, but I keep two very long .txt files loaded with hundreds of samples of boilerplate text for many filters, such as 10 different command strings for QTGMC and copies of quick routines from the DitherTools package, etc.

Here, I wasted over an hour until it finally became clear that a tough problem in Part A was fluctuating levels. Brightness affects one's perception of color, saturation, and contrast. It might not look like it at first, but there's also a low contrast problem. The camera's color response and auto-gain circuit (the work of Satan) twisted up the image histogram in ways I couldn't believe. I attacked Part A with an autogain plugin, AutoAdjust.dll (https://forum.doom9.org/showthread.php?t=167573). It's an adjustable filter; the adjustments in its read-me doc are self-explanatory. It won't work in YUY2, but it works in YV16, which is another version of YUY2. I used it to level the luminance pumping between light and dark. This type of filter often doesn't work so well, doing exactly what you don't want it to do (which is what your camera's AGC was doing), but it did decent work here. Next came red.
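(An aside on the luminance-pumping fix just described. This is not AutoAdjust's actual algorithm — see its read-me for that — just a toy Python sketch of the general idea behind temporal auto-gain: nudge each frame's gain toward a sliding-window average of the mean luma, so alternating light/dark frames even out. All names and numbers here are illustrative.)

```python
def smooth_luma(frame_means, strength=0.5, window=5):
    """Toy temporal auto-gain: for each frame, compute a gain that pulls
    its mean luma partway toward the average of a sliding window,
    damping frame-to-frame brightness pumping."""
    gains = []
    for i, m in enumerate(frame_means):
        lo = max(0, i - window // 2)
        hi = min(len(frame_means), i + window // 2 + 1)
        target = sum(frame_means[lo:hi]) / (hi - lo)
        # move only part of the way, so real scene changes survive
        gains.append(1.0 + strength * (target - m) / m)
    return gains

# A "pumping" sequence: mean luma bounces between dark and bright frames.
pumping = [80, 120, 82, 118, 81]
gains = smooth_luma(pumping)
corrected = [m * g for m, g in zip(pumping, gains)]
# The corrected means span a much narrower range than the originals.
```

The real filter of course works per-pixel and with far more sophistication; the point is only that a gentle pull toward a local average reduces the light/dark swing without flattening genuine scene changes.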
After almost an hour it was apparent that YUV was not the tool for correcting this maverick red, wherever it came from. I made a basic correction to tame mostly the low end and high end somewhat, but the rest was left up to normal RGB controls, where some rather simple but time consuming steps straightened things out in Step 2.

Below are an image and histograms from a darker section of luma pumping in part A. The top left image is frame 102 with borders removed and ColorYUV/Analyze column numbers overlaying it. At top right is a YUV histogram of that original frame. Bottom image: at lower left, an RGB ColorTools histogram of the same frame; at lower right, an RGB saturation vectorscope.
http://www.digitalfaq.com/forum/atta...1&d=1588689270
http://www.digitalfaq.com/forum/atta...1&d=1588689284

The ColorYUV(Analyze=true) numbers reveal a high black point at y=28 or so, and specular highlights beyond y=235 -- but most of the data in the numbers and in the YUV histogram is left of middle. The frame is a picture of mostly midtones, but the data shown is darker than that. As many users know, the YUV bands and numbers for the U and V channels are not that accurate; RGB is more informative, and there we find a hard red peak in the midrange. But the bulk of color information is left of the middle. The vectorscope shows that most saturation is in the Red quadrant, although flesh tones should be lying along the slanted line in the upper left quadrant.

The Avisynth code that shows the Analyze numbers and the YUV histogram is pretty standard stuff that I usually run -- when necessary. The code below also shows the addition of the AutoAdjust plugin:

Code:
AviSource("I:\forum5\faq\cicaesar\01 - Restoration sample - Cut Huffyuv.avi")

Keeping the YUV settings but saving the RGB filters for later, the script below ran a stabilizer and made new borders. The stabilizer step was run by itself as the only filter in Step 1 because running all of the denoise filters at the same time would be far too slow (less than 1.5 fps running speed). The results were saved in YV12 for the next step.

Code:
AviSource("I:\forum5\faq\cicaesar\01 - Restoration sample - Cut Huffyuv.avi")

The results of Step 1 are input to Step 2, the cleanup and RGB step. This step loaded the RGB color settings determined and saved from Step 1. The reason for saving RGB color until this step was that cleanup filters often require a tweak of color and levels.

Code:
AviSource("I:\forum5\faq\cicaesar\avs\samplePartA_02_stb.avi")

The image below shows frame 102 after RGB color work, with its RGB histogram. This is brighter than the original. The histogram shows that the black level has been moved to the left a bit, the huge red peak is tamed, and other colors have spread rightward into the midrange. The brighter frames that precede and follow this one were brought down earlier by AutoAdjust to a more reasonable level so that the "pumping" effect is minimized.
http://www.digitalfaq.com/forum/atta...1&d=1588689491

The VDub RGB filters used were ColorCamcorderDenoise, gradation curves, ColorMill, and ColorTools v1.5 (https://sourceforge.net/projects/vdf...1.5%20update1/). I've included a PartA_VirtualDub_Settings.vcf so you can see how the filters were configured. The image below is the gradation curves RGB Red panel that controlled RGB Red. At the top of the slanted line, the line curves to the right to gently lower the bright Reds to stay within RGB 255. At the lower left, there's a short "notch" filter that keeps dark reds below RGB 8 at RGB zero (to keep red out of the black borders).
http://www.digitalfaq.com/forum/atta...1&d=1588689644

Step 3: This step is for Part B, from frame 145 of the original sample to the end (the cake cutting scene). This scene, too, put me through a few hours of trial and error, step by step, until levels and color adjustments gave this scene some contrast snap and dynamic range. The original is dominated by dull red and constricted luma. It was touchy going to get good contrast without burning out bright detail in the cake. The positive Contrast setting in "ColorYUV(cont_y=40)" works by extending values from the middle outward in both directions -- darks get darker, brights get brighter. A negative contrast setting works in reverse: values contract inward from both ends toward the middle.
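That expansion can be checked numerically. The sketch below assumes the commonly documented ColorYUV behavior, where cont scales the distance from a mid-scale pivot by (cont + 256) / 256; the exact pivot value of 128 is my assumption, so treat the numbers as illustrative rather than exact:

```python
MID = 128  # assumed pivot; ColorYUV's cont scales values around mid-scale

def coloryuv_cont(y, cont):
    """Sketch of ColorYUV's cont_y: scale the distance from the midpoint
    by (cont + 256) / 256.  Positive cont pushes values outward (more
    contrast), negative cont pulls them inward (less contrast)."""
    v = (y - MID) * (cont + 256) / 256 + MID
    return max(0, min(255, round(v)))   # clamp to the 8-bit range

# cont_y=40 from the script above: darks get darker, brights brighter.
coloryuv_cont(128, 40)   # the midpoint stays put -> 128
coloryuv_cont(200, 40)   # a bright value moves up
coloryuv_cont(60, 40)    # a dark value moves down
```

With cont=-40 the same values move toward 128 instead, which is the "contract inward" behavior described above.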
However, contrast in Tweak() works on more conventional lines -- positive contrast extends only the bright end, negative contrast contracts it. If you want to extend or constrict black levels in Tweak(), use Tweak's brightness setting.

I began by working with YUV and slowly adding RGB adjustments, jockeying back and forth between YUV and RGB, saving the color work file and running it countless times between short breaks. I didn't apply contrast to red, which already had too much. When I tried lowering red contrast, everything turned green. To balance red I also added more Green and Blue, which raised brightness a bit (in RGB, adding color raises brightness and removing color darkens; that's because RGB, unlike YUV, stores brightness and color data in the same pixel). When I was ready to move ahead, I did the same thing as in Step 1 -- I kept the YUV settings, saved the RGB settings in a .vcf for later use in Step 4, deleted the color work file, ran the stabilizing script below and saved the output as Lagarith YV12:

Code:
AviSource("I:\forum5\faq\cicaesar\01 - Restoration sample - Cut Huffyuv.avi")

Output from Step 3 was used as input to the denoising and RGB correction step for Part B:

Code:
AviSource("I:\forum5\faq\cicaesar\avs\samplePartB_02_stb.avi")

Below: images from Part B, Before (left) vs After (right):
http://www.digitalfaq.com/forum/atta...1&d=1588689788

Last steps: First, I made a combo file of Parts A and B joined in a 720x576 progressive 59.94 format, resized to 640x480 for web or streaming. The final encode is attached as sample_4x3_5994p.mp4.

Code:
vidpath="I:\forum5\faq\cicaesar\avs\"

Code:

vidpath="I:\forum5\faq\cicaesar\avs\"

Code:

vidpath="I:\forum5\faq\cicaesar\avs\"
|
Quote:
Quote:
Quote:
The middle levels midpoint value doesn't change but the middle slider does move when you move the other sliders, and you can move it manually yourself. The RGB levels filter has no effect on your YUV source. It affects RGB only. It does nothing to help correct clipping in either YUV or RGB. Quote:
Some of the initial operations I disagree with -- for instance cropping off borders permanently and adding a new border so early in the process. For one thing, it's followed by color correction in YUV, which more often than not will change the color of your border. It will be most obvious on TV. If the border color doesn't change in YUV, there's a good chance it can change later in RGB. There are various ways to adjust for that RGB change, but you can more easily use the BorderControl v2.40 plugin (https://sourceforge.net/projects/bor...atest/download). Quote:
Quote:
Quote:
Quote:
Quote:
Every time you close VDub and open it, the output settings are reset. However, every time you set an output option in Lagarith it will be remembered the next time you use Lagarith. Quote:
Quote:
Quote:
Quote:
Quote:
If one lossless compressor can pack data into a smaller space (fewer data bits) than another lossless compressor -- which many compressors can do -- then the bitrate tells you something about the amount of compression. Lossless compressors can operate along different lines, and Lagarith achieves higher compression than Huffyuv, although both are still lossless. Decompressing (playing back) the same video with different compressors is another matter: if you have a slow or bottlenecked PC, motion rendering on playback can be affected. Quote:
Quote:
https://www.animemusicvideos.org/gui...tml#sharpening (click on the filter names under the image and watch the image change. There are many other filters on the AMV website). Quote:
Quote:
Quote:
I don't understand what you mean by "balancing" the histogram. You don't make the histogram look balanced, centered, or take on particular shapes. You correct for the image. The histogram just tells you what's currently happening. Just because the shapes in the histogram are symmetrical doesn't mean the color balance is correct for the image. Quote:
Quote:
Quote:
Pixels that overflow into the shaded side panels on the histograms will clip in RGB. VirtualDub's controls have zero effect on your YUV source. They affect the RGB image only. Quote:
I don't understand what you mean by "centering". The center of a YUV histogram corresponds to YUV 129 in the middle of the YUV range, or to RGB 128 in the middle of the 0-255 RGB range. If you wanted to create a neutral, colorless middle gray, all pixels would be on that center line. Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Also remember that your original sample contains some noise. If you filter out the noise, you have less data than you started with. The output will use fewer data bits, which will reduce the file size. |
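A footnote on the two compression points made in the replies above (lossless codecs can differ in size but never in content, and noise costs bits). Here is a small Python demonstration, with zlib's compression levels standing in for two different lossless codecs like Huffyuv and Lagarith; the data is a fake "frame", not real video:

```python
import random
import zlib

random.seed(0)
# A fake "frame": a smooth repeating gradient vs the same data plus noise.
smooth = bytes(i % 256 for i in range(20000))
noisy = bytes((i + random.randint(-20, 20)) % 256 for i in range(20000))

# Two compression settings stand in for two lossless codecs:
fast = zlib.compress(smooth, 1)
tight = zlib.compress(smooth, 9)

# Lossless: possibly different sizes, identical data after decompression.
assert zlib.decompress(fast) == zlib.decompress(tight) == smooth

# Noise costs bits: the noisy frame compresses much worse than the clean one.
print(len(zlib.compress(smooth, 9)), "<", len(zlib.compress(noisy, 9)))
```

The same logic explains why a denoised capture often ends up smaller than the original at the same codec settings: the filtered frames simply contain less unpredictable data.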