digitalFAQ.com Forum

digitalFAQ.com Forum (https://www.digitalfaq.com/forum/)
-   Restore, Filter, Improve Quality (https://www.digitalfaq.com/forum/video-restore/)
-   -   VHS restoration, AviSynth optimization workflow ideas? (https://www.digitalfaq.com/forum/video-restore/9068-vhs-restoration-avisynth.html)

adinbied 10-09-2018 08:31 PM

VHS restoration, AviSynth optimization workflow ideas?
 
Hi there,

So I know there are many (hundreds?) of posts asking about how to configure AviSynth scripts for restoring VHS captures, but I was hoping I could get some feedback/input on my workflow. I've got a Panasonic AG-1960 VHS deck (manually cleaned the heads after I got it) hooked up via S-Video to a StarTech USB capture device, which then goes to CyberLink PowerDirector and is finally recorded as uncompressed AVI with a video bitrate of ~377 Mb/s. Once I've got it digital, I then feed the video into this AviSynth script:

Code:

AVISource("\\10.0.1.242\Media\96VideoProjects\Capture.avi")
Crop(4,20,-12,-12)

### --- levels fix ---
ColorYUV(gain_y=-15,cont_u=-20)
Levels(20,1.15,255,16,245,dither=true,coring=false)

### --- first stage denoise and smoothing ---
AssumeTFF()
ConvertToYV12(interlaced=true)
QTGMC(preset="medium",ChromaMotion=false,border=true,ChromaNoise=true,\
DenoiseMC=false,NoiseDeint="Generate",StabilizeNoise=true,GrainRestore=0.2)
vInverse2()

### --- Work with edges, aliasing, line twitter ---
DeHalo_Alpha(rx=2.0)
Santiag(strh=3,strv=3,type="Sangnom")

### --- 2nd stage motion/noise smoothing ----
RemoveDirtMC(30,false)
MergeChroma(aWarpSharp2(depth=30))

### --- more edge work ---
ConvertToYUY2(interlaced=false)
FixVHSOversharp(20,16,12)
FixVHSOversharp(20,8,4)

##Spot Reduction/Stage 3 Smoothing
fields=AssumeTFF().SeparateFields() # or AssumeBFF
super = MSuper(fields)
backward_vec2 = MAnalyse(super, isb = true, delta = 3, overlap=2)
forward_vec2 = MAnalyse(super, isb = false, delta = 3, overlap=2)
MDegrain1(fields, super, backward_vec2,forward_vec2,thSAD=4000)
Weave()

### --- Reinterlace, prepare for RGB color work ----
SeparateFields().SelectEvery(4,0,3).Weave()

It's been long enough now that I don't remember where I got this from, IIRC it was a combination of scripts from here. Anyway, using Mulder's x264 Launcher, I run this at x264 2-Pass at a target bitrate of 18 Mb/s and preset 'High'. That averages about 2.5-3fps on my OC'ed i5-7600K, so for a 90 minute tape, it takes a while. Anyway, once that is done, I take the lossless LPCM audio from the AVI and turn that into a .wav, which I then feed into Izotope RX Audio Editor 6. From there, I apply various filters depending on what needs doing (often a de-hum, de-clip, de-crackle, and a manual EQ) - then export that as a WAV as well. Finally, I bring the AviSynth output video and processed audio into Adobe Premiere Pro CC 2017, and fix any audio sync issues as well as add a pass of Lumetri Color Correction, adjusting white balances and saturation where needed. I then put that through Adobe Media Encoder at H264 2-Pass with a target bitrate of 6 Mb/s.

Now, I know there are several lossy->lossy transcoding steps in there, but my goal isn't to retain every ounce of detail possible; my goal is to make it look good to the eye. Anyway, there have been several issues with this setup over the last few weeks, and I've realized that maybe this isn't the best solution, idk. I'm curious what everyone on the forum thinks, and hopefully some good will come of it. As far as video samples, would a losslessly trimmed section from the AVI work? Or should I re-encode it?

Also, here's a screencap from the lossless capture: https://i.imgur.com/8JjBZmC.png
And here's a screencap from my "processed" version: https://i.imgur.com/52Q0dCa.jpg

Thanks so much for your time!
adinbied

sanlyn 10-09-2018 09:31 PM

Thanks for the photos.
Don't you know how to post short unfiltered capture samples to the forum? Just ask if you don't. As far as I can tell, the script doesn't seem appropriate for the photos.
Why is the first photo a non-standard 1024x768? Did you capture at that size? If so, why?
Why is the second photo a different size and why is it cropped?
Why are all the brights in the images blown away beyond RGB 255? Why didn't you use some sort of input signal level controls to maintain the signal within a safe video range of y=16-235? If no controls were used during capture, what sort of histogram or other aid did you use to check on y-levels during post processing?
Why are you capturing to uncompressed RGB? Haven't you heard of lossless YUY2 compression to guard against clipped darks and brights and to be easier on your CPU?

A great many other questions could be asked but, really, no one can work out scripting details from still photos. The only things I could conclude from the photos are that the input signal levels need serious correction, the color needs work, and the original tape must be in terrible shape. You'll probably get much better advice if you post some real video to work with.

And welcome to digitalfaq
:)

adinbied 10-11-2018 06:31 PM

Hi there,
Sorry it's taken me a while to get back to you -- life's been busy. As far as the source being 1024x768, that's the highest resolution my capture device/software can handle, and while I know there isn't enough detail in VHS to warrant the resolution, my philosophy is that it's better to capture too much detail and downscale later than to capture at a lower resolution. As far as the different size and cropping, as you can see from my AviSynth script and the source photo, I'm cropping off the bottom overscan and the top warping/weirdness, which is why the sizes differ. I don't know about the RGB 255 stuff; I'm using PowerDirector and that's what its lossless AVI setting gave me.

I now have samples (uploaded to Google Drive due to their large file size) - with the warning to turn your volume WAAAYY the hell down before playing these - the audio off the tape is terrible and recorded ridiculously loud.

Anyway, here we are:
Losslessly Trimmed .AVI from the source capture: https://drive.google.com/uc?export=d...E01ORG8ASVT9vK

Trimmed MP4 section after my 'restoration': https://drive.google.com/uc?export=d...5JTPo_bzUshhEN

Thanks!

sanlyn 10-11-2018 11:37 PM

2 Attachment(s)
Thank you for the sample.

Quote:

Originally Posted by adinbied (Post 56732)
Sorry it's taken me a while to get back to you -- life's been busy. As far as the source being 1024x768, that's the highest resolution my capture device/software can handle, and while I know there isn't enough detail in VHS's to warrant the resolution, my philosophy is that it's better to capture too much detail and downscale later than to do a lower res.

That might be true if you were using high-grade hardware upsampling, but you're using mediocre software that's giving you poorly resized video totally devoid of fine detail. It looks almost like anime. The cropping has also altered the aspect ratio: your sample frame is 1.33:1, but the encoded mp4 is 1.369:1. Obviously web posting and disc output formats aren't in your plans.
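
(For reference, the numbers follow directly from the posted script, assuming the Crop(4,20,-12,-12) is applied to the full 1024x768 capture: it removes 4+12=16 columns and 20+12=32 rows, leaving 1008x736, and 1008/736 ≈ 1.370, versus 1024/768 = 1.333 for the uncropped frame.)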

The uncompressed sample is very poor, but I think you can see that. The lack of detail, the severe color corruption, and the plastic character make things look like a second-generation source of some kind. QTGMC is misused here, basically because your video sample isn't interlaced to begin with. It's field-blend progressive, one of the most damaging capturing methods. The damage is permanent, and it looks worse because the field-blended video was resized vertically during capture. Detail clipped during capture is also irreparable. On top of that, chroma as well as luma is clipped. There is no way to retrieve bright data that gets lost through clipping.
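
One quick way to see this for yourself is to step through the separated fields in AviSynth. This is only a minimal sketch (the file name is a placeholder, not your actual capture):

Code:

AviSource("Capture.avi")   # placeholder path
AssumeTFF()
SeparateFields()
# Step through the result frame by frame: true interlaced material shows
# clean fields offset in time, while field-blended "progressive" video shows
# ghosted, double-exposed fields that no deinterlacer can cleanly separate.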

The horoscopes below illustrate YUV clipping (left image) and seriously oversaturated chroma clipping in RGB (right image).
http://www.digitalfaq.com/forum/atta...1&d=1539317608

I see you've spent good money for Adobe software, so you've undoubtedly seen several types of histograms and vectorscopes that are for analysis and aids for repair. Adobe also has a ton of information in its online help about valid video levels, aspect ratios, and all that.

The image below shows a sample of a field-blended frame, along with the notches, wiggles, and mice-teeth in vertical edges caused by upsampling this damaged type of frame structure.
http://www.digitalfaq.com/forum/atta...1&d=1539318065

I hesitate to get into scripting details, especially since most of the damage here can't be repaired without causing even more loss in other areas, and because the capture methods and software you're using demand a lot of otherwise unnecessary post-processing work and make poor results inevitable.

Very few readers are going to wait for 1.3 GB of uncompressed video to download. Note that when the images and samples disappear from your off-site storage, this discussion thread will be largely meaningless.

I'd advise capturing with either VirtualDub or AmarecTV. Cyberlink is notoriously inferior for resizing and rendering. I don't say this just because so many others before me have said the same thing, but because I've been down the Cyberlink route myself -- and regretted every minute of it. http://www.digitalfaq.com/forum/vide...-settings.html

adinbied 10-12-2018 12:06 AM

OK, so I still have access to the tape - I'll see if I can re-capture it over the weekend and upload a sample of that. Do you have any tips for setting up VirtualDub for capture? The only reason I used CyberLink PowerDirector was because my initial capture device a few years back was an ION Video 2 PC, and that was the only software that recognized it. Now that I've got a slightly nicer StarTech device, it should be compatible with more stuff.

As far as the aspect ratio, I was only planning on distributing the files digitally and sending them as-is. Most if not all modern software media players support weird aspect ratios, and most online video services handle different aspect ratios natively. As far as DVD, well, I'm the only person I know who has a computer with a DVD drive anymore... For more info, the analog video I'm capturing from is a second-gen source (originally recorded on VHS-C then transferred to VHS), but it's all I've got. As far as the field blending - that would be something solvable by re-capturing using correct software, right?

In regards to the file size and file hosting, I didn't want to put unnecessary strain on the site's server, and I saw that the max upload size was 99 MB. I didn't want to re-encode the source file (so people get an accurate idea of what I'm working with), and I also didn't want to trim the length too short. As for using Google Drive, it's far from perfect, but it works for now. Finally, massive thanks for sticking with me -- I'm still learning and every bit of information is helpful!

sanlyn 10-12-2018 07:03 AM

Quote:

Originally Posted by adinbied (Post 56735)
OK, so I still have access to the tape - I'll see if I can re-capture it over the weekend and upload a sample of that. Do you have any tips for setting up VirtualDub for capture?

There is a link to an updated guide in post #4, but here it is again: http://www.digitalfaq.com/forum/vide...-settings.html.

Quote:

Originally Posted by adinbied (Post 56735)
I'm the only person I know who has a computer with a DVD drive anymore...

That's interesting. Everyone I know has one, many have two, and some have DVD and BluRay.

lordsmurf 10-12-2018 07:13 AM

Quote:

Originally Posted by sanlyn (Post 56739)
That's interesting. Everyone I know has one, many have two, and some have DVD and BluRay.

I have 3 in my main system: 2x BD-R and 1x DVD-R. :P

adinbied 10-13-2018 11:07 PM

So I've re-captured the tape using VirtualDub (had some issues with the Overlay/Preview not showing up, but I ended up disabling it because the capture worked fine anyway). Here is the link to a sample encoded in HuffYUV (with the same warning about the audio): https://drive.google.com/uc?export=d...dnxDZ2kpWxFPpf

Based on the sample, what AviSynth filters/scripting do you recommend?

Thanks!

sanlyn 10-14-2018 04:52 AM

2 Attachment(s)
Thanks for the samples.

These caps look much better than the earlier ones. Keep in mind that the source video is, to be charitable, pretty thoroughly trashed and that no one can fix everything. Still, some improvements are possible. Tonight I played with levels and color, which in itself looks vastly better. I'll be driving all day tomorrow but will have my laptop with me this week and will be able to get on with denoising and repair.

There appeared to be no effort to control input signal levels during capture, so brights were sharply clipped and highlight details were destroyed. One advantage of capturing with VirtualDub is that levels can be controlled to prevent hard clipping. We've seen captures made with your USB device in other posts, so it's known that the device's proc amp controls are accessible in VirtualDub. You might want to consult your Adobe Pro online documentation about safe/legal video levels.

Some of the work so far:

Original frame:
http://www.digitalfaq.com/forum/atta...1&d=1539511804

Color And Levels, mild initial denoise:
http://www.digitalfaq.com/forum/atta...1&d=1539511925


Please stay tuned.....

adinbied 10-18-2018 11:38 AM

Hello,
I was just wondering if there were any updates on this - I've got a pile of ~50 VHS tapes that I need to get through in the next few weeks, and any help in figuring out scripting and stuff would be greatly appreciated.

Thanks!

lordsmurf 10-18-2018 12:14 PM

Quote:

Originally Posted by sanlyn (Post 56769)
Some of the work so far:

That looks really good. :congrats:

sanlyn 10-18-2018 05:46 PM

The laptop I brought with me on the road this week decided to die forever yesterday, and I know of no other resources locally. That's what I get for trusting a Dell machine. I will return home early next week and continue in this thread.

Meanwhile, your script was on the right track, more or less, but the Avisynth level and color settings you copied won't work with everything, and certainly not with the sample you posted. To summarize what I did for the images I posted: look at your video with histograms (I used Avisynth's Histogram in its Levels and Color2 views) and you'll see that yellow and blue are badly oversaturated and that input levels extend beyond legal limits. I lowered saturation on those hues partly with the Tweak() function (setting limits with the startHue and endHue parameters), partly with the VirtualDub Hue/Sat/Intensity filter, and made minor tweaks with ColorMill. Of course this doesn't make a lot of sense without firm examples, but the work I did was copied to the hard drive of the crummy Dell laptop that died. Fortunately I have a copy of that work on a PC back home. The histogram images posted earlier in this thread show the oversaturated hues and areas.
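
As a rough sketch of that inspection step (the file name is a placeholder; the Levels and Color2 modes need a planar format, hence the conversion):

Code:

AviSource("sample.avi")        # placeholder path
ConvertToYV12(interlaced=true)
Histogram(mode="levels")       # luma/chroma level graphs
#Histogram(mode="color2")      # vectorscope-style chroma view; swap in as needed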

I suggest that you need to clean up some of the horizontal dropouts before using QTGMC. Unfortunately, the motion interpolation in many of the filters you've used will make them look worse in many respects. Your script also over-filters to try to correct edge distortion caused by the 2nd-gen tape dubbing, which has serious scanline errors. Those errors can be corrected only very slightly, and at the expense of destroying a lot of other data. I'll make other suggestions once I get back to a working PC.

Sorry for the delay.

Don't ever trust a Dell computer with anything important. :mad4:

adinbied 10-18-2018 06:55 PM

Oh no! Sorry to hear that! No worries about the delay. I'll take a look at the histogram stuff, but some concrete examples would be nice. Hope you are able to recover any data from the laptop!

sanlyn 10-18-2018 10:10 PM

Thanks, but that Dell laptop will see the recycling dump after I remove and wipe the hard drive. Everything on that Dell is copied on a PC or hard drive back home, so no loss. I should have brought my old HP with me, but it's bigger and heavier. Well, live and learn.

sanlyn 10-24-2018 09:32 AM

Update and extended apology. Still stuck away from home with only the wife's tablet for the internet, and now delayed another 3 days for the wife's medical treatments. Be back home this weekend. Never fear: I'll post enough to keep you busy for quite a while; you can pick and choose how much detail you want to investigate.
:D

sanlyn 10-27-2018 05:44 PM

9 Attachment(s)
Your captures used the obsolete MT version of huffyuv. I don't know where people are downloading this ancient version, but apparently they confuse "multi-threading" with "multi-core". The old MT version has no optimization for multi-core CPUs or for modern MMX operations. The newer version 2.1.1 is here: https://www.videohelp.com/download/huffyuv-2.1.1.zip.

The image below is frame 142 of your original huffMT avi, with borders removed so that they won't skew histograms. Notice how bright areas and bright objects are burned out with severe hot spots and missing detail. Bright objects on the wall have a "bloom" effect that smears edges with the background. Also note the phony "neon" appearance of over-saturated yellows and blues.

http://www.digitalfaq.com/forum/atta...1&d=1540679236

The image below shows a YUV Levels histogram (left) and a YUV saturation vectorscope (right). These graphs clearly show illegal video levels that cause sharp clipping and detail destruction, indicated by arrows. In the right-hand vectorscope below, yellow is so over-bright and oversaturated that it crashes against the edge of the graph.

http://www.digitalfaq.com/forum/atta...1&d=1540679291

Below, an RGB histogram showing how oversaturation affects display. At the left-hand edge of the vectorscope, note the sharp cutoff (yellow arrow) that indicates severe clipping. In the right-hand sector, overly bright bluish hues dominate.

http://www.digitalfaq.com/forum/atta...1&d=1540679357

One of the first steps should be fixing levels to within the "legal" range of y=16-235. I do this in Avisynth using the original YUY2 colorspace. These lines of code use Avisynth's Levels() function:

Code:

ColorYUV(off_y=-8)
Levels(10,0.90,255,16,240,dither=true,coring=false)

The above code adjusts luminance levels as shown below in the top white band of the graph:

http://www.digitalfaq.com/forum/atta...1&d=1540679456

The Levels function, Avisynth wiki: http://avisynth.nl/index.php/Levels
In the above histogram notice the big white "spike" at the right-hand margin. This indicates clipped (destroyed) data during capture.

You can use the Tweak() function to calm oversaturation in certain color ranges. The startHue and endHue parameters set the starting and ending limits for the target color range. Numeric equivalents of YUV color values from the 360-degree color wheel are shown in the chart below, which you can find on the wiki page for the Tweak function (http://avisynth.nl/index.php/Tweak):

http://www.digitalfaq.com/forum/atta...1&d=1540679677

For an overall saturation adjustment for your clip, and probably for most other shots on the same video, I would suggest the three Tweak lines below. You can then make fine adjustments for individual scenes in VirtualDub.

Code:

Tweak(sat=0.80,dither=true,coring=false)
Tweak(sat=0.80,StartHue=135,EndHue=220,dither=true,coring=false)
Tweak(sat=0.85,StartHue=270,EndHue=358,dither=true,coring=false)

Below, the composite shows 4 sequential YUV vectorscope graphs that are the results of the above lines of code, in this order from left to right: (1, far left) from the original unfiltered frame, (2) the result of the first Tweak statement above, which lowers overall saturation, (3) the result of the second Tweak statement, which applies to yellow/orange and yellow, (4, far right) the result of the third Tweak statement, applying to cyan and blue.

http://www.digitalfaq.com/forum/atta...1&d=1540679755

These adjustments correct general color issues that are common to all the shots in the avi capture sample. Correction for individual scenes can be worked in VirtualDub. Additional VirtualDub adjustments for the scene shown in post #9 (http://www.digitalfaq.com/forum/vide...html#post56769) were made with CamcorderColorDenoise, ColorMill, gradation curves, and Hue/Sat/Intensity filters. I saved the settings for those filters in a VirtualDub .vcf settings file, which is attached as VDubCapture_settings1.vcf. To use a .vcf file, open VirtualDub, click "File..." -> "Load processing settings...", locate the downloaded .vcf file, and click "Open". The filters with their settings will load into the VDub filters dialog.

The same filters were also used for the very first and very last short shots in your avi sample, but the same settings aren't suitable for those two shots. The settings I used for the short lead-in and lead-out shots are attached as VDubCapture_settings2.vcf.

For a .vcf to work, you must have the mentioned filters in your VDub plugins. CamcorderColorDenoise ("CCD") v1.7 can be downloaded at http://www.digitalfaq.com/forum/atta...ove-ccd_v17zip. GradationCurves is at http://www.digitalfaq.com/forum/atta...1&d=1489408797. ColorMill v2.1 is at http://www.digitalfaq.com/forum/atta...colormill21zip. Donald Graft's Hue/Saturation/Intensity ("HSI") filter is attached as HueSatInt_vdf.zip.

All of these corrections were determined by eyeball, by histograms, and by pixel color readers. Various histograms and pixel readers are available in VirtualDub and Adobe Premiere. Using your eyeballs for fine judgments depends mostly on three factors -- practice, a knowledge of basic color theory, and a properly calibrated monitor. The first factor just takes time, while the second two are discussed at length in Adobe's online manual.

In the shot of the guy sitting in front of the projection screen in the images in post #9, the projection screen was used as a near-white object to help determine color balance. Blacks, dark grays, middle grays, light grays, and whites and near-whites all contain equal proportions of red, green, and blue. Very dark grays would contain values of about red 32/green 32/blue 32. Middle grays in RGB would be 128,128,128. I set the near-white of the projection screen at average values of 214,214,214 for red, green, and blue, or somewhere close to it. When red, green, and blue are in balance for shades of gray and white, the other colors fall into place.
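
As a rough illustration of that balancing idea (the readings below are invented for the example, not measured from the actual frame): if a patch that should be neutral reads, say, R=230 G=214 B=205, one way to nudge it toward 214,214,214 in AviSynth is to scale each channel:

Code:

ConvertToRGB32(interlaced=false)
# Multiply each channel so the hypothetical near-white patch lands near 214,214,214
RGBAdjust(r=214.0/230, g=214.0/214, b=214.0/205)

A fixed multiplier like this is only a rough tool; the VirtualDub filters mentioned above allow finer, scene-by-scene control.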

I ran quick Avisynth scripts without denoisers, in order to determine Avisynth and VirtualDub color settings. Then I moved on to denoising and repair (continued, next post).

sanlyn 10-27-2018 06:32 PM

1 Attachment(s)
Your posted avi is badly damaged video, with bad levels during capture that make repair work tougher. At least some nominal effort at controlling levels would have made a difference in detail recovery. The uncontrolled audio level is dreadful and should not have been allowed to clip during capture; the attached mp4 is silent.

The temptation is to throw all kinds of filters at poorly captured video, and the more the merrier. That might get somewhere sometimes, but usually it's a lot of work for very little return. In this case the worst of the noise, aside from a ton of multi-generation color corruption, is distortion and detail loss from scanline errors and poor tracking from the original dubbing player. Sharpening doesn't help much, since detail can't be created from nothing, and sharpening distortion just makes it look worse unless you apply so many filters that what's left is over-smoothed jello.

The main filter I used is FixRipsP2, a modification of several versions of what are called median averaging filters. Basically these are drastic smoothers that create new interpolated frames by averaging the motion of a great many objects over a great many frames. One purpose is to try to clean up the kind of ugly horizontal dropouts that are in your sample. FixRipsP2 seems to work best with half-height frames built from separated interlaced fields rather than with full-size frames from fully deinterlaced video. I did use QTGMC to deinterlace afterwards, but neither that nor degrainers have any effect on dropouts except to make them look worse, and they don't do much for bad edges unless they're used at strong, destructive settings.

Four cautions: First, median filters cancel a lot of noise and can smooth a lot of distortion and bad edges -- but they can also create new distortions with certain kinds of motion, especially with camera jiggle. Second, in averaging-out objects they can sometimes destroy objects altogether and/or replace them with objects that either don't belong there or are entirely invented from other pieces nearby. Third, in this particular video there are simply too many bad frames and too much extended mistracking for any filter to clean up all the distortions. Finally, these filters are CPU pigs and are slo-o-o-w.

FixRipsP2 was so slow (about 2 to 3 frames per second) that I decided to use a two-stage approach, with FixRipsP2 in Step 1 and the remaining filters and VirtualDub work in Step 2.

Step 1 script:

Code:

Import("D:\Avisynth 2.5\plugins\FixRipsP2.avs")

AviSource("E:\forum\faq\adinbied\A1\Virtualdub_Capture_Sample_HYMT.avi")
ColorYUV(off_y=-8)
Levels(10,0.85,255,16,240,dither=true,coring=false)
Tweak(sat=0.80,dither=true,coring=false)
Tweak(sat=0.80,StartHue=135,EndHue=220,dither=true,coring=false)
Tweak(sat=0.85,StartHue=270,EndHue=345,dither=true,coring=false)
AssumeTFF()
SeparateFields()
FixVHSOversharp(20,16,12)
FixVHSOversharp(20,8,4)
FixVHSOversharpL(20,12,8)
Weave()

ConvertToYV12(interlaced=true)
SeparateFields()
FixRipsP2()
Weave()
return last

In VirtualDub I saved the output of the above script as YV12, losslessly compressed with Lagarith, with VirtualDub's processing mode set to "fast recompress". The avi was saved with the title "VDubCap_Step1.avi". It was used as input for the Step 2 script, below:


Step 2:

Code:

Import("D:\Avisynth 2.5\plugins\RemoveDirtMC.avs")

AviSource("E:\forum\faq\adinbied\A1\VDubCap_Step1.avi")
AssumeTFF()
QTGMC(preset="medium",EZDenoise=8,denoiser="dfttest",\
  border=true,ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3)
vInverse2()
RemoveDirtMC(40,false)
LSFmod()
AddGrainC(1.2, 1.2)

Crop(0,0,-4,-8).AddBorders(2,4,2,4)
ConvertToRGB32(interlaced=false)
return last

The final lines of code convert to RGB32 for the VirtualDub filters, which I loaded in VDub's filter chain and applied to the output of the script. The result is progressive video, attached to this post as "VDubCap_59.94p.mp4". Apparently a final version as DVD or BluRay wasn't planned here, since neither 640x480 nor progressive video is designed for DVD or SD-BluRay.

FixRipsP2.avs is here: http://www.digitalfaq.com/forum/atta...d-fixripsp2avs.
FixRipsP2 requires the following:
DePan_Tools v1.13.1 (http://www.digitalfaq.com/forum/atta...ools_1_13_1zip)
RgTools.dll: http://avisynth.nl/index.php/RgTools (also supplied with QTGMC).
Also required:
Microsoft Visual C++ 2015 Redistributable Package (x86 / x64) (https://www.microsoft.com/en-us/down....aspx?id=53587)
MVTools 2.27.21.x or later (http://www.digitalfaq.com/forum/atta...s2_27_21_22zip), also supplied with QTGMC.

For video not as badly damaged as this, a drastic filter like FixRipsP2 wouldn't be needed. QTGMC and perhaps RemoveDirtMC would probably do the trick. But remember that proper input controls during capture would vastly improve results. Capture is just as important as workflow, even more so. I selected FixRipsP2 for the dropouts as well as for the annoying wiggly noise from scanline errors.

You could use the routine below for long sequences that don't have dropouts. But, really, QTGMC and RemoveDirtMC don't help very much with the other distorted elements.

Code:

ColorYUV(off_y=-8)
Levels(10,0.90,255,16,240,dither=true,coring=false)
Tweak(sat=0.80,dither=true,coring=false)
Tweak(sat=0.80,StartHue=135,EndHue=220,dither=true,coring=false)
Tweak(sat=0.85,StartHue=270,EndHue=345,dither=true,coring=false)

AssumeTFF()
SeparateFields()
FixVHSOversharp(20,16,12)
FixVHSOversharp(20,8,4)
FixVHSOversharpL(20,12,8)
Weave()

ConvertToYV12(interlaced=true)
QTGMC(preset="slow",EZDenoise=8,denoiser="dfttest",\
  border=true,ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3)
vInverse2()
RemoveDirtMC(60,false)
LSFmod()
AddGrainC(1.2, 1.2)
Crop(0,0,-8,-8).AddBorders(4,4,4,4)
ConvertToRGB32(interlaced=false)
return last


archivarious 12-01-2020 03:36 AM

Quote:

Originally Posted by sanlyn (Post 57020)
The temptation is to throw all kinds of filters at poorly captured video, and the more the merrier.

Thank you for this very detailed process. It's a little bit over my head, but I printed it out to underline the parts I'd like to learn. The result, especially the first shot, looks amazing considering the source sample was in such bad shape. Have you since changed your process?

