#1  
04-06-2020, 02:06 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
After much trepidation, I finally got around to learning Avisynth. The AfterDawn tutorial got me started, but it was largely due to the posts by Sanlyn, msgohan, themaster1, jwillis84, LS, and others, to whom I am very grateful. Being locked inside for the last week due to the pandemic did not hurt either....

I have attached before and after samples of a clip. It has not gone through QTGMC, but I will add that at the end (although it was not recognizing the dfttest function -- perhaps I do not need it because of FFT3DFilter). I tried looking into frequency filters like DeFreq for the ringing spots in the background, but did not see a difference. If I thought DeFreq's documentation was bad, that was before trying to figure out FFTQuiver, which resulted in a lot of darkened frames. FanFilter was easier to use successfully.

Please let me know if there are better filters I should be using. Should I sharpen more? I could not get RemoveDirt to work. The MDegrain2/MSuper/MAnalyse script did not seem to be much better than FFT3D, but was slower. I had to use an RGB parade for coloring as I was not as confident using the YUV histograms. Should the U and V channels be aligned on top of each other?

Also, the filtered version appears to have a blue bar on the bottom--any reason why?

Lastly, due to the file size on this site, is it better to upload longer compressed clips for suggestions or very short lossless clips?

Thanks for any help.

Code:
SetFilterMTMode("QTGMC", 2)
AVISource("Videowave2a2.avi")
AssumeBFF()
Trim(16773, 17373)
Crop(4,0,-8,-8).AddBorders(4,0,8,8)
FixChromaBleeding()

/* -------Color correction-------*/
#ColorYUV(analyze=true)
#Histogram("levels")
#HistogramRGBParade()
ColorYUV(gain_y=20, off_u=10)
Tweak(sat=1.1, dither=true, coring=false)
Levels(11, 1, 255, 16, 235, coring=false, dither=true)

#For vertical ringing
FAN(lambda=7)

/*Denoiser From Doom: "Chroma planes tolerate much stronger denoising than luma
 so it's a good idea to process them separately" */

FFT3DFilter(sigma=3, plane=0, interlaced=true, bw=16, bh=16, ow=8, oh=8)
FFT3DFilter(sigma=4, plane=3, interlaced=true, bw=16, bh=16, ow=8, oh=8)
FFT3DFilter(bt=-1, sharpen=0.4)

ConvertToYV12(interlaced=true)
SmoothUV(radius=2, field=true)
DeSpot(pwidth=25, interlaced=true, show=0, color=true, mthres=25)

separatefields()
#Fix chromatic misalignment
A=Last
B=A.Greyscale()
Overlay(B,A,X=0,Y=-2,Mode="Chroma")

#Chroma Bleeding
mergechroma(aWarpSharp(depth=10, thresh=0.75, blurlevel=3, cm=1))
turnright()
mergechroma(aWarpSharp(depth=5, thresh=0.75, blurlevel=2, cm=1))
turnleft()

#Rainbows and chroma denoiser
Cnr2(mode="oox", scdthr=10.0, ln=35, lm=192, un=57, um=255, vn=57, vm=255, log=false, scenechroma=false)

Weave()

#Denoiser median filter
FixRipsP2()

#Deinterlacing
#QTGMC(Preset="Slower", Edithreads=1, FPSDivisor=2)

#ColorYUV(analyze=true)
#Histogram("levels")
#HistogramRGBParade()
Prefetch(threads=1)


Attached Images
File Type: jpg BEFORE.jpg (67.2 KB, 54 downloads)
File Type: jpg AFTER.jpg (48.5 KB, 53 downloads)
Attached Files
File Type: avi comateens_before.avi (16.12 MB, 35 downloads)
File Type: avi comateens_after.avi (61.03 MB, 51 downloads)
Reply With Quote
  #2  
04-10-2020, 09:35 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Since no one else seems to want to take this on.....

I don't usually work with analog to DV transfers. They're too troublesome and require strong filtering to clean up compression artifacts and other DV detritus that doesn't normally appear in purely analog sources. That stuff gets added to your capture. Nevertheless, I have some tips and suggestions.

Overall it's not a bad job of cleanup, but the video does suffer from so much work. It's softened and looks over-filtered.

AssumeBFF()
This is not necessary. BFF is the Avisynth default. Most VHS is TFF, but DV reverses the field order.

Crop(4,0,-8,-8).AddBorders(4,0,8,8)
The new borders leave the frame off-center. I would have used AddBorders(6,4,6,4).
If you use Crop and AddBorders this early, the borders will be modified and discolored by all the filtering that follows. They won't be black by the time it's over.
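Something like this keeps the borders clean (same crop values as your script, with my suggested border values -- the point is the placement at the end of the chain):

Code:
Crop(4,0,-8,-8)          # remove the dirty edges before filtering
# ... all filtering goes here ...
AddBorders(6,4,6,4)      # 12 pixels back horizontally, 8 vertically, centered;
                         # added last, so no filter can stain them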

FixChromaBleeding()
I think FixChromaBleeding works better with non-interlaced frames. Read its script. It masks edges and uses chromashift() internally.

ColorYUV(gain_y=20, off_u=10)
Is this gain necessary? I assume you know what gain does. I would have used a contrast increase in tweak for the brights, and something like ContrastMask() for the darks. Gain here raises your black levels and makes darks look murky in many setups. off_u = 10 does add some needed blue. But you need RGB to fix the color on this one. The skin tones here indicate that all the participants have terminal liver disorders.

Tweak(sat=1.1, dither=true, coring=false)
Actually some of your colors are already nearly over-saturated. Maybe you were trying to compensate for the cooked colors DV imposes on VHS.

Levels(11, 1, 255, 16, 235, coring=false, dither=true)
Again, the "11" here suggests that you don't have a calibrated monitor (?). In any case, your black levels are raised a bit high and the low end looks foggy. Much of the "snap" has gone from the image. It will look malnourished on TV, which has a different luminance curve than PC monitors.
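For reference, Levels maps the input range linearly onto the output range, so your values work out roughly like this (illustrative arithmetic, not a recommendation):

Code:
# Levels(input_low, gamma, input_high, output_low, output_high)
Levels(11, 1.0, 255, 16, 235, coring=false)
# scale = (235-16)/(255-11) ~= 0.90
# luma 11 -> 16, 255 -> 235, and a true black of 0 lands around 6;
# everything near black gets lifted, which is the foggy low end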

FFT3DFilter(sigma=3, plane=0, interlaced=true, bw=16, bh=16, ow=8, oh=8)
FFT3DFilter(sigma=4, plane=3, interlaced=true, bw=16, bh=16, ow=8, oh=8)
FFT3DFilter(bt=-1, sharpen=0.4)

This softens the video, which is why a lot of people don't use FFT3D. But it's up to you. QTGMC uses it for its faster presets.

ConvertToYV12(interlaced=true)
Not necessary. DV is YV12, so your video is already YV12 and it's how you lost 50% of your VHS chroma resolution during capture.

DeSpot(pwidth=25, interlaced=true, show=0, color=true, mthres=25)
I don't think this filter is doing anything. I'd think RemoveSpotsMC() is more effective.

separatefields()
This and the mergechroma routines that follow appear copied from other scripts. The MergeChroma business does indeed "work" with SeparateFields, sort of, but for better results you really need to use this technique on deinterlaced video. Besides, I think you can see that it didn't work all that well: there's obvious blue bleeding and chroma shift in the last shot, and more of it in the "after" video than in the "before" version (look at the couple's blue-stained ears). These routines do nothing for the thick, black edge halos. You might try DeHalo_Alpha or FixVHSOversharp for that.

FixRipsP2()
Be careful with this. It didn't remove all the moire in the record album cover. It visibly softened the video further, and it distorted motion. It's a limited-use filter. The motion smoothing settings in QTGMC might have done almost as well without so much softening or distortion. I'd use it only in shots with the noisy album cover, not on the entire video. Again, it's up to you.

#QTGMC(Preset="Slower", Edithreads=1, FPSDivisor=2)
Why would you deinterlace, and why at this late point? You've already used SeparateFields; you shouldn't need both. Let's say you didn't have a problem with QTGMC and/or dfttest -- why are you using a slow preset? The video already looks thoroughly scrubbed with the other filters. "Slower" is pretty drastic considering all the other scrubbing. I also mention this as a flaw in logic flow: after you've spent so much time filtering the whole video, why are you using FPSDivisor=2 to throw away 50% of your work? If you have to deinterlace, why not use a faster and less destructive preset? I assume you used QTGMC at this point in the script because you hinted that you couldn't use it earlier.
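For the record, with 29.97i input the two modes give (the "Fast" preset here is just a placeholder):

Code:
QTGMC(Preset="Fast")                 # 59.94p: one output frame per field
QTGMC(Preset="Fast", FPSDivisor=2)   # 29.97p: half the output frames are discarded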

I guess you've seen some of the documentation on these plugins, but I invite you to have another look. Yep, it's a pain in the neck and a lot of it is discernible only to the guys that wrote it. But keep looking for other usage examples, and don't leave out RemoveDirt, RemoveSpots, MCTemporalDenoise, and the huge HTML and original text in QTGMC. The latter has a special setting for chroma noise, and something like Bifrost and chubbyrain2 are also useful for rainbows.

The final looks smoother and less "disturbed" than the original. DV is awful stuff, very thorny to work with. Keep at it. You're making progress.
The following users thank sanlyn for this useful post: Winsordawson (04-13-2020)
  #3  
04-13-2020, 03:19 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
I appreciate all the detailed advice. This video is from a U-Matic tape that was converted by a technician who does these types of things for a living. I asked him if he could output a lossless version, but he said that his U-Matic setup can only output DV. He also said that since DV has a vertical resolution about double that of U-Matic, it is over-specified for it. Although I understand that every pixel matters when it comes to Avisynth filtering, I have few options when it comes to a professional who still converts these tapes. I do have a U-Matic deck, but it broke, and given its weight it would cost a fortune just to ship it somewhere.

Regarding your comments, I used AssumeBFF() just to be safe. I used a minimum of 11 for the Levels filter because that was the value of the loose minimum. Correct me if I'm wrong, but I don't want to affect the whole video if there are only a few stray pixels that are blown out. Contrast in Tweak did help, thanks. But ContrastMask seemed to increase brightness, not darken it, when I played with both positive and negative values. While the Histogram in Avisynth indicated that most values were within the safe range after using Levels, the Histogram in VirtualDub indicated that the RGB levels were not.

When I opened the original file in MediaInfo, it said it was YUV 4:1:1, but when I ran Info() right after opening the file in Avisynth, it said YV12. That is why I ran ConvertToYV12. Which do I trust?

Thank you for the suggestions to replace FFT3D and DeSpot with RemoveDirtMC and RemoveSpotsMC, which made the video less cloudy. I will only use FixRipsP2() when absolutely needed (I left it out here).

I do not know how aWarpSharp works, or why MergeChroma works without two clips, but it definitely improved the chroma bleeding more after de-interlacing. FixVHSOversharp, Bifrost, and chubbyrain2 did not seem to do much. I tried playing with a script of yours from another post on chroma bleeding, but it did not seem to make as much of a difference.

Code:
U = UtoY()
    U = U.BilinearResize(U.width/2, U.height).aWarpSharp(depth=30).\
     nnedi3_rpow2(4, cshift="Spline64Resize", fwidth=U.width, fheight=U.height)

V = VtoY()
    V = V.BilinearResize(V.width/2, V.height).aWarpSharp(depth=30).\
     nnedi3_rpow2(4, cshift="Spline64Resize", fwidth=V.width, fheight=V.height)

YtoUV(U, V, last)
I could not find much information on the Avisynth wiki as to the differences among the speed presets for QTGMC. Are the speeds related to the quality? If so, why would a slower preset be worse?

I attached a compressed sample just for forum purposes--might you know why there is a light blue bar on the bottom?

If you have any further suggestions please let me know. It doesn't have to be perfect. If I did something that seemed strange it is probably because I don't know what I am doing. Thank you again.

Code:
SetFilterMTMode("QTGMC", 2)

Import("C:\Program Files (x86)\AviSynth+\plugins+\RemoveDirtMC.avsi")
AVISource("Videowave2a2.avi")
AssumeBFF()
Trim(16773, 17373)

/* -------Color correction-------*/
ColorYUV(off_u=10, off_v=-3)
Tweak(cont=1.08, dither=true, coring=false)
Levels(11, 1, 255, 16, 235, coring=false, dither=true)

#ColorYUV(analyze=true)
#Histogram("levels")
#HistogramRGBParade()
ConvertToYV12(interlaced=true)

#For vertical ringing
FAN(lambda=5)
QTGMC(preset="medium",EZDenoise=2,ChromaNoise=true, ChromaMotion=true,DenoiseMC=true, ShowNoise=false, Edithreads=2)
RemoveDirtMC(50, false)

FixChromaBleeding()
SmoothUV(radius=2, field=false)
Cnr2(mode="ooo", scdthr=10.0, ln=45, lm=192, un=87, um=255, vn=87, vm=255, log=false, scenechroma=false)

mergechroma(aWarpSharp(depth=55, thresh=0.5, blurlevel=3, cm=1, bm=0))
turnright()
mergechroma(aWarpSharp(depth=55, thresh=0.5, blurlevel=2, cm=1, bm=0))
turnleft()

RemoveSpotsMC()
RemoveSpotsMC()
LimitedSharpenFaster(strength=200)

Crop(4,0,-8,-8).AddBorders(6,4,6,4)
Prefetch(threads=2)
return last


Attached Files
File Type: avi comateens_after_version2.avi (15.61 MB, 12 downloads)
  #4  
04-16-2020, 04:00 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Skip this post. -admin

I prepared a long answer and some samples but I was blocked from posting by the oddball software in this forum. The blocking message advised to contact a moderator with "Cloudflare Ray ID: 584cb29bee9de0c6". There isn't a way to do that in this forum unless you can find the ultra-secret hidden link to contact a moderator. There goes a full day's restoration work and 45 minutes of typing a message and uploads. After another 5 minutes of changing the post I gave up.

Making another effort...
I reformatted the reply to your last post into a Rich Text (.rtf) file. RTF is a universal format that can be opened in Word or WordPad. It is attached as "Reply2.zip".
The jpg image that the reply refers to is in "frame 371 before and after.zip".
The VirtualDub settings .vcf file is attached as "VirtualDub settings.zip".
The video samples mentioned are attached as "video samples.zip"
Sorry for this song and dance. I think the forum needs smarter scanning software.


Attached Files
File Type: zip frame 371 before and after.zip (68.8 KB, 4 downloads)
File Type: zip reply2.zip (4.8 KB, 5 downloads)
File Type: zip VirtualDub settings.zip (1.1 KB, 4 downloads)
File Type: zip video samples.zip (22.09 MB, 12 downloads)
  #5  
04-16-2020, 05:33 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Thanks for the new capture.

Quote:
Originally Posted by Winsordawson View Post
I asked him if he could output a lossless version but he said that his U-Matic setup can only output DV.
Hmm. I've a suspicion that he captures everything to DV, but that's an educated guess. No reflection on you, of course. The version you originally posted a few days ago just doesn't look like UMatic, but I haven't seen all that much of the medium. The scene just doesn't look real; it just looks strange, and working with it is frustrating. The colors are especially screwed up, and the whole thing looks for all the world like a 2nd-gen tape dub. It even acts that way!

Be that as it may, I spent quite a long time trying to get some convincing color balance. I'm still not satisfied, but the image below is the result:

[before & after jpg is sent as a .zip file]

(Above) At left is the original frame 371. On the right, frame 371 as seen in the attached 480i mp4. I made and attached two finished versions: one is 59.94fps 480p, similar to your latest sample's format; the other is an interlaced 480i.

There must be 50 ways to denoise the original. Your earlier effort with FFT3d was pretty decent; maybe a lighter sigma would be less soft. Anyway, I used QTGMC's EZDenoise followed by mDegrain2, with some RemoveDirtMC to get rid of some spots and do some more smoothing. Getting cleaner edges is a struggle, and little blue blotches keep popping up no matter what you do. The black edge halos refuse to budge; I got tired of ruining other parts of the image with filters that worked only partially or not at all.

The script below does some chroma cleanup using SeparateFields() before running QTGMC. Seems redundant but I worked the chroma edges and bleed first as separate fields because QTGMC tended to carry some defects forward across multiple frames when it interpolated new images. Another video might not pose the same problem.

90% of the color work was in VirtualDub and RGB. The filters used were ColorCamcorderDenoise, ColorMill, and further tweaking with gradation curves. I had to be careful adding blue, to avoid adding too much bright blue. In most cases I've increased color; remember that in RGB, when you increase color you increase brightness, and when you subtract color you decrease brightness. In gradation curves, for the general "RGB" panel there is a little hook at the bottom of the slanted line to make sure everything at RGB 5 and below is really black, avoiding discolored borders. Meanwhile the line has a slight curve that mildly brightens the range between RGB 10 and RGB 64 or so. Colors were readjusted many, many times, with eye rest breaks every 15 minutes. I saved the settings in a .vcf file so that you can mount the filters and see how they're set up. The .vcf is attached as comateens_trial_VDub_settings.vcf.

Quote:
Originally Posted by Winsordawson View Post
I used a minimum of 11 for the Levels filter because that was the value of the loose minimum. Correct me please, but I don't want to affect the whole video if there are only a few stray pixels that are blown out.
I agree, although I don't often go by the loose numbers. Some of the original frames give minimums of zero, others give minimums of 18 or more. The original doesn't appear to have crushed material that I can see. "Stray pixels" is probably a good description: chroma in the original is very corrupt, and I can see skin tone and other colors varying second by second. I didn't worry about levels, at least not in the original sample. Note that a histogram can be thrown off by including borders in your reading.

Quote:
Originally Posted by Winsordawson View Post
But the ContrastMask seemed to increase brightness,...
You're right. ContrastMask was an idea I threw out, but not a very good one. Its corrections go too high into the midrange for this video.

Quote:
Originally Posted by Winsordawson View Post
When I opened the original file in MediaInfo, it said it was YUV 4:1:1, but when I ran Info() right after opening the file in Avisynth, it said YV12. That is why I ran the ConvertToyv12. Which do I trust?
YV12 can be 4:1:1 or 4:2:0. The latter is common with mpeg and h.264. DV just has to be different, that's all. I'd say you were correct. And not many people would have been careful enough to look at MediaInfo. At 4:1:1 Avisynth would work until some filter or other generated a message.
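Purely for orientation, the standard NTSC DV numbers (not measured from your clip) show what that conversion does to the chroma planes:

Code:
# NTSC DV, 720x480, 4:1:1: chroma stored at 180x480 (1/4 horizontal, full vertical)
# After converting to 4:2:0: chroma resampled to 360x240 (1/2 horizontal, 1/2 vertical)
ConvertToYV12(interlaced=true)   # interlaced-aware, for filters that insist on 4:2:0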

Quote:
Originally Posted by Winsordawson View Post
I do not know how the aWarpSharp works, or why MergeChroma works without two clips
It always works with two clips. The clip that furnishes the original luma is the unmentioned but implicit "last". So MergeChroma's syntax when fully stated is MergeChroma(clip1, clip2). The HTML doc says that clip1 ("last", or the clip that new chroma will be merged into) is "required". Well... it isn't required to be stated, really, unless you want to. The clip that most people specify when typing that command is usually clip2, being the clip that gets sharpened or otherwise modified and from which the color is taken. Confusing? Of course. If only one clip is mentioned, Avisynth assumes that the clip being specified is clip2. http://avisynth.nl/index.php/Merge
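In other words, the one-clip call is shorthand; these two lines do the same thing:

Code:
MergeChroma(aWarpSharp2(depth=20))             # clip2 only; clip1 is the implicit "last"
MergeChroma(last, last.aWarpSharp2(depth=20))  # the same call fully spelled out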

aWarpSharp and the later-and-better aWarpSharp2 actually do warp lines and edges -- it tends to tighten fuzzy edges. With chroma, it tries to tighten color nearer to the closest edge. https://www.animemusicvideos.org/gui...tml#sharpening
The filters in the AMV Guide, by the way, aren't just for anime. Most of them are old standbys for every kind of video. After all, dfttest and LSFMod are components in some very heavy-duty filters (QTGMC and MCTemporalDenoise, for instance). You can also use aWarpSharp2 as a sharpener. My favorite is LimitedSharpenFaster, though I don't always sharpen.

Quote:
Originally Posted by Winsordawson View Post
I could not find much information on the Avisynth wiki as to the differences among the speed presets for QTGMC. Are the speeds related to the quality? Then why is slower worse?
Some of the parameters are explained in the HTML that comes with QTGMC, but all the detail settings for each preset are in the top couple of hundred lines of text in the avsi script. The script is too long as-is for Notepad -- open it in Wordpad. Don't wrap lines. You can save it out of Wordpad as plain text (.txt) and not as DOS with markup. It can then be read in Notepad as .txt with no line wrapping. Expand Notepad to fullscreen. The faster presets do less cleanup and less frame and motion repair. The slower the preset, the stronger the denoising and repairs. It's not that slow presets are "worse" -- sometimes you need 'em. But be careful; if you're also using really drastic filters at the same time, the slow presets can scrub the hell out of the video. On the other hand, some videos are so bad it can be necessary.

I don't know where the light blue bar on the sample came from. XviD maybe? I haven't used that in 15 years. Be careful with cropping, though, which can mess up color. http://avisynth.nl/index.php/Crop

RemoveDirtMC: a power of 50 seems like overkill. This filter can remove objects at high powers so check its results carefully. Sometimes you have no choice. Powers of 20 and 30 are normal. 40 and over need a close look.
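In script form (the values here are just that guidance, not tuned to your clip):

Code:
RemoveDirtMC(30, false)   # 20-30 is a normal power; 40+ can start removing
                          # real moving objects, so check the output closely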

Quote:
Originally Posted by Winsordawson View Post
If I did something that seemed strange it is probably because I don't know what I am doing.
Everyone here has been at that point. We learn something new every day. You're getting there. I picked stuff up by doing what you're doing: examining other work, trying things out, and struggling through the docs. Some of the docs will definitely tell you how much you don't know!

Scripting for this weird video is largely a matter of experimentation and patience. Color balance was difficult: colors are corrupt from frame to frame, and there are no clearly white or gray objects to go by. The dark colors worn by the kid on the left look black, but whenever you change other colors the darks look more like dark olive. I used skin tone as a guide. Skin is mostly red, with green at 70% of red and blue at 60 to 70% of green. If things look too red people mistakenly add more blue. But blue just makes red look pink. To balance red, add cyan (blue + green).
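A worked example of those ratios (illustrative numbers, not measured from the clip):

Code:
# If skin red is R = 200:
#   G ~= 0.70 * 200      = 140
#   B ~= 0.60-0.70 * 140 = 84 to 98
# Too red? Add cyan: raise G and B together.
# Adding only B just pushes red toward pink.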

Most VHS isn't this complicated (home camera movies excepted). I must have tried at least a dozen variations of the script below, and it could still use some work -- still a bit grainy and reddish. Take it as a suggestion:

Code:
AviSource("I:\forum\faq\Windsordawson\B\comateens_before.avi")
ConvertToYV12(interlaced=true)  ###<- from 4:1:1 to 4:2:0
SeparateFields()
FixChromaBleeding()
ChromaShift(c=4)
Weave()
QTGMC(preset="medium",EZDenoise=8,denoiser="dfttest",ChromaMotion=true,border=true,\
   ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,sharpness=0.6)
vInverse()   ###<- mild smoothing of combing remnants
source=last
  super = source.MSuper(pel=2, sharp=1)
  backward_vec2 = MAnalyse(super, isb = true, delta = 2, blksize=8, overlap=4, dct=0)
  backward_vec1 = MAnalyse(super, isb = true, delta = 1, blksize=8, overlap=4, dct=0)
  forward_vec1 = MAnalyse(super, isb = false, delta = 1, blksize=8, overlap=4, dct=0)
  forward_vec2 = MAnalyse(super, isb = false, delta = 2, blksize=8, overlap=4, dct=0)
  MDegrain2(source,super, backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400)
RemoveDirtMC(30,false)
DeHalo_Alpha(rx=2)
MergeChroma(aWarpSharp2(depth=20).aWarpSharp2(depth=10))
AddGrainC(1.5,1.5)    ###<- avoid over-smoothed look

SeparateFields().SelectEvery(4,0,3).Weave()  ###<- for progressive 59.94, delete this line.
Crop(6,0,-8, -8).AddBorders(8,4,6,4)
ConvertToRGB32(interlaced=true,matrix="Rec601")  ###<- for VirtualDub filters.
return last
In the attached 480i version I notice some edge ringing or "echo" on fast camera pans. I see this with DV field reversal in some VHS-DV captures.


Attached Images
File Type: jpg frame 371 before and after.jpg (68.7 KB, 10 downloads)
Attached Files
File Type: mp4 comateens_trial_480i.mp4 (11.00 MB, 3 downloads)
File Type: mp4 comateens_trial_480p.mp4 (11.11 MB, 6 downloads)
File Type: vcf VirtualDub settings.vcf (3.7 KB, 4 downloads)
  #6  
04-16-2020, 05:57 AM
admin's Avatar
admin admin is offline
Site Staff | Web Development
 
Join Date: Jul 2003
Posts: 4,310
Thanked 654 Times in 457 Posts
Quote:
Originally Posted by sanlyn View Post
I prepared a long answer and some samples but I was blocked from posting by the oddball software in this forum. The blocking message advised to contact a moderator with "Cloudflare Ray ID: 584cb29bee9de0c6". There isn't a way to do that in this forum unless you can find the ultra-secret hidden link to contact a moderator. There goes a full day's restoration work and 45 minutes of typing a message and uploads. After another 5 minutes of changing the post I gave up.
Quote:
Originally Posted by sanlyn View Post
Making another effort...
I reformatted the reply to your last post into a Rich Text (.rtf) file. .Rtf is a universal format that can be opened in Word or Wordpad. It is attached as "Reply2.zip".
The jpg image that the reply refers to is in "frame 371 before and after.zip".
The VirtualDub settings .vcf file is attached as "VirtualDub settings.zip".
The video samples mentioned are attached as "video samples.zip"
Sorry for this song and dance. I think the forum needs smarter scanning software.
That's not the forum software. With that error, you ran afoul of CloudFlare security (WAF) rules. That's network, not server. The text of your post is fine, and it's now posted. So it must be some error in the files being attached. We'll look into it...

EDIT: Alright, I see your errors. Thanks for posting the RayID. The rule that was tripped may not be vBulletin friendly, so it's been neutered. We'll still get soft errors on our end, for logging purposes, but it shouldn't be visible to you anymore.

If you ever run into CloudFlare issues, post about it in the General forum. Timestamps are most helpful, then RayID next. We can get your IP from the post.

EDIT2: Attachments now added fine.

Thanks.

You may now resume your regularly scheduled Avisynth discussion.

The following users thank admin for this useful post: sanlyn (04-16-2020)
  #7  
04-16-2020, 09:09 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
@admin:


Thanks for your attention to this. As it is I later scanned those files myself with Kaspersky and with Malwarebytes. Nothing found amiss. In the future if it happens again (darn!) I'll post in the General area.


  #8  
04-20-2020, 02:44 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thank you so much for the examples and detailed information, especially with regard to how MergeChroma works. You have provided me with a lot to play around with. I think this tape (from 1982) is in fact a 2nd-generation dub, but from one U-Matic tape to another. You are right that the guy who converts these probably only outputs to .dv, I believe to save hard drive space. With regard to why the tape has ringing, he gave the following explanation:

Quote:
1: Ringing. Umatic is known for this. Vertical edges give rise to fine ghosting on multi-generation copies, due to a compromise in the Umatic filter design. Later high band machines were slightly less prone to this problem than earlier models, but the problem is that once you have a multi-generation copy, the ghosting is recorded onto the tape and can't readily be removed. I have found a way to reduce it slightly using one of my Digital Timebase Correctors, but it comes with a loss of resolution. You made it clear that you wanted as much resolution as possible so I didn't do that with your tapes.
2: Cross-modulation. This is where tapes were copied from one machine to another using the Composite Video cables rather than Dub cables. Not all machines have Dub connections, so this may have been unavoidable. When copying a first generation tape from a Umatic machine, modern digital timebase correctors don't introduce this effect because they include a digital comb filter, but such technology was years away when Umatic machines were in use. So each copy generation would "mix" the luminance and chroma causing this colour effect. Since it's recorded on the tape in the multi-generation copies, it's not possible to eliminate it.

Sorry if they sound like excuses, but Umatic has its limitations and multi-generation copies, particularly in NTSC, do tend to look like this.
If you could enlighten me, how do you know when to place filters within a SeparateFields()/Weave() pair versus after a de-interlace? I know that some filters require it depending on whether they are spatial, temporal, or both, but you seem to know that MergeChroma would work better after a de-interlace and that FixChromaBleeding would work better using a temporary de-interlace. Is there any rhyme or reason, or just trial and error with how the filters turn out? Is there any benefit to rotating the video on its side when using MergeChroma?

Are there certain circumstances that are impossible to color correct in Avisynth with ColorYUV and Tweak and require ColorMill and Gradation curves in VirtualDub? Is it usually worth it despite the loss by converting to RGB?

The faces in the samples you provided appear a bit blown out. Is this just necessary in order to get the colors right, because of the brightness that results when you add color in RGB? Also, is there any reason why you avoided using the FAN filter?

Thanks again. I don't expect it to be perfect. I am just trying to get as close as possible until LordSmurf has the availability for me to send them to him.
  #9  
04-21-2020, 09:25 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Thanks for the info on UMatic media.

Quote:
Originally Posted by Winsordawson View Post
how do you know when to place filters within a separatefields() and weave versus after a de-interlacing? I know that some filters require it depending on whether they are spatial, temporal, or both, but you seem to know that mergechroma would work better after a de-interlace and that FixChromaBleeding would work better using a temporary de-interlacing.
Both of those filters work well with SeparateFields() but better with deinterlacing. But there are caveats, depending on the video. Purists would insist that "deinterlace required" means exactly that. In use, however, things get very iffy. It's obviously no problem if full deinterlace is used -- but deinterlacing itself is a problem. Software deinterlacing is a destructive process, one in which some detail is lost, one field is discarded and replaced with a new interpolated one, and that interpolation can often spread defects (such as spots or ripples) across multiple frames. Because of this, those same purists insist that you should deinterlace only when necessary.

In this case I felt chroma smearing would look worse after deinterlacing so I used SeparateFields instead. It seemed to work OK. Of course it could have made no difference or it could have looked worse, so I tested first. In the end I thought motion compensation in deinterlacing didn't make chroma repair look quite as neat -- at least, not in the frames I looked at. Frankly, I don't think anyone would notice a difference. You also have to be aware that if interlaced chroma appears shifted vertically by only 2 pixels, you can't use SeparateFields and ChromaShift to fix it, because separating into half-height fields means that in each field the chroma is shifted vertically by only 1 pixel instead of 2 -- you would need a more complex ChromaShiftSP for that, which involves shifting by single or subpixel heights and converting to RGB internally.
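The field arithmetic behind that point:

Code:
# Frame lines 0,1,2,3,... split into fields A(0,2,4,...) and B(1,3,5,...),
# so a 2-line vertical offset in the frame is only a 1-line offset inside each field.
# A whole-pixel chroma shift applied to separated fields therefore can't undo
# a 2-pixel frame shift; do that shift on full (woven or deinterlaced) frames.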

Other purists would insist on doing this chroma edge cleaning in YUY2. But I dislike jockeying back and forth in multiple colorspace conversions. Meanwhile, there are a few filters that seem to work pretty well with SeparateFields, among them RemoveDustMC and RemoveSpotsMC. A very popular super-filter is MCTemporalDenoise, which can be used with its interlaced=true parameter setting, in which case it uses SeparateFields internally. Popular filters that don't work very well with SeparateFields are dfttest and derainbow filters such as BiFrost and chubbyrain2.
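MCTemporalDenoise's interlaced handling is a one-liner -- a sketch, assuming the script already loads the filter's many dependencies; the file name and settings value are illustrative:

```avisynth
AVISource("capture.avi")
AssumeBFF()
# interlaced=true makes the filter separate and re-weave the fields internally
MCTemporalDenoise(settings="medium", interlaced=true)
```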

Quote:
Originally Posted by Winsordawson View Post
Are there certain circumstances that are impossible to color correct in Avisynth with ColorYUV and Tweak and require ColorMill and Gradation curves in VirtualDub? Is it usually worth it despite the loss by converting to RGB?
Take the sample as a case. It requires different settings for bright red and middle red, and different values for dark, middle, and bright blue. How would you target those specific ranges in YUV, and without affecting greens?

Some very sophisticated (and expensive) video apps can be somewhat more specific in YUV, but they still can't match the flexibility of RGB. Then again, a lot of video doesn't need such complex correction. As for RGB, if done correctly in Avisynth the work has greater precision than in NLEs, and the damage is insignificant unless one insists on going back and forth again and again between colorspaces.
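To be fair, Avisynth's Tweak can at least confine an adjustment to a hue wedge via its startHue/endHue parameters -- a rough sketch (the hue window and saturation value below are illustrative, not calibrated to this clip), though it still can't split one hue into dark, middle, and bright bands the way RGB curves can:

```avisynth
# Desaturate only a red-ish hue range, leaving greens untouched
Tweak(startHue=105, endHue=138, sat=0.85, coring=false, dither=true)
```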

Quote:
Originally Posted by Winsordawson View Post
The faces in the samples you provided appear a bit blown out.
That deals mostly with specular highlights, which on skin can often look like hot-spots. In ColorMill, go to the "Levels" section and move the "Light" slider control downward several points.

Quote:
Originally Posted by Winsordawson View Post
Also, is there any reason why you avoided using the Fan filter?
It's been years since I used that filter. I didn't see a need for it here.

The scripts in this thread are suggestions for different fixes of different problems. Often there are multiple ways to accomplish the same thing. Other users of Avisynth and VirtualDub are always welcome to contribute to these projects. I don't have an exclusive usage license for this stuff.
Reply With Quote
The following users thank sanlyn for this useful post: Winsordawson (04-21-2020)
  #10  
04-21-2020, 04:11 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Quote:
Take the sample as a case. It requires different settings for bright red and middle red, and different values for dark, middle, and bright blue. How would you target those specific ranges in YUV, and without affecting greens?
I could not find any definition of what tonal values Dark, Middle, and Light represent in Color Mill. I assume they stand for the shadows, midtones, and highlights, but does Color Mill define Dark, Middle, and Light as the bottom 25%, middle 50%, and top 25% of the tonal range? So when you are correcting dark blue, for example, you are making sure they stay in the bottom 25% of the waveform?

Thanks again.
  #11  
04-21-2020, 08:13 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
There is some overlap to these ranges:
RGB 0 to 64 (shadow areas, darker colors, darkest area on fairly white shirts, deep skin shadows), lower quadrant of a curves filter.
RGB 64 to 192 (skin tones from shadow to highlight; green shrubbery, middle gray = 128); middle two quadrants.
RGB 192 to 255 (brightest areas, bright sky, light grays, RGB 255 can look pretty "hot" sometimes); top quadrant.
A little experience will show you how these ranges operate in real life.
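In Avisynth terms, the middle-quadrant idea corresponds roughly to a gamma move, which lifts the midtones while pinning both ends -- illustrative values only:

```avisynth
ConvertToRGB32(matrix="Rec601")
# Gamma 1.15 brightens the RGB 64-192 band the most;
# the 0 and 255 endpoints stay where they are
Levels(0, 1.15, 255, 0, 255, coring=false)
```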
  #12  
04-22-2020, 04:17 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
BTW, you should have a tool that reads pixel values in VirtualDub and other apps. One free, no-install pixel reader that many users keep in a corner of their desktop is csamp.exe (http://www.digitalfaq.com/forum/atta...on-dv-csampzip). A VirtualDub ColorTools histogram is almost always used in these projects, but be careful that you get the correct version: the older v1.4 won't work in Win7 or later, while the new version 1.5 works everywhere and is at https://sourceforge.net/projects/vdf...1.5%20update1/.
NOTE: You can keep ColorTools 1.4 and 1.5 in your plugins together, but change one of their names to prevent conflicts. I have version 1.4 installed as "clrtools.vdf" and version 1.5 installed as "clrtools15.vdf".

Below is an image from a Cher video project showing how Csamp was used to read pixel values from a mouse cursor on Cher's nose. The Csamp readout panel is in the middle of the image. I guess you know where Cher's nose is.



The other tool shown in the image is the RGB Blue panel of the gradation curves filter.
The following users thank sanlyn for this useful post: Winsordawson (04-23-2020)
  #13  
04-22-2020, 07:48 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Versions 1.4 and 1.5 in the above post refer to the ColorTools .vdf, not to the pixel sampler.


Another free tool is ColorPic which can be contracted or expanded on the desktop and reads continuously (http://www.iconico.com/colorpic/help.aspx).
  #14  
04-23-2020, 05:12 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thanks. It's hard to tell where Cher's nose is given the plastic surgery. If Csamp only works with an older version of ColorTools, it seems installing ColorPic is the easier solution, since I already have that. I would probably forget at some point later why I have two ColorTools installed. However, what would be the purpose of sampling pixels except in the case of something that you think is pure black or pure white? I would think determining if something in the video is in fact middle gray in real life would be difficult.

The video from above does in fact have color bars at the beginning of the tape. However, you mentioned that the video changes color multiple times throughout even the short clip, so I don't know if the color bars would be a good basis.

This old post of yours down the page was also useful for explaining ColorMill:

https://forum.videohelp.com/threads/...-as-VirtualDub
  #15  
04-23-2020, 08:58 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Csamp works with all versions of VirtualDub and Windows. It's ColorTools that requires a new version for Win7 and later. I use ClrTools and ColorPic both, just for a change of pace.

If you measure the pixels in black, gray, or white objects you'll know if they're off-spec. Don't trust your eyes alone.

If you can't use a histogram for information you're at a serious disadvantage. There is more than one kind of histogram in YUV and RGB, and there are vectorscopes that provide different info. Histograms measure the number and brightness of pixels in various parts of the spectrum; vectorscopes measure saturation, which can tell a very revealing story and can explain a lot about problem videos. Avisynth has both for YUV, ColorTools has both for RGB. Pick up a book on digital color processing with pro tools and see what they say about histograms, YUV, and RGB; I don't think you'll find a tutorial anywhere that disagrees on the value of those tools. Unless you understand the behavior of YUV and RGB in greater detail, you'll be frustrated. The Color Correction Handbook by Van Hurkman is a real eye opener about both YUV and RGB, and free tutorials about color correction in Photoshop and After Effects are excellent. You can adapt the principles for use in Avisynth and VirtualDub.

The color bars on VHS tapes are usually not a good guide for achieving color balance, especially when VHS changes color and levels so frequently. Those bars are very general level setters for bulk tape mastering machines. And you still need a pixel reader and histograms if you want to work with them. I've seen posts by people who used them to fix colors; they're really not that accurate. One of the samples was said to look great -- that is, if you like purple hair, pumpkin-colored skin, and dingy shadows.

Last edited by sanlyn; 04-23-2020 at 09:16 PM.
  #16  
04-23-2020, 09:43 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
That is good to know that Csamp can be used with all versions, thank you. I know how to use histograms and vectorscopes, but I am sure I can learn a lot from a 600+ page book like the one you linked above. Sadly, my library is closed until further notice. I usually do not trust my eyes.

Regarding the pixel sampler, in theory, it should be easy to identify something that is pure white, if it is in fact pure white, like a piece of paper. But what if something that you color correct to be black in a video is really not completely black (i.e. if it is really RGB 10, 10, 0, which still looks black)? Likewise for middle gray.
  #17  
04-24-2020, 07:40 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
The idea with neutral colors like black, gray, and white is to get close to the mark. Not every gray is exactly middle gray; there are darker and lighter variations of all colors. If there are no neutral colors in the frame, you can often assume the color balance from other, similar scenes. Skin tones vary as well, with skin highlights having more green and blue than other, darker areas. Middle skin values are mostly red, followed by green at about 70% of red, then by blue at about 70% of green; for example, a middle skin patch reading Red 200 would suggest Green near 140 and Blue near 98. Brown hair is mostly red and about 85% green (equals yellow) with varying amounts of blue. Thus, if your dark brown hair is mostly blue, something's wrong. If Robert Redford's yellow hair is green, one or two of the other colors need adjusting, or green needs reducing.

Night scenes and scenes with odd lighting arrangements are a problem, of course. In that case you do the best you can, which simply takes some experience. A scene lit with blue lights is usually done on purpose (and isn't 100% blue) but fixing overall luminance levels first can be useful. It takes some time to get accustomed to color correction but after a short while of working with known principles something just goes "snap" and it all comes together in your head. In the meantime you'll understand why professional colorists are so expensive. You'll also see just how bad VHS color really is. Don't expect the same consistency and perfection you find in a decent digital source.


You might have seen the following in another recent thread:
Quote:
Originally Posted by sanlyn View Post
Scene 3 has the same color problems as a previous salesroom scene in Firestone-1. The yellowish problem seems to be from interior lighting. Fortunately there is white, gray, and black in the tires on display, which is helpful even if the skin tones are still a little yellow and the whitewalls are not pure white. Corrections in YUV turned everything blue, especially in the blacks, so I didn't pursue them. It's another case of trying an RGB adjustment, letting it bake for a few hours, and checking later for another round of adjustments.

How do you know when the whites are white, and so forth? Basic color correction attempts to find a white balance and, if available, a gray balance and a black balance. All shades of white, from pure white through gray to black, are formed with the colors Red, Green, and Blue (which is where we get the acronym "RGB"). The RGB numbers for those shades are often stated as RGB RRR-GGG-BBB, as below:

super white = Red 255, Green 255, Blue 255, or RGB 255-255-255
video white = RGB 235-235-235
light gray = RGB 192-192-192
middle gray = RGB 128-128-128
dark gray = RGB 64-64-64
digital black = RGB 16-16-16
super black = RGB 0-0-0

You'll see that each shade of white from super white to super black consists of equal chunks of each color, R, G, and B. If you can find a white or gray object in an image and make it look correct for that shade, then all RGB colors are in balance at that point and the other mixed colors will fall into place. Achieving that balance is called finding the white point, or the gray point or, when needed, a proper black point.

A pure red will have only Red, with no Green or Blue in the mix. But you're not likely to find pure color in nature, not even in a "blue" sky (which will likely contain some slight red and green). We would say that the pure primary RGB colors are:
Red = RGB 255-000-000
Green = RGB 000-255-000
Blue = RGB 000-000-255

The secondary colors mix 2 basic RGB colors together:
Yellow = RGB 255-255-000
Cyan = RGB 000-255-255
Magenta = RGB 255-000-255

All other colors such as orange, hot pink, scarlet, brown, indigo, etc., are just various mixes of R, G, and B.
That quote is from post #30 in another long thread with several short video projects and scripts at http://www.digitalfaq.com/forum/vide...broadcast.html.
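The gray-balance idea in that quote can be sketched with RGBAdjust. The measured values here are hypothetical -- say a pixel reader shows a gray jacket at roughly RGB 140-128-118 instead of neutral:

```avisynth
ConvertToRGB32(matrix="Rec601")
# Scale red down and blue up so the measured gray lands near 128-128-128;
# once the gray is neutral, the mixed colors tend to fall into place
RGBAdjust(r=128.0/140, g=1.0, b=128.0/118)
```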

Last edited by sanlyn; 04-24-2020 at 08:01 AM.
  #18  
04-26-2020, 03:00 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thank you for directing me to that other useful thread of yours. By middle skin values, are you referring to Caucasian skin tones being mostly red, with green at 70 percent of red and blue at 70 percent of green? For example, I have read that for a female Caucasian, maximum highlights are 50 to 75 percent on the waveform monitor, while for a Black male they are 15 to 35 percent.
  #19  
04-26-2020, 07:36 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Skin tones vary by facial position. There are shadows, midtones, and highlights. Darker areas have more red and blue; the lightest areas approach white, with larger portions of each of the three colors. The guide used to study skin tones is the vectorscope, which measures saturation levels. Below is the standard vectorscope that comes with ColorTools. The slanted line in the upper left indicates the area where skin tones locate, with some slight overrun into adjacent and opposite colors such as R, M, Y, B (Red, Magenta, Yellow, Blue), etc. African Americans have more red and blue, Chinese have less blue, and so forth. Colors that extend beyond the inner circle of letters are oversaturated.




Photo-oriented websites have free color charts and samples of various skin tones; some are in RGB codes but some are in HTML or printer color codes, so you'll want a pixel reader for those. One such site is at https://www.schemecolor.com/real-ski...or-palette.php, with many sample patches farther down on the web page.


Attached Images
File Type: png Untitled-2R.png (19.3 KB, 224 downloads)
The following users thank sanlyn for this useful post: Winsordawson (04-27-2020)
  #20  
04-27-2020, 04:05 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thank you. I have been able to remove most dropouts successfully with RemoveSpotsMC(). However, I couldn't in the attached sample, even after applying that filter multiple times. I tried using DePulse() but either it does not work or I am doing something wrong. I read that it is a spatio-temporal filter, which means that the even and odd fields must be worked on separately. Before this point in the script, most of the filters mentioned above were applied, along with RemoveSpotsMC() three times. I then trimmed this example to try to apply further filters. Any advice is appreciated.

Code:
clip1 = Trim(0, 7680)
clip2 = Trim(7681, 7700)
clip3 = Trim(7701, 0)  # start after clip2 so frame 7700 isn't duplicated
clip2
ConvertToYUY2()
AssumeBFF()
# With BFF material, SeparateFields() puts the bottom fields at the even indices
BottomField = SeparateFields().SelectEven().DePulse(h=100, d=50)
TopField = SeparateFields().SelectOdd().DePulse(h=100, d=50)
# Re-interleave bottom-first to preserve the original field order
Interleave(BottomField, TopField).AssumeBFF().Weave()

Prefetch(threads=2)
return last  # returning clip2 here would discard the DePulse filtering


Attached Files
File Type: avi dropout_sample_longer.avi (6.56 MB, 8 downloads)

Last edited by Winsordawson; 04-27-2020 at 04:25 PM.